Please see my code snippets below:
amqmdnet (this works but we prefer IBM.XMS because we can then do asynchronous consumption)
XMS with CCDT file
XMS with property for compression
We have tried every way we could find to configure compression in XMS. I would really appreciate it if anyone could help.
It seems to me the possible values for the compression property are:
static int WMQ_COMPMSG_DEFAULT
static int WMQ_COMPMSG_NONE
static int WMQ_COMPMSG_RLE
static int WMQ_COMPMSG_ZLIBFAST
static int WMQ_COMPMSG_ZLIBHIGH
So something like this might work:
cf.SetIntProperty(XMSC.WMQ_MSG_COMP, XMSC.WMQ_COMPMSG_DEFAULT);
Edit:
I even got the actual values, in case that helps.
public static final int WMQ_COMPMSG_DEFAULT = 0;
public static final int WMQ_COMPMSG_NONE = 0;
public static final int WMQ_COMPMSG_RLE = 1;
public static final int WMQ_COMPMSG_ZLIBFAST = 2;
public static final int WMQ_COMPMSG_ZLIBHIGH = 4;
Please remember, I'm trying to help, but won't set up a test environment just to post a possible solution. If it doesn't help feel free to vote it down.
Without the fix for APAR IJ12614, you cannot set channel compression programmatically in XMS .NET. To use channel compression, you must configure it via a CCDT.
As best I understand, WMQ_CCDTURL is also not supported. Use the standard environment variables instead, and add multiple records with different QMNAME field values to a single CCDT. Have the application connect to the appropriate queue manager name in order to select the correct record from the CCDT. Remember that in this case the queue manager name can be a logical one if an asterisk is also used; it does not have to be a physical one.
Ensure all channel names in the CCDT are unique; good practice anyway. Don't use SYSTEM.DEF.SVRCONN, for example, but instead something like APP1.QM2.SVRCONN, i.e. an application reference and a queue manager reference contained in the channel name.
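For illustration, a client-connection channel record with compression enabled might be defined in MQSC along these lines (the channel name, host, port and queue manager name are placeholders; COMPMSG and COMPHDR list the compression algorithms the client will accept):

```text
DEFINE CHANNEL(APP1.QM2.SVRCONN) CHLTYPE(CLNTCONN) +
       CONNAME('mqhost.example.com(1414)') +
       QMNAME(QM2) +
       COMPMSG(ZLIBFAST) +
       COMPHDR(SYSTEM)
```

The resulting CCDT record is what the XMS .NET client picks up when it connects under the matching queue manager name.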
I want to try some framework features that require parameter names at runtime, so I need to compile my app with -parameters so that it stores the parameter names in the JVM bytecode.
What drawbacks does using this flag have, other than the increased jar/war size?
The addition of parameter names to the class file format is covered by JEP 118, which was delivered in Java 8. There was a little bit of discussion about why inclusion of parameter names was made optional in OpenJDK email threads here and here. Briefly, the stated reasons to make parameter names optional are concerns about class file size, compatibility surface, and exposure of sensitive information.
The issue of compatibility surface deserves some additional discussion. One of the threads linked above says that changing a parameter name is a binary compatible change. This is true, but only in the strict context of the JVM's notion of binary compatibility. That is, changing a parameter name of a method will never change whether or not that method can be linked by the JVM. But the statement doesn't hold for compatibility in general.
Historically, parameter names have been treated like local variable names. (They are, after all, local in scope.) You could change them at will and nothing outside the method will be affected. But if you enable reflective access to parameter names, suddenly you can't change a name without thinking about what other parts of the program might be using it. Worse, there's nothing that can tell you unless you have strict test cases for all uses of parameter names, or you have a really good static analyzer that can find these cases (I'm not aware of one).
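To make the reflective access concrete, here is a minimal sketch using java.lang.reflect.Parameter (the class and method names are made up for illustration). Parameter.isNamePresent() reports whether real names were compiled in; without -parameters, getName() falls back to synthetic names like arg0:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Parameter;

public class ParamNamesDemo {
    // An arbitrary method whose parameter names we will inspect.
    static void transfer(String accountId, long amountCents) { }

    public static void main(String[] args) throws Exception {
        Method m = ParamNamesDemo.class
                .getDeclaredMethod("transfer", String.class, long.class);
        for (Parameter p : m.getParameters()) {
            // isNamePresent() is true only if compiled with -parameters;
            // otherwise getName() returns arg0, arg1, ...
            System.out.println(p.getName()
                    + " (real name present: " + p.isNamePresent() + ")");
        }
    }
}
```

Any code that keys on "accountId" here silently breaks if the parameter is renamed, which is exactly the compatibility concern described above.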
The comments linked to a question about using Jackson (a JSON processing library) which has a feature that maps method parameter names to JSON property names. This may be quite convenient, but it also means that if you change a parameter name, JSON binding might break. Worse, if the program is generating JSON structures based on Java method parameter names, then changing a method parameter name might silently change a data format or wire protocol. Clearly, in such an environment, using this feature reliably means that you have to have very good tests, plus comments sprinkled around the code indicating what parameter names mustn't be changed.
The only thing that will change is the size of the .class file, since the bytecode will now contain more information:
public class DeleteMe3 {
    public static void main(String[] args) {
    }

    private static void go(String s) {
    }
}
For example, when compiled with -parameters, this will contain information about parameter names, like so:
private static void go(java.lang.String);
descriptor: (Ljava/lang/String;)V
flags: ACC_PRIVATE, ACC_STATIC
Code:
stack=0, locals=1, args_size=1
0: return
LineNumberTable:
line 11: 0
MethodParameters:
Name Flags
s
This MethodParameters attribute is simply not present without -parameters.
There might be frameworks that don't play nice with this. Here is one Spring Data JPA issue. I know about it because we hit it some time ago and had to upgrade (I have not run into any others since).
As it stands, MultipleTextOutputFormat has not been migrated to the new API. So if we need to choose an output directory and output filename on the fly, based on the key-value pair being written, what alternative do we have with the new mapreduce API?
I'm using AWS EMR Hadoop 1.0.3, and it is possible to specify different directories and files based on k/v pairs. Use either of the following functions from the MultipleOutputs class:
public void write(KEYOUT key, VALUEOUT value, String baseOutputPath)
or
public <K,V> void write(String namedOutput, K key, V value,
String baseOutputPath)
The former write method requires the key to be the same type as the map output key (in case you are using this in the mapper) or the same type as the reduce output key (in case you are using this in the reducer). The value must also be typed in similar fashion.
The latter write method requires the key/value types to match the types specified when you set up the named outputs using the MultipleOutputs.addNamedOutput function:
public static void addNamedOutput(Job job,
String namedOutput,
Class<? extends OutputFormat> outputFormatClass,
Class<?> keyClass,
Class<?> valueClass)
So if you need different output types than the Context is using, you must use the latter write method.
The trick to getting different output directories is to pass a baseOutputPath that contains a directory separator, like this:
multipleOutputs.write("output1", key, value, "dir1/part");
In my case, this created files named "dir1/part-r-00000".
I was not successful in using a baseOutputPath that contains the .. directory, so all baseOutputPaths are strictly contained in the path passed to the -output parameter.
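Putting those pieces together, here is a rough sketch of how the named-output variant can be wired up with the new API (class and output names like DirSplitReducer and "output1" are just examples, and this of course needs the Hadoop libraries on the classpath):

```java
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

// Driver-side sketch: register the named output before submitting the job:
//   MultipleOutputs.addNamedOutput(job, "output1",
//           TextOutputFormat.class, Text.class, IntWritable.class);

public class DirSplitReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {

    private MultipleOutputs<Text, IntWritable> mos;

    @Override
    protected void setup(Context context) {
        mos = new MultipleOutputs<Text, IntWritable>(context);
    }

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values,
                          Context context)
            throws IOException, InterruptedException {
        for (IntWritable value : values) {
            // A directory separator in baseOutputPath selects the output
            // directory, yielding files such as dir1/part-r-00000.
            mos.write("output1", key, value, "dir1/part");
        }
    }

    @Override
    protected void cleanup(Context context)
            throws IOException, InterruptedException {
        // Must be closed, or the named outputs may not be flushed.
        mos.close();
    }
}
```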
For more details on how to setup and properly use MultipleOutputs, see this code I found (not mine, but I found it very helpful; does not use different output directories). https://github.com/rystsov/learning-hadoop/blob/master/src/main/java/com/twitter/rystsov/mr/MultipulOutputExample.java
Similar to: Hadoop Reducer: How can I output to multiple directories using speculative execution?
Basically you can write to HDFS directly from your reducer - you'll just need to be wary of speculative execution and name your files uniquely. You'll then need to implement your own OutputCommitter to clean up the aborted attempts (this is the most difficult part if you have truly dynamic output folders: you'll need to step through each folder and delete the attempts associated with aborted / failed tasks). A simple solution to this is to turn off speculative execution.
For the best answer, turn to Hadoop: The Definitive Guide, 3rd ed. (starting at page 253).
An Excerpt from the HDG book -
"In the old MapReduce API, there are two classes for producing multiple outputs: MultipleOutputFormat and MultipleOutputs. In a nutshell, MultipleOutputs is more fully featured, but MultipleOutputFormat has more control over the output directory structure and file naming. MultipleOutputs in the new API combines the best features of the two multiple output classes in the old API."
It has an example of how you can control the directory structure, file naming, and output format using the MultipleOutputs API.
HTH.
I am developing a Windows Phone 7 Silverlight application, but I can't use session values to "navigate" between different pages on Windows Phone 7.
I also tried Isolated Storage, but I couldn't get the values back.
This sample shows some persistence mechanisms:
http://www.scottlogic.co.uk/blog/colin/2011/05/a-simple-windows-phone-7-mvvm-tombstoning-example/
You can also use Query Strings to pass information between two pages. The values that make up a query string are appended to the URI.
Personally, I have a centralised controller class that gets instantiated with the main App class. Any values that need passing are placed in here, in one way or another.
Thanks, Adam Houldsworth, for your response; it really helped me. However, I found a simpler solution.
We can create a global variables class in the "App.xaml.cs" file and put the variables in it. The class is accessible from everywhere.
Example:
public static class GlobalVariables
{
    public static string my_string = "";
    public static int my_int = 0;
}
Then we access the Global Variables class like this:
project_Name.GlobalVariables.variable_name;
I've recently encountered an issue with the multi-threaded nature of the BizTalk Mapper and how it handles external assemblies.
As this quote from MSDN indicates:
Important: Any code written in an external assembly for use in a scripting functoid needs to be thread safe. This is required because multiple instances of a map can use these .NET instances at run time under stress conditions.
The Mapper will reuse instances of external assemblies.
In a utility assembly my team was using we had the following code:
public class MapUtil
{
    private string _storeReference;

    public void SetStoreReference(string reference)
    {
        _storeReference = reference;
    }

    public string GetStoreReference()
    {
        return _storeReference;
    }
}
This was causing store references from one file to be mapped against different files.
I appear to have fixed this by decorating the field with [ThreadStatic] (making it static in the process, as the attribute only has an effect on static fields):
[ThreadStatic]
private static string _storeReference;
My question is: does anyone know of any issues with this in the BizTalk Mapper? I'm aware that there are issues using [ThreadStatic] in ASP.NET, for example, due to threads being reused, but I can find no documentation on the way the BizTalk Mapper deals with threads.
I have used [ThreadStatic] to set a variable in a custom receive pipeline and then access its value within a BizTalk map (through a helper class). I have not had any problems so far - tested with ~50 invocations in parallel.
I've still not found a definitive statement along the lines of 'The threading behaviour within the BizTalk Mapper is xyz, so you should take care you use method abc' and I'm not sure that such an answer is going to come from anywhere outside the BizTalk product team.
My one colleague with direct contacts to the product team is on extended Christmas leave (lucky dog) so until he returns I just thought I'd note that with the change made to our code we have not seen a single recurrence of the threading issues on a high volume production server.
Well, that isn't quite true: I managed to miss the static keyword on one property of my helper class, and for that property we did still see the threading issues. I'll take that as proof that ThreadStatic is the right way to go for now.
What's the difference between the following?
public class MyClass
{
public bool MyProperty;
}
public class MyClass
{
public bool MyProperty { get; set; }
}
Is it just semantics?
Fields and properties have many differences beyond semantics.
Properties can be overridden to provide different implementations in descendants.
Properties can help alleviate versioning problems. I.e. Changing a field to a property in a library requires a recompile of anything depending on that library.
Properties can have different accessibility for the getter and setter.
"Just semantics" always seems like a contradiction in terms to me. Yes, it changes the meaning of the code. No, that's not something I'd use the word "just" about.
The first class has a public field. The second class has a public property, backed by a private field. They're not the same thing:
If you later change the implementation of the property, you maintain binary compatibility. If you change the field to a property, you lose both binary and source compatibility.
Fields aren't seen by data-binding; properties are
Field access can't be breakpointed in managed code (AFAIK)
Exposing a field exposes the implementation of your type - exposing a property just talks about the contract of your type.
See my article about the goodness of properties for slightly more detail on this.
In that case, yes it is mostly semantics. It makes a difference for reflection and so forth.
However, if you want to make a change so that when MyProperty is set you fire an event for example you can easily modify the latter to do that. The former you can't. You can also specify the latter in an interface.
As there is so little difference but several potential advantages to going down the property route, I figure that you should always go down the property route.
The first one is just a public field, the second one is a so-called automatic property. Automatic properties are changed to regular properties with a backing field by the C# compiler.
Public fields and properties look identical in C# syntax, but they are different in IL (I read this on a German forum recently; I can't give you the source, sorry).
Matthias
The biggest difference is that you can add access modifiers to properties, for example like this
public class MyClass
{
public bool MyProperty { get; protected set; }
}
To the CLR, access to fields and properties is different too. So if you have a field and later want to change it to a property (for example, when you want to add code to the setter), the interface will change and you will need to recompile all code accessing that field. With an auto-property you don't have this problem.
I am assuming you are not writing code that will be called by third-party developers who can't recompile their code when you change yours (e.g. you don't work for Microsoft writing the .NET framework itself, or for DevExpress writing a control toolkit). Remember that Microsoft's .NET framework coding standard is aimed at people writing frameworks, and it tries to avoid a lot of problems that are not even issues if you are not writing a framework for use by third-party developers.
The second case defines a property. The only true technical advantage of doing so is that data binding does not work with fields. There is, however, a big political advantage in using properties: you get far fewer invalid complaints from other developers who look at your code.
All the other advantages of properties (well explained in the other answers to your question) are not of interest to you at present, as any programmer using your code can change the field to a property later if need be and just recompile the solution.
However, you are not likely to get attacked for using properties, so you may as well always use public properties rather than fields.