I have this problem and I am not sure why it's happening or how to fix it. I have created an OSB project. In the proxy service pipeline I am doing a Service Callout to a synchronous SOAP service in another application. The other service needs the request body to look like this:
<RequestSelectionValues xmlns="http://www.camstar.com/WebService/WSShopFloor">
    <inputServiceData xmlns:q1="http://www.camstar.com/WebService/DataTypes" q1:type="OnlineQuery">
        <OnlineQuerySetup>
            <__CDOTypeName/>
            <__name>xLot By FabLotNumber</__name>
        </OnlineQuerySetup>
        <Parameters>
            <__listItem>
                <Name>FabLotNumber</Name>
                <DefaultValue>FAB_Lot_1</DefaultValue>
            </__listItem>
            <__listItem>
                <Name>BLOCKOF200ROWS</Name>
                <DefaultValue>1</DefaultValue>
            </__listItem>
        </Parameters>
    </inputServiceData>
    <queryOption xmlns:q2="http://www.camstar.com/WebService/DataTypes" q2:type="QueryOption">
        <RowSetSize>1000</RowSetSize>
        <StartRow>1</StartRow>
        <QueryType>user</QueryType>
        <ChangeCount>0</ChangeCount>
        <RequestRecordCount>false</RequestRecordCount>
        <RequestRecordSetAndCount>false</RequestRecordSetAndCount>
    </queryOption>
    <serviceInfo xmlns:q3="http://www.camstar.com/WebService/DataTypes" q3:type="OnlineQuery_Info">
        <OnlineQuerySelection>
            <RequestValue>false</RequestValue>
            <RequestMetadata>false</RequestMetadata>
            <RequestSubFieldValues>false</RequestSubFieldValues>
            <RequestSelectionValues>true</RequestSelectionValues>
        </OnlineQuerySelection>
    </serviceInfo>
</RequestSelectionValues>
I am using an Assign to put the above expression in a variable.
Notice the line:
<serviceInfo xmlns:q3="http://www.camstar.com/WebService/DataTypes" q3:type="OnlineQuery_Info">
The xmlns:q3="http://www.camstar.com/WebService/DataTypes" declaration needs to come before q3:type="OnlineQuery_Info" for the other service to be called successfully; otherwise the service call fails.
At design time everything looks fine, and I can test the Assign expression as well.
When I test the service from the OSB console, I notice that in the Assign variable the namespace declaration and the type attribute switch places, so the element becomes:
<serviceInfo q3:type="OnlineQuery_Info" xmlns:q3="http://www.camstar.com/WebService/DataTypes">
This makes the service call fail. I have tried building the body payload in an XSLT instead, but the result is the same. I am not sure why the type attribute is moved in front of the namespace declaration; the end result is that the service does not work as expected.
Any idea how I can fix this issue and prevent the reordering?
Thanks
I haven't found any setting in OSB that can prevent the reordering of attributes for you. However, this OSB behavior is completely compliant with the XML standard. In fact, the target service should also be XML compliant and treat the two variants mentioned above as the same, because according to the XML standard, two XML documents that differ only in attribute ordering are considered equal.
EDIT:
Please go here to download a modified config. My thoughts are:
1. Specify the business service to invoke in 'Text as Request' mode, as in "CamstarLotQuery/business/CSWSShopFloor_Txt".
2. Manipulate messages as text, not XML, in your proxy service, as in "CamstarLotQuery/proxy/CamstarLotQueryTxt_Txt".
3. You might need to specify a SOAPAction HTTP header when calling the business service, depending on the target service.
One solution I can think of is to declare all the namespaces at the parent tag level, and keep the attributes where they are applicable.
Example:
<RequestSelectionValues xmlns:q1="http://www.camstar.com/WebService/DataTypes" xmlns="http://www.camstar.com/WebService/WSShopFloor" xmlns:q2="http://www.camstar.com/WebService/DataTypes" xmlns:q3="http://www.camstar.com/WebService/DataTypes">
But the problem with this implementation is that, since the namespace declarations are now global, you have to apply the namespace prefixes (q1, q2, q3) to the elements in the blocks where the namespaces were previously declared.
Example:
<q3:serviceInfo q3:type="OnlineQuery_Info">
    <q3:OnlineQuerySelection>
        <q3:RequestValue>false</q3:RequestValue>
        <q3:RequestMetadata>false</q3:RequestMetadata>
        <q3:RequestSubFieldValues>false</q3:RequestSubFieldValues>
        <q3:RequestSelectionValues>true</q3:RequestSelectionValues>
    </q3:OnlineQuerySelection>
</q3:serviceInfo>
If the namespace prefix is not applied, then as per the XML standard the element assumes the default namespace, which here is the namespace declared on the parent.
Even though this solution is implemented in a roundabout way, it will definitely work.
Related
I am trying to reuse the code from the following documentation: https://geode.apache.org/docs/guide/11/developing/region_options/dynamic_region_creation.html
The first problem I ran into is that
Cache cache = CacheFactory.getAnyInstance();
Region<String,RegionAttributes<?,?>> regionAttributesMetadataRegion = createRegionAttributesMetadataRegion(cache);
should not be executed in the constructor. If it is, the code runs in the client instance and fails with a "not a server" error. Once that is fixed, I receive:
[fatal 2021/02/15 16:38:24.915 EET <ServerConnection on port 40527 Thread 1> tid=81] Serialization filter is rejecting class org.restcomm.cache.geode.CreateRegionFunction
java.lang.Exception:
at org.apache.geode.internal.ObjectInputStreamFilterWrapper.lambda$createSerializationFilter$0(ObjectInputStreamFilterWrapper.java:233)
The problem is that the code is executed on the dunit MemberVM, while the required class is actually part of the package under which the test is being executed.
So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can this be done?
Another question: currently the code checks whether the region exists and, if not, calls the function. In both cases it also tries to create the client region. Is this a correct approach?
Region<?, ?> cache = instance.getRegion(name);
if (cache == null) {
    Execution execution = FunctionService.onServers(instance);
    ArrayList argList = new ArrayList();
    argList.add(name);
    Function function = new CreateRegionFunction();
    execution.setArguments(argList).execute(function).getResult();
}
ClientRegionFactory<Object, Object> cf = this.instance
        .createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
        .addCacheListener(new ExtendedCacheListener());
this.cache = cf.create(name);
BR
Yulian Oifa
The first problem I ran into is that
Cache cache = CacheFactory.getAnyInstance();
should not be executed in the constructor. If it is, the code runs in the client instance and fails with a "not a server" error. Once that is fixed, I receive:
Once the Function is registered on the server side, you can execute it by ID instead of sending the object across the wire (so you won't need to instantiate the function on the client), in which case you'll also avoid the serialization filter error. As an example: FunctionService.onServers(instance).execute(CreateRegionFunction.ID).
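A minimal sketch of that approach, assuming CreateRegionFunction exposes a String ID constant and that instance is the client cache (or pool) used in your snippet:

// server side (e.g. during server startup): register the function once so it can be invoked by ID
FunctionService.registerFunction(new CreateRegionFunction());

// client side: execute by ID, so the function object never has to be serialized across the wire
ArrayList<String> argList = new ArrayList<>();
argList.add(name);
FunctionService.onServers(instance)
        .setArguments(argList)
        .execute(CreateRegionFunction.ID)
        .getResult();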
The problem is that the code is executed on the dunit MemberVM, while the required class is actually part of the package under which the test is being executed. So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can this be done?
Indeed, for security reasons Geode doesn't allow serializing / deserializing arbitrary classes. Internal Geode distributed tests use the MemberVM and set a special property (serializable-object-filter) to circumvent this problem. Here's an example of how you can achieve that within your own tests.
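A minimal sketch, assuming your function classes live under the org.restcomm.cache.geode package from the stack trace; the filter pattern and the exact way the properties are handed to the test's server VM are assumptions:

// Properties is java.util.Properties
Properties props = new Properties();
// allow the test's own classes through Geode's serialization filter
props.setProperty("serializable-object-filter", "org.restcomm.cache.geode.**");
// pass these properties when starting the server/locator used by the dunit test,
// so the MemberVM accepts instances of CreateRegionFunction coming from the client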
Another question: currently the code checks whether the region exists and, if not, calls the function. In both cases it also tries to create the client region. Is this a correct approach?
If the dynamically created region is used by the client application then yes, you should create it, otherwise you won't be able to use it.
As a side note, there's a lot of internal logic implemented by Geode when creating a Region, so I wouldn't advise dynamically creating regions on your own. Instead, it would be advisable to use the gfsh create region command directly, or look at how it works internally (see here) and try to re-use that.
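For example, from a gfsh session connected to the cluster (the region name and type below are just illustrative):

gfsh> create region --name=dynamicRegion --type=REPLICATE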
I am trying to build a custom resource which would in turn use another of my custom resources as part of its action. The pseudo-code looks something like this:
# custom resource A (resources/a.rb)
property :component_id, String

action :doSomething do
  component_id = new_resource.component_id || '1'
  node.default['component_details'][component_id] = ''

  customResource_b 'Get me component details' do
    comp_id component_id
    action :get_component_details
  end

  Chef::Log.info('See the output computed by my customResourceB')
  Chef::Log.info(node['component_details'][component_id])
end
Thing to note:
1. The role of customResource_b is to make a PS call to a REST web service and store the JSON result in node['component_details'][component_id], overwriting its value. I am creating this attribute on the node in this resource since I know it will be used later on, hence avoiding compile-time issues.
Issues I am facing:
1. When testing a simple recipe that calls this resource with chef-client, the code in the resource runs all the way to the last log line, and only after that is the call to customResource_b made, which is not what I expect to happen.
Any advice would be appreciated. I am also quite new to Chef, so any design improvements are welcome as well.
There is no need to nest Chef resources; instead, rely on Chef's idempotence, guards, and notifications.
And, as usual, you can always use a condition to decide which cookbook/recipe to run.
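A minimal sketch of a guard plus a notification instead of nesting resources; the file path, service-call helper, and resource names here are made up for illustration:

# fetch the component details only if they have not been cached yet (the guard gives idempotence)
file '/var/cache/component_details.json' do
  content lazy { fetch_component_details }   # hypothetical helper that calls the REST service
  not_if { ::File.exist?('/var/cache/component_details.json') }
  notifies :run, 'ruby_block[log component details]', :immediately
end

ruby_block 'log component details' do
  block do
    Chef::Log.info(::File.read('/var/cache/component_details.json'))
  end
  action :nothing   # only runs when notified by the file resource
end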
I'm working on a custom controller for a custom resource using kubebuilder (version 1.0.8). I have a scenario where I need to get a list of all the instances of my custom resource so I can sync up with an external database.
All the examples I've seen for Kubernetes controllers use either client-go or just call the API server directly over HTTP. However, kubebuilder has also given me this client.Client object to get and list resources, so I'm trying to use that.
After creating a client instance from the passed-in Manager (i.e. mgr.GetClient()), I tried to write some code to list all the Environment resources I created:
func syncClusterWithDatabase(c client.Client, db *dynamodb.DynamoDB) {
    // Sync environments
    // Step 1 - read all the environments the cluster knows about
    clusterEnvironments := &cdsv1alpha1.EnvironmentList{}
    c.List(context.Background(), /* what do I put here? */, clusterEnvironments)
}
The example in the documentation for the List method shows:
c.List(context.Background, &result);
which doesn't even compile.
I saw a few methods in the client package to limit the search to particular labels, or to a specific field with a specific value, but nothing to limit the result to a specific resource kind.
Is there a way to do this via the Client object? Should I do something else entirely?
So I figured it out: the answer is to pass nil for the second parameter. The type of the output pointer determines which kind of resource it actually retrieves.
According to the latest documentation, the List method is defined as follows,
List(ctx context.Context, list ObjectList, opts ...ListOption) error
If the List method you are calling has the same definition as above, your code should compile. Since it takes variadic options to set the namespace, field match, and so on, the only mandatory arguments are the Context and the ObjectList.
Ref: KubeBuilder Book
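A minimal sketch against that newer signature, using the EnvironmentList type from the question; the import path of the cdsv1alpha1 package is an assumption:

package example

import (
    "context"

    "sigs.k8s.io/controller-runtime/pkg/client"

    cdsv1alpha1 "example.com/cds/api/v1alpha1" // assumed import path for the Environment API types
)

// listEnvironments returns every Environment in the given namespace.
// The concrete list type passed to List tells the client which resource kind to fetch.
func listEnvironments(ctx context.Context, c client.Client, namespace string) (*cdsv1alpha1.EnvironmentList, error) {
    envs := &cdsv1alpha1.EnvironmentList{}
    if err := c.List(ctx, envs, client.InNamespace(namespace)); err != nil {
        return nil, err
    }
    return envs, nil
}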
In my simulation there is a mobile node composed of several components from the INET framework.
I am now working on the UDP app, which is UDPVideoStreamCli.cc, also provided by the INET framework as one of its example UDP applications.
I need to access the lisp module (which is an instance of LispRouting.cc) because I have to read some values and call some public methods of that class. How can I do that? All I know is that I have to start from
getParentModule()->getSubmodule();
but then I don't know how to go on. Can you help?
(LispRouting *)getParentModule()->getSubmodule("lisp")
will do the trick. Be sure to check if the returned pointer is not null.
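For instance, a minimal sketch of that lookup with the null check (the submodule name "lisp" is taken from the question):

LispRouting *lisp = dynamic_cast<LispRouting *>(getParentModule()->getSubmodule("lisp"));
if (lisp == nullptr)
    throw cRuntimeError("lisp submodule not found under %s", getParentModule()->getFullPath().c_str());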
Generally this is bad design as it hard-codes the name and the relative position of the LispRouting module. Any change in naming/architecture will cause crashes.
A proper design would be to create a parameter that specifies the name/path of the lisp submodule (with default value) and then use
#include "inet/common/ModuleAccess.h"
...
LispRouting *lr = getModuleFromPar<LispRouting>(par("lispModule"), this);
and then add a parameter to the module's NED file:
string lispModule = default("^.lisp");
meaning the default place where you can find the lisp module is: go one level up and then find the submodule named "lisp". This is a much better pattern, because the user can later reconfigure the name/placement of the lisp module without breaking the code.
I have an orchestration in BizTalk that receives any document type (System.Xml.Document). It looks like Bizmonade always wants to use an orchestration that specifies a schema type other than ANY.
OrchestrationSimulator.Test<Dummy__Simulated>()
    .When(MessageReceived.FromFile<CanonicalInvoice>(
        Path.Combine(AppDomain.CurrentDomain.BaseDirectory, @"Test.Files\CanonicalInvoice.xml")))
    .ExpectCompleted<Dummy__Simulated>()
    .ExecuteTest();
Any thoughts on how to make it work with something similar to this:
OrchestrationSimulator.Test<Dummy__Simulated>()
    .When(MessageReceived.FromFile<XmlDocument__Simulated>( // or not specify a type at all?
        Path.Combine(AppDomain.CurrentDomain.BaseDirectory, @"Test.Files\CanonicalInvoice.xml")))
    .ExpectCompleted<Dummy__Simulated>()
    .ExecuteTest();
Version 1.0.0.2 (just released) should be able to do it.