Apache Geode - Creating region on DUnit Based Test Server/Remote Server with same code from client - client-server

I am trying to reuse the code from the following documentation: https://geode.apache.org/docs/guide/11/developing/region_options/dynamic_region_creation.html
The first problem I ran into is that
Cache cache = CacheFactory.getAnyInstance();
Region<String,RegionAttributes<?,?>> regionAttributesMetadataRegion = createRegionAttributesMetadataRegion(cache);
should not be executed in the constructor. If it is, the code is executed in the client instance and fails with a "not a server" error. When I fixed this, I received:
[fatal 2021/02/15 16:38:24.915 EET <ServerConnection on port 40527 Thread 1> tid=81] Serialization filter is rejecting class org.restcomm.cache.geode.CreateRegionFunction
java.lang.Exception:
at org.apache.geode.internal.ObjectInputStreamFilterWrapper.lambda$createSerializationFilter$0(ObjectInputStreamFilterWrapper.java:233)
The problem is that the code is executed on the dunit MemberVM, while the required class is actually part of the package under which the test is executed.
So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can that be done?
Another question: currently the code checks whether the region exists and, if not, calls the function. In both cases it then tries to create the client region. Is this a correct approach?
Region<?,?> cache = instance.getRegion(name);
if (cache == null) {
    // Region is unknown to the client: ask the servers to create it first.
    Execution execution = FunctionService.onServers(instance);
    ArrayList<String> argList = new ArrayList<>();
    argList.add(name);
    Function function = new CreateRegionFunction();
    execution.setArguments(argList).execute(function).getResult();
}
// In both cases, create the client-side region afterwards.
ClientRegionFactory<Object, Object> cf = this.instance
    .createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
    .addCacheListener(new ExtendedCacheListener());
this.cache = cf.create(name);
BR
Yulian Oifa

The first problem I ran into is that
Cache cache = CacheFactory.getAnyInstance();
should not be executed in the constructor. If it is, the code is executed in the client instance and fails with a "not a server" error. When I fixed this, I received the serialization filter error shown above.
Once the Function is registered on the server side, you can execute it by ID instead of sending the object across the wire (so you won't need to instantiate the function on the client), in which case you'll also avoid the serialization filter error. As an example: FunctionService.onServers(instance).execute(CreateRegionFunction.ID).
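A minimal sketch, assuming CreateRegionFunction exposes a public String ID constant (as in the documentation example) and is registered during server startup:

// On the server side, e.g. in the server's startup code:
FunctionService.registerFunction(new CreateRegionFunction());

// On the client side, execute by ID so no function object is serialized:
ArrayList<String> argList = new ArrayList<>();
argList.add(name);
FunctionService.onServers(instance)
    .setArguments(argList)
    .execute(CreateRegionFunction.ID)
    .getResult();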
The problem is that the code is executed on the dunit MemberVM, while the required class is actually part of the package under which the test is executed. So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can that be done?
Indeed, for security reasons Geode doesn't allow serializing / deserializing arbitrary classes. Internal Geode distributed tests use the MemberVM and set a special property (serializable-object-filter) to circumvent this problem. Here's an example of how you can achieve that within your own tests.
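A minimal sketch, assuming the test starts its members through geode-dunit's ClusterStartupRule (the exact startLocatorVM/startServerVM overloads may differ between Geode versions, and the package pattern below is only a guess based on the class reported in the log):

Properties props = new Properties();
props.setProperty("serializable-object-filter", "org.restcomm.cache.geode.**");

// cluster is the ClusterStartupRule; servers started with these properties
// will let the test's classes through the serialization filter.
MemberVM locator = cluster.startLocatorVM(0, props);
MemberVM server = cluster.startServerVM(1, props, locator.getPort());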
Another question: currently the code checks whether the region exists and, if not, calls the function. In both cases it then tries to create the client region. Is this a correct approach?
If the dynamically created region is used by the client application then yes, you should create it, otherwise you won't be able to use it.
As a side note, there is a lot of internal logic in Geode when creating a Region, so I wouldn't advise dynamically creating regions on your own. Instead, it would be advisable to use the gfsh create region command directly, or look at how it works internally (see here) and try to re-use that.
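For example, from gfsh (region name and type here are just placeholders):

gfsh> create region --name=exampleRegion --type=PARTITION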

Related

Nesting custom resources in Chef

I am trying to build a custom resource which would in turn use another of my custom resources as part of its action. The pseudo-code would look something like this:
# customResource A
property :component_id, String

action :doSomething do
  component_id = 1 if component_id.nil?
  node.default['component_details'][component_id] = ''

  customResource_b "Get me component details" do
    comp_id component_id
    action :get_component_details
  end

  Chef::Log.info("See the output computed by my customResourceB")
  Chef::Log.info(node['component_details'][component_id])
end
Things to note:
1. The role of customResource_b is to make a PowerShell call to a REST web service and store the JSON result in node['component_details'][component_id], overriding its value. I am creating this node attribute in this resource since I know it will be used later on, hence avoiding compile-time issues.
Issues I am facing:
1. When testing a simple recipe that calls this resource in chef-client, the code in the resource gets executed up to the last log line, and only after that is the call to customResource_b made, which is not what I expect to happen.
Any advice would be appreciated. I am also quite new to Chef, so any design improvements are also welcome.
There is no need to nest Chef resources; rather, use Chef idempotence, guards, and notifications.
And as usual, you can always use a condition to decide which cookbook/recipe to run.
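A rough sketch of that idea (the resource names and the command are illustrative, not taken from the question): a guard decides whether the external call runs at all, and a notification triggers the consumer only when it does.

# Runs the external call only when the node attribute is not yet populated,
# then notifies a ruby_block that consumes the result.
execute 'fetch_component_details' do
  command 'powershell.exe -File fetch_details.ps1'
  not_if { node.attribute?('component_details') }
  notifies :run, 'ruby_block[log_component_details]', :immediately
end

ruby_block 'log_component_details' do
  block do
    Chef::Log.info(node['component_details'].inspect)
  end
  action :nothing
end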

How to use kubebuilder's client.List method?

I'm working on a custom controller for a custom resource using kubebuilder (version 1.0.8). I have a scenario where I need to get a list of all the instances of my custom resource so I can sync up with an external database.
All the examples I've seen for kubernetes controllers use either client-go or just call the api server directly over http. However, kubebuilder has also given me this client.Client object to get and list resources. So I'm trying to use that.
After creating a client instance from the Manager that is passed in (i.e. calling mgr.GetClient()), I then tried to write some code to get the list of all the Environment resources I created.
func syncClusterWithDatabase(c client.Client, db *dynamodb.DynamoDB) {
    // Sync environments
    // Step 1 - read all the environments the cluster knows about
    clusterEnvironments := &cdsv1alpha1.EnvironmentList{}
    c.List(context.Background(), /* what do I put here? */, clusterEnvironments)
}
The example in the documentation for the List method shows:
c.List(context.Background, &result);
which doesn't even compile.
I saw a few methods in the client package to limit the search to particular labels, or to a specific field with a specific value, but nothing to limit the result to a specific resource kind.
Is there a way to do this via the Client object? Should I do something else entirely?
So I figured it out: the answer is to pass nil for the second parameter. The type of the output pointer determines which sort of resource it actually retrieves.
According to the latest documentation, the List method is defined as follows,
List(ctx context.Context, list ObjectList, opts ...ListOption) error
If the List method you are calling has the same definition as above, your code should compile. Since it takes variadic options to set the namespace and field matchers, the only mandatory arguments are the Context and the ObjectList.
Ref: KubeBuilder Book
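For completeness, a sketch under that signature (the namespace option and error handling are illustrative; the types mirror the snippet from the question):

func syncClusterWithDatabase(c client.Client, db *dynamodb.DynamoDB) error {
    clusterEnvironments := &cdsv1alpha1.EnvironmentList{}
    // The concrete list type decides which resource kind is fetched.
    if err := c.List(context.Background(), clusterEnvironments, client.InNamespace("default")); err != nil {
        return err
    }
    for _, env := range clusterEnvironments.Items {
        _ = env // sync each Environment with the external database here
    }
    return nil
}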

Puppet: Making a custom function depend on a resource

I have a Puppet custom function that returns information about a user defined in OpenStack's Keystone identity service. Usage is something along the lines of:
$tenant_id = lookup_tenant_by_name($username, $password, "mytenant")
The problem is that the credentials used in this query ($username) are supposed to be created by another resource during the Puppet run (a Keystone_user resource from puppet-keystone). As far as I can tell, the call to the lookup_tenant_by_name function is being evaluated before any resource ordering happens, because no amount of dependencies in the calling code is able to force the credentials to be created prior to this function being executed.
In general, is it possible to write custom functions -- or place them appropriately in a manifest -- such that they will not be executed by Puppet until after some specified resource has been instantiated?
Short answer: You cannot make your manifest's behavior depend on resources declared inside of it.
Long answer: Parser functions are called during the compilation phase (on the master if you use one, or on the agent if you use puppet apply). In either case, a function will always run before any resource is synced, because syncing only happens after the compiler has done all its work (including invocation of your functions).
To query information from the agent machine, you generally want to use custom facts. Still, those will be populated before the compiler even runs.
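For example, a custom fact could attempt the lookup on the agent and simply return nothing until the credentials exist (the fact name and command are illustrative, not part of puppet-keystone):

# lib/facter/tenant_id.rb
Facter.add(:tenant_id) do
  setcode do
    # Empty until Keystone and the credentials are actually in place.
    Facter::Core::Execution.execute('openstack project show mytenant -f value -c id', :on_fail => nil)
  end
end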
Likely the best approach in this situation is to make the manifest tolerate the absence of the information, so that anything that depends on the value that your lookup_tenant_by_name function returns will only be evaluated if that value is available. This will usually be during the second Puppet run.
if $tenant_id == "" {
  notify { "cannot yet find tenant $username": }
}
else {
  # your code using the tenant ID
}

Websphere JYTHON Scripting - Get Active Spec ID

Problem:
I am attempting to use the Jython command below, but I cannot retrieve the id of my activation specification, which is defined at the node/server level in WebSphere. I believe it is a syntax issue, but I'm not sure what is wrong.
Code:
AdminConfig.getid('/Cell:mycell/Node:mynode/Server:myserver/J2CActivationSpec:myActiveSpecName/')
Problem Notes:
I do not get an invalid object error, so I believe I have the syntax right; it just cannot find the resource even though it exists.
I am using AdminConfig.getid() as a way to check whether the resource already exists, in order to decide between a modify and a create.
If I use the following code: AdminConfig.getid('/J2CActivationSpec:myActiveSpecName/') it finds it, but not with the more specific path listed above.
Reference Material:
IBM Documentation
Containment paths are always a little tricky. In my (limited) experience, even if you can trace the path by AdminConfig.parents, you may not always be able to use getid.
Are you restricted to using getid? If not, here are some alternatives that will get you an ActivationSpec at the /Cell/Node/Server level:
Querying using AdminConfig.list
This approach will list the Activation Specifications at the specified scope (in our case, the server) and grab the one whose name attribute equals 'myActiveSpecName'.
server = AdminConfig.getid('/Cell:mycell/Node:mynode/Server:myserver')
activationSpec = ''
for spec in AdminConfig.list('J2CActivationSpec', server).splitlines():
    if AdminConfig.showAttribute(spec, 'name') == 'myActiveSpecName':
        activationSpec = spec
        print 'found it :)'
Using Wildcards
This approach uses AdminConfig.list as well, but with a pattern to narrow down your list. If you know your activation spec's configuration begins with myActiveSpecName, you can do the following:
activationSpec = AdminConfig.list('J2CActivationSpec', 'myActiveSpecName*')

Oracle Service Bus - Assign expression

I have this problem and I am not sure why it is happening or how to fix it. I have created an OSB project. In the proxy service pipeline I am doing a Service Callout to a synchronous SOAP service in another application. The other service needs the request body as below:
<RequestSelectionValues xmlns="http://www.camstar.com/WebService/WSShopFloor">
<inputServiceData xmlns:q1="http://www.camstar.com/WebService/DataTypes" q1:type="OnlineQuery">
<OnlineQuerySetup>
<__CDOTypeName/>
<__name>xLot By FabLotNumber</__name>
</OnlineQuerySetup>
<Parameters>
<__listItem>
<Name>FabLotNumber</Name>
<DefaultValue>FAB_Lot_1</DefaultValue>
</__listItem>
<__listItem>
<Name>BLOCKOF200ROWS</Name>
<DefaultValue>1</DefaultValue>
</__listItem>
</Parameters>
</inputServiceData>
<queryOption xmlns:q2="http://www.camstar.com/WebService/DataTypes" q2:type="QueryOption">
<RowSetSize>1000</RowSetSize>
<StartRow>1</StartRow>
<QueryType>user</QueryType>
<ChangeCount>0</ChangeCount>
<RequestRecordCount>false</RequestRecordCount>
<RequestRecordSetAndCount>false</RequestRecordSetAndCount>
</queryOption>
<serviceInfo xmlns:q3="http://www.camstar.com/WebService/DataTypes" q3:type="OnlineQuery_Info">
<OnlineQuerySelection>
<RequestValue>false</RequestValue>
<RequestMetadata>false</RequestMetadata>
<RequestSubFieldValues>false</RequestSubFieldValues>
<RequestSelectionValues>true</RequestSelectionValues>
</OnlineQuerySelection>
</serviceInfo>
</RequestSelectionValues>
I am using an Assign to put the above expression in a variable.
Notice the line:
<serviceInfo xmlns:q3="http://www.camstar.com/WebService/DataTypes" q3:type="OnlineQuery_Info">
xmlns:q3="http://www.camstar.com/WebService/DataTypes" needs to come before q3:type="OnlineQuery_Info" for the other service to be called successfully; otherwise the call fails.
In development it looks fine, and I can test the assign of the expression as well.
When I go to the OSB console to test the service, I notice that in the Assign variable the namespace position switches and the line becomes:
<serviceInfo q3:type="OnlineQuery_Info" xmlns:q3="http://www.camstar.com/WebService/DataTypes">
This makes the service call fail. I have tried putting the body payload in an XSLT; the result is the same. I am not sure why it puts the type before the namespace. The end result is that the service is not working as expected.
Any idea what I can do to fix this issue? How can I prevent the switching?
Thanks
I haven't found any setting in OSB that can prevent the reordering of attributes. However, this OSB behavior is completely compliant with the XML standard. In fact, the target service should also be XML compliant and treat the two variants above as the same, because according to the XML standard, two XML documents that differ only in attribute ordering are to be treated as identical.
EDIT:
Please go here to download a modified config. My thoughts are:
Specify the business service to invoke in 'Text as Request' mode, as in "CamstarLotQuery/business/CSWSShopFloor_Txt".
Manipulate messages as text, not XML, in your proxy service, as specified in "CamstarLotQuery/proxy/CamstarLotQueryTxt_Txt".
You might need to specify a SOAPAction HTTP header when calling the business service, depending on the target service.
One solution I can think of is to declare all the namespaces at the parent tag level, and keep the attributes where they are applicable.
Example:
<RequestSelectionValues xmlns:q1="http://www.camstar.com/WebService/DataTypes" xmlns="http://www.camstar.com/WebService/WSShopFloor" xmlns:q2="http://www.camstar.com/WebService/DataTypes" xmlns:q3="http://www.camstar.com/WebService/DataTypes">
But the problem with this implementation is that, since the namespace declarations are now global, you have to add the namespace prefixes (q1, q2, q3) to the elements where those namespaces were previously defined.
Example:
<q3:serviceInfo q3:type="OnlineQuery_Info">
<q3:OnlineQuerySelection>
<q3:RequestValue>false</q3:RequestValue>
<q3:RequestMetadata>false</q3:RequestMetadata>
<q3:RequestSubFieldValues>false</q3:RequestSubFieldValues>
<q3:RequestSelectionValues>true</q3:RequestSelectionValues>
</q3:OnlineQuerySelection>
</q3:serviceInfo>
If the namespace prefix is not added to a tag, then as per the XML standard the tag assumes the default namespace, which will be the namespace of the parent.
However, even though this is a somewhat roundabout implementation, it will definitely work.
