I am new to H2O. I am calling the predictBinomial method from a Java application and I am getting the correct results back, but it takes a long time to respond. Here is my scenario:
I am exposing a web service method that receives the name of the class (modelName) and uses a ClassLoader to load it (the idea is to allow business users to upload their compiled models to the server and call the service without any development required):
ClassLoader classLoader = new URLClassLoader(
new URL[]{new URL(String.format("%s%s/", H2O_MODELS_URI, modelName))}, this.getClass().getClassLoader());
Then I instantiate the raw model:
hex.genmodel.GenModel rawModel = (hex.genmodel.GenModel) classLoader.loadClass(modelName).newInstance();
Instantiate the wrapper:
EasyPredictModelWrapper model = new EasyPredictModelWrapper(rawModel);
Prepare the RowData (modelParameters received as service parameters):
RowData row = _h20ParametersMapper.map(modelParameters);
Up to this point everything goes smoothly, but when I call
BinomialModelPrediction binomialPrediction = model.predictBinomial(row);
it takes more than 10 seconds to return a response, which doesn't work for us. Do you see any way I could gain some performance? I am referencing the h2o-genmodel:3.18.0.3 library.
Thanks!
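One hedged guess, since the slow step isn't profiled above: the JVM loads and initializes the generated model class lazily, so the first predictBinomial call can pay the full cost of loading a large generated POJO, and if the ClassLoader code above runs on every service call, that cost is paid per request. A minimal sketch of caching the wrapper per model name so the cost is paid once (same H2O_MODELS_URI constant as above):

// needs: java.net.URL, java.net.URLClassLoader, java.util.concurrent.ConcurrentHashMap
private final ConcurrentHashMap<String, EasyPredictModelWrapper> modelCache =
        new ConcurrentHashMap<>();

EasyPredictModelWrapper getModel(String modelName) {
    // Build the classloader, model, and wrapper only on the first request per model.
    return modelCache.computeIfAbsent(modelName, name -> {
        try {
            ClassLoader cl = new URLClassLoader(
                    new URL[]{new URL(String.format("%s%s/", H2O_MODELS_URI, name))},
                    getClass().getClassLoader());
            hex.genmodel.GenModel raw =
                    (hex.genmodel.GenModel) cl.loadClass(name).newInstance();
            return new EasyPredictModelWrapper(raw);
        } catch (Exception e) {
            throw new IllegalStateException("Could not load model " + name, e);
        }
    });
}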
I have this custom API called "pits_PostProjectLineEntries" which takes in an EntityCollection called Transactions. I am using the option to have the entities in that EntityCollection be expando, since I really do not need to create a table to hold these entities. I am able to call the custom API via Postman with some raw JSON, and it works perfectly. My issue is that when I use C# plugins, I get error messages about the entities' logical name within the Transactions collection. Is there a way to tell the Xrm SDK that these entities are not tied to a physical table but are just expando?
Here is the code that calls the custom API, for reference.
_transactions = new List<Entity>();
Entity Transaction = new Entity(""); // expando entity: intentionally no logical name
Transaction["date"] = userReport.GetAttributeValue<DateTime>("pits_date");
Transaction["project"] = userReport.GetAttributeValue<EntityReference>("pits_project");
Transaction["resource"] = userReport.GetAttributeValue<EntityReference>("pits_user");
Transaction["task"] = userReport.GetAttributeValue<EntityReference>("pits_task");
Transaction["worktype"] = response["WorkType"];
Transaction["shifttype"] = response["ShiftType"];
Transaction["quantity"] = RG;
Transaction["chargeable"] = true;
Transaction["tx_type"] = 361200000;
_transactions.Add(Transaction);
EntityCollection Transactions = new EntityCollection(_transactions);
OrganizationRequest request = new OrganizationRequest("pits_PostProjectLineEntries");
request.Parameters.Add("Transactions", Transactions);
var apiResponse = _context.OrganizationService.Execute(request); // distinct name: 'response' is already used above
The error message I get is:
An unexpected error occurred: The entity with a name = '' with namemapping = 'Logical' was not found in the MetadataCache.
I am trying to call this custom API and pass it an entity that is tied to a DV table, from the C# SDK.
Sorry if the formatting is bad; I have been using Stack Overflow for a long time, but this is my first time posting.
This is a known bug with expando entities in the currently shipping SDK assemblies. The Dataverse ServiceClient (.NET / cross-platform) version of the SDK should be working correctly at this time. You can get it here: https://www.nuget.org/packages/Microsoft.PowerPlatform.Dataverse.Client/
The libraries from Microsoft.CrmSdk.CoreAssemblies will have the fix in their next update (as of this writing).
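As a minimal sketch of the same call going through ServiceClient (the connection string is a placeholder):

using Microsoft.PowerPlatform.Dataverse.Client;
using Microsoft.Xrm.Sdk;

// ServiceClient implements IOrganizationService, so the request code stays the same.
using var client = new ServiceClient("AuthType=ClientSecret;Url=https://yourorg.crm.dynamics.com;ClientId=...;ClientSecret=...");

var request = new OrganizationRequest("pits_PostProjectLineEntries");
request.Parameters.Add("Transactions", Transactions);
var apiResponse = client.Execute(request);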
I am looking for a place to store some request-scoped attributes, such as a user id, using a Quarkus request filter. I later want to retrieve these attributes in a log handler and put them into the MDC logging context.
Is Vertx.currentContext() the right place to put such request attributes? Or can the properties I set on this context be read by other requests?
If this is not the right place to store such data, where would be the right place?
Yes ... and no :-D
Vertx.currentContext() can provide two types of objects:
a root context, shared between all the concurrent processing executed on that event loop (so do NOT share data through it)
duplicated contexts, which are local to a processing and its continuation (you can share data in these)
In Quarkus 2.7.2, we have done a lot of work to improve our support for duplicated contexts. While before they were only used for HTTP, they are now used for gRPC and @ConsumeEvent. Support for Kafka and AMQP is coming in Quarkus 2.8.
Also, in Quarkus 2.7.2, we introduced two new features that could be useful:
you can no longer store data in a root context. We detect that for you and throw an UnsupportedOperationException; the reason is safety.
we introduced a new utility class (io.smallrye.common.vertx.ContextLocals) to access the context locals.
Here is a simple example:
AtomicInteger counter = new AtomicInteger();

public Uni<String> invoke() {
    // We are on a duplicated context here, so these values are request-local.
    Context context = Vertx.currentContext();
    ContextLocals.put("message", "hello");
    ContextLocals.put("id", counter.incrementAndGet());

    return invokeRemoteService()
            // Switch back to our duplicated context:
            .emitOn(runnable -> context.runOnContext(ignored -> runnable.run()))
            .map(res -> {
                // Can still access the context-local data
                String msg = ContextLocals.<String>get("message").orElseThrow();
                Integer id = ContextLocals.<Integer>get("id").orElseThrow();
                return "%s - %s - %d".formatted(res, msg, id);
            });
}
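To tie this back to the original question (storing a user id that a log handler can later copy into the MDC), here is a sketch of a JAX-RS request filter writing into the duplicated context; the filter class and header name are assumptions, not Quarkus API:

import io.smallrye.common.vertx.ContextLocals;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.ext.Provider;

@Provider
public class UserIdFilter implements ContainerRequestFilter {

    @Override
    public void filter(ContainerRequestContext requestContext) {
        // HTTP requests are processed on a duplicated context, so this is request-local.
        String userId = requestContext.getHeaderString("X-User-Id"); // assumed header
        if (userId != null) {
            ContextLocals.put("userId", userId);
        }
    }
}

The log handler can then read ContextLocals.<String>get("userId") and put the value into the MDC before formatting the record.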
I am trying to reuse the code from the following documentation: https://geode.apache.org/docs/guide/11/developing/region_options/dynamic_region_creation.html
The first problem I ran into is that
Cache cache = CacheFactory.getAnyInstance();
Region<String,RegionAttributes<?,?>> regionAttributesMetadataRegion = createRegionAttributesMetadataRegion(cache);
should not be executed in the constructor. If it is, the code is executed in the client instance and fails with a "not a server" error. When this was fixed, I received:
[fatal 2021/02/15 16:38:24.915 EET <ServerConnection on port 40527 Thread 1> tid=81] Serialization filter is rejecting class org.restcomm.cache.geode.CreateRegionFunction
java.lang.Exception:
at org.apache.geode.internal.ObjectInputStreamFilterWrapper.lambda$createSerializationFilter$0(ObjectInputStreamFilterWrapper.java:233)
The problem is that the code gets executed on the dunit MemberVM, and the required class is actually part of the package under which the test is executed.
So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can that be done?
Another question: currently the code checks whether the region exists, and if not it calls the method. In both cases it also tries to create the client region. Is this the correct approach?
Region<?, ?> cache = instance.getRegion(name);
if (cache == null) {
    Execution execution = FunctionService.onServers(instance);
    ArrayList<String> argList = new ArrayList<>();
    argList.add(name);
    Function function = new CreateRegionFunction();
    execution.setArguments(argList).execute(function).getResult();
}
ClientRegionFactory<Object, Object> cf = this.instance
        .createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
        .addCacheListener(new ExtendedCacheListener());
this.cache = cf.create(name);
BR
Yulian Oifa
The first problem I ran into is that
Cache cache = CacheFactory.getAnyInstance();
should not be executed in the constructor. If it is, the code is executed in the client instance and fails with a "not a server" error. When this was fixed, I received the serialization filter rejection.
Once the Function is registered on server side, you can execute it by ID instead of sending the object across the wire (so you won't need to instantiate the function on the client), in which case you'll also avoid the Serialization filter error. As an example, FunctionService.onServers(instance).execute(CreateRegionFunction.ID).
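A minimal sketch of that approach (assuming CreateRegionFunction defines an ID constant and is registered during server startup, as in the Geode docs example):

// Server side (e.g. in a startup hook): register the function once.
FunctionService.registerFunction(new CreateRegionFunction());

// Client side: execute by ID, so no function object crosses the wire.
ArrayList<String> argList = new ArrayList<>();
argList.add(name);
FunctionService.onServers(instance)
        .setArguments(argList)
        .execute(CreateRegionFunction.ID)
        .getResult();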
The problem is that the code gets executed on the dunit MemberVM, and the required class is actually part of the package under which the test is executed. So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can that be done?
Indeed, for security reasons Geode doesn't allow serializing / deserializing arbitrary classes. Internal Geode distributed tests use the MemberVM and set a special property (serializable-object-filter) to circumvent this problem. Here's an example of how you can achieve that within your own tests.
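For illustration, a sketch of setting that property when starting the server VM with Geode's ClusterStartupRule (the filter pattern is an assumption; widen or narrow it to match your packages):

// In the dunit test: let the serialization filter accept the test's own classes.
MemberVM locator = cluster.startLocatorVM(0);
MemberVM server = cluster.startServerVM(1, s -> s
        .withConnectionToLocator(locator.getPort())
        .withProperty("serializable-object-filter", "org.restcomm.cache.geode.**"));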
Another question: currently the code checks whether the region exists, and if not it calls the method. In both cases it also tries to create the client region. Is this the correct approach?
If the dynamically created region is used by the client application then yes, you should create it, otherwise you won't be able to use it.
As a side note, there's a lot of internal logic implemented by Geode when creating a region, so I wouldn't advise dynamically creating regions on your own. Instead, it would be advisable to use the gfsh create region command directly, or look at how it works internally (see here) and try to re-use that.
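For reference, the gfsh equivalent is a one-liner (name and type are placeholders):

gfsh> create region --name=exampleRegion --type=REPLICATE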
In my Global.asax.cs, I am creating a dictionary object that I want to use in my APIs to add and get data from over the life of the application.
I do the following:
Application["MyDictionary"] = myDictionary;
In my ApiController I get a handle of it, in this way:
MyDictionary myDictionary= (MyDictionary)HttpContext.Current.Application["MyDictionary"];
So far so good. The issue is that I need to do the same in the unit test: I need to add this dictionary to Application as well, so that when I call my controller I am able to retrieve it. How do I do that?
Doing this:
HttpContext.Current.Application.Add("MyDictionary", myDictionary);
doesn't work; I get an exception:
An exception of type 'System.NullReferenceException' occurred in RESTServices.Tests.dll but was not handled in user code
My HttpContext.Current is null.
Any ideas how to work around this?
HttpContext.Current is created when a request is received, so you need to set the Current property before using it in a test.
See the example below:
var myDictionary = new Dictionary<string, string>();

// Build a fake request/response pair so HttpContext.Current is populated.
HttpContext.Current = new HttpContext(
    new HttpRequest("default.aspx", "http://localhost", string.Empty),
    new HttpResponse(new StringWriter(CultureInfo.InvariantCulture)));

HttpContext.Current.Application.Add("MyDictionary", myDictionary);
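Putting it together in a test method (a sketch; the MSTest attribute and the controller are placeholders for your own types):

[TestMethod]
public void Controller_CanReadApplicationDictionary()
{
    var myDictionary = new Dictionary<string, string> { ["key"] = "value" };

    HttpContext.Current = new HttpContext(
        new HttpRequest("default.aspx", "http://localhost", string.Empty),
        new HttpResponse(new StringWriter(CultureInfo.InvariantCulture)));
    HttpContext.Current.Application.Add("MyDictionary", myDictionary);

    var controller = new MyApiController();  // hypothetical controller under test
    var result = controller.Get();           // reads HttpContext.Current.Application internally

    Assert.IsNotNull(result);
}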
I made a custom report in AX2012 to replace the WHS shipping pick list. The custom report is RDP-based. I have no trouble running it directly (with the parameters dialog), but when I try to use the controller (WHSPickListShippingController), I get an error saying "Pre-Processed RecId not found. Cannot process report. Indicates a development error."
The error occurs because in the class SrsReportProviderQueryBuilder (setArgs method), the map variable reportProviderParameters is empty. I have no idea why that is. The code in my data provider runs okay. Here is my code for running the report:
WHSWorkId id = 'LAM-000052';
WHSPickListShippingController controller;
Args args;
WHSShipmentTable whsShipmentTable;
WHSWorkTable whsWorkTable;
clWHSPickListShippingContract contract; //My custom RDP Contract
whsWorkTable = WHSWorkTable::find(id); // look up the work record; without this, whsWorkTable.ShipmentId is blank
whsShipmentTable = WHSShipmentTable::find(whsWorkTable.ShipmentId);
args = new Args(ssrsReportStr(WHSPickListShipping, Report));
args.record(whsShipmentTable);
args.parm(whsShipmentTable.LoadId);
contract = new clWHSPickListShippingContract();
controller = new WHSPickListShippingController();
controller.parmReportName(ssrsReportStr(WHSPickListShipping, Report));
controller.parmShowDialog(false);
controller.parmLoadFromSysLastValue(false);
controller.parmReportContract().parmRdpContract(contract);
controller.parmReportContract().parmRdpName(classStr(clWHSPickListShippingDP));
controller.parmReportContract().parmRdlContract().parmLanguageId(CompanyInfo::languageId());
controller.parmArgs(args);
controller.startOperation();
I don't know if I'm clear enough... but I've been looking for a fix for hours without success, so I thought I'd ask here. Is there a reason why this variable (which comes from the method parameter AifQueryBuilderArgs) would be empty?
I'm thinking your issue is with these lines (try removing):
controller.parmReportContract().parmRdpContract(contract);
controller.parmReportContract().parmRdpName(classStr(clWHSPickListShippingDP));
controller.parmReportContract().parmRdlContract().parmLanguageId(CompanyInfo::languageId());
The style I'd expect to see with your contract would be like this:
controller = new WHSPickListShippingController();
contract = controller.getDataContractObject();
contract.parmWhatever('ParametersHere');
controller.parmArgs(args);
And for the data provider clWHSPickListShippingDP: usually, if a report uses a data provider, you don't set it manually; the DP extends SRSReportDataProviderBase and has an SRSReportParameterAttribute(...) decorating the class declaration, in this style:
[SRSReportParameterAttribute(classstr(MyCustomContract))]
class MyCustomDP extends SRSReportDataProviderBase
{
// Vars
}
You are using controller.parmReportContract().parmRdpContract(contract) incorrectly, as that is meant for run-time modifications. It's typically used to access the contract in preRunModifyContract overrides.
Build your cross-reference in a development environment, then right-click on \Classes\SrsReportDataContract\parmRdpContract and click Add-Ins > Cross-reference > Used By to see how it is generally used.
OK, so now I feel very stupid for spending so much time on that error, when it's such a tiny thing...
The erroneous line is this one:
controller.parmReportName(ssrsReportStr(WHSPickListShipping, Report));
WHSPickListShipping is the name of the standard AX report, but I renamed my custom report clWHSPickListShipping. What confused me was that my DataProvider class was executing as expected.
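So the fix was simply to point the controller at the renamed report (assuming the design node is still named Report):

controller.parmReportName(ssrsReportStr(clWHSPickListShipping, Report));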