I have a reactive Quarkus app with hibernate-panache-reactive. The problem is that it behaves differently depending on whether I run it as a JVM app or as a native app.
The app:
1) loads a lot of data from a MySQL DB via hibernate-panache-reactive
2) builds a graph based on the data loaded
3) runs a time-consuming algorithm on the graph
4) loads some more data from the DB based on the results returned from 3)
So initially the code looked something like this:
GraphProcessor graphProcessor = createInitialProcessor();
return Uni.createFrom().item(graphProcessor)
        // 1) loading of initial data
        .onItem().transformToUni(this::loadDataViaPanaceReactive1)
        .onItem().transformToUni(this::loadDataViaPanaceReactive2)
        .onItem().transformToUni(this::loadDataViaPanaceReactive3)
        // 2) building of graph
        .onItem().transform(graphProcessor::processLoadedData)
        .onItem().invoke(graphProcessor::loadingComplete) // sync
        // 3) running time consuming algorithm on graph
        .onItem().transformToMulti(this::runTimeConsumingTask)
        .onItem().invoke(this::prepareDBQueries)
        // 4) load more data from DB
        .onItem().transformToUniAndConcatenate(this::loadMoreData1)
        .onItem().transformToUniAndConcatenate(this::loadMoreData2)
        .onItem().transformToUniAndConcatenate(this::transformToPublicForm)
        .onFailure().invoke(log::error);
That worked fine when run as a JVM app, but when I ran it as a native app it first complained that the computations in 2) and 3) were taking too long and were blocking the calling (event-loop) thread.
I fixed that by inserting
.emitOn(Infrastructure.getDefaultWorkerPool())
between 1) and 2).
This time I got another error:
java.lang.IllegalStateException: HR000069: Detected use of the
reactive Session from a different Thread than the one which was used
to open the reactive Session - this suggests an invalid integration;
original thread: 'vert.x-eventloop-thread-0' current Thread:
'vert.x-eventloop-thread-1'
I fixed that by inserting
.emitOn(Infrastructure.getDefaultExecutor())
between 3) and 4):
GraphProcessor graphProcessor = createInitialProcessor();
return Uni.createFrom().item(graphProcessor)
        // 1) loading of initial data
        .onItem().transformToUni(this::loadDataViaPanaceReactive1)
        .onItem().transformToUni(this::loadDataViaPanaceReactive2)
        .onItem().transformToUni(this::loadDataViaPanaceReactive3)
        // 2) building of graph
        .emitOn(Infrastructure.getDefaultWorkerPool()) // Required for native mode
        .onItem().transform(graphProcessor::processLoadedData)
        .onItem().invoke(graphProcessor::loadingComplete)
        // 3) running time consuming algorithm on graph
        .onItem().transformToMulti(this::runTimeConsumingTask)
        .onItem().invoke(this::prepareDBQueries)
        .emitOn(Infrastructure.getDefaultExecutor()) // Required for native mode
        // 4) load more data from DB
        .onItem().transformToUniAndConcatenate(this::loadMoreData1)
        .onItem().transformToUniAndConcatenate(this::loadMoreData2)
        .onItem().transformToUniAndConcatenate(this::transformToPublicForm)
        .onFailure().invoke(log::error);
That worked in native mode, but now when I run it on the JVM I get the same exception (Detected use of the reactive Session from a different Thread than the one which was used to open the reactive Session).
The emitOn(Infrastructure.getDefaultExecutor()) should have switched back to the original thread.
The odd thing is that this exception is not thrown every time I hit the app.
So what am I doing wrong here? What is the best way to handle a time-consuming task that is followed by more DB queries?
I could use .runSubscriptionOn(Executor), but then I would still need to switch back to the original thread for part 4.
Thanks for your help.
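For context on why the second emitOn misbehaves: the Hibernate Reactive session is tied to the Vert.x event-loop thread that opened it, and, as far as I can tell, Infrastructure.getDefaultExecutor() dispatches to a pool rather than guaranteeing a return to that exact thread. The pattern that is actually needed (offload the heavy computation to a worker pool, then hop back to the one thread that owns the session) can be sketched with plain java.util.concurrent executors. This is an illustrative analogy using stdlib types only, not the Mutiny or Hibernate Reactive API; the class and thread names are made up:

```java
import java.util.concurrent.*;

public class ThreadHopDemo {

    static String run() throws Exception {
        // Single-threaded executor standing in for the Vert.x event loop
        // that opened the session: all session work must happen here.
        ExecutorService eventLoop =
                Executors.newSingleThreadExecutor(r -> new Thread(r, "event-loop"));
        // Worker pool standing in for Infrastructure.getDefaultWorkerPool().
        ExecutorService workerPool =
                Executors.newFixedThreadPool(2, r -> new Thread(r, "worker"));
        try {
            return CompletableFuture
                    // 1) "load data" on the event loop
                    .supplyAsync(() -> "load@" + Thread.currentThread().getName(), eventLoop)
                    // 2)+3) heavy computation hops to the worker pool (like emitOn)
                    .thenApplyAsync(s -> s + "|compute@" + Thread.currentThread().getName(), workerPool)
                    // 4) hop back to the SAME single thread before touching the session again
                    .thenApplyAsync(s -> s + "|query@" + Thread.currentThread().getName(), eventLoop)
                    .get();
        } finally {
            eventLoop.shutdown();
            workerPool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // The final stage runs on "event-loop" again, not on an arbitrary pool thread.
        System.out.println(run());
    }
}
```

The key point is that step 4 is re-dispatched to the same single-threaded executor that ran step 1, which is what the session's thread check demands; a shared default pool cannot make that guarantee.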
I'm currently running batch predictions on Vertex AI with a custom FastAPI container and manualBatchTuningParameters set to {"batch_size": 2}. My JSONL file contains 646 predictions, which mostly succeed except for a few that result in the following error:
('Post request fails. Cannot decode the prediction response
...<long and seemingly valid json>...
Error: Unterminated string starting at: line 1 column 97148 (char 97147)', 2)
Based on the consistent position (char 97147) of the character in the error, it seems like the response is being truncated before the stream is completely received by the batch "airflow worker". Given that TCP is a streaming protocol, I believe the batch interface is only receiving a portion of the buffers.
I've attempted to reproduce the error by deploying the same model as a Vertex endpoint and requesting the same predictions that errored in batch mode.
Why am I occasionally getting this error?
I am using JMeter to create a functional automation suite for our application under test (right now this is the only tool I can think of that supports interaction with ActiveMQ, a database, and both REST and SOAP APIs, which are our needs).
Down the line I will have different test sets and configuration files for the application under test.
Below is the process I will follow to test:
1) Stop the application
2) Load a particular configuration file
3) Start the application
4) Run the tests that match the loaded config
Repeat the same for the other configurations.
Now every test case comes with steps, like:
1) Call a Rest API
2) Call a Rest API
3) Call DB
4) Validate the result from step 2
See the attached image for more details on how my test case is organized.
Problem:
When the report is generated, it is not generated at the thread group level but at the sampler level, i.e. the report contains one line per sampler, and there is no way to tell which TC (or thread group) and test set they belong to.
Can someone please suggest how I can achieve this?
Please keep the following in mind:
1) Down the line I will have multiple test sets.
2) I will also need to merge all the reports from the multiple test sets and create one single report that gives a clear picture of what passed/failed, and probably the error message received.
Existing Report :
timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success,failureMessage,bytes,sentBytes,grpThreads,allThreads,URL,Latency,IdleTime,Connect
1565180794011,2067,DeactiveExistingActiveScenario,Non HTTP response code: org.apache.http.conn.HttpHostConnectException,"Non HTTP response message: Connect to localhost:1 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused: connect",TC1_Probe_MbaWmcOutboundHappyFlowScenario 1-1,text,false,Test failed: code expected to contain /200/,2738,0,1,1,http://localhost:1/XXX/XXX/XXXX,0,0,2067
1565180796093,2007,ActiveMbaWmcOutboundHappyFlowScenario,Non HTTP response code: org.apache.http.conn.HttpHostConnectException,"Non HTTP response message: Connect to localhost:1 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused: connect",parallel bzm - Parallel Controller,text,false,Test failed: text expected to contain /All 25 invocations validated successful./,3104,0,2,1,http://localhost:1/XXX/XXX/XXX?awaitSeconds=30,0,0,2007
1565180796092,2479,Call DB Procedure,200,OK,parallel bzm - Parallel Controller,text,true,,42,0,1,1,null,2478,0,390
Expected:
Probably the same report in a different format, like:
Test Set 1:
    TC1:
        Step 1:
        Step 2:
        Step 3:
    TC2:
        Step 1:
        Step 2:
        Step 3:
Current Test Set Structure :
https://ibb.co/F4SVHxq
Two approaches I can think of:
1) Use a Transaction Controller: put all requests of one test case under one Transaction Controller. The controller generates an extra line in the report after its samplers, so you get the steps first and then the test case name.
2) Use a Dummy Sampler for the test set to produce the extra label.
Here TC1 and TC2 are Dummy Samplers. Based on the above, you can use test set and test case labels according to your need: Test Set 1 (dummy), TC1 (dummy), Step 1, Step 2 and so on.
This assumes a functional test with 1 thread.
Hope this helps.
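On the merging requirement (one combined report across test sets): once the threadName column carries the test case name, as in the sample CSV rows in the question, grouping the merged JTL rows back into a TC/step hierarchy is a small post-processing step. A minimal sketch, assuming each row is reduced to {label, threadName, success} and that JMeter's " 1-1" thread-number suffix is the only decoration to strip; the class and method names are illustrative:

```java
import java.util.*;

public class JtlGrouper {

    // Groups JTL rows by test case, where each row is {label, threadName, success}.
    // JMeter appends a " <group>-<thread>" suffix to thread names (e.g. "TC1 1-1"),
    // which is stripped here to recover the test case name.
    static Map<String, List<String>> groupByTestCase(List<String[]> rows) {
        Map<String, List<String>> byTc = new LinkedHashMap<>();
        for (String[] row : rows) {
            String label = row[0], threadName = row[1], success = row[2];
            String tc = threadName.replaceAll(" \\d+-\\d+$", "");
            byTc.computeIfAbsent(tc, k -> new ArrayList<>())
                .add(label + " : " + ("true".equals(success) ? "PASS" : "FAIL"));
        }
        return byTc;
    }

    public static void main(String[] args) {
        // Rows as they would be read from one or more merged JTL/CSV files.
        List<String[]> rows = List.of(
                new String[]{"Call Rest API", "TC1 1-1", "true"},
                new String[]{"Call DB", "TC1 1-1", "false"},
                new String[]{"Call Rest API", "TC2 1-1", "true"});
        groupByTestCase(rows).forEach((tc, steps) -> {
            System.out.println(tc + " :");
            steps.forEach(step -> System.out.println("    " + step));
        });
    }
}
```

Concatenating the CSV rows from several test set runs before grouping would give the single merged report asked for; the failureMessage column could be appended to each FAIL line in the same loop.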
The goal is to draw a graph using D3 (v3) in a Web Worker (Rickshaw would be even better).
Requirement #1:
The storage space for the entire project should not exceed 1 MB.
Requirement #2:
Internet Explorer 10 should be supported
I already tried passing the DOM element to the Web Worker.
This produced the following error message:
DOMException: Failed to execute 'postMessage' on 'Worker': HTMLDivElement object could not be cloned.
var worker = new Worker('worker.js');
worker.postMessage({
    'chart': document.querySelector('#chart').cloneNode(true)
});
The GitHub user chrisahardie has made...
a small proof of concept showing how to generate a d3 SVG chart in a
web worker and pass it back to the main UI thread to be injected into
a webpage.
https://github.com/chrisahardie/d3-svg-chart-in-web-worker
He bundled jsdom for the browser with Browserify.
The problem:
The script is almost 5 MB, which is far more than the application's storage budget allows.
So my question:
Does anyone have experience in solving the problem or has any idea how the problem can be solved and the requirements can be met?
Web Workers don't have access to the following JavaScript objects: the window object, the document object, and the parent object. So all we could do on the worker side would be to build something that can be used to quickly create the DOM. The worker(s) could e.g. process the datasets and do all the heavy computations, then pass the result back as a set of arrays. For more details, you could check this article and this sample.
I am using the Microsoft Outlook REST API to synchronize messages in a folder using skipTokens with the Prefer: odata.track-changes header.
After 62 successful rounds of results, I get an error 500 ErrorInternalServerError with the message Unable to cast object of type 'LegacyPagingToken' to type 'Microsoft.Exchange.Services.OData.Model.SkipToken'
I have tried:
- Retrying the same query (https://outlook.office.com/api/v2.0/me/MailFolders/Inbox/messages/?%24skipToken=1BWUA9eXs5dN89tPsr_FOvtzINQAA0Cwk5o), which results in the same error
- Restarting the sync, which results in the same error at the same point
- Adding a new message to the Inbox and restarting the sync, which results in the same error at the same point
- Moving the messages from that part of the sync to another folder (in case the messages themselves were causing the problem), which results in the same error at the same point
Has anybody run into this error or have suggestions on what might cause it or workarounds?
It looks like the issue was on my end, in parsing the skipToken out of the @odata.nextLink response. The token in the original question is invalid: the actual skipToken passed back from the API had -AAAA on the end. After 63 queries, in which the skipToken increments, the Base64-encoded form started to include characters that the \w regexp I was using didn't match. Switching from a \w regexp to a proper URL parser solved the problem.
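To make the failure mode concrete, here is a sketch of the two parsing strategies in Java. The token value is the one quoted above with the -AAAA ending restored, as described; the class and helper names are illustrative:

```java
import java.net.URI;
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SkipTokenParsing {

    // Buggy approach: \w matches [A-Za-z0-9_] only, so the token is silently
    // truncated at the first '-' (or any other URL-safe Base64 character).
    static String viaRegex(String nextLink) {
        Matcher m = Pattern.compile("skipToken=(\\w+)").matcher(nextLink);
        return m.find() ? m.group(1) : null;
    }

    // Robust approach: split the raw query string and URL-decode key and value.
    static String viaUrlParser(String nextLink) {
        for (String pair : URI.create(nextLink).getRawQuery().split("&")) {
            int eq = pair.indexOf('=');
            String key = URLDecoder.decode(pair.substring(0, eq), StandardCharsets.UTF_8);
            if (key.equals("$skipToken")) {
                return URLDecoder.decode(pair.substring(eq + 1), StandardCharsets.UTF_8);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String nextLink = "https://outlook.office.com/api/v2.0/me/MailFolders/Inbox/messages/"
                + "?%24skipToken=1BWUA9eXs5dN89tPsr_FOvtzINQAA0Cwk5o-AAAA";
        System.out.println(viaRegex(nextLink));     // truncated before "-AAAA"
        System.out.println(viaUrlParser(nextLink)); // full token, "-AAAA" included
    }
}
```

The regex version works for the first 62 tokens by luck, exactly matching the symptom described: it only breaks once the incrementing token happens to contain a non-word character.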