When running a load test with 50 users and a 15-minute steady-state load, the samples do not proceed into the next loop: with a load of 50 users, the first 50 samples in the sample tables show no errors, but all requests after that fail.
On logout we receive this authentication token:
BDT3-CHE8-GKA5-BWA1%7Cd67830e7c46bc1011d76e69de76c59c57c4f5956%7Clin
and in the previous requests the token is
BDT3-CHE8-GKA5-BWA1|d67830e7c46bc1011d76e69de76c59c57c4f5956|lin
We noticed that the pipe (|) character in the previous token is replaced by %7C.
Also, the session ID is generated on the URL launch page, but it is not captured in the JMeter parameters and not used in subsequent requests.
Please provide more insight into this issue, or a possible solution for decoding the token so it can be passed to the next request.
The exception on the logout page is:
java.net.URISyntaxException: Illegal character in query at index 113: http://www.siteunderprogress.com/secure/WorkflowUIDispatcher.jspa?id=17116&action=11&atl_token=BDT3-CHE8-GKA5-BWA1|d67830e7c46bc1011d76e69de76c59c57c4f5956|lin&decorator=dialog&inline=true&_=1422286605586
at java.net.URI$Parser.fail(Unknown Source)
at java.net.URI$Parser.checkChars(Unknown Source)
at java.net.URI$Parser.parseHierarchical(Unknown Source)
at java.net.URI$Parser.parse(Unknown Source)
at java.net.URI.<init>(Unknown Source)
at java.net.URL.toURI(Unknown Source)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.sample(HTTPHC4Impl.java:283)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy.sample(HTTPSamplerProxy.java:74)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1141)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1130)
at org.apache.jmeter.threads.JMeterThread.process_sampler(JMeterThread.java:431)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:258)
at java.lang.Thread.run(Unknown Source)
You have to extract the token from wherever is most convenient (usually from the page) and then use it in all three places.
Use a Regular Expression Extractor post-processor. See:
http://jmeter.apache.org/usermanual/component_reference.html#Regular_Expression_Extractor
When inserting it into the URL, remember to URL-encode it.
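For illustration, here is a minimal Java sketch of that encoding step, assuming the token has already been extracted into a variable (in a real test plan you would apply the same transformation, for example in a JSR223 element, before substituting the value into the URL):

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class TokenEncodeSketch {
    public static void main(String[] args) throws Exception {
        // Token as extracted from the page; raw pipe characters are illegal in a URL query
        String token = "BDT3-CHE8-GKA5-BWA1|d67830e7c46bc1011d76e69de76c59c57c4f5956|lin";

        // Percent-encode it before placing it in the atl_token query parameter
        String encoded = URLEncoder.encode(token, StandardCharsets.UTF_8.name());
        System.out.println(encoded); // pipes become %7C, matching the value seen on logout
    }
}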
We are getting a "Caused by: java.io.InterruptedIOException: timeout" exception in the server logs. However, the server is not returning a response code to us.
I am looking for the standard practice for timeout monitoring in Splunk or AppDynamics, so that we can plot a graph of the number of timeouts received per second.
Should we add an error code such as 408 to the exception on the client side, or should we plot the graph based on counting the text "timeout" over time?
Exception Logs
java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at lambdainternal.LambdaRTEntry.main(LambdaRTEntry.java:150)
Caused by: java.io.InterruptedIOException: timeout
at okhttp3.internal.connection.Transmitter.timeoutExit(Transmitter.kt:104)
at okhttp3.internal.connection.Transmitter.maybeReleaseConnection(Transmitter.kt:293)
at okhttp3.internal.connection.Transmitter.noMoreExchanges(Transmitter.kt:257)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.kt:192)
at okhttp3.RealCall.execute(RealCall.kt:66)
For AppDynamics the ideal solution would be for a "bad error code" to be returned from the server - this would cause an error to be detected (and mark any associated Business Transaction as being in error) - see https://docs.appdynamics.com/22.2/en/application-monitoring/troubleshooting-applications/errors-and-exceptions#ErrorsandExceptions-BusinessTransactionError
Else you can use Custom Error Configuration to set a logger which signals errors, as sketched after these options - see https://docs.appdynamics.com/22.2/en/application-monitoring/configure-instrumentation/error-detection#ErrorDetection-ErrorDetectionConfiguration
Else you can capture values using a Data Collector and then use these in Analytics to break out errors - see https://docs.appdynamics.com/22.2/en/application-monitoring/configure-instrumentation/data-collectors + https://docs.appdynamics.com/22.2/en/analytics/configure-analytics/collect-transaction-analytics-data
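As a sketch of the logger-based approach, assuming SLF4J is on the classpath and using a hypothetical dedicated logger name that the Custom Error Configuration rule would be set to match:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TimeoutReporter {
    // Hypothetical logger; an AppDynamics Custom Error Configuration (or a log-based
    // Splunk search) could count ERROR entries emitted by this logger as timeouts.
    private static final Logger TIMEOUT_LOG = LoggerFactory.getLogger("http.timeouts");

    public static <T> T callWithTimeoutReporting(java.util.concurrent.Callable<T> call) throws Exception {
        try {
            return call.call();
        } catch (java.io.InterruptedIOException e) {
            TIMEOUT_LOG.error("HTTP call timed out", e); // signals the error to the detection rule
            throw e;
        }
    }
}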
Here I use a Parallel Controller, but Parallel Controller2 and Parallel Controller3 don't run and throw errors in the logs. Parallel Controller1 is inside a Transaction Controller. Parallel Controller2 and Parallel Controller3 are inside a Simple Controller.
Here is the error log for Controller2
2021-08-24 17:51:48,819 ERROR o.a.j.t.JMeterThread: Error while processing sampler: 'bzm - Parallel Controller2'.
java.lang.RuntimeException: Attempting to reset the thread context
at org.apache.jmeter.testelement.AbstractTestElement.setThreadContext(AbstractTestElement.java:579) ~[ApacheJMeter_core.jar:5.4.1]
at org.apache.jmeter.threads.JMeterThread.doSampling(JMeterThread.java:623) ~[ApacheJMeter_core.jar:5.4.1]
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:558) ~[ApacheJMeter_core.jar:5.4.1]
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:489) [ApacheJMeter_core.jar:5.4.1]
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:256) [ApacheJMeter_core.jar:5.4.1]
at com.blazemeter.jmeter.controller.JMeterThreadParallel.run(JMeterThreadParallel.java:61) [jmeter-parallel-0.11.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) [?:1.8.0_211]
at java.util.concurrent.FutureTask.run(Unknown Source) [?:1.8.0_211]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:1.8.0_211]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:1.8.0_211]
at java.lang.Thread.run(Unknown Source) [?:1.8.0_211]
How is it possible to execute the calls in parallel inside another parallel controller?
I don't think it's supported; moreover, it's not clear what you're trying to achieve with this.
As per the documentation, the Parallel Controller executes its children in parallel, so instead of going for nested constructions you can just place everything under a single Parallel Controller.
If you need to mimic a parallel call which in turn triggers other parallel calls (although I don't think the use case is valid), you can reach out to the plugin developers and/or maintainers on the jmeter-plugins support forum.
In the meantime, you can use JSR223 Test Elements and the Groovy language to kick off the parallel requests according to your network request pattern, as described in the How to Load Test AJAX/XHR Enabled Sites With JMeter article.
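For reference, here is a minimal sketch of the underlying pattern in plain Java (java.util.concurrent with hypothetical URLs); inside a JSR223 element the same approach is usually written in Groovy:

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;

public class ParallelRequestsSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical URLs standing in for the AJAX/XHR calls to be fired in parallel
        List<String> urls = Arrays.asList(
                "http://example.com/call1",
                "http://example.com/call2",
                "http://example.com/call3");

        ExecutorService pool = Executors.newFixedThreadPool(urls.size());
        List<Future<Integer>> results = pool.invokeAll(urls.stream()
                .map(u -> (Callable<Integer>) () -> {
                    HttpURLConnection con = (HttpURLConnection) new URL(u).openConnection();
                    con.setConnectTimeout(5000);
                    con.setReadTimeout(5000);
                    return con.getResponseCode(); // fire the request, report the status
                })
                .collect(Collectors.toList()));

        for (Future<Integer> f : results) {
            System.out.println("Status: " + f.get());
        }
        pool.shutdown();
    }
}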
I'm working on a custom load function to load data from Bigtable using Pig on Dataproc. I compile my Java code against the following list of jar files, which I grabbed from Dataproc. When I run the Pig script below, it fails when it tries to establish a connection to Bigtable.
Error message is:
Bigtable does not support managed connections.
Questions:
Is there a workaround for this problem?
Is this a known issue and is there a plan to fix or adjust?
Is there a different way of implementing multi scans as a load function for Pig that will work with Bigtable?
Details:
Jar files:
hadoop-common-2.7.3.jar
hbase-client-1.2.2.jar
hbase-common-1.2.2.jar
hbase-protocol-1.2.2.jar
hbase-server-1.2.2.jar
pig-0.16.0-core-h2.jar
Here's a simple Pig script using my custom load function:
%default gte '2017-03-23T18:00Z'
%default lt '2017-03-23T18:05Z'
%default SHARD_FIRST '00'
%default SHARD_LAST '25'
%default GTE_SHARD '$gte\_$SHARD_FIRST'
%default LT_SHARD '$lt\_$SHARD_LAST'
raw = LOAD 'hbase://events_sessions'
USING com.eduboom.pig.load.HBaseMultiScanLoader('$GTE_SHARD', '$LT_SHARD', 'event:*')
AS (es_key:chararray, event_array);
DUMP raw;
My custom load function HBaseMultiScanLoader creates a list of Scan objects to perform multiple scans on different ranges of data in the table events_sessions determined by the time range between gte and lt and sharded by SHARD_FIRST through SHARD_LAST.
HBaseMultiScanLoader extends org.apache.pig.LoadFunc so it can be used as a load function in the Pig script.
When Pig runs my script, it calls LoadFunc.getInputFormat().
My implementation of getInputFormat() returns an instance of my custom class MultiScanTableInputFormat which extends org.apache.hadoop.mapreduce.InputFormat.
MultiScanTableInputFormat initializes an org.apache.hadoop.hbase.client.HTable object to establish the connection to the table.
Digging into the hbase-client source code, I see that org.apache.hadoop.hbase.client.ConnectionManager.getConnectionInternal() calls org.apache.hadoop.hbase.client.ConnectionManager.createConnection() with the attribute "managed" hardcoded to "true".
You can see from the stack trace below that my code (MultiScanTableInputFormat) tries to initialize an HTable object, which invokes getConnectionInternal(), which does not provide an option to set managed to false.
Going down the stack trace, you get to AbstractBigtableConnection, which does not accept managed=true and therefore causes the connection to Bigtable to fail.
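For context, here is a simplified sketch of the kind of initialization that triggers this; the table name and configuration handling are hypothetical stand-ins for what MultiScanTableInputFormat.setConf() actually does:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class MultiScanTableInputFormatSketch {
    private HTable table;

    // Constructing HTable directly goes through ConnectionManager.getConnectionInternal(),
    // which hardcodes managed=true and is then rejected by the Bigtable connection class.
    public void setConf(Configuration conf) {
        try {
            Configuration hbaseConf = HBaseConfiguration.create(conf);
            table = new HTable(hbaseConf, "events_sessions"); // deprecated constructor, triggers the failure
        } catch (java.io.IOException e) {
            throw new RuntimeException(e);
        }
    }
}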
Here’s the stack trace showing the error:
2017-03-24 23:06:44,890 [JobControl] ERROR com.turner.hbase.mapreduce.MultiScanTableInputFormat - java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:431)
at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:424)
at org.apache.hadoop.hbase.client.ConnectionManager.getConnectionInternal(ConnectionManager.java:302)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:185)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:151)
at com.eduboom.hbase.mapreduce.MultiScanTableInputFormat.setConf(Unknown Source)
at com.eduboom.pig.load.HBaseMultiScanLoader.getInputFormat(Unknown Source)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:264)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:194)
at java.lang.Thread.run(Thread.java:745)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:276)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
... 26 more
Caused by: java.lang.IllegalArgumentException: Bigtable does not support managed connections.
at org.apache.hadoop.hbase.client.AbstractBigtableConnection.<init>(AbstractBigtableConnection.java:123)
at com.google.cloud.bigtable.hbase1_2.BigtableConnection.<init>(BigtableConnection.java:55)
... 31 more
The original problem was caused by the use of outdated and deprecated hbase client jars and classes.
I updated my code to use the newest hbase client jars provided by Google and the original problem was fixed.
I still get stuck with some ZK issue that I still did not figure out, but that's a conversation for a different question.
This one is answered!
I ran into the same error message:
Bigtable does not support managed connections.
However, according to my research, the root cause is that the HTable class cannot be constructed explicitly. After changing the code to obtain the table via connection.getTable(), the problem was resolved.
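A minimal sketch of the working pattern, assuming the same table name as in the question; the key change is obtaining the Table through a Connection instead of constructing HTable directly:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

public class UnmanagedTableSketch {
    public static Table openTable(Configuration conf) throws java.io.IOException {
        // ConnectionFactory creates an unmanaged connection, which the Bigtable client accepts
        Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create(conf));
        return connection.getTable(TableName.valueOf("events_sessions"));
    }
}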
I have a Neo4j database with about 130K nodes and probably around 17M relationships. My computer running Windows 10 has 16GB of RAM, of which 10GB (maximum) is allocated to the Neo4j-shell heap.
I want to run a query using the neo4j-shell from the command prompt and redirect the results to a csv file. The command I’m using for that is:
Neo4jShell -v -file query.cql > results.csv
Where the query is of the form:
MATCH (subject)-[:type1]->(intNode1:label)<-[:type2]-(intNode2:label)<-[:type3]-(object) RETURN subject.property1, object.property1;
The problem is that whenever I run this query, I get an OutOfMemory error (see error message at the bottom).
Does anyone have advice for how to run a query like this successfully? I feel like 10GB of RAM should be enough for a query like this given the size of the graph DB.
The error message I get is:
ERROR (-v for expanded information):
Error occurred in server thread; nested exception is:
java.lang.OutOfMemoryError: GC overhead limit exceeded
java.rmi.ServerError: Error occurred in server thread; nested exception is:
java.lang.OutOfMemoryError: GC overhead limit exceeded
at sun.rmi.server.UnicastServerRef.dispatch(Unknown Source)
at sun.rmi.transport.Transport$2.run(Unknown Source)
at sun.rmi.transport.Transport$2.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Unknown Source)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown Source)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.access$400(Unknown Source)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(Unknown Source)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(Unknown Source)
at sun.rmi.transport.StreamRemoteCall.executeCall(Unknown Source)
at sun.rmi.server.UnicastRef.invoke(Unknown Source)
at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(Unknown Source)
at java.rmi.server.RemoteObjectInvocationHandler.invoke(Unknown Source)
at com.sun.proxy.$Proxy1.interpretLine(Unknown Source)
at org.neo4j.shell.impl.AbstractClient.evaluate(AbstractClient.java:149)
at org.neo4j.shell.impl.AbstractClient.evaluate(AbstractClient.java:133)
at org.neo4j.shell.StartClient.executeCommandStream(StartClient.java:393)
at org.neo4j.shell.StartClient.grabPromptOrJustExecuteCommand(StartClient.java:372)
at org.neo4j.shell.StartClient.startRemote(StartClient.java:330)
at org.neo4j.shell.StartClient.start(StartClient.java:196)
at org.neo4j.shell.StartClient.main(StartClient.java:135)
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
The solution when I had this problem was to add these lines at the end of neo4j.properties:
# Default values for the low-level graph engine
cache_type=none
neostore.nodestore.db.mapped_memory=50M
neostore.relationshipstore.db.mapped_memory=500M
neostore.propertystore.db.mapped_memory=100M
neostore.propertystore.db.strings.mapped_memory=100M
neostore.propertystore.db.arrays.mapped_memory=0M
Also try to increase the Java heap in neo4j-wrapper.conf by uncommenting and raising these lines:
#wrapper.java.initmemory=2048
#wrapper.java.maxmemory=2048
One more thing: do you have an index on the nodes that you query?
You can provide more heap for Neo4jShell (use the JAVA_OPTS=-Xmx4G -Xms4G -Xmn1G environment variable).
Did you try to run your query with PROFILE? I presume you spin up billions of paths, as you don't have any restrictions.
You missed a label on subject and object, which causes the query planner to run a full graph scan.
MATCH (subject:label)-[:type1]->(intNode1:label)
<-[:type2]-(intNode2:label)
<-[:type3]-(object:label)
WITH distinct subject, object
RETURN subject.property1, object.property1;
You should reduce the in-between cardinality and also the output cardinality.
MATCH (subject:label)-[:type1]->(intNode1:label)
<-[:type2]-(intNode2:label)
WITH distinct subject, intNode2
MATCH (intNode2)<-[:type3]-(object:label)
WITH distinct subject, object
RETURN subject.property1, object.property1;
even better would be:
MATCH (subject:label)-[:type1]->(intNode1:label)
<-[:type2]-(intNode2:label)
WITH intNode2, collect(distinct subject) as subjects
MATCH (intNode2)<-[:type3]-(object:label)
WITH distinct object, subjects
UNWIND subjects as subject
RETURN subject.property1, object.property1;
I am getting the error below. Kindly help.
[8/8/14 21:14:56:939 GMT-08:00] 00000005 TimeoutManage I WTRN0006W: Transaction 00000147B92EFAE20000000100000012DF462C9E681BA3670A44A25FE1B0F6182303FB5C00000147B92EFAE20000000100000012DF462C9E681BA3670A44A25FE1B0F6182303FB5C00000001 has timed out after 120 seconds.
[8/8/14 21:14:56:967 GMT-08:00] 00000006 TimeoutManage I WTRN0124I: When the timeout occurred the thread with which the transaction is, or was most recently, associated was Thread[WMQJCAResourceAdapter : 4,5,main]. The stack trace of this thread when the timeout occurred was:
java.net.SocketOutputStream.socketWrite0(Native Method)
java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:103)
java.net.SocketOutputStream.write(SocketOutputStream.java:147)
com.ibm.mq.jmqi.remote.internal.RemoteTCPConnection.send(RemoteTCPConnection.java:1212)
com.ibm.mq.jmqi.remote.internal.system.RemoteConnection.sendTSH(RemoteConnection.java:2289)
com.ibm.mq.jmqi.remote.internal.RemoteHconn.sendTSH(RemoteHconn.java:954)
com.ibm.mq.jmqi.remote.internal.RemoteFAP.jmqiPut(RemoteFAP.java:5443)
com.ibm.mq.jmqi.remote.internal.RemoteFAP.MQPUT(RemoteFAP.java:5205)
com.ibm.msg.client.wmq.v6.base.internal.MQSESSION.MQPUT(MQSESSION.java:1252)
com.ibm.msg.client.wmq.v6.base.internal.MQQueue.putMsg2(MQQueue.java:2090)
com.ibm.msg.client.wmq.v6.jms.internal.MQMessageProducer.sendInternal(MQMessageProducer.java:1262)
com.ibm.msg.client.wmq.v6.jms.internal.MQMessageProducer.send(MQMessageProducer.java:768)
com.ibm.msg.client.wmq.v6.jms.internal.MQMessageProducer.send(MQMessageProducer.java:2713)
com.ibm.msg.client.jms.internal.JmsMessageProducerImpl.sendMessage(JmsMessageProducerImpl.java:872)
com.ibm.msg.client.jms.internal.JmsMessageProducerImpl.send_(JmsMessageProducerImpl.java:727)
com.ibm.msg.client.jms.internal.JmsMessageProducerImpl.send(JmsMessageProducerImpl.java:398)
com.ibm.mq.jms.MQMessageProducer.send(MQMessageProducer.java:281)
com.ibm.ejs.jms.JMSQueueSenderHandle.send(JMSQueueSenderHandle.java:204)
com.scb.sts.stsappserver.sender.MessageSender.sendRecords(Unknown Source)
com.scb.sts.services.PCSPPaymentSplitter.doExecute(Unknown Source)
com.scb.sts.stsappserver.eventhandler.SplitterEventHandler.handleEvent(Unknown Source)
com.scb.sts.services.PCSPPaymentReceiver.doProcess(Unknown Source)
com.scb.sts.services.PCSPPaymentReceiver.doExecute(Unknown Source)
com.scb.sts.controllers.OCWSServlet.doPost(Unknown Source)
com.scb.sts.qlcomm.QLCommBean.processXMLFile(Unknown Source)
com.scb.sts.qlcomm.QLCommBean.isDoOutput(Unknown Source)
com.scb.sts.qlcomm.QLCommBean.onMessage(Unknown Source)
com.ibm.ejs.container.MessageEndpointHandler.invokeMdbMethod(MessageEndpointHandler.java:1093)
com.ibm.ejs.container.MessageEndpointHandler.invoke(MessageEndpointHandler.java:778)
$Proxy32.onMessage(Unknown Source)
com.ibm.mq.connector.inbound.MessageEndpointWrapper.onMessage(MessageEndpointWrapper.java:131)
com.ibm.mq.jms.MQSession$FacadeMessageListener.onMessage(MQSession.java:147)
com.ibm.msg.client.jms.internal.JmsSessionImpl.run(JmsSessionImpl.java:2557)
com.ibm.mq.jms.MQSession.run(MQSession.java:860)
com.ibm.mq.connector.inbound.WorkImpl.run(WorkImpl.java:172)
com.ibm.ejs.j2c.work.WorkProxy.run(WorkProxy.java:399)
com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1604)
The transaction timeout simply means that the transaction was not committed before the timeout expired; in this case, 120s elapsed without a commit.
The stack shows that you're in the onMessage() method of an MDB named QLCommBean, and that this MDB was sending messages via MessageSender.sendRecords(), which in turn called the MQ JMS API:
JMSQueueSenderHandle.send()
The top of the stack is:
java.net.SocketOutputStream.socketWrite0(Native Method)
This means that the active code within the MDB at the time of the transaction timeout was a socket write (sending data over the network). In this case, MQ was sending a message to the queue manager.
The transaction timeout itself is not a bug. You need to review the MDB logic and determine if 120s is an appropriate amount of time to be in the MDB. If it isn't, I suggest you add logging to your MDB to find out what it was doing for 120s. It may be that the MQ code has used up a lot of this time, but it may not be. The stack shown is just where the code happened to be 120s after onMessage() was invoked.
As MQ was in the process of sending data over the network to the queue manager, you may want to look at your network to see if it's performing adequately, or possibly at your queue manager; it might be heavily loaded.
If this occurs regularly, one good option is to take a number of javacores over the course of the 120s. You can then see what the stack was at various points.
Otherwise I suggest:
1) Instrument your MDB so you know which code was executed and at what time; only this will rule out your MDB logic (see the sketch after this list).
2) Consider your network
3) Possibly trace your queue manager & the MQ JMS code - you may need IBM's help to determine if the time taken by the IBM code is appropriate
4) If 120s is an acceptable length of time for onMessage(), consider increasing the transaction timeout value to a value greater than the maximum time you consider to be acceptable for onMessage().
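A minimal sketch of the kind of timing instrumentation meant in point 1 (the logger name and step labels are hypothetical):

import java.util.concurrent.Callable;
import java.util.logging.Logger;

public class OnMessageTimingSketch {
    private static final Logger LOG = Logger.getLogger("QLCommBean.timing"); // hypothetical logger name

    // Wrap each major step inside onMessage() with this to see where the 120s is spent
    public static <T> T timed(String step, Callable<T> work) throws Exception {
        long start = System.currentTimeMillis();
        try {
            return work.call();
        } finally {
            LOG.info(step + " took " + (System.currentTimeMillis() - start) + " ms");
        }
    }
}

Inside onMessage() you would then wrap calls such as the XML processing and MessageSender.sendRecords() in timed(...), so the log shows which step consumed the bulk of the 120 seconds.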