While constructing a SPARQL query I am getting the following error. I don't understand whether there is a limitation on querying or not.
Response:
Virtuoso 42000 Error The estimated execution time 253 (sec) exceeds the limit of 240 (sec).
This means the query you are running would take longer to process than the endpoint allows. If you are using the public online endpoint, you can't do much besides setting a LIMIT or filtering (FILTER) your data even more so the query finishes within the allowed time. Alternatively, you can download a dataset dump and run the query on your own system.
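As a hedged sketch, assuming a Virtuoso/DBpedia-style endpoint (the prefix, graph pattern and variable names below are only placeholders), paging the query with LIMIT/OFFSET usually keeps each request under the estimated-time threshold:

    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?s ?label
    WHERE {
      ?s rdfs:label ?label .
      FILTER (lang(?label) = "en")
    }
    LIMIT 1000
    OFFSET 0

Increasing OFFSET by the LIMIT on each request lets you fetch the full result set in chunks, each of which stays below the 240-second limit.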
I tried one API with 100 users. For 50 users I am getting a success response, but for the remaining 50 I am getting a 500 Internal Server Error. How can half of the API calls alone be failing? Please suggest a solution.
As per the 500 Internal Server Error description:
The HyperText Transfer Protocol (HTTP) 500 Internal Server Error server error response code indicates that the server encountered an unexpected condition that prevented it from fulfilling the request.
So you need to look at your server logs to find the reason for the failure. Most probably the server becomes overloaded and cannot handle 100 users. Try increasing the load gradually and inspect the relationship between:
Number of users and number of requests per second
Number of users and response time
My expectation is that:
In the first phase of the test, the response time will remain the same and the number of requests per second will grow proportionally to the number of users.
At some stage you will see that the number of requests per second stops growing. The point right before that is known as the saturation point.
After that, the response time will start growing.
After that, errors will start occurring.
You might want to collect and report the aforementioned metrics and indicate what the current bottleneck is. If you need to understand the underlying reason, that is a whole different story.
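As a minimal sketch for checking the server side, assuming an Apache/Nginx-style combined access log (the log path and field position are assumptions about your setup), you can count how many requests came back as 500 during the test:

    # status code is field 9 in the combined log format (an assumption about your setup)
    awk '$9 == 500' /var/log/nginx/access.log | wc -l

Comparing that count against JMeter's own error count helps confirm whether the failures really originate on the server side.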
I'm working with NiFi and PutDatabaseRecord to insert records into tables. I am simulating the case where the database is down in order to handle the error (for example, to send a mail indicating a connection timeout). The problem is that when I disconnect the network cable to simulate the error and start PutDatabaseRecord, the flowfiles do not pass to either the failure or the retry relationship, and the processor keeps emitting bulletin error messages continually; it never stops sending them.
I set the Max Wait Time property to 10 seconds in the hope that after that time the processor would stop throwing errors and send the flowfiles to the failure relationship, but it does not work.
I think the option does not work the way you expect. See the PutDatabaseRecord documentation:
Max Wait Time: The maximum amount of time allowed for a running SQL statement, zero means there is no limit. Max time less than 1 second will be equal to zero.
Supports Expression Language: true (will be evaluated using variable registry only)
Since you are using the PutDatabaseRecord processor, it assumes the database connection has already been established successfully. Errors routed by this processor should be related to the SQL itself, not to connection problems, so a database connection failure is not going to the failure relationship, I guess.
1) I cannot understand the Error % column in the summary result listeners. 2) For example, the first time I run a test plan its Error % is 90%, and when I run the same test plan again it shows 100% error. This Error % varies every time I run my test plan.
Error% denotes the percent of requests with errors.
100% error means all the requests sent from JMeter have failed.
You should add a View Results Tree listener and then check the individual requests and responses. Such a high percentage of errors means that either your server is not available or all of your requests are invalid.
So use the View Results Tree listener to identify the actual issue.
Error % means how many requests failed or resulted in an error throughout the test duration. It is calculated based on the # Samples field.
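As a worked example: if 36 out of 40 samples fail, Error % = 36 / 40 = 90%; if all 40 samples fail, it shows 100%.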
Regarding 2 and 3: can you please give more details about your test plan, like the number of threads, ramp-up and duration?
Such a high error percentage needs further analysis. Check whether you have missed correlating some requests, i.e. any dynamic values that are passed from one request to another, and check the resource utilization of your target system to see whether it can handle the load you are generating.
I am new to performance testing. I am using JMeter. I have 15 transactions in my test plan and I am running the script for 40 loops. Some of the transactions fail 2 or 3 times out of the 40 loops. What could be the possible reasons for the failures?
Is there anything wrong on the scripting side?
Do I need to use think time in the script to avoid these errors?
You need to check your target server's utilization, such as CPU, memory and I/O. Check whether the errors are related to resource exhaustion.
Check the target server logs for further investigation.
If you suspect that the script is at fault, make sure you have correlated all the dynamic values. But since, as you say, the transactions fail only 2-3 times out of 40, the JMeter script itself is most likely working fine.
Adding think time may help reduce the errors, but you should design your performance test to reflect the real-life scenario and add think time accordingly.
I have implemented an AgentX subagent using mib2c.create-dataset.conf (with cache enabled).
In my snmpd.conf: agentXTimeout 15
In the testtable.h file I have changed the cache timeout value as below:
#define testTABLE_TIMEOUT 60
According to my understanding, it reloads the data every 60 seconds.
Now my issue is that if the data in the table exceeds a certain amount, it takes a noticeable amount of time to load.
If I fire an SNMPWALK while that load is in progress, it gives me "no response from the host". Likewise, if I walk the whole table and testTABLE_TIMEOUT expires partway through, the walk stops in the middle and shows the same error (no response from the host).
Please tell me how to solve this. My table contains a large amount of data and it changes frequently.
I read somewhere:
(when the agent receives a request for something in this table and the cache is older than the defined timeout (12s > 10s), then it does re-load the data. This is the expected behaviour.
However the agent does not automatically release the local cache (i.e. call the 'free' routine) as soon as the timeout has expired.
Instead this is handled by a regular "garbage collection" run (once a minute), which will free any stale caches.
In the meantime, a request that tries to use that cache will spot that it's expired, and reload the data.)
Is there any connection between these two? I can't quite make sense of it. How do I resolve my problem?
Unfortunately, if your data set is very large and it takes a long time to load then you simply need to suffer the slow load and slow response. You can try and load the data on a regular basis using snmp_alarm or something so it's immediately available when a request comes in, but that doesn't really solve the problem either since the request could still come right after the alarm is triggered and the agent will still take a long time to respond.
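A minimal sketch of that pre-loading idea, assuming a hypothetical testTable_load_cache() routine (the name is an assumption; in mib2c-generated code it would be whatever your cache-load hook is called):

    #include <net-snmp/net-snmp-config.h>
    #include <net-snmp/net-snmp-includes.h>
    #include <net-snmp/agent/net-snmp-agent-includes.h>

    /* hypothetical routine that (re)builds the table data; in mib2c-generated
     * code this would be your existing cache-load hook */
    extern void testTable_load_cache(void);

    /* snmp_alarm callback: refresh the table before a request needs it */
    static void
    refresh_testTable(unsigned int clientreg, void *clientarg)
    {
        testTable_load_cache();
    }

    void
    init_testTable_refresh(void)
    {
        /* re-load every 50 seconds, i.e. just before the 60-second
         * testTABLE_TIMEOUT expires; SA_REPEAT keeps the alarm recurring */
        snmp_alarm_register(50, SA_REPEAT, refresh_testTable, NULL);
    }

As noted above, this only narrows the window: a walk that arrives while the reload is still running will still see a slow response.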
So... the best thing to do is optimize your load routine as much as possible, and possibly simply increase the timeout that the manager uses. For snmpwalk, for example, you might add -t 30 to the command line arguments and I bet everything will suddenly work just fine.
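Concretely, a hedged example of that manager-side change (the host, community string and table OID below are placeholders):

    # wait up to 30 seconds for each response instead of the default 1 second
    snmpwalk -v2c -c public -t 30 192.0.2.1 TEST-MIB::testTable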