CloudKit: slow connection to the container database the first time - performance

My question is about CloudKit and the delay I see when I run a CKQueryOperation to fetch records from the public database.
I have run a lot of tests and, definitely, this only happens when I run the query for the first time or when I haven't used the app for a long time. In that situation, when I run the query I have to wait several seconds before getting records. But if I repeat the request a couple of seconds later (or if I cancel the first one and launch it again), then everything is fast and perfect.
Does CloudKit have any "cache" for queries already launched, so that the next one (in the short term) is faster? Or is there something about establishing the connection the first time, after which the connection is kept alive?
I really have tried a lot of things and the result is always the same.
Does anyone have any clue about this behaviour?

Related

Does JMeter show the correct average response time for the first page it hits for many virtual users?

I'm load testing a system with 500 virtual users. I've set the "Ramp-Up period (in seconds)" option to zero. So, as I understand it, JMeter will hit the system with all 500 virtual users at the same time. Please correct me if I'm wrong here.
Now, the summary report shows that the average response time for the first page is ~100 seconds, which is more than a minute and a half of wait time. But while JMeter was running, I manually went to the same page/URL using a browser and didn't have to wait anywhere near that long; the page response was almost immediate for me.
My question is: is there any known issue with the average response time of the first page? Is it JMeter that is taking so long to start that many users?
Thanks in advance.
--Ishtiaque
There is no issue in JMeter related to first-page response time.
The Summary Report shows all response time details in milliseconds; for your value of "100" seconds, have you converted milliseconds to seconds?
Also, in order to make sure that all 500 users hit concurrently, use a Synchronizing Timer.
Hope this helps.
While the response times will be accurate, you need to consider the effect of starting so many threads at once on both your server and your client.
Starting 500 threads at once is not insignificant on the client. If your server has the connections, it will start 500 threads as well.
Ramping up over a period of time is more realistic load-wise, but still not really indicative of server capability until the threads have all started and settled in.
Databases can also require a settling-in period, which can affect response times.
An alternative to ramping is introducing a random wait at the start of each thread before it fires its first sample (see the sketch below). You can then choose not to ramp over time, but you should still expect client resources to come suddenly under load, and change the settings if you hit limits. This makes the entire run much more representative of typical behaviour. However, you need to determine whether your use cases are typical.
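Outside JMeter, the idea of a per-user random start delay looks roughly like this; a toy Python sketch, where send_first_sample() is a hypothetical stand-in for the real first request:

# Toy illustration of the "random start delay" alternative to ramping:
# every virtual user exists from time zero, but each fires its first
# request at a random point inside the start window.
import random
import threading
import time

USERS = 500
START_WINDOW = 60.0  # seconds over which the first requests are spread

def send_first_sample(user_id):
    print(f"user {user_id} fires its first request")  # stand-in for real work

def virtual_user(user_id):
    time.sleep(random.uniform(0, START_WINDOW))
    send_first_sample(user_id)

threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()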
Although the heap size was increased, I noticed the reported time was still longer than the actual response time. Later I realised it was the probe effect (the extra time the tool itself adds during test execution).

Repeated tasks - spawn new processes or run continuously?

We have about 10 different Python scripts that download data from the web, read data from a database and write data back to that database. They do so repeatedly every 10 seconds (or 10 seconds after the last task has completed).
The question is, what is the best approach to running these tasks? I can think of a few ways:
a while True loop that runs the task and then sleeps for the interval (a minimal sketch of this follows the list). It could be guarded by a watchdog like supervisord, making sure it is always up.
having the script execute the task just once, and having another process invoke the script externally once every 10 seconds.
having the script execute the task for, let's say, 1 hour (every 10 seconds for an hour), and having a watchdog make sure the task runs again once the hour is over.
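A minimal sketch of the first option, assuming a hypothetical run_task() that stands in for the download/query/insert work:

# Option (a): one long-running process; a watchdog like supervisord
# restarts it if it ever dies. run_task() is a hypothetical stand-in.
import time

def run_task():
    ...  # fetch from the web, read from and write to the database

def main():
    while True:
        run_task()
        time.sleep(10)  # 10 seconds after the last task has completed

if __name__ == "__main__":
    main()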
I would like to avoid long-running processes that actually do something, because I don't want to deal with memory problems, etc., over long periods of time.
Additional Information
The scripts are different because they each retrieve data from a different source, and query, calculate and insert different data into the database.
The tasks are performed every 10 seconds because the data being retrieved is real-time, and we need to not only keep updating it very frequently but also keep all the historical data in the database.
There are a lot of resources being used by the scripts: MySQL connections, HTTP connections, Redis connections, etc. We have encountered issues with the long-running approach before, specifically with MySQL connections (things like "MySQL server has gone away", even though all connections had been closed). Hence the inclination toward having the scripts run for shorter periods of time.
What are some common approaches to this?
Unless your scripts somehow leak memory (quite unlikely), they should all be the same. So, for sheer simplicity (your time programming/debugging is much more expensive than a few milliseconds of the machine's time, even every 10 seconds!) I'd go for the single script that checks every 10 seconds.
OTOH, checking every 10 seconds sounds like busywork. Can't you set things up so that whatever you are monitoring tells you when there are changes? Or batch the records up so you can retrieve, say, a day's worth at a time?
If you are running on Linux, cron has a granularity of one minute. We have processes we run constantly. Rather than watch them, the script opens a semaphore (a lock) that gets released when the program finishes, normally or not. That way, if a run takes too long and cron calls the script again, the new copy exits when it can't get the lock (a Python sketch of this pattern follows). You can call it as often as you need to without it stepping on a possibly still-running copy.
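One way to implement that lock in Python, assuming a POSIX system (fcntl is Unix-only) and a hypothetical task():

# Cron-friendly single-instance guard: if a previous copy is still
# running, the new copy fails to acquire the lock and exits at once.
# The lock is released automatically when the process exits, normally or not.
import fcntl
import sys

def task():
    ...  # the actual work goes here

if __name__ == "__main__":
    lockfile = open("/tmp/mytask.lock", "w")
    try:
        fcntl.flock(lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        sys.exit(0)  # another copy holds the lock; let it finish
    task()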

Ajax request delay 1 second

This time my problem is the delay between an AJAX request and the PHP file's response. I have checked it with Google Chrome's network statistics, and it shows an almost exact 1-second wait time every time the function loops and sends an AJAX GET request. This makes my script pretty much unusable, because it slows down so much that the browser gets unresponsive until it executes the full loop.
I tried removing all MySQL queries to exclude MySQL as the problem, and the delay still existed. I'm pretty sure it's not MySQL taking so long to execute.
Does anyone have any idea what may be causing this delay? Maybe it's some setting on my PC, or something with AJAX?
Thank You
YES! I've figured it out. Apparently my MySQL connection was taking its time; the reason was that I used localhost to initiate the connection instead of 127.0.0.1. Now it's much, MUCH faster! :D No reason to blame AJAX; simply using 127.0.0.1 did the trick.
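A likely explanation (an assumption on my part, but a common one): "localhost" can resolve to the IPv6 address ::1 first, and if the MySQL server is only listening on IPv4, each new connection waits for the IPv6 attempt to fail before falling back, which shows up as a roughly fixed per-request delay. A quick Python check of what each name resolves to for the MySQL port:

# Show what "localhost" vs "127.0.0.1" resolve to for the MySQL port.
# If "localhost" lists ::1 first and the server only listens on IPv4,
# every new connection can stall while the IPv6 attempt fails.
import socket

for host in ("localhost", "127.0.0.1"):
    infos = socket.getaddrinfo(host, 3306, proto=socket.IPPROTO_TCP)
    print(host, "->", [info[4][0] for info in infos])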

No response from the host: snmpwalk

I have implemented an AgentX subagent using mib2c.create-dataset.conf (with cache enabled).
In my snmpd.conf: agentXTimeout 15
In the testtable.h file I have changed the cache timeout value as below:
#define testTABLE_TIMEOUT 60
According to my understanding, it reloads the data every 60 seconds.
Now my issue is that if the data in the table exceeds a certain amount, it takes a while to load.
If I fire an snmpwalk while that load is in progress, it gives me "no response from the host". Likewise, if I walk the whole table and testTABLE_TIMEOUT expires partway through, the walk stops in the middle with the same error (no response from the host).
Please tell me how to solve this. My table holds a large amount of data, and it changes frequently.
I read somewhere:
(when the agent receives a request for something in this table and the cache is older than the defined timeout (12s > 10s), then it does re-load the data. This is the expected behaviour.
However the agent does not automatically release the local cache (i.e. call the 'free' routine) as soon as the timeout has expired.
Instead this is handled by a regular "garbage collection" run (once a minute), which will free any stale caches.
In the meantime, a request that tries to use that cache will spot that it's expired, and reload the data.)
Is there any connection between these two? I can't quite get this... How do I resolve my problem?
Unfortunately, if your data set is very large and it takes a long time to load then you simply need to suffer the slow load and slow response. You can try and load the data on a regular basis using snmp_alarm or something so it's immediately available when a request comes in, but that doesn't really solve the problem either since the request could still come right after the alarm is triggered and the agent will still take a long time to respond.
So... the best thing to do is optimize your load routine as much as possible, and possibly simply increase the timeout that the manager uses. For snmpwalk, for example, you might add -t 30 to the command line arguments and I bet everything will suddenly work just fine.
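If you drive the walk from a script rather than the command line, the same fix (a longer manager-side timeout) applies. A minimal sketch using pysnmp, with a hypothetical host address and the standard ifTable OID as an example:

# Walk a table with a generous manager-side timeout: the pysnmp
# analogue of adding "-t 30" to snmpwalk. Host address is a placeholder.
from pysnmp.hlapi import (
    CommunityData, ContextData, ObjectIdentity, ObjectType,
    SnmpEngine, UdpTransportTarget, nextCmd,
)

target = UdpTransportTarget(("192.0.2.1", 161), timeout=30, retries=1)

for err_indication, err_status, err_index, var_binds in nextCmd(
        SnmpEngine(),
        CommunityData("public"),
        target,
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.2.2")),  # ifTable, as an example
        lexicographicMode=False):
    if err_indication:
        print(err_indication)
        break
    for vb in var_binds:
        print(vb.prettyPrint())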

Oracle Text: update while index is being synchronized

I have a complex context index that gets synchronized nightly. The process takes some 10 minutes, and any update to this table that touches the indexed column during the synchronization period results in an "ORA-29861: domain index is marked LOADING/FAILED/UNUSABLE" exception. What can I do about it?
I don't think you can do anything else while that thing is synchronizing. If the index update is unavoidable, then you'll need to find a way to queue up the requests and retry, say, every 15 minutes until they go through (a sketch of such a retry loop follows). I would suggest a limit of 3 tries before failing gracefully. I suggest 3 times because if the sync is supposed to take 10 minutes and the update still hasn't worked after 45 minutes, you have bigger fish to fry; might as well fail gracefully rather than retry endlessly against a broken system. Hopefully you don't have so many attempted hits to the database during that period that you end up with a big queue.

You could also see if you can synchronize your app with the window your infrastructure people use for updating the indexes, so that you block these transactions during that same window. I don't know how big your organization is or what your systems are like (if you are running Oracle, you have enough money for something sizable), so you might have scheduling tools that could help. In any case, unless the DBAs stop doing these updates, you'll have to wait until they finish.
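A minimal sketch of that queue-and-retry idea in Python, assuming the cx_Oracle driver; the connection, SQL, and limits here are hypothetical placeholders:

# Retry an update that may hit ORA-29861 while the domain index is
# being synchronized: up to 3 tries, 15 minutes apart, then give up.
import time
import cx_Oracle

MAX_TRIES = 3
RETRY_WAIT = 15 * 60  # 15 minutes between attempts, as suggested above

def update_with_retry(conn, sql, params):
    for attempt in range(1, MAX_TRIES + 1):
        try:
            cur = conn.cursor()
            try:
                cur.execute(sql, params)
                conn.commit()
                return
            finally:
                cur.close()
        except cx_Oracle.DatabaseError as exc:
            error, = exc.args
            # ORA-29861: domain index is marked LOADING/FAILED/UNUSABLE
            if error.code == 29861 and attempt < MAX_TRIES:
                time.sleep(RETRY_WAIT)
            else:
                raise  # out of tries, or a different error: fail gracefully upstream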
