Speed of NSFileManager copyItemAtPath: is wildly different from Finder in some cases - macOS

I am using NSFileManager to copy a lot of files from one drive to another.
In some cases users report: "The app is unusable. It transfers at 0.33 MB/s on a USB 2 connection; what would take me 10 minutes if I just drag and drop."
I am running this on a background thread - could that be the issue?
secondaryTask = dispatch_queue_create("com.myorg.myapp.task2", NULL);  // NULL attr = serial queue
dispatch_sync(secondaryTask, ^{
    // Note: the manager was declared but never initialized in the original
    // snippet; messages to nil are silently ignored, so nothing is copied.
    NSFileManager *manager = [[NSFileManager alloc] init];
    NSError *error = nil;
    if (![manager copyItemAtPath:sourceFile toPath:filePath error:&error]) {
        NSLog(@"Copy failed: %@", error);
    }
});

This seems to be related to OS X throttling my app. Some users actually see this in the log:
5/9/16 15:26:31.000 kernel[0]: process MyApp[937] thread 36146 caught burning CPU! It used more than 50% CPU (Actual recent usage: 91%) over 180 seconds. thread lifetime cpu usage 90.726617 seconds, (49.587139 user, 41.139478 system) ledger info: balance: 90006865992 credit: 90006865992 debit: 0 limit: 90000000000 (50%) period: 180000000000 time since last refill (ns): 98013987431
So... this is a GCD question... and I've brought it up with Apple directly.

Related

Device storage consumption

Using the Android Management API, I'm trying to collect the device's storage consumption information.
I found some information in memoryInfo and memoryEvents.
In memoryInfo there is an attribute called "totalInternalStorage", and in memoryEvents there is an event of type "INTERNAL_STORAGE_MEASURED".
Questions:
What does the value shown in "totalInternalStorage" mean? Is it the total amount of storage available?
What does the value shown in "INTERNAL_STORAGE_MEASURED" mean? Is it the consumed amount of internal storage?
How is a memoryEvent fired? Can I collect this information at any time, or do I have to wait for Google to collect it on their schedule?
I ran a test and collected the following information:
totalInternalStorage = 0.1 GB
memoryEvents = 4 GB (INTERNAL_STORAGE_MEASURED, 3 days ago)
This information, to me, is very confusing and that's why I need your help.
Thanks
totalInternalStorage in memoryInfo is the measurement of the total storage of the root ("system") partition.
A MemoryEvent returns three values per event: eventType, createTime, and byteCount. In the test you ran, the values you received mean the following:
eventType - INTERNAL_STORAGE_MEASURED means the storage measured was the internal storage, i.e. the read-only system partition
byteCount - 4 GB is the number of free bytes on the medium, i.e. in your internal storage
createTime - 3 days ago is when the event occurred
The memoryInfo measurements are taken asynchronously on the device, either when a change is detected or when there is a periodic refresh of the device status. You can read the latest values every time you call devices.get().
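For completeness, here is a minimal sketch of pulling those fields with the Google API Python client; the service-account key path and the enterprise/device names are placeholders, not values from your setup:

# A minimal sketch, assuming a service account authorized for the
# Android Management API; key file path and device name are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/androidmanagement"]
creds = service_account.Credentials.from_service_account_file(
    "service_account.json", scopes=SCOPES)
service = build("androidmanagement", "v1", credentials=creds)

device = service.enterprises().devices().get(
    name="enterprises/LC00000000/devices/3a1b2c3d4e5f").execute()

# memoryInfo holds the totals; memoryEvents holds individual measurements.
print(device.get("memoryInfo", {}).get("totalInternalStorage"))
for event in device.get("memoryEvents", []):
    print(event.get("eventType"), event.get("createTime"), event.get("byteCount"))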

MongoDB-Java performance with rebuilt Sync driver vs Async

I have been testing MongoDB 2.6.7 for the last couple of months using YCSB 0.1.4. I have captured good data comparing SSD to HDD and am producing engineering reports.
After my testing was completed, I wanted to explore the allanbank async driver. When I got it up and running (I am not a developer, so it was a challenge for me), I first wanted to try the rebuilt sync driver. I found performance improvements of 30-100%, depending on the workload, and was very happy with it.
Next, I tried the async driver. I was not able to see much difference between it and my results with the native driver.
The command I'm running is:
./bin/ycsb run mongodb -s -P workloads/workloadb -p mongodb.url=mongodb://192.168.0.13:27017/ycsb -p mongodb.writeConcern=strict -threads 96
Over the course of my testing (mostly with the native driver), I have experimented with more and fewer threads than 96; turned on "noatime"; tried both xfs and ext4; disabled hyperthreading; disabled half my 12 cores; put the journal on a different drive; changed sync from 60 seconds to 1 second; and checked the network bandwidth between the client and server to ensure it's not oversubscribed (10GbE).
Any feedback or suggestions welcome.
The async move exceeded my expectations. My experience is with the Python sync driver (PyMongo) and the async driver (Motor), and the async driver achieved greater than 10x the throughput. Further, Motor still uses PyMongo under the hood but adds the async capability; that could easily be the case with your allanbank driver as well.
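To illustrate where that throughput comes from, here is a minimal Motor sketch, assuming a local mongod and a hypothetical ycsb.usertable collection; a single event-loop thread keeps a thousand reads in flight at once:

# A sketch of concurrent reads with Motor; the connection string and the
# "ycsb.usertable" collection are assumptions for illustration.
import asyncio
from motor.motor_asyncio import AsyncIOMotorClient

async def main():
    client = AsyncIOMotorClient("mongodb://localhost:27017")
    coll = client.ycsb.usertable
    # One thread drives many in-flight operations instead of parking one
    # blocked thread per outstanding request, as a sync driver would.
    results = await asyncio.gather(
        *(coll.find_one({"_id": "user%d" % i}) for i in range(1000)))
    print(sum(r is not None for r in results), "documents found")

asyncio.run(main())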
Often the dramatic changes come from threading policies and OS configurations.
Async code needn't and shouldn't use any more threads than there are cores on the VM or machine. For example, if your server code is spawning a new thread per incoming connection, then all bets are off; start by looking at the way the driver is being utilized. A 4-core machine should use <= 4 incoming threads.
On the OS level, you may have to fine-tune parameters like net.core.somaxconn, net.core.netdev_max_backlog, fs.file-max, and the nofile limits in /etc/security/limits.conf; the best place to start is nginx-related performance guides, since nginx is the server that spearheaded, or at least caught the attention of, many Linux sysadmin enthusiasts. Contrary to popular lore, you should shorten your keepalive timeout rather than lengthen it: the default keep-alive timeout is some absurd number of seconds (4 hours), and you might want to cut the cord at 1 minute. Basically, think short, sweet relationships with your clients' connections.
Bear in mind that Mongo itself is not async, so use a Mongo driver pool. Nevertheless, don't let the driver get stalled on slow queries; cut them off after 5 to 10 seconds using the Java equivalents of the following PyMongo settings. I'm just cutting and pasting here, with no recommendations:
# Specifies a time limit for a query operation. If the specified time is exceeded, the operation will be aborted and ExecutionTimeout is raised. If max_time_ms is None no limit is applied.
# Raises TypeError if max_time_ms is not an integer or None. Raises InvalidOperation if this Cursor has already been used.
CONN_MAX_TIME_MS = None
# socketTimeoutMS: (integer) How long (in milliseconds) a send or receive on a socket can take before timing out. Defaults to None (no timeout).
CLIENT_SOCKET_TIMEOUT_MS=None
# connectTimeoutMS: (integer) How long (in milliseconds) a connection can take to be opened before timing out. Defaults to 20000.
CLIENT_CONNECT_TIMEOUT_MS=20000
# waitQueueTimeoutMS: (integer) How long (in milliseconds) a thread will wait for a socket from the pool if the pool has no free sockets. Defaults to None (no timeout).
CLIENT_WAIT_QUEUE_TIMEOUT_MS=None
# waitQueueMultiple: (integer) Multiplied by max_pool_size to give the number of threads allowed to wait for a socket at one time. Defaults to None (no waiters).
CLIENT_WAIT_QUEUE_MULTIPLY=None
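As a concrete illustration, a hedged PyMongo sketch applying those knobs; the timeout values are examples, not recommendations:

# Assumes the YCSB server from the question; the 5-10 s cutoffs are
# illustrative values only.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://192.168.0.13:27017",
    socketTimeoutMS=10000,    # abort sends/receives that stall past 10 s
    connectTimeoutMS=20000,   # the default connection-open timeout
    waitQueueTimeoutMS=5000,  # don't wait forever for a pooled socket
    maxPoolSize=100)

# max_time_ms caps server-side execution per query (CONN_MAX_TIME_MS above).
docs = list(client.ycsb.usertable.find().max_time_ms(5000))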
Hopefully you will have the same success. I was ready to bail on Python before going async.

eBay API error : You have exceeded your maximum call limit

I have a table of eBay item IDs, and for each ID I want to make a ReviseItem call, but from the second call onward I get the following error:
You have exceeded your maximum call limit of 3000 for 5 seconds. Try back after 5 seconds.
NB: I have just 4 calls.
How can I fix this problem?
eBay counts the calls per second per unique IP. So please make sure all the calls from your application stay under 3000 per 5 seconds. Hope this helps.
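If you do trip the limit anyway, one client-side option is to wait out the window the error names and retry. A minimal sketch, where RateLimitError is a hypothetical exception your own API wrapper raises on this error:

import time

class RateLimitError(Exception):
    """Hypothetical: raised by your API wrapper on a call-limit error."""

def with_rate_limit_retry(call, max_attempts=5, wait_seconds=5):
    # The error text says "Try back after 5 seconds", so back off at least
    # that long, and a little longer on each retry.
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitError:
            time.sleep(wait_seconds * (attempt + 1))
    raise RuntimeError("still rate-limited after %d attempts" % max_attempts)

You would wrap each call, e.g. with_rate_limit_retry(lambda: revise_item(item_id)), where revise_item is a stand-in for whatever function performs your ReviseItem request.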
I have just finished an eBay project, and this error can be misleading. eBay allows a certain number of calls a day, and if you exceed that amount in one 24-hour period you can get this error. You can get this amount increased by completing an Application Check form: http://go.developer.ebay.com/developers/ebay/forums-support/certification
The eBay Trading API, to which your ReviseItem call belongs, allows up to 5000 calls per 24 hour period for all applications, and up to 1.5M calls / 24hrs for "Compatible Applications", i.e. applications that have undergone a vetting process called "Compatible Application Check". More details here: https://go.developer.ebay.com/developers/ebay/ebay-api-call-limits
However, that's just the generic "Aggregate" call limit. There are different limits for specific calls, some of which are more liberal (AddItem: 100,000 / day) and others more strict (SetApplication: 50 / day). Additionally, there are hourly and periodic limits.
You can find out any application's applicable limits by executing the GetApiAccessRules call:
<GetApiAccessRulesResponse xmlns="urn:ebay:apis:eBLBaseComponents">
<Timestamp>2014-12-02T13:25:43.235Z</Timestamp>
<Ack>Success</Ack>
<Version>889</Version>
<Build>E889_CORE_API6_17053919_R1</Build>
<ApiAccessRule>
<CallName>ApplicationAggregate</CallName>
<CountsTowardAggregate>true</CountsTowardAggregate>
<DailyHardLimit>5000</DailyHardLimit>
<DailySoftLimit>5000</DailySoftLimit>
<DailyUsage>10</DailyUsage>
<HourlyHardLimit>6000</HourlyHardLimit>
<HourlySoftLimit>6000</HourlySoftLimit>
<HourlyUsage>0</HourlyUsage>
<Period>-1</Period>
<PeriodicHardLimit>10000</PeriodicHardLimit>
<PeriodicSoftLimit>10000</PeriodicSoftLimit>
<PeriodicUsage>0</PeriodicUsage>
<PeriodicStartDate>2006-02-14T07:00:00.000Z</PeriodicStartDate>
<ModTime>2014-01-20T11:20:44.000Z</ModTime>
<RuleCurrentStatus>NotSet</RuleCurrentStatus>
<RuleStatus>RuleOn</RuleStatus>
</ApiAccessRule>
<ApiAccessRule>
<CallName>AddItem</CallName>
<CountsTowardAggregate>false</CountsTowardAggregate>
<DailyHardLimit>100000</DailyHardLimit>
<DailySoftLimit>100000</DailySoftLimit>
<DailyUsage>0</DailyUsage>
<HourlyHardLimit>100000</HourlyHardLimit>
<HourlySoftLimit>100000</HourlySoftLimit>
<HourlyUsage>0</HourlyUsage>
<Period>-1</Period>
<PeriodicHardLimit>0</PeriodicHardLimit>
<PeriodicSoftLimit>0</PeriodicSoftLimit>
<PeriodicUsage>0</PeriodicUsage>
<ModTime>2014-01-20T11:20:44.000Z</ModTime>
<RuleCurrentStatus>NotSet</RuleCurrentStatus>
<RuleStatus>RuleOn</RuleStatus>
</ApiAccessRule>
</GetApiAccessRulesResponse>
You can try that out for your own application by pasting an AuthToken for your application into the form at https://ebay-sdk.intradesys.com/s/9a1158154dfa42caddbd0694a4e9bdc8 and then pressing "Execute call".
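Alternatively, a minimal sketch of issuing GetApiAccessRules yourself from Python; it assumes valid Trading API credentials, and all the YOUR_* values are placeholders:

import requests

# The XML body carries the auth token; the credential headers come from
# your developer account. All YOUR_* values are placeholders.
body = """<?xml version="1.0" encoding="utf-8"?>
<GetApiAccessRulesRequest xmlns="urn:ebay:apis:eBLBaseComponents">
  <RequesterCredentials><eBayAuthToken>YOUR_AUTH_TOKEN</eBayAuthToken></RequesterCredentials>
</GetApiAccessRulesRequest>"""

response = requests.post(
    "https://api.ebay.com/ws/api.dll",
    data=body,
    headers={
        "X-EBAY-API-COMPATIBILITY-LEVEL": "889",
        "X-EBAY-API-CALL-NAME": "GetApiAccessRules",
        "X-EBAY-API-SITEID": "0",
        "X-EBAY-API-DEV-NAME": "YOUR_DEV_ID",
        "X-EBAY-API-APP-NAME": "YOUR_APP_ID",
        "X-EBAY-API-CERT-NAME": "YOUR_CERT_ID",
        "Content-Type": "text/xml",
    })
print(response.text)  # the ApiAccessRule list shown above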
HTH.

Time to first byte is massive on Joomla site

The time to first byte on www.netdyrlaege.dk is really long.
This is unfortunately an issue that is beyond my skills.
I have optimized everything as well as I can, and now I get only one F on webpagetest.org.
But the time to first byte is still crazy large!
I'm on a virtual private server, and I bought a big one, so it is not a server issue.
No, it is something within Joomla. I've been able to reduce loading times from up to 12 seconds to something like 3-5 seconds. That is still not okay.
I tried Joomla's debug mode and here are the results. afterDispatch takes from 0.7 to 1.8 seconds depending on the browser! What is this and why?
How do I fix this?
Profile information
Application 0.001 seconds (+0.001); 1.34 MB (+1.336) - afterLoad
Application 0.075 seconds (+0.075); 10.53 MB (+9.196) - afterInitialise
Application 0.162 seconds (+0.087); 23.64 MB (+13.113) - afterRoute
Application 0.747 seconds (+0.585); 34.98 MB (+11.336) - afterDispatch
Application 0.808 seconds (+0.061); 37.29 MB (+2.309) - beforeRenderModule mod_customerswhobought (Customers Who Bought...)
Application 0.815 seconds (+0.007); 37.35 MB (+0.062) - afterRenderModule mod_customerswhobought (Customers Who Bought...)
Application 0.819 seconds (+0.004); 37.36 MB (+0.013) - beforeRenderModule mod_vm_prod_cat_full (Butik menu all left)
Application 1.065 seconds (+0.247); 37.51 MB (+0.141) - afterRenderModule mod_vm_prod_cat_full (Butik menu all left)
Application 1.065 seconds (+0.000); 37.51 MB (+0.007) - beforeRenderModule mod_vm_s5_column_cart_AJAX (Kurv med billeder)
Application 1.426 seconds (+0.360); 47.91 MB (+10.393) - afterRenderModule mod_vm_s5_column_cart_AJAX (Kurv med billeder)
Application 1.427 seconds (+0.001); 47.90 MB (-0.010) - beforeRenderModule mod_breadcrumbs (breadcrumbs)
Application 1.432 seconds (+0.005); 47.94 MB (+0.041) - afterRenderModule mod_breadcrumbs (breadcrumbs)
Application 1.433 seconds (+0.002); 47.93 MB (-0.004) - beforeRenderModule mod_vm_prod_cat_full (Butik menu all)
Application 1.646 seconds (+0.213); 47.98 MB (+0.050) - afterRenderModule mod_vm_prod_cat_full (Butik menu all)
Application 1.647 seconds (+0.001); 47.99 MB (+0.011) - beforeRenderModule mod_menu (Top Menu)
Application 1.653 seconds (+0.006); 48.15 MB (+0.154) - afterRenderModule mod_menu (Top Menu)
Application 1.654 seconds (+0.000); 48.06 MB (-0.085) - beforeRenderModule mod_virtuemart_mini_cart (mini kurv)
Application 1.658 seconds (+0.004); 48.08 MB (+0.021) - afterRenderModule mod_virtuemart_mini_cart (mini kurv)
Application 3.524 seconds (+1.866); 49.01 MB (+0.927) - afterRender
First of all, disable debug on your site: end users will question the stability of the site, and attackers gain plenty of info.
In order to achieve optimization you should:
perform all configuration tasks that will let you gain speed (mainly: set up and use the cache properly!)
look at the modules in the debug list and ensure they use the cache; load the page twice and see if at least the second time it loads in under one second.
(Now you should be down to 1 second rendering time)
Then, the tough part begins:
examine your site's debug output and identify the plugins slowing down the site (the modules are already listed);
starting from the slowest, ponder whether you can live without it, or get your hands on the code and fix it;
(Now you should be down to 100 - 300 ms)
configure the server to perform optimally
evaluate external cache solutions
(Now you should be below 50ms)
The more you optimize, the harder it will be to obtain substantial results. I bet I could get you down to 200ms in less than 3 hours, but then it would take days to get to 20ms.
And don't forget this is just rendering time; you might also want to optimize your page: you're using many libraries, making many calls that could be saved, graphics could be combined... and all that affects the speed too.
Your site's homepage currently runs 900 queries. That is way more than you need; there must be some pretty poorly optimized extensions in there.

Why is my WordPress website so slow and why am I having so much downtime?

I have used YSlow and PageSpeed to find the cause, but I can't seem to figure out why my blog http://www.fotokringarnhem.nl sometimes loads blazing fast (cached files, I guess) and other times takes 10 seconds or longer to load.
I am on a shared server, but haven't had problems like this with other websites on shared servers.
I'm using CloudFlare to speed things up, but to no avail.
Am I missing something?
Pingdom reports of last 30 days (also see http://stats.pingdom.com/hseaskprwiaz):
Response time average: 7,620 ms
Slowest average: 18,307 ms
Fastest average: 4,237 ms
Uptime: 96.24%
Edit 1:
from basicstate.com
diagnostics
+dns
+connect
-request
-response
So I guess it fails on the request. Any options to narrow it down?
Edit 2:
I used P3 (Plugin Performance Profiler) to determine which plugins caused the most load time. It turns out that User Access Manager caused about 60% of the load time, so I deleted it.
This did something; I now get far fewer timeouts, but it still takes a long time for anything to appear on the screen.
I used the SQL Monitor plugin and determined there are 82 queries being executed per request, which takes about 10 seconds!
If you have a static site without millions of users and performance is highly variable, your host is probably to blame. I have tried about 8 different hosts and researched a dozen others; I highly suggest Media Temple (mt). Best customer service and performance you can get for the money.
Also, check out a speed test tool by WP Engine: http://speed.wpengine.com/ - great insight into why your site is slow. Takes a few hours to generate a report.
Use the P3 plugin,
and use this article to further optimize your website: http://andbreak.com/articles/guide-speed-wordpress/
When all else fails, try switching providers.
Edit
Turns out that deleting all automatically saved revisions of pages and posts, as well as the drafts, was the key. This dramatically cut my query time to the server.
Now at lightning speeds!
Here is the report from the last few days:
Uptime: 99.21%
Overall average: 3,322 ms
There is a useful plugin for WP to limit the number of autosaves and drafts for posts and pages: Revision Control.
Results weren't instant, by the way; it took a day to take effect.
Basicstate results (clearly showing the improvement when I deleted the revisions and drafts on the 11th or 12th, not sure which):
date uptime dns connect request ttfb ttlb
2012-09-18 98.97 0.031 0.047 0.047 0.353 0.475
2012-09-17 100.00 0.031 0.047 0.047 0.389 0.810
2012-09-16 100.00 0.029 0.045 0.045 0.342 0.499
2012-09-15 93.81 0.029 0.045 0.045 0.739 1.035
2012-09-14 98.97 0.053 0.068 0.068 0.387 0.565
2012-09-13 100.00 0.058 0.074 0.074 0.499 0.853
2012-09-12 95.00 0.030 0.046 0.046 5.994 7.024
2012-09-11 96.97 0.051 0.096 0.096 9.707 9.949
2012-09-10 73.15 0.027 0.043 0.043 6.765 6.952
2012-09-09 43.48 0.027 0.121 0.121 3.652 3.724
2012-09-08 31.82 0.028 0.045 0.045 2.757 2.867
2012-09-07 71.93 0.026 0.042 0.042 5.917 6.091
2012-09-06 60.49 0.027 0.043 0.043 4.590 4.751
