How is the number of downloads computed in the Google Play Store?

Is the number of downloads shown in the Google Play Store computed based on lifetime numbers? My app (Match4app) shows 5.10K "Installs by User" on the Google Play Console (lifetime). However, on the Google Play Store it only shows 1000+:
https://play.google.com/store/apps/details?id=com.palfonsoft.match4app
I know this is not real time; however, I have had more than 5K downloads since April 2019.
According to some documentation, it is supposed to show "5000+":
1+ (1 - 5)
5+ (6 - 10)
10+ (11 - 50)
50+ (51 - 100)
100+ (101 - 500)
500+ (501 - 1000)
1000+ (1001 - 5000)
5000+ (5001 - 10000)
10000+ (10001 - 50000)
etc...

"Installs by User" isn't the same as the download count. I think the download counter only counts unique downloads.
The download-count brackets you have are right.

"5000+ downloads" is finally shown for Match4App in Google Play!
I had to wait for more than one month though.
This is more likely a sync issue between the Google Play Console and the Playstore, because now I have 10000+ in the Console :)

The Google Play app download brackets are:
0
1 - 5
5 - 10
10 - 50
50 - 100
100 - 500
500 - 1000
1000 - 5000
5000 - 10000
10000 - 50000
50000 - 100000
100000 - 500000
500000 - 1000000
1000000 - 5000000
5000000 - 10000000
10000000 - 50000000
50000000 - 100000000
100000000 - 500000000
500000000 - 1000000000
1000000000 - 5000000000
Answer from Quora.
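For illustration only, here is a minimal Scala sketch (not Google's actual logic) of how a raw install count could be mapped to the "N+" label shown on the store page, using the thresholds from the list above; the behaviour at the bracket edges is an assumption taken from the ranges quoted in the question.
object InstallBracket extends App {
  // Lower bounds of the public brackets: 1+, 5+, 10+, ..., 1000000000+
  val thresholds = Seq(1L, 5L, 10L, 50L, 100L, 500L, 1000L, 5000L, 10000L,
    50000L, 100000L, 500000L, 1000000L, 5000000L, 10000000L, 50000000L,
    100000000L, 500000000L, 1000000000L)

  // Per the ranges above, "1000+" covers 1001 - 5000, so the label is the
  // largest threshold strictly below the install count (except the 1 - 5 range).
  def label(installs: Long): String =
    if (installs <= 0) "0"
    else if (installs <= 5) "1+"
    else s"${thresholds.filter(_ < installs).last}+"

  println(label(5100)) // "5000+" -- what 5.10K lifetime installs should display as
  println(label(4999)) // "1000+"
}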

Related

Does the req/s figure in Gatling reports include pauses and pace?

I'm running load tests in Gatling, and noticed that when I ramp 250 users over 10 seconds, the report gives me an average of 31 req/s:
val combinedScenario = scenario("Combined")
  .feed(UuidFeeder.feeder)
  .exec(_.set("token", token))
  .exec(saveData)
  .exec(processDocumentRequest)
val scn = List(
  OAuthRequest.inject(atOnceUsers(1)),
  combinedScenario.inject(
    nothingFor(5 seconds),
    rampUsers(250) over (10 seconds)))
setUp(scn).protocols(httpConf).maxDuration(60 minutes)
However, when I surround the scenario in a forever loop and put a 60 second pace in between each set of requests, the report then says I average about 8 req/s:
val combinedScenario = scenario("Combined")
  .feed(UuidFeeder.feeder)
  .exec(_.set("token", token))
  .forever(
    pace(60 seconds)
      .exec(saveData)
      .exec(processDocumentRequest)
  )
Is this simply because the report factors in the 50 seconds in between iterations where 0 requests are being sent? Can I assume that it's still sending around 31 req/s for the short bursts of requests being sent each minute?
Yes - the reports just show what the actual throughput during the scenario was, not some hypothetical maximum. The number you get could be constrained by your scenario or by the application under test; you would need to run some experiments to confirm.
With the pace in the scenario, you should also be able to increase the number of concurrent users, based on your initial testing.
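As a back-of-the-envelope check (assuming each virtual user fires two requests per iteration, saveData and processDocumentRequest, and that the un-paced burst finishes in roughly 16 seconds; both figures are assumptions, not taken from your report), the two averages come out close to what the reports show:
object ThroughputEstimate extends App {
  val users = 250
  val requestsPerIteration = 2    // assumption: saveData + processDocumentRequest

  // Without pace: all iterations finish within roughly the 10 s ramp plus response times.
  val burstWindowSeconds = 16.0   // assumption
  println(f"burst: ${users * requestsPerIteration / burstWindowSeconds}%.1f req/s") // ~31

  // With forever + pace(60 seconds): the same work is spread over each 60 s pace window.
  val paceWindowSeconds = 60.0
  println(f"paced: ${users * requestsPerIteration / paceWindowSeconds}%.1f req/s")  // ~8.3
}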

How to set up Thread Group properties for JMeter

Can somebody please advise how to set up the JMeter Thread Group properties correctly for the following requirement:
55 users per minute ramp-up over an hour, with a test running for 4 hours.
If you need 55 users added each minute during 1 hour, your setup should look like:
Number of Threads: 3300 (55 users x 60 minutes)
Ramp-up: 3600 (1 hour == 3600 seconds)
Loop Count: Forever
Scheduler -> Duration: 14400 (3600 seconds per hour x 4)
Be aware that 3300 concurrent threads is quite a high load, so make sure you're following the recommendations from the JMeter Performance and Tuning Tips guide.
If you aren't able to create such a load from a single machine, consider Distributed Testing, where one JMeter master machine orchestrates several slaves, for instance 3 slaves with 1100 virtual users each.
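A quick sanity check of the arithmetic behind those values (the 3-slave split is just the example above, not a requirement); a minimal Scala sketch:
object ThreadGroupMath extends App {
  val usersPerMinute = 55
  val rampUpMinutes  = 60
  val testHours      = 4
  val slaves         = 3                            // assumption: 3 slave machines

  val threads   = usersPerMinute * rampUpMinutes    // 3300 threads
  val rampUpSec = rampUpMinutes * 60                // 3600 seconds
  val duration  = testHours * 3600                  // 14400 seconds
  println(s"Number of Threads: $threads, Ramp-up: $rampUpSec s, Duration: $duration s")
  println(s"Per slave (distributed): ${threads / slaves} threads")  // 1100
}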
So you are thinking about something like the setup I show in the screenshot below, aren't you? It is set up with the jp@gc Stepping Thread Group that comes in the standard JMeter Plugins set. There you have 55 users that will ramp up to that value over 3600 seconds and will hold that load for the next 3 hours (10800 sec).

JMeter Distributed Testing - how does ramp-up time work in distributed load testing?

If I run distributed testing in GUI mode with 4 servers (1 master + 3 slaves) and set the following values in my plan:
Number of Threads = 12000
Ramp-Up time = 1000
Loop count = 1
After completion of the test I get 36000 samples (which is OK, as 12000 * 3 = 36000), but my question is about the ramp-up time: will it be 3000 for the 36000 users?
Or will it remain 1000 for the 36000?
Thanks in advance
It will be 1000, the same for all clients.
Note that such a load test profile seems strange, as running only 1 request per user is not what happens in real usage; what are you trying to simulate?
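In other words, the ramp-up value is applied by each client independently; a rough Scala sketch of the resulting start rates (numbers taken from the question):
object DistributedRamp extends App {
  val slaves          = 3
  val threadsPerSlave = 12000
  val rampUpSeconds   = 1000

  val perSlaveRate  = threadsPerSlave.toDouble / rampUpSeconds   // 12 threads/s on each slave
  val aggregateRate = perSlaveRate * slaves                      // 36 threads/s across the cluster
  println(f"per slave: $perSlaveRate%.0f threads/s")
  println(f"overall:   ${threadsPerSlave * slaves} threads started within $rampUpSeconds s (~$aggregateRate%.0f threads/s)")
}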

Time to first byte is massive on Joomla site

Time to first byte on www.netdyrlaege.dk is really high.
This is unfortunately an issue that is beyond my skills.
I have optimized everything as well as possible, and now I get only one F on webpagetest.org.
The time to first byte is still crazy large!
I'm on a virtual private server (I bought a big one), so it is not a server issue.
No, it is something within Joomla. I've been able to reduce loading times from up to 12 seconds to something like 3-5 seconds. That is still not okay.
I tried the Joomla debug mode and here are the results. afterDispatch takes from 0.7 to 1.8 seconds depending on the browser! What is this and why?
How do I fix this?
Profile information
Application 0.001 seconds (+0.001); 1.34 MB (+1.336) - afterLoad
Application 0.075 seconds (+0.075); 10.53 MB (+9.196) - afterInitialise
Application 0.162 seconds (+0.087); 23.64 MB (+13.113) - afterRoute
Application 0.747 seconds (+0.585); 34.98 MB (+11.336) - afterDispatch
Application 0.808 seconds (+0.061); 37.29 MB (+2.309) - beforeRenderModule mod_customerswhobought (Customers Who Bought...)
Application 0.815 seconds (+0.007); 37.35 MB (+0.062) - afterRenderModule mod_customerswhobought (Customers Who Bought...)
Application 0.819 seconds (+0.004); 37.36 MB (+0.013) - beforeRenderModule mod_vm_prod_cat_full (Butik menu all left)
Application 1.065 seconds (+0.247); 37.51 MB (+0.141) - afterRenderModule mod_vm_prod_cat_full (Butik menu all left)
Application 1.065 seconds (+0.000); 37.51 MB (+0.007) - beforeRenderModule mod_vm_s5_column_cart_AJAX (Kurv med billeder)
Application 1.426 seconds (+0.360); 47.91 MB (+10.393) - afterRenderModule mod_vm_s5_column_cart_AJAX (Kurv med billeder)
Application 1.427 seconds (+0.001); 47.90 MB (-0.010) - beforeRenderModule mod_breadcrumbs (breadcrumbs)
Application 1.432 seconds (+0.005); 47.94 MB (+0.041) - afterRenderModule mod_breadcrumbs (breadcrumbs)
Application 1.433 seconds (+0.002); 47.93 MB (-0.004) - beforeRenderModule mod_vm_prod_cat_full (Butik menu all)
Application 1.646 seconds (+0.213); 47.98 MB (+0.050) - afterRenderModule mod_vm_prod_cat_full (Butik menu all)
Application 1.647 seconds (+0.001); 47.99 MB (+0.011) - beforeRenderModule mod_menu (Top Menu)
Application 1.653 seconds (+0.006); 48.15 MB (+0.154) - afterRenderModule mod_menu (Top Menu)
Application 1.654 seconds (+0.000); 48.06 MB (-0.085) - beforeRenderModule mod_virtuemart_mini_cart (mini kurv)
Application 1.658 seconds (+0.004); 48.08 MB (+0.021) - afterRenderModule mod_virtuemart_mini_cart (mini kurv)
Application 3.524 seconds (+1.866); 49.01 MB (+0.927) - afterRender
First of all, disable debug on your site: end-users will question the stability of the site, and attackers gain plenty of info.
In order to achieve optimization you should:
perform all configuration tasks that will allow you to gain speed (mainly, set up and use caching properly!)
look at the modules in the debug list and ensure they use the cache; load the page twice and see if at least the second time it loads in under one second.
(Now you should be down to 1 second rendering time)
Then, the tough part begins:
examine your site's debug, and identify the plugins slowing down the site. The modules are already listed;
starting from the slowest, ponder if you can live without it, or get your hands on the code and fix it;
(Now you should be down to 100 - 300 ms)
configure the server to perform optimally
evaluate external cache solutions
(Now you should be below 50ms)
The more you optimize, the harder it will be to obtain substantial results. I bet I could get you down to 200ms in less than 3 hours, but then it would take days to get to 20ms.
And don't forget this is just rendering time; you might also want to optimize the page itself: you're using many libraries and making many calls that could be saved, graphics could be combined... and that affects the speed too.
Your site's homepage currently runs 900 queries. That is way more than you need; there must be some pretty poorly optimized extensions there.

Time interval (in ms) from BPM (MIDI tempo)

Does anybody know the formula?
I tried the following:
1000 / ((BPM * 24) / 60)
But it seems not correct.
I don't think my answer is MIDI-specific, but to convert beats-per-minute to ms-per-beat, would this work?
ms_per_beat = 1000 * 60 / bpm
In other words, I think you have an extra "24" in there.
It is simply:
Time of 1 beat in ms = 1000 * 60 / BPM = 60000 / BPM
It looks like your formula assumes data coming from a standard MIDI file, where timing is expressed in ticks, with 24 ticks per quarter note. It's not giving you ms per beat; it's giving you ms per tick.
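Putting the two conversions side by side (assuming the 24 ticks per quarter note from the original formula; a Standard MIDI File may declare a different resolution in its header), a small Scala sketch:
object Tempo extends App {
  def msPerBeat(bpm: Double): Double = 60000.0 / bpm
  def msPerTick(bpm: Double, ticksPerBeat: Int = 24): Double = msPerBeat(bpm) / ticksPerBeat

  println(msPerBeat(120)) // 500.0 ms per beat at 120 BPM
  println(msPerTick(120)) // ~20.8 ms per tick, which is what 1000 / ((BPM * 24) / 60) computes
}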
I wrote an article on converting BPM to ms, and I made an online app called a Delay Time Calculator that does just that, including giving you dotted and triplet note values.
