I have developed an app to analyze videos using OpenCV and Visual Studio 2013. I was planning to run this app on Azure, assuming that it would run faster in the cloud. However, to my surprise, the app ran slower than on my desktop, taking about twice the time when I configured the Azure instance with 8 cores. It is a 64-bit app, compiled with appropriate compiler optimizations. Can someone help me understand why I am losing time in the cloud, and is there a way to improve the timing there?

The app takes a video as input (stored locally in each case) and outputs a flat file with the analysis data.

I am not sure why people are voting to close this question. This is very much about programming, so if possible, please help me pinpoint the problem.
There are only going to be three reasons for this:
Disk I/O speed
CPU speed
Memory speed
Taking a look here, you can see someone who actually compared on-premises performance to the cloud: Azure compute power: Extra Large VM slow
Basically, the clock speed is most likely lower (around 1.6 GHz), and disk I/O, while local, is normally capped at 300 or 500 IOPS, which is only just higher than a 15k RPM drive and nowhere near SSD level.
I am not sure about memory speed. While you can keep adding cores, most programs, even ones optimized for multiple cores, still have a lot of single-threaded dependencies, which slow the whole operation down. A higher clock speed is what can make a large difference.
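If you want to confirm which of those three is the bottleneck before resizing anything, a crude micro-benchmark run on both the desktop and the Azure VM can help. Below is a minimal sketch in Node.js/TypeScript (the loop count, file size, and file name are arbitrary placeholders, not tuned values): it times a CPU-bound loop and a sequential disk write/read separately, so if the disk timing degrades far more on the VM than the CPU timing does, the IOPS cap is the likely culprit.

// Rough bottleneck check: run on both machines and compare the two timings.
import { writeFileSync, readFileSync, unlinkSync } from "node:fs";

function timeIt(label: string, fn: () => void): void {
  const start = process.hrtime.bigint();
  fn();
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${ms.toFixed(1)} ms`);
}

// CPU-bound: a tight arithmetic loop, mostly sensitive to clock speed.
timeIt("cpu", () => {
  let acc = 0;
  for (let i = 0; i < 200_000_000; i++) acc += Math.sqrt(i);
  if (acc < 0) console.log(acc); // keeps the loop from being optimized away
});

// Disk-bound: write and read back ~512 MB, sensitive to the IOPS/throughput cap.
timeIt("disk", () => {
  const chunk = Buffer.alloc(1 << 20, 1); // 1 MB buffer
  const path = "bench.tmp";               // placeholder file name
  for (let i = 0; i < 512; i++) writeFileSync(path, chunk, { flag: i === 0 ? "w" : "a" });
  readFileSync(path);
  unlinkSync(path);
});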
What are the expected performance characteristics of an Azure Function App in Consumption mode?
I was going to ask ... How can you carry out realistic testing of an Azure Function App?
A person on the team knocked together a Perl script that forked off and called our Function App to very crudely simulate the sort of load we're hoping to cope with, e.g. starting with, say, 150,000 users calling 10 times a second.

The script was running on a very beefy VM over in Google.

Things started OK with lower numbers, but very quickly we started getting timeouts.

We must be doing something "wrong", as I sort of assume that Function Apps can cope with this sort of load ... but what?
... and can they cope with this load in Consumption Plan mode?
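To make the load pattern concrete, the kind of fixed-rate traffic the Perl script generated can be reproduced with a short driver like this Node.js/TypeScript sketch; the URL, rate, and duration below are placeholders rather than our real endpoint and numbers, and it assumes Node 18+ for the built-in fetch. It fires a set number of requests per second and tallies successes against failures/timeouts.

// Minimal fixed-rate load driver (placeholder endpoint and numbers).
const url = "https://example-function-app.azurewebsites.net/api/hello"; // placeholder
const rps = 10;      // requests per second (placeholder)
const seconds = 60;  // test duration in seconds (placeholder)

let ok = 0;
let failed = 0;

async function fireOne(): Promise<void> {
  try {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), 10_000); // 10 s timeout per request
    const res = await fetch(url, { signal: controller.signal });
    clearTimeout(timer);
    res.ok ? ok++ : failed++;
  } catch {
    failed++; // network error or timeout
  }
}

const interval = setInterval(() => {
  for (let i = 0; i < rps; i++) void fireOne();
}, 1000);

setTimeout(() => {
  clearInterval(interval);
  // Give in-flight requests a moment to finish before reporting.
  setTimeout(() => console.log(`ok=${ok} failed=${failed}`), 15_000);
}, seconds * 1000);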
You can look into ARR-Affinity cookies and see if they are causing scaling problems.

When I was performing some load testing with my function, I noticed all the traffic was going to only one instance, and it turned out to be a problem with ARR-Affinity cookies. The load client was being directed back to the same function instance for every request, so the app was not scaling out to meet the demand.

You can disable this behavior to see if you get better scaling.
https://blogs.msdn.microsoft.com/appserviceteam/2016/05/16/disable-session-affinity-cookie-arr-cookie-for-azure-web-apps/
Alternatively, you can add this response header:
Headers.Add("Arr-Disable-Session-Affinity", "True");
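If your Function App runs on the Node.js stack rather than .NET, the same idea looks roughly like the sketch below, assuming the v3 TypeScript programming model from the @azure/functions package; the function body and payload are placeholders.

import { AzureFunction, Context, HttpRequest } from "@azure/functions";

// Return the response with ARR session affinity disabled, so the load
// balancer stops pinning every client to the same instance.
const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
  context.res = {
    status: 200,
    headers: {
      "Arr-Disable-Session-Affinity": "True", // same header as the snippet above
    },
    body: { message: "hello" }, // placeholder payload
  };
};

export default httpTrigger;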
I have downloaded a Go package from GitHub. It is medium-sized. When compiling it from source, my computer slows down because there is more than one Go compile process running and they take up much of the CPU.

How does Go manage to compile concurrently?

Are there any parameters to tune the number of CPUs it uses when compiling?
Go is using a lot of CPU because it's trying to build as fast as possible, like any other compiler. It may also be because you're using a package that uses cgo, which can drastically increase compile times, as compiling medium to large C libraries is often quite intensive.
You can control the number of processes Go uses by setting the GOMAXPROCS environment variable, e.g. GOMAXPROCS=1 go get ... to limit Go to a single process (and thus a single CPU core). This, however, does not affect the number of processes used by external compilers that cgo may invoke.
If you need further CPU control, on Unix-based systems you can use the nice command to lower the build's priority so that other programs get the CPU first, making your computer less sluggish; for example, nice -n 19 go build ./... runs the build at the lowest scheduling priority.
Usually I can watch 1080p with no problem, as I should.

Now it's so slow that I have to wait a couple of minutes for it to buffer, only to be interrupted again five minutes into the video and have to wait again.
Here's my speedtest result
As you can see, my download speed is plenty fast.

I don't know why the upload is so slow right now (it's usually around 7-10 Mbps), but that shouldn't affect the buffering much, right?

The ping is that high because I'm using a VPN (connecting to a German server from Finland).

Any idea why YouTube would be buffering so slowly?

I suppose the VPN could be causing some sort of problem, but I've been using it for quite some time now and YouTube has been buffering just fine previously.
Upload should have very little effect on the streaming - you only upload when you request more of the video, and that's unlikely to be happening constantly (connections stay open a while).
Speed tests don't tell you much - it's a theoretical speed against a good server with a good network in between. ISPs may also hike the result of the speed test by prioritising traffic to the speed test server; cheeky.
In reality streaming from YouTube is more complex.
I blame your ISP for bottlenecking the traffic ("traffic shaping") - it helps make regular page loading (social media, etc.) faster for everyone else. Look up net neutrality if you have time.
Obviously I don't have evidence for this; it's just the most logical explanation. Unfortunately nobody at your ISP will give you a good answer.
I'd also not stream over a VPN anyway; that is a bottleneck in itself, and it skips optimisations the network/server could apply before the stream reaches your client directly.
I just want to know if Elasticsearch is free. I know it is open source, but I checked the website and didn't find anything about pricing, though I did find a subscriptions page with no prices. So, is it free for long-term use?

Just to let you know, I'm working with the MERN stack (MongoDB, Express.js, React.js, Node.js) and Socket.IO.
If Elasticsearch is free, then is it going to work and integrate with my stack smoothly?
If you want managed hosting from elastic.co, they charge you according to several variables. You can find the pricing here: https://www.elastic.co/cloud/elasticsearch-service/pricing
If you want to use the open-source version, stand up your own servers and manage your own deployment, the code is at no cost and can be found here: https://github.com/elastic/elasticsearch
It's super important to remember that spinning up VMs in the cloud is NOT free. In fact, you might spend more money on cloud VMs than on elastic.co's managed service. Elasticsearch is a memory hog; I found very quickly, even with minor load, that I had to dedicate 4 GB of RAM just to the Java heap. Under heavy load you'd have to dedicate more. All of that costs money.
As far as integrating with your mainly-JavaScript stack goes, it shouldn't be a problem. This library is very useful: https://www.npmjs.com/package/elasticsearch
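As a rough illustration of how that client slots into a Node back end, here is a minimal sketch using the legacy elasticsearch package from the link above; the host, index name, and document are placeholders, and the newer @elastic/elasticsearch client has a slightly different API.

import * as elasticsearch from "elasticsearch";

// Connect to a locally running Elasticsearch node (placeholder host).
const client = new elasticsearch.Client({ host: "localhost:9200" });

async function demo(): Promise<void> {
  // Check that the cluster is reachable.
  await client.ping({ requestTimeout: 3000 });

  // Index a placeholder document.
  await client.index({
    index: "posts",
    type: "_doc",
    body: { title: "hello elasticsearch", createdAt: new Date().toISOString() },
  });

  // Run a simple full-text query against the same index.
  const result = await client.search({
    index: "posts",
    body: { query: { match: { title: "hello" } } },
  });
  console.log(result.hits.hits);
}

demo().catch(console.error);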
Elasticsearch is free of cost and open source. They charge for services like support and consultancy, and for plugins like Kibana.
What would be the optimal machine configuration (CPU power, memory, and drive technology) for the following usage:
Visual Studio 2010 and Visual Studio 2012 - several instances at once
Oracle Navigator and MSSQL Management Studio
+ some other programs (like GIMP, a PDF converter, MS Office...)
The main goal is fast builds and compiles in Visual Studio.
I have to justify every component, since this is a configuration for a development machine at work.
There are some threads on Stack Overflow about that, like:
SSD idea
I have not tried the SSD proposal yet...
- OS: Windows 8, 64-bit
A multi-core architecture (with or without Hyper-Threading) will give you a performance gain when concurrently running clock-cycle-intensive operations, as each core has dedicated execution units and pipelining, so more cores means less chance of applications having to timeshare, e.g. while compiling.
A system with a lot of RAM will have advantages when switching between different instances of e.g. Visual Studio because their state won't need to be written away to (slower than RAM) disk, be it SSD or not.
It will also reduce disk I/O when working with Gimp, Photoshop or the likes.
The amount of cache can also have a positive influence on your daily work, because more (faster than RAM) cache will reduce the need for your system to leave the confines of the CPU and go the extra mile to read from/write to RAM.
Finally, the advantages of an SSD over "conventional" disks are mainly noticeable in disk and file access times. Booting the OS will be smoother, as will starting programs from said SSD. But SSD size is still fairly limited within a reasonable price range. Is it worth it? IMO, no. Once your tools have been loaded, there's little to no real need in a developer's day for SSD drives.