Why does Go take so much CPU to build a package? [closed]

Closed 6 years ago.
I downloaded a medium-sized Go package from GitHub. When I compile it from source, my computer slows down because more than one Go compile process is running and they take up much of the CPU.
How does Go manage to compile concurrently?
Is there a parameter to tune the number of CPUs it uses when compiling?

Go is using a lot of CPU because it's trying to build as fast as possible, like any other compiler. It may also be that the package uses cgo, which can drastically increase compile times, since building medium-to-large C libraries is often quite intensive.
You can control how much parallelism the go tool uses by setting the GOMAXPROCS environment variable, for example GOMAXPROCS=1 go get ... to limit Go to roughly one CPU core. This does not, however, affect processes spawned by external compilers that cgo may invoke.
If you need further CPU control, on Unix-based systems you can use the nice command to lower the build's priority so that other programs get the CPU first, making your computer less sluggish.
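As a concrete sketch of the options above: the go tool's build parallelism (its -p flag) defaults to GOMAXPROCS, which is why the environment variable also throttles the build, and ./... simply means every package under the current directory.

# limit the go tool, and therefore its build parallelism, to one CPU core
GOMAXPROCS=1 go build ./...

# equivalently, cap the number of parallel compile jobs directly
go build -p 1 ./...

# or keep full parallelism but give the build the lowest scheduling priority
nice -n 19 go build ./...

Any of these keeps the machine responsive at the cost of a slower build.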

Related

Strange process while turning off my computer [closed]

Closed 3 years ago.
I was about to turn off my computer when unexpectedly, on the screen that lists the programs blocking Windows from shutting down, I saw a process with a strange name like {4593-9493-8949-9390} (not the exact name, but similar), and before I could click the cancel button the process closed.
My question is whether I should be worried about that strange process, or whether it is just some routine Windows 10 task.
This could very well be malware of some sort running in the background. I would recommend scanning your computer with your preferred anti-virus program and then installing Process Monitor, which should help you detect and analyze suspicious processes running on your computer:
Process Monitor is an advanced monitoring tool for Windows that shows real-time file system, Registry and process/thread activity. It combines the features of two legacy Sysinternals utilities, Filemon and Regmon, and adds an extensive list of enhancements including rich and non-destructive filtering, comprehensive event properties such as session IDs and user names, reliable process information, full thread stacks with integrated symbol support for each operation, simultaneous logging to a file, and much more. Its uniquely powerful features will make Process Monitor a core utility in your system troubleshooting and malware hunting toolkit.

What are the expected performance characteristics of an Azure Function App in Consumption mode? [closed]

Closed 5 years ago.
What are the expected performance characteristics of an Azure Function App in Consumption mode?
I was going to ask ... How can you carry out realistic testing of an Azure Function App?
Someone on the team knocked together a Perl script that forked off and called our Function App to very crudely simulate the sort of load we're hoping to cope with, e.g. starting with say 150,000 users, calling 10 times a second.
The script was running on a very beefy VM over in Google's cloud.
Things started OK with lower numbers, but very quickly we started getting timeouts.
We must be doing something "wrong", as I sort of assume that Function Apps can cope with this sort of load ... but what?
... and can they cope with this load in Consumption Plan mode?
You can look into ARR affinity cookies and see whether they are causing scaling problems.
When I was performing some load testing with my function I noticed all the traffic was going to only one instance, and it turned out to be a problem with ARR affinity cookies. The load client was being directed back to the same function instance for every request, so the app was not scaling out to meet the demand.
You can disable this behavior to see whether you get better scaling.
https://blogs.msdn.microsoft.com/appserviceteam/2016/05/16/disable-session-affinity-cookie-arr-cookie-for-azure-web-apps/
or add this response header:
Headers.Add("Arr-Disable-Session-Affinity", "True");
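If you would rather switch affinity off for the whole site instead of per response, something like the following Azure CLI command should work; this assumes the Function App can be updated through the az webapp command (Function Apps run on App Service sites), and the resource group and app name are placeholders.

# disable the ARR affinity cookie at the site level
az webapp update --resource-group <my-resource-group> --name <my-function-app> --client-affinity-enabled false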

Cloud performance vs Desktop [closed]

Closed 8 years ago.
I have developed an app to analyze videos using OpenCV and Visual Studio 2013. I was planning to run this app in Azure, assuming it would run faster in the cloud. However, to my surprise, the app ran slower than on my desktop, taking about twice the time even when I configured the Azure instance with 8 cores. It is a 64-bit app, compiled with appropriate compiler optimization. Can someone help me understand why I am losing time in the cloud, and is there a way to improve the timing there?
The app takes as input a video (locally in each case) and outputs a flat file with the analysis data.
I am not sure why people are voting to close this question. It is very much about programming, so if possible, please help me pinpoint the problem.
There are only going to be three reasons for this:
Disk IO speed
CPU Speed
Memory Speed
Take a look here to see someone who actually compared on-premises performance with the cloud: Azure compute power: Extra Large VM slow
Basically, the clock speed is most likely slower (around 1.6 GHz), and disk IO, while local, is normally capped at 300 or 500 IOPS, which is only just higher than 15k RPM drives and nowhere near SSD level.
I am not sure about memory speed. While you can keep adding cores, most programs, even ones optimized for multiple cores, have a lot of dependencies on single threads, which slows the whole operation down. Increased GHz is what can make a large difference.

Designing a makefile for installing / uninstalling software that I design [closed]

Closed 5 years ago.
I'm writing a compiler, and there are certain things that I simply can't do without knowing where the user will have my compiler and its libraries installed, such as including and referencing the libraries that add built-in functionality like standard I/O. Since this is my first venture into compilers, I feel it's appropriate to target only Linux distributions for the time being.
I notice that a lot of compilers (and software projects in general) include makefiles, or perhaps an install.py file, that move parts of the application across the user's file system and ultimately leave the user with something like a new shell command to run the program, which (in the case of a compiler such as Python's) knows where the necessary libraries are and where the other necessary files have been placed in order to run the program properly.
How does this work? Is there some sort of guideline to follow when designing these install files?
I think the best guideline I can give you at a high level would be:
Don't do this yourself. Just don't.
Use something like the autotools, or any of the dozen or so other build systems out there, which handle many of the details involved here for you.
That being said, they also add a certain amount of complexity when you are just starting out, and that may or may not be worth the effort at first. But they will pay off in the end, assuming you use them appropriately and don't need anything so specialized that they don't provide it nicely.
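To make that concrete, this is roughly the flow an end user of an autotools-based project gets for free once you provide configure.ac and Makefile.am (the prefix path below is just an example):

autoreconf --install                  # generate the configure script from configure.ac
./configure --prefix=$HOME/.local     # record where the compiler and its libraries will live
make                                  # build everything
make install                          # copy binaries, libraries and headers under the prefix
make uninstall                        # remove exactly what 'make install' put there

The configured prefix is also typically baked into the program at build time (for instance via a generated header or a -D define), which is how an installed compiler knows where to find its standard library at run time.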

Optimal machine configuration for Visual Studio 2012 [closed]

Closed 9 years ago.
What would be the optimal machine configuration (CPU power, memory, and drive technology) for the following usage:
Visual Studio 2010 and Visual Studio 2012, several instances at once
Oracle Navigator and MSSQL Management Studio
+ some other programs (like GIMP, PDF converter, MS Office...)
The main goal is fast builds and compiles in Visual Studio.
I have to justify every component, since this is the configuration for a development machine at work.
There are some threads on Stack Overflow about this, like:
SSD idea
I have not yet tried the SSD proposal...
OS: Windows 8, 64-bit
A multi-core architecture (with or without Hyper-Threading) will give you a performance gain when concurrently running clock-cycle-intensive operations, as each core has dedicated execution units and pipelining, so more cores means less chance of applications having to timeshare, e.g. while compiling.
A system with a lot of RAM will have advantages when switching between different instances of e.g. Visual Studio because their state won't need to be written away to (slower than RAM) disk, be it SSD or not.
It will also reduce disk I/O when working with Gimp, Photoshop or the likes.
The amount of cache can also have a positive influence on your daily work, because more (faster than RAM) cache will reduce the need for your system to leave the confines of the CPU and go the extra mile to read from/write to RAM.
Finally, the advantages of an SSD over "conventional" disks are mainly noticeable in disk and file access times. Booting the OS will be smoother, as will starting programs from the SSD. But SSD size is still fairly limited within a reasonable price range. Is it worth it? In my opinion, no. Once your tools have been loaded, there's little to no real need in a developer's day for SSD drives.
