I'm about to buy a laptop specifically for software development and want to be sure I don't end up with something I'm unhappy with; I've already felt the pain of waiting 2+ minutes for code to compile on an old i3 laptop.
I'm looking at something with an i7-7500U CPU and a 256 GB SSD, which I thought would do the job admirably, but then I saw that the CPU only has 2 cores / 4 threads.
(https://www.intel.co.uk/content/www/uk/en/products/processors/core/i7-processors/i7-7500u.html)
Is compiling (a C# MVC web application) mainly bound by single-thread performance, or is it likely that I'll see a significant improvement by spending a bit more on a 4-core / 8-thread CPU?
If the compiler schedules work across all the CPU threads when building the program, then core count matters; if not, there is not much benefit in increasing the core/thread count. (For what it's worth, MSBuild can build independent projects in a solution in parallel via its /m switch, so solutions with many projects do benefit from extra cores.)
You might check the CPU usage while you compile the program. If it stays below 50%, the build is probably only using one or two CPU threads, and increasing the core/thread count will not give you much benefit.
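If you want something more precise than eyeballing Task Manager, here is a minimal sampling sketch in Python, assuming the third-party psutil package is installed (pip install psutil); start the build, then run this alongside it:

```python
# Sample overall and per-core CPU usage once a second while a build runs.
# Minimal sketch: assumes the third-party psutil package is installed.
import psutil

for _ in range(30):  # sample for roughly 30 seconds
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    total = sum(per_core) / len(per_core)
    print(f"total {total:5.1f}%  per-core {per_core}")
```

On a 2-core / 4-thread CPU, a single-threaded build would show roughly 25% total, with one logical CPU pinned near 100%.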
Also, if your old laptop has an HDD rather than an SSD, the performance bottleneck is probably disk reads and writes, not the CPU. Moving to an SSD would help a lot.
Related
I realize there may not be a hard-and-fast rule, but it seems two-CPU machines provide a greater performance improvement when running multiple tasks than when running just one. Is this true in a Windows environment? Would a different OS make a difference?
Back in the old days, CPUs were what we would today call "single core", and if a program used 100% of the CPU there was nothing left for anything else, including the Task Manager you tried to bring up with Ctrl+Alt+Del.
Two-CPU systems (I had a dual Pentium III system at one time) fixed this: the other CPU was usually not 100% busy, so it could handle Task Manager even with the rogue program running at full speed.
Today this has moved inside the single CPU package as multiple cores, so having more cores than rogue programs running at the same time is a good thing. For most users this means a dual-core system, but prices are falling and an eight-core AMD CPU can be bought for under $100. It is close to impossible to find a single-core CPU these days.
I develop a multithreaded, CPU-intensive application. Until now it has been tested on multicore (but single-CPU) systems such as an i7-6800K and worked well under Linux and Windows. A newly observed phenomenon is that it does not run well on certain server hardware: 2 x Xeon E5 2660 v3.
When 40 threads are active, CPU utilization drops to 5-10%. This server has two physical CPUs and supports NUMA. The application was not written with the NUMA model in mind, so we certainly have many accesses to non-local memory, and that should be improved. But the question is: can low displayed CPU utilization be caused by slow memory access?
I believe this is the case, but a colleague said that the CPU utilization would nevertheless stay at 100%. This matters because if he is right, then the trouble does not come from memory misplacement. I don't know how Windows 10 counts CPU utilization, so I hope somebody knows from practical experience with server hardware whether the displayed CPU utilization drops when the memory controllers are congested.
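In the meantime, one way I can gather evidence is to sample per-core utilization while the 40 threads run; here is a minimal sketch, assuming the third-party psutil package. My understanding is that a thread stalled on memory still accrues CPU time and shows as busy, so persistently low readings would point at blocking (locks, I/O, scheduling) rather than slow memory:

```python
# Count how many logical CPUs are busy while the 40 worker threads run.
# Minimal sketch: assumes psutil. A memory-stalled thread still registers
# as busy, so persistently low numbers suggest blocking, not memory latency.
import psutil

for _ in range(10):
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    busy = sum(1 for p in per_core if p > 50)
    print(f"{busy}/{len(per_core)} logical CPUs above 50%: {per_core}")
```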
I have a system with a 2.8 GHz processor (20 physical cores, 40 logical cores), 128 GB of RAM, and a 4 TB hard drive.
Scenario:
I am running 3 independent Python processes/scripts that read data from files and write it to a database. They take a long time, yet CPU and memory usage never reach 100%, not even 40%.
Why is that? (I think it depends on the OS.)
How can I configure the system to utilise the CPU and memory more?
I am using Windows 8.1.
Take a look at ProcessorAffinity and PriorityClass:
https://msdn.microsoft.com/en-us/library/system.diagnostics.processthread.processoraffinity(v=vs.110).aspx
https://msdn.microsoft.com/en-us/library/system.diagnostics.process.priorityclass(v=vs.110).aspx
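Those two links are for .NET; since your scripts are Python, here is a rough equivalent, a minimal sketch assuming the third-party psutil package (the priority constant shown exists on Windows only):

```python
# Pin the current process to chosen logical CPUs and raise its priority.
# Minimal sketch: assumes psutil; HIGH_PRIORITY_CLASS is Windows-only.
import psutil

p = psutil.Process()            # the current process
p.cpu_affinity([0, 1, 2, 3])    # restrict the process to these logical CPUs
p.nice(psutil.HIGH_PRIORITY_CLASS)
print(p.cpu_affinity(), p.nice())
```

Note that affinity and priority only help if the process actually has runnable work; they won't speed up a script that is waiting on disk or the database.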
A process (including a Python script) isn't going to use more cores than it has running threads. So if your Python script is single-threaded, it will only ever use a single core.
Further, disk and database operations stall the process while it is blocked on I/O and the network (effective CPU usage: 0).
In other words, your program may not be CPU-bound if it is doing a lot of I/O.
I'm not sure what your programs do, but if the problem at hand can be parallelized (split into multiple independent tasks), it might lend itself to using more threads or processes to take advantage of the extra hardware you have. It is tricky to get this right, though, and hard to guarantee a performance gain.
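If the per-file work really is independent, here is a standard-library sketch of the fan-out; process_file and FILES are hypothetical placeholders for your own code. If the jobs are I/O-bound, extra workers may simply queue up behind the same disk or database:

```python
# Fan independent file -> database jobs out across worker processes.
# Minimal sketch: process_file and FILES are hypothetical placeholders.
from multiprocessing import Pool

def process_file(path):
    # read the file and write its rows to the database here
    return path

FILES = ["a.csv", "b.csv", "c.csv"]

if __name__ == "__main__":  # guard required on Windows
    with Pool(processes=3) as pool:
        for done in pool.imap_unordered(process_file, FILES):
            print("finished", done)
```

Processes rather than threads are used here because CPython's global interpreter lock prevents Python threads from running bytecode on multiple cores at once.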
When I use the desktop PCs at my university (which have 4 GB of RAM), calculations in MATLAB are fairly speedy, but on my laptop (which also has 4 GB of RAM) the exact same calculations take ages. My laptop is much more modern, so I assume it has a similar clock speed to the desktops.
For example, I have written a program that calculates the solid angle subtended by 50 disks at 500 points. On the desktop PCs this calculation takes about 15 seconds; on my laptop it takes about 5 minutes.
Is there a way to reduce the time taken to perform these calculations? For example, can I allocate more RAM to MATLAB, or can I boot my PC in a way that optimises it for running MATLAB? I'm thinking that if the processor on my laptop is also doing calculations for other programs, this will slow down the MATLAB calculations. I've closed all other applications, but I know there's probably a lot going on that I can't see. Can I boot my laptop in a way that has fewer of these things running in the background?
I can't modify the code to make it more efficient.
Thanks!
You might run some of my benchmarks, which, along with example results, can be found via:
http://www.roylongbottom.org.uk/
The CPU core design used at any particular point in time is essentially the same across Pentiums, Celerons, Core 2s, Xeons and others; the only differences are L2/L3 cache sizes and external memory bus speeds. So you can compare most results with those from similar-vintage 2 GHz CPUs. Things to try, besides simple number-crunching tests:
1 - Try a memory test, such as my BusSpeed, to show that the caches are being used and the RAM is not dead slow (a rough sketch appears after this list).
2 - Assuming Windows, check in Task Manager that the offending program is the one using most of the CPU time, and that CPU utilisation is around zero when the program is not running.
3 - Check that the CPU temperature is not too high, for example with SpeedFan (a free download).
4 - If the disk light is flashing, too much RAM might be in use, with some being swapped in and out; the Task Manager Performance tab would show this. Increasing RAM demands can be checked with some of my reliability tests.
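As a stand-in for item 1, here is a crude pure-Python sketch comparing copy throughput for a cache-sized buffer against a RAM-sized one. BusSpeed itself is compiled code, so treat only the ratio between the two numbers as meaningful:

```python
# Crude memory-speed check: time full copies of a small (cache-friendly)
# buffer and a large (RAM-bound) one. Pure-Python sketch; interpreter
# overhead means the ratio, not the absolute GB/s, is the signal.
import time

def copy_gbps(nbytes, repeats):
    buf = bytearray(nbytes)
    t0 = time.perf_counter()
    for _ in range(repeats):
        _ = bytes(buf)  # forces a full copy of the buffer
    return repeats * nbytes / (time.perf_counter() - t0) / 1e9

print("256 KB buffer:", round(copy_gbps(256 * 1024, 2000), 2), "GB/s")
print("256 MB buffer:", round(copy_gbps(256 * 1024 * 1024, 3), 2), "GB/s")
```

A healthy machine copies the small buffer noticeably faster than the large one; if both are extremely slow, suspect the RAM or its configuration.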
There are many things that go into computing power besides RAM. You mention processor speed, but there are also the number of cores, GPU capability, and more. Programs like MATLAB are designed to take advantage of features like parallelism.
Summary: You can't compare only RAM between two machines and expect to know how they will perform with respect to one another.
Side note: 4 GB is not very much RAM for a modern laptop.
First, you should run a CPU performance benchmark on both computers.
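For a quick like-for-like check, here is a tiny pure-Python sketch you could run on both machines; absolute numbers depend on the Python build, so compare the ratio between the two computers:

```python
# Tiny single-core CPU benchmark: time a fixed amount of integer work.
# Minimal sketch: run on both machines and compare the elapsed times.
import time

t0 = time.perf_counter()
total = 0
for i in range(10_000_000):
    total += i * i
print(f"{time.perf_counter() - t0:.2f} s (checksum {total})")
```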
Modern operating systems usually apply the most aggressive power-management schemes when running on a laptop. This usually means turning off one or more cores, or setting them to a very low frequency. For example, on battery, a quad-core CPU that normally runs at 2.0 GHz could be throttled down to 700 MHz on one core while the other three are essentially put to sleep. (The numbers are illustrative, not taken from a real example.)
The OS manages the CPU frequency dynamically, tweaking it on the order of seconds. You will need a monitoring tool that actually asks for the CPU frequency every second (without doing busy work itself) in order to know whether this is the case.
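A minimal polling sketch, assuming the third-party psutil package (cpu_freq() may return None on platforms that do not expose the frequency):

```python
# Poll the reported CPU frequency once a second to spot throttling.
# Minimal sketch: assumes psutil; cpu_freq() can return None on some systems.
import time
import psutil

for _ in range(15):
    freq = psutil.cpu_freq()
    if freq is None:
        print("CPU frequency not exposed on this platform")
        break
    print(f"current {freq.current:7.1f} MHz (max {freq.max:.0f} MHz)")
    time.sleep(1)
```

Run it once on battery and once plugged in; a large gap in the reported frequency confirms power management as the culprit.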
Plugging in the laptop will make the OS use a less aggressive power management scheme.
(If this is found to be unrelated to MATLAB, please "flag" this post and ask a moderator to move this question to the Super User site.)
I have two servers, one running a Core i7 920 (8 logical CPUs at 2.8 GHz), the other a Xeon X3430 (4 logical CPUs at 2.4 GHz). For the same .NET 4 application, CPU usage on the first machine is 6%; on the second machine it is 50%! I wonder what makes this huge difference, and how can I diagnose the cause?
It's not just the CPU that matters: are you saturating I/O? Perhaps the faster machine spends its time waiting on disk or network, so its CPU shows as mostly idle, whereas the slower machine is rattling along with its CPU as the bottleneck, and so it shows as fully utilised.
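To test the I/O theory, you could sample system-wide disk throughput on each server while the application runs; a minimal sketch assuming the third-party psutil package:

```python
# Sample system-wide disk throughput once a second while the app runs.
# Minimal sketch: assumes psutil; run on both servers and compare.
import time
import psutil

prev = psutil.disk_io_counters()
for _ in range(10):
    time.sleep(1)
    cur = psutil.disk_io_counters()
    read_mb = (cur.read_bytes - prev.read_bytes) / 1e6
    write_mb = (cur.write_bytes - prev.write_bytes) / 1e6
    print(f"read {read_mb:7.1f} MB/s  write {write_mb:7.1f} MB/s")
    prev = cur
```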
Locking might also play a part. A simple test app I wrote a long time ago showed large performance differences between single-core and quad-core systems (the single core was a lot faster; I think .NET optimised away the locks on it, whereas the quad-core suffered from contention).
In short, unless there's a fair bit more information on the problem, no one can give you anything other than guesses as to the cause.