Disable CPU core to get higher overclocks?

Maybe this is not the right forum for this question. But I'm really curious about this.
If I have a CPU with 4 cores @ 3 GHz and I disable one core in the BIOS, can I overclock to 4 GHz without changing the voltage? (Since disabling a core should keep total power usage about the same.) Of course I realize one has to consider temperatures as well.
The background is this: I work with Python and don't need all my CPU cores; only clock speed influences my performance. Perhaps I can speed things up a bit this way.
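For what it's worth, you can check on your own machine that CPU-bound pure-Python code is pinned to one core by the GIL, which is why clock speed is what matters here. A minimal sketch (the busy-work function and counts are arbitrary choices of mine):

```python
import time
from threading import Thread

def busy(n=10_000_000):
    # Pure-Python CPU-bound loop; under CPython it holds the GIL throughout.
    s = 0
    for i in range(n):
        s += i * i
    return s

# Same total work, serial vs. four threads:
t0 = time.perf_counter()
for _ in range(4):
    busy()
print("serial:   %.2f s" % (time.perf_counter() - t0))

t0 = time.perf_counter()
threads = [Thread(target=busy) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("threaded: %.2f s" % (time.perf_counter() - t0))
# Under CPython the two times come out about equal: extra cores don't
# help this workload, so per-core clock speed is what counts.
```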

Related

CPU usage is higher than expected to be possible (over 200% with a total of two cores)

Correct me if I'm wrong, but based on answers to this question and this question, I understand that CPU usage can easily go up to 100% * number of processors * number of cores per processor without affecting performance too badly; e.g. if I have one processor with two cores, my CPU usage should easily be able to reach 200%.
I just checked with top while training a small neural network in Python / TensorFlow, and Python is consistently using over 300% of my single dual-core processor.
I have not noticed poor performance in any other applications. How is this possible?
It could be that your particular CPU supports multiple logical threads per physical core by some other mechanism. This question suggests some ways you could discover if this is the case.
With the caveat that I'm kind of guessing, another reason you might see this effect is with Intel's Turbo Boost feature interacting with the way CPU counters and CPU usage are reported.
The 100% * nCores thing breaks down in the presence of things like hyper-threading, CPU frequency scaling, and the murky relationship between what CPU vendors market as "threads" vs. "cores".
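One quick way to check the logical-vs-physical situation, assuming Python with the third-party psutil package is available (psutil is my assumption, not something from the question):

```python
import psutil  # third-party: pip install psutil

logical = psutil.cpu_count(logical=True)    # what top's percentage scale is based on
physical = psutil.cpu_count(logical=False)  # actual physical cores

print(f"logical CPUs:   {logical}")
print(f"physical cores: {physical}")
if logical and physical and logical > physical:
    # With SMT/hyper-threading on, top can legitimately report up to
    # 100% per *logical* CPU, e.g. 400% on a dual-core, four-thread chip.
    print(f"SMT is on: top can report up to {100 * logical}% total")
```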

Does CPU core count matter for code compiling?

I'm about to buy a laptop specifically for software development and want to be sure I don't end up with something I'm unhappy with, as I have experienced the pain of waiting 2+ minutes when compiling code on an old i3 laptop.
I'm looking at something with an i7-7500U CPU and a 256 GB SSD, which I thought would do the job admirably, but then I saw that the CPU only has 2 cores / 4 threads.
(https://www.intel.co.uk/content/www/uk/en/products/processors/core/i7-processors/i7-7500u.html)
Is compiling (a C# MVC web application) mainly down to single-thread performance, or is it likely that I'll see a significant improvement by spending a bit more and going for a 4 core / 8 thread CPU?
If the compiler schedules the build across all CPU threads, then core count matters. If not, there is not much benefit in increasing the core/thread count.
You can check CPU usage while you compile the program and see whether it is high enough. If it stays below 50%, the build is probably only using one or two threads, and a higher core/thread count will not give you much benefit. A minimal monitor is sketched below.
Also, if your old laptop has an HDD rather than an SSD, the performance bottleneck is probably disk reads/writes, not the CPU. An SSD would help a lot.
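For what it's worth, MSBuild can build the projects in a solution in parallel (the /m switch), so multi-project C# solutions often do use several cores. To measure it, you could run a small sampler alongside a build; a rough sketch assuming Python with the third-party psutil package, with the 50% threshold taken from the answer above:

```python
import psutil  # third-party: pip install psutil

# Sample total CPU utilisation once a second while the build runs.
print("Start your build now; sampling for 30 seconds...")
samples = [psutil.cpu_percent(interval=1) for _ in range(30)]  # each call blocks 1 s

peak = max(samples)
print(f"peak CPU usage: {peak:.0f}%")
if peak < 50:
    print("Build looks mostly serial; more cores probably won't help much.")
else:
    print("Build uses multiple threads; more cores should help.")
```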

How to reduce time taken for large calculations in MATLAB

When using the desktop PCs in my university (which have 4 GB of RAM), calculations in MATLAB are fairly speedy, but on my laptop (which also has 4 GB of RAM), the exact same calculations take ages. My laptop is much more modern, so I assumed it would also have a similar clock speed to the desktops.
For example, I have written a program that calculates the solid angle subtended by 50 disks at 500 points. On the desktop PCs this calculation takes about 15 seconds; on my laptop it takes about 5 minutes.
Is there a way to reduce the time taken to perform these calculations? E.g., can I allocate more RAM to MATLAB, or can I boot my PC in a way that optimises it for running MATLAB? I'm thinking that if the processor on my laptop is also doing calculations for other programs, this will slow down the MATLAB calculations. I've closed all other applications, but I know there's probably a lot going on that I can't see. Can I boot my laptop in a way that has fewer of these things running in the background?
I can't modify the code to make it more efficient.
Thanks!
You might run some of my benchmarks which, along with example results, can be found via:
http://www.roylongbottom.org.uk/
The CPU core design used at any particular point in time is much the same across Pentiums, Celerons, Core 2s, Xeons and others; the only differences are L2/L3 cache sizes and external memory bus speeds. So you can compare most results with similar-vintage 2 GHz CPUs. Things to try, besides simple number-crunching tests:
1 - Run a memory test, such as my BusSpeed, to show that the caches are being used and that RAM is not dead slow (a crude stand-in is sketched after this list).
2 - Assuming Windows, check in Task Manager that the offending program is the one using most CPU time, and that, with the program not running, CPU utilisation is around zero.
3 - Check that the CPU temperature is not too high, for example with SpeedFan (free download).
4 - If the disk light is flashing, too much RAM might be in use, with some being swapped in and out. Task Manager's Performance tab would show this. Increasing RAM demands can be checked by some of my reliability tests.
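Not BusSpeed itself, but a crude Python/NumPy stand-in for item 1 that compares cache-resident and RAM-resident read bandwidth (array sizes and repeat counts are arbitrary choices of mine):

```python
import time
import numpy as np

def read_bandwidth_mb_s(n_floats, repeats=20):
    a = np.ones(n_floats, dtype=np.float64)
    t0 = time.perf_counter()
    for _ in range(repeats):
        a.sum()                     # streams through the whole array
    dt = (time.perf_counter() - t0) / repeats
    return a.nbytes / dt / 1e6

# A 512 KB array should sit in cache; a 256 MB array spills to RAM.
# A healthy machine still shows respectable RAM bandwidth; a sick one won't.
print("cache-sized: %8.0f MB/s" % read_bandwidth_mb_s(64 * 1024))
print("  RAM-sized: %8.0f MB/s" % read_bandwidth_mb_s(32 * 1024 * 1024))
```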
There are many things that go into computing power besides RAM. You mention processor speed, but there is also number of cores, GPU capability and more. Programs like MATLAB are designed to take advantage of features like parallelism.
Summary: You can't compare only RAM between two machines and expect to know how they will perform with respect to one another.
Side note: 4 GB is not very much RAM for a modern laptop.
First, you should run a CPU performance benchmark on both computers.
Modern operating systems usually apply their most aggressive power management schemes when running on a laptop. This usually means turning off one or more cores, or setting them to a very low frequency. For example, a quad-core CPU that normally runs at 2.0 GHz could, on battery, be throttled down to 700 MHz on one core while the other three are basically put to sleep. (The numbers are illustrative, not taken from a real example.)
The OS manages the CPU frequency dynamically, tweaking it on a timescale of seconds. You will need a monitoring tool that actually polls the CPU frequency every second (without doing busy work itself) to know whether this is the case; a minimal poller is sketched below.
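A minimal poller along those lines, assuming Python with the third-party psutil package (cpu_freq() is unsupported on some platforms, in which case it returns None):

```python
import time
import psutil  # third-party: pip install psutil

# Poll the reported CPU frequency once a second. If it sits far below the
# rated clock while MATLAB is grinding, power management is throttling you.
for _ in range(15):
    freq = psutil.cpu_freq()
    if freq is None:
        print("CPU frequency not reported on this platform")
        break
    print(f"current: {freq.current:.0f} MHz (min {freq.min:.0f}, max {freq.max:.0f})")
    time.sleep(1)
```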
Plugging in the laptop will make the OS use a less aggressive power management scheme.

Dual core i7-3540M 3.0 GHz vs Quad core i7-3632QM 2.2 GHz

I intend to buy a laptop to study parallel computing with a GPU and a multicore CPU. I don't know which is better: a dual core i7-3540M at 3.0 GHz or a quad core i7-3632QM at 2.2 GHz. Both laptops have an Nvidia GT 650 graphics card. As far as I know, GPU computing uses just one CPU core, so maybe the dual core with the higher clock speed would give better performance? Can anyone give me a suggestion? I really appreciate any reply. Thank you.
Quad core means twice as many cores as dual core, so it can run more work at once. The i7-3632QM also has a larger L3 cache, meaning more data can be kept close for quick access. The quad core processor therefore seems better for parallel computing, even though it has a lower clock speed.
When you have a single task that needs to be done right away, multiple cores can help by breaking the task into smaller chunks and working on each chunk in parallel, so the work finishes sooner. For lighter tasks you would probably see the higher-clocked dual core do better; for more complex, parallelisable tasks, a quad core processor with a lower clock speed is still likely to win. For scientific and engineering tasks (presuming these are what you expect to do), the quad core would almost always triumph.
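A toy illustration of that trade-off using Python's multiprocessing, assuming the workload actually splits into independent chunks (chunk sizes and worker count are arbitrary):

```python
import time
from multiprocessing import Pool

def chunk(n):
    # CPU-bound stand-in for one independent piece of a parallel job.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    work = [2_000_000] * 8  # eight equal chunks

    t0 = time.perf_counter()
    for n in work:
        chunk(n)
    print("1 worker:  %.2f s" % (time.perf_counter() - t0))

    t0 = time.perf_counter()
    with Pool(4) as pool:   # a quad core can run four chunks at once
        pool.map(chunk, work)
    print("4 workers: %.2f s" % (time.perf_counter() - t0))
    # If the speedup approaches 4x, four slower cores beat two faster ones
    # (3.0 vs 2.2 GHz is only ~36% more clock); if the job won't split,
    # the higher clock wins instead.
```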

What CPU instructions use the most power?

The background is thus: next week our office will have one day with no heating, due to maintenance. The outdoor temperature is expected to be between 7 and 12 degrees Celsius, so it might get chilly. The portable electric heaters are too few to cater for everyone.
However, I have, in my office of about 6-8 m², a big honkin' (3 yrs old) workstation (HP xw8600 with a 3.0 GHz quad-core Xeon) that should be able to output a couple of hundred watts of heat. Running FurMark will max out the GPU, but I'm not sure how best to work the CPU.
Last time I was in a cold office I either compiled more often or just launched 4-8 DOSBoxes running Norton Commander, but I think one can do better by using SSE1/2/3/4, MMX, etc., i.e. stuff that does more work per cycle.
So, what CPU instructions toggle the most transistors each cycle, and thus cause the CPU to draw the most power and give off the maximum heat?
If I had a power meter available, then I could benchmark myself, but I figure this would be a fun challenge for the SO-crowd. :)
For your specific goal, if you really want to use your system as a heat generator, you first need to make sure that the cooling system is working really well (throwing the heat out of the box). Processors today are designed to throttle themselves when they reach a critical temperature, which happens even with a proper heatsink once the processor is at TDP (Thermal Design Power, the maximum power for the processor running normal programs). If you have a better heatsink and good ventilation (box fan?), you can probably get beyond TDP, assuming your power supply can handle it. If you turn the fan off, you will basically hit the thermal limit right away.
To be more explicit, the individual instructions that burn the most power are generally load instructions that miss in the caches and go out to memory. To guarantee misses, you'll want to allocate a chunk of memory that's bigger than the last-level CPU cache and hop around in it. The hopping pattern in the maximum-power case is a bit complex, because you're trying to keep the maximum number of misses outstanding at every level of the cache hierarchy simultaneously: with 3 levels of cache, in a given period of time you can have more misses to the L1 than to the L2 than to the L3 than to the DRAM page (and your processor's specific design will put a total limit on outstanding misses). Between misses, the instruction doesn't matter too much, but I'd guess one of the SSE4 multiplies (PMULUDQ) is probably the best, since on a lot of modern processors they execute pretty quickly and do a whole lot of work (compared to, say, an add).
The funny thing is, running the GPU may limit the amount of heat that you can generate using misses to the L3 cache since the memory may be bogged down by the GPU. In that case, you should make sure that all accesses to the L3 are hits, but that you're missing in the other levels.
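A rough sketch of the "hop around a buffer bigger than the last-level cache" idea, in Python/NumPy (the buffer size is a guess; size it to several times your actual L3, and run one copy per core for the full effect):

```python
import numpy as np

# Allocate far more than the last-level cache (a workstation Xeon of that
# era has a few MB of cache, so 256 MB comfortably spills to DRAM).
buf = np.zeros(32 * 1024 * 1024, dtype=np.float64)  # 256 MB

# Random indices defeat the hardware prefetchers, so most accesses miss
# every cache level and go out to memory -- the high-power case described
# above. Ctrl-C stops the heater.
idx = np.random.randint(0, buf.size, size=1_000_000)
while True:
    buf[idx] += 1.0  # gather/scatter across the whole buffer
```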
For GeForce graphics, my CudaMFLOPS program (free) is quite handy for obtaining high temperatures on the graphics card. If you have an appropriate card, details are in:
http://www.roylongbottom.org.uk/cuda1.htm#anchor8
I find that my tests that execute SSE instructions with data from L1 cache generally produce the highest CPU temperatures.
For the CPU, use Prime95. It is lightweight and will load up all the cores nicely. You aren't really going to get much heat out of a 3 GHz Xeon, though. Chips of that age are usually good for over 4 GHz with average cooling, and close to 5 GHz with high-end water loops. With a 6-core chip @ >4 GHz with extra voltage added you might be hitting 200 W, but with that system you will be lucky to get the CPU to 100 W.
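If Prime95 isn't to hand, a crude stand-in that simply pegs every logical CPU is easy to improvise; a Python sketch (nothing like Prime95's FFT torture tests, just one busy loop per core):

```python
import os
from multiprocessing import Process

def burn():
    # Tight floating-point loop: keeps one core at 100% indefinitely.
    x = 1.0000001
    while True:
        x = x * x % 1e6 + 1.0

if __name__ == "__main__":
    for _ in range(os.cpu_count() or 1):
        Process(target=burn, daemon=True).start()  # one worker per logical CPU
    input("Burning on all cores; press Enter to stop.\n")
```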
As for the GPU, the Heaven benchmark is a good one for quickly getting it up to temperature. Again, unless you have a high-end card, a couple of hundred watts of heat is unlikely. Another alternative on AMD GPUs (maybe Nvidia too?) is to use crypto-currency mining software; maybe get a USB stick with a mining Linux distribution installed and ready to go. You could also run Prime95 on the same rig, as mining software uses very little CPU time.
I actually kept a couple of rooms warm over winter with the heat from a computer, only rarely needing extra heating. This was done with a crypto-currency mining rig, which had 4 gpus running at ~80 degrees C, 24/7, with a box fan to circulate the air round the room. That rig had a 1300W PSU. Might I suggest that instead of trying to use the computer to keep you warm, you wear more clothes?
