Simulate high CPU, network, etc. using cmd on Windows 10?

I need to simulate high load on CPU, memory, disk, and network on my Windows 10 machine for project purposes (progressively higher, up to 100%).
So far I've been able to find VB scripts and tools (https://blogs.msdn.microsoft.com/vijaysk/2012/10/26/tools-to-simulate-cpu-memory-disk-load/) that work to this end. Is there a way I can do this in the command terminal?
Is there something in Windows similar to using /dev/null in a Linux shell, where the % usage can be specified as well?
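For the CPU part at least, a minimal PowerShell sketch can be run from an ordinary terminal. This only covers CPU load, and the job count is a rough dial rather than a precise percentage; memory, disk, and network would need their own generators:

    # Spin one busy loop per logical core to push total CPU toward 100%.
    # For progressively higher load, start fewer jobs first (e.g. half
    # the cores for roughly 50%) and add more as needed.
    $cores = [Environment]::ProcessorCount
    1..$cores | ForEach-Object {
        Start-Job -ScriptBlock { while ($true) { } }
    }

    # Tear the load back down when done:
    Get-Job | Stop-Job
    Get-Job | Remove-Job

Each job runs in its own background PowerShell process, so each busy loop saturates one logical core; inserting a Start-Sleep inside the loop gives a cruder way to duty-cycle the load below 100%.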

Related

f3probe for Mac?

I'm using f3, a Linux version of the Windows tool H2testw, to test on a Mac the actual capacity of some flash memory I bought. Trouble is that the quicker test, done via f3probe, is only available for Linux, so I'm using the standard test, via the GUI F3X, which does a full integrity test with f3write/f3read rather than just a total-capacity test. The flash I bought is claimed to be 512 GB, so it's taking forever. What are the best alternative bets? Running f3probe in VirtualBox? Running H2testw through Wine?

How is memory management in Windows different from Linux? Does Windows support paging or segmentation?

I am curious to know about the difference between memory management in Windows and Linux. Does Windows support paging or segmentation?
I am trying to understand why, if all processes cumulatively use all the RAM on a Windows machine, every user is prevented even from logging in to the system, but that is not the case with Linux systems.
So how is this achieved on Linux systems?
In addition to the recent posts: Windows 10 also supports compression of RAM. That means that before Windows tries to swap memory out to the hard drive, it will try to compress the RAM.
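If you want to verify this on your own machine, the setting is exposed through the MMAgent cmdlets (run from an elevated PowerShell prompt):

    # Show current memory-management settings, including MemoryCompression
    Get-MMAgent

    # Toggle compression explicitly
    Enable-MMAgent  -MemoryCompression
    Disable-MMAgent -MemoryCompression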

How to enable CUDA only for computing purposes, not for display

I am using an NVIDIA GT 440 GPU. It is used for both display and computation, which leads to lower performance during computation. Can I enable it only for computational purposes? If so, how can I stop it from being used for display?
It depends -- are you working on Windows or Linux? Do you have any other display adapters (graphics cards) in the machine?
If you're on Linux, you can run without an X server (i.e., from a console) and SSH into the box (or attach your display to another adapter).
If you're on Windows, you need to have a second display adapter. As long as your display is connected to your GeForce GT 440, there's no way to use it only for computational purposes. That also includes Remote Desktop, which won't work at all unless you have a Tesla card, because of the way the WDDM (Windows Display Driver Model) was designed (it can't be accessed from within Session 0, which is where the RDP service runs).
I'm using Intel integrated graphics for display purposes and the GPU for compute purposes on Linux.
You'll need to set this up in the BIOS so the motherboard's integrated graphics drive the display. This will leave your GPU free. It depends on the hardware available. =)
How much does it affect performance? I checked before; the display in Windows did take up some memory (less than 10 MB).
Check that you have write permission on the /dev/nvidia* devices. The CUDA C Getting Started Guide for Linux contains a script that automatically sets the correct permissions at startup.

Emulating a processor's (limited) resources, including clock speed

I would like a software environment in which I can test the speed of my software on hardware with specific resources. For example, how fast does this program run on an 800 MHz x86 with 24 MB of RAM, when my host hardware is a 3 GHz quad-core amd64 with 12 GB of RAM? Emulators such as QEMU make a great point of running "almost as fast" as the underlying hardware; I would like to make it run slower. Is there a way to do that?
I have never tried it, but perhaps you could achieve what you want to some extent by combining an emulator like QEMU or VirtualBox on Linux with something like this:
http://cpulimit.sourceforge.net/
If you can limit the CPU time available to the emulator you might be able to simulate the results of execution on a slower computer. Keep in mind, though, that this would only affect the execution speed (or so I hope, anyway).
The CPU instruction set and other system features would remain unchanged. This means that emulating a specific processor accurately would be difficult if not impossible.
In addition, using something like cpulimit, which works using SIGSTOP and SIGCONT to repeatedly stop/restart the emulator process might cause side-effects, such as timing inconsistencies, video display artifacts etc.
In your emulator, keep a virtual "clock" and increment it appropriately as you execute each instruction. From there you can simply report how long it took in virtual time to execute, or you can have your emulator sleep now and again to keep execution speed roughly where it would be in the target.
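As a sketch of that pacing idea in PowerShell (the numbers are toys: a real emulator would charge realistic cycle counts per opcode inside its dispatch loop, and a 100 kHz target is used here only so the sleeping is visible from a script):

    # Toy pacing loop: charge virtual cycles per simulated instruction,
    # then sleep whenever real time runs ahead of the simulated target.
    $targetHz      = 1e5   # toy target clock; an 800 MHz target would be 800e6
    $cyclesPerOp   = 1     # cycles charged per simulated instruction
    $virtualCycles = 0
    $sw = [System.Diagnostics.Stopwatch]::StartNew()

    for ($i = 0; $i -lt 200000; $i++) {
        # ... execute one emulated instruction here ...
        $virtualCycles += $cyclesPerOp

        # Every 10,000 instructions, sleep off any lead over the target.
        if ($i % 10000 -eq 0) {
            $targetMs = $virtualCycles / $targetHz * 1000
            $aheadMs  = $targetMs - $sw.Elapsed.TotalMilliseconds
            if ($aheadMs -gt 1) { Start-Sleep -Milliseconds ([int]$aheadMs) }
        }
    }
    "Virtual: $($virtualCycles / $targetHz) s, real: $($sw.Elapsed.TotalSeconds) s"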

Did Turbo C 3.0 and lower versions really use high CPU power?

I am using Turbo C 3.0 and Turbo C 2.0 for programming, on Windows XP. Under Windows 98 these programs worked fine, but after installing XP they really slow down my system. They use high CPU power even when idle (idle meaning no interaction between the program and the user).
If anybody has solved this issue before, please post here.
I'd also like to know what is causing the slowdown!
Those are 16-bit DOS programs, and they will not run natively on XP; they are probably running inside the NT Virtual DOS Machine. Use the Task Manager, or better yet, Process Explorer, to check this. You will probably not see your programs running; look for instances of ntvdm.exe instead.
I have noticed that several antivirus programs (Checkpoint, Proventia Desktop) seem to have a problem with ntvdm: they eat up quite a bit of CPU when an ntvdm instance is running.
Also, wasn't Turbo C finicky about its extended-memory settings? If you still have your Autoexec.bat and Config.sys files from the Win98 system, you could try changing XP's settings to match. The XP equivalents of these files are autoexec.nt and config.nt; they are in the Windows\System32 directory.
I suspect Adrian's comment is the correct answer: old DOS programs did not account for multitasking and so tended to put themselves in tight loops when "idle". Back in the day, it didn't matter as nothing else was running at the same time and the operating system would interrupt the running program to handle hardware, well, interrupts.
I would highly recommend avoiding such tools on modern hardware, because the programs they generate are likewise not multitasking-friendly. They are also going to be optimized for ancient processors and have limited memory addressing. If you have some old hardware and want to goof around with it, then knock yourself out. But there are plenty of modern compilers that are free (either free of charge, as Visual C++ Express is, to get you hooked, or open source).
This can be partially avoided by setting the process priority:
1. Start the app, e.g. Turbo C++ 3.0.
2. Minimize it and go to Task Manager.
3. Find ntvdm.exe.
4. Right-click > Set Priority > Low > Yes.
Then it runs at less annoying speeds.
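The same thing can be done from a terminal; a small PowerShell sketch (it assumes the program is already running under ntvdm.exe):

    # Drop every running ntvdm.exe instance to Idle priority
    # ("Low" in Task Manager's context menu).
    Get-Process -Name ntvdm -ErrorAction SilentlyContinue |
        ForEach-Object { $_.PriorityClass = 'Idle' }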
