Shell that protects against machine hanging due to a bad process

Is there a shell, or technique, that protects me against my entire machine hanging if a process goes haywire?
I'm using Ubuntu 10.10.

If you restrict your resources with limits, then you can even prevent fork bombs from killing your machine.
Here's a nice tutorial: http://www.cyberciti.biz/tips/linux-limiting-user-process.html
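For example, a minimal sketch of both approaches (the username and the numbers are placeholders you would tune for your own machine):

# Per-session caps from the current shell: limit the number of processes
# and the virtual memory each process may use (kilobytes for -v).
ulimit -u 200
ulimit -v 1048576

# Persistent per-user cap (takes effect at next login); append a line of the
# form "<user> <type> <item> <value>" to /etc/security/limits.conf:
echo 'youruser  hard  nproc  200' | sudo tee -a /etc/security/limits.conf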

I bought a multi-core CPU for that. It does not protect against every case in theory, but for all practical purposes it almost always leaves at least one core free - enough to let your interactive "kill" command run. :p

Related

Rails. Free memory of Delayed Job (active record) without process restart

It may be obvious, but I can't see a workable use case for Delayed Job: because of how Ruby's garbage collector works, it doesn't release memory back to the OS, so sooner or later the delayed_job process takes up all the memory anyway, and the only remedy is to restart the process.
But if I restart the delayed_job process while a task is currently running, that task will never be completed. There is probably some workaround to re-run that task later, but that approach seems ugly to me.
I tried real jobs and some simple computations with no variables, symbols, or references, so I don't think "my code leaks". Still, every new job increases the memory of the delayed_job process.
Maybe I'm using Delayed Job for something it isn't designed for? Or could it be an environment problem (I tried it both on my local machine and on a VPS)?
Tested on: Ubuntu 14.04 and Debian 6 (both x86), Rails 3.2, delayed_job 4.0.2, delayed_job_active_record 4.0.1, ruby 2.1.2
I could give some code examples, but, as I mentioned, I tried both a real job and a simple computation, so I'll skip them unless they matter and my mistake is more fundamental.
Given my constraints - tasks can run for a couple of minutes, read and write about 100K records to the database, require a lot of computation, can't be interrupted, and are limited to maybe 10-20 per day - my only guess is to use Resque, because it forks a process for every job, so memory shouldn't accumulate over time.
So am I really doing something wrong, or is it just the nature of DJ to occupy all the memory unless you restart it - and if I can't restart it, should I not use this approach at all?
Everything I've read on the internet (not that much, by the way) says it's a Ruby GC issue - it doesn't give memory back to the OS - and some advise profiling the code for unreferenced objects (that sounds the most plausible for my case, but I experimented a lot with code that doesn't create any objects, explicitly set everything to nil, and called GC.start).
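One workaround that fits the "can't interrupt a running task" constraint is to run the worker as a short-lived process, so its memory goes back to the OS when it exits. A rough shell sketch (assuming the jobs:workoff rake task that delayed_job ships and a production Rails environment; the sleep interval is arbitrary):

# Drain the queue in a fresh process, let it exit (releasing its memory),
# then poll again after a pause. A job that is already running always
# finishes before the process exits.
while true; do
  RAILS_ENV=production bundle exec rake jobs:workoff
  sleep 60
done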

Does Process Overhead Make Linux a Better Node.js Host than Windows?

My understanding is that node.js is designed to scale by adding processes rather than by spawning threads within a process. In fact, from watching an awesome introductory video by Ryan Dahl, I get the idea that spawning threads is forbidden in node.js. I like the simplicity of this approach, but I am concerned that there might be a downside when running on Windows, since process creation is more expensive on Windows than on Linux.
Given modern hardware and the fact that node.js processes can be expected to be relatively long running, does process overhead still create a significant advantage for Linux when considering hosting node.js? To put it in concrete terms, if we assume an organization that is using the Windows stack only, but is planning a big move onto node.js, is there a point in considering a new OS because of this issue?
No. Node.js runs in a single process and doesn't spawn processes during execution.
The reason you might have gotten the impression that Node uses processes to scale is that you can add one process per CPU core so Node can take advantage of a multicore machine (you'll need a load-balancer-like solution for this, though). Still, you don't spawn processes on the fly. So yes, you can run Node perfectly fine on Windows (or Azure) without much of a performance hit, if any.
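As a rough illustration of the one-process-per-core approach (a sketch only, Linux shell shown; app.js and the PORT variable are assumptions about your app, and you would still need a load balancer such as nginx in front):

# Start one Node process per CPU core, each listening on its own port.
CORES=$(nproc)
for i in $(seq 0 $((CORES - 1))); do
  PORT=$((3000 + i)) node app.js &   # app.js is assumed to read process.env.PORT
done
wait   # keep this script alive while the workers run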

What to do when Windows goes into the dreaded 100% CPU usage zombie mode

This happens to me occasionally:
I start my program in Visual Studio, and due to some bug it goes into 100% CPU usage and basically freezes Windows completely.
Only with utter patience, by requesting the Task Manager (which takes forever to come up and paint itself), can I kill my process.
Do others encounter this too sometimes? Is there a clever trick to bring such a process down (other than pulling the plug and possibly ruining files on the HD)? It now takes 5-10 minutes to kill it properly if the Task Manager doesn't happen to be open already and I have to launch it first.
R
P.S. It's weird that a 'multitasking OS' can still allow processes to eat up so much time that nothing else can be done anymore. My program doesn't even bump up its thread priorities or anything.
Check out Process Lasso
"Process Lasso is a unique new technology that will, amongst other things, improve your PC's responsiveness and stability. Windows, by design, allows programs to monopolize your CPU without restraint -- leading to freezes and hangs. Process Lasso's ProBalance (Process Balance) technology intelligently adjusts the priority of running programs so that badly behaved or overly active processes won't interfere with your ability to use the computer!"
http://www.bitsum.com/prolasso.php
I am not affiliated with Bitsum, just a user of their product, and it helps me solve this type of problem.
For what it's worth, I've never seen this on either XP 64 or Vista 64, developing C++ apps in Visual Studio. Perhaps an OS upgrade is in order?
Edit: I use Process Explorer as a replacement Task Manager - it wouldn't surprise me if it did a better job of appearing in good time even when there's a rogue process running. And you can use it to boost its own priority.
I usually hit Ctrl-Alt-Delete, start the Task Manager, sort by CPU, find the offending process, then right-click and end the process.
Task Manager usually has enough priority to do this, although it may be slow.
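If even Task Manager is too sluggish to appear, the command line can do the same job; a sketch (myapp.exe is a placeholder for your process name):

REM Launch the build at below-normal priority so the rest of the desktop stays responsive
start /belownormal myapp.exe

REM Kill the runaway process by image name without waiting for Task Manager
taskkill /f /im myapp.exe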
I think a shotgun to the head is the only way to be sure.
I generally don't see anything like this happen strictly as a function of an app that's eating 100% CPU. As part of stability / performance testing, I've gotten apps to cause Windows to get very slow, but this is usually done by writing heavily threaded apps (thus causing the O/S scheduler to thrash), or by writing apps that consume all available system memory or resources (much more impactful to the GUI apps than simply one thread that consumes its full share of processor time during its slices).
You say you get this behavior under Visual Studio? VS has a "Pause" button...

Temporarily suspend the PC operating system

How does one programmatically cause the OS to switch off, go away and stop doing anything at all so that a program may have complete control of a PC system?
I'm interested in doing this from both an MS Windows and Linux environments. Any languages or APIs considered.
I want the OS to stop preempting my program, stop its virtual memory management, stop its device drivers and interrupt service routines from running and basically just go away. Then, when my program has had its evil way with the bare metal, I want the OS to come back again without a reboot.
Is this even possible?
With Linux, you could use kexec jump to transfer control completely to another kernel (ie, your program). Of course, with great power comes great responsibility - it is entirely up to you to service interrupts, and avoid corrupting the old kernel's memory. You'll end up having to write your own OS kernel to do this. Also, the transfer of control takes quite some time, as the kernel has to de-initialize all hardware, then reinitialize it when it's time to resume. Since kexec jump was originally designed for hibernation support, this isn't a problem in its original context, but depending on what you're doing, it might be a problem.
You may want to consider instead working within the framework given to you by the OS - just write a normal driver for whatever you're doing.
Finally, one more option would be using the Linux real-time (RT) patchset. This lets you assign static priorities to everything, even interrupt handlers; by running a process with a higher priority than anything else, you can suspend /nearly/ everything - the system will still service a small stub for interrupts, as well as certain interrupts that can't be deferred, like timing interrupts, but for the most part the heavy work will be deferred until you relinquish control of the CPU.
Note that the RT patchset won't stop virtual memory and the like - mlockall will prevent page faults on valid pages though, if that's enough for you.
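A minimal sketch of the RT-priority route (assuming a PREEMPT_RT kernel and that ./myprogram is a placeholder for your binary; needs root or CAP_SYS_NICE):

# Run under SCHED_FIFO at a high real-time priority so it preempts
# nearly everything else on the CPU.
sudo chrt -f 98 ./myprogram

# Verify which scheduling policy and priority the process actually got.
chrt -p $(pidof myprogram)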
Also, keep in mind that whatever you do, the system BIOS can still cause SMM traps, which cannot be disabled, except by motherboard-model-specific methods.
There are lots of really ugly ways to do this. You could modify the running kernel by writing some trampoline code to /dev/kmem that passes control to your application. But I wouldn't recommend attempting something like that!
Basically, you would need to have your application act as its own operating system. If you want to read data from a file, you would have to figure out where the data lives on disk, and generate your own SCSI requests to talk to the disk drive. You would have to implement your own interrupt handler to get notified when the data is ready. Likewise you would have to handle page faults, memory allocation, etc. Most users feel that this isn't worth the effort...
Why do you want to do this?
Is there something that your application needs to do that the OS won't let it do? Are you concerned with the OS impact on performance? Something else?
If you don't mind shelling out some cash, you could use IntervalZero's RTX to do this for a Windows system. It's a hard realtime subsystem that gets installed on a Windows box as sort of a hack into the HAL and takes over the machine, letting Windows have whatever CPU cycles are left over.
It has its own scheduler and device drivers, but if you run your program at the top RTX priority, don't install any RTX device drivers (or disable interrupts for the duration), then nothing will interrupt it.
It also supports a small amount of interaction with programs on the Windows side.
We use it as a nice way to get a hard realtime box that runs Windows.
coLinux loads CoLinuxDriver into the NT kernel or a colinux.ko into the Linux kernel. It does exactly what you asked – it "unschedules" the host OS, and runs its own code, with its own memory management, interrupts, etc. Then, when it's done, it "reschedules" the host OS, allowing it to continue from where it left off. coLinux uses this to run a modified Linux kernel parallel to the host OS.
Unlike more common virtualization techniques, there are no barriers between coLinux and the bare metal hardware at all. However, hardware and the host OS tend to get confused if the coLinux guest touches anything without restoring it before returning to the host OS.
Not really. Operating Systems are a foundation, and your program runs on top of them. The OS handles memory access, disk writing operations, communications, etc. when your application makes requests, and asking the OS to move out of the way would mean that your program would have to do the OS's job instead.
Not as such, no.
What you want is basically an application that becomes an OS; a severely stripped down Linux kernel coupled with some highly customized and minimized tools might be the way to go for this.
If you were devious and wanted to avoid a lot of the operating system housekeeping, you could probably hook yourself into a driver routine. Thinking out loud here, verging on hacking: Google how to write rootkits.
Yeah dude, you can totally do that, you can also write a program to tell my bank to give you all my money and send you a hot Russian.

Win32 console processes in VISTA - 10% CPU, but VERY SLOW

I have a Win32 console application which is doing some computations, compiled in Compaq Visual Fortran (which probably doesn't matter).
I need to run a lot of them simultaneously.
In XP, they take around 90-100% CPU together, work very fast.
In Vista, no matter how many of them I run, they take no more than 10% of CPU (together), and work very slow respectively.
There is quite a bit of console output going on, but not VERY much.
I can minimize all the windows; it does not help. The CPU is basically doing nothing...
Any ideas?
Update:
1. No, these are different machines, but they have roughly the same hardware.
2. Threads are not used; this is a VERY OLD (20-year-old) plain DOS app compiled for Win32. It is supposed to compute iterations until they converge, consuming everything it can get.
My impression: Vista just does NOT GIVE IT MORE CPU.
Have you tried redirecting the console output to a file?
If your applications are being held up writing to the console (this happens sometimes unfortunately) then redirecting the output should help, as it's much quicker to write to a simple file than write to the console.
You do this like so
c:\temp> dir > output.log
If you really don't care about the output at all, you can throw it away by redirecting to nul, e.g.:
c:\temp> dir > nul
There was a known "feature" in Vista that limits certain console applications to 32MB of RAM. I don't know if those compiled by Compaq Visual Fortran are affected by this "feature."
This article appears to have been updated as recently as October 2008, so the problem still exists.
To expound on Daok's post - your XP machine might be CPU-bound for this process, whereas the Vista machine is bound by some other resource.
To clarify:
Output to stdout (or elsewhere) can be slowing down the processing (as can context switching, file access, etc.).
As Tim hinted, console output (stdout) is EXTREMELY expensive.
I suggest rerunning your test while redirecting the console output to a separate log file for each process. If possible, tune down the verbosity of the output in another test run.
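For example, a sketch of per-process redirection (myapp.exe and the log file names are placeholders):

REM Launch each instance with its console output redirected to its own log file
start "" cmd /c "myapp.exe > worker1.log 2>&1"
start "" cmd /c "myapp.exe > worker2.log 2>&1"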
Beyond that, there are other obvious possibilities: is the hardware significantly different, are there other major processes running, is there a shared resource that is under contention?
Other than the obvious, look for a nonobvious resource contention such as a shared file.
But the main area where I would look is whether there is a significant difference in how your code is compiled for the two OS environments--I wonder if your Fortran code is incurring some kind of special penalty when running on Vista, such as a compatibility mode. Look to see how well Vista is supported and whether you can target your compile for Vista specifically. Also look for anyone reporting similar issues, such as in bug reports, feature requests, etc.
Your loops are obviously not simple computations. There is a blocking system call in there somewhere. Just because it worked on XP doesn't mean the app is bug free.
Since you can minimize the console windows and see no improvement, I would not consider that an issue. In my experience console output slows a program down only if the console window is drawing text, not when it's minimized.
Is the hardware the same on your Vista and XP machines? It might use just 10% on Vista because it doesn't need more. Are you using threads? I think we need more information about your project to get a better idea. Have you tried using a profiler to see what's going on?
