Let say I want a Ruby process to not use more than 15% of the CPU. Is it possible? How?
You could try using Process.setrlimit from Ruby's core library:
Sets the resource limit of the process.
This looks like it is just a wrapper around the setrlimit function from the C library, so it may only be available on Unix-ish platforms. setrlimit doesn't support CPU percentage limits, but it does support limiting CPU time in seconds.
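For example, a minimal sketch (assuming a Unix-ish platform) that caps total CPU time rather than a percentage:

```ruby
# Limit this process to 10 CPU-seconds (soft) / 15 CPU-seconds (hard).
# Exceeding the soft limit raises SIGXCPU; exceeding the hard limit kills
# the process. The constants are only defined on platforms with setrlimit.
Process.setrlimit(Process::RLIMIT_CPU, 10, 15)
```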
If you're just trying to keep your Ruby process from hogging the whole CPU then you could try adjusting its priority with Process.setpriority which is just a wrapper around libc's setpriority and offers some control over your process's scheduling priority. Again, availability will probably be limited by your platform but it should work on Linux, OSX, or any other Unix-ish system.
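A minimal sketch of the setpriority approach (the nice value of 19 is just an illustrative choice):

```ruby
# Give the current process (pid 0 = self) the lowest scheduling priority,
# so the kernel prefers other runnable processes when the CPU is contended.
Process.setpriority(Process::PRIO_PROCESS, 0, 19)
```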
I need to run a compiler, and people have previously found that it runs well on a single core.
Now that Intel's 12th generation consumer chips have separate P-cores and E-cores, can I somehow tell the compile worker to run specifically on a P-core, so that it gets the fastest core on my machine?
Unless someone restricts the affinity mask on purpose, Windows is free to move threads to different CPUs as it sees fit. This most likely includes putting CPU intensive tasks on the faster cores.
This applies not just to different physical core types but also to the older core-parking feature and to the physical core layout on NUMA systems.
How Windows uses the cores is probably also influenced by battery status and by whether features such as Battery Saver are enabled.
I'm (re)writing a socket server in Ruby in hopes of simplifying it. Reading about Ruby sockets, I ran across a site that says multithreaded Ruby apps only use one core/processor in a machine.
Questions:
Is this accurate?
Do I care? Each thread in this server will potentially run for several minutes and there will be lots of them. Is the OS (CentOS 6.5) smart enough to share the load?
Is this any different from threading in C++ (the language of the current socket server)? I.e., do pthreads use multiple cores automatically?
What if I fork instead of thread?
CRuby has a global interpreter lock (GIL), so it cannot run threads in parallel. JRuby and some other implementations can, but CRuby will never run Ruby code in parallel across threads. This means that, no matter how smart your OS is, it can never share the load.
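To illustrate (a minimal sketch, not a benchmark): the threads below are real OS threads, but under the GIL only one of them executes Ruby code at any instant, so CPU-bound work is effectively serialized onto one core.

```ruby
# Four CPU-bound threads in CRuby: they interleave on one core rather than
# running in parallel, because the GIL serializes Ruby code execution.
threads = 4.times.map do
  Thread.new { 1_000_000.times { |i| Math.sqrt(i) } }
end
threads.each(&:join)
```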
This is different from threading in C++. pthreads create real OS threads, and the kernel's scheduler will run them on multiple cores at the same time. Technically Ruby uses pthreads as well, but the GIL prevents them from running in parallel.
Fork creates a new process, and your OS's scheduler will almost certainly be smart enough to run it on a separate core. If you need parallelism in Ruby, either use an implementation without a GIL, or use fork.
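A minimal sketch of the fork approach (Unix-ish platforms only): each child is a separate process with its own interpreter and its own GIL, so the kernel can schedule them on different cores.

```ruby
# Fork four worker processes; the OS scheduler is free to place each one
# on its own core, so CPU-bound work genuinely runs in parallel.
pids = 4.times.map do
  fork do
    1_000_000.times { |i| Math.sqrt(i) }
  end
end
pids.each { |pid| Process.wait(pid) }
```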
There is a very nice gem called parallel which allows data processing with parallel threads or multiple processes by forking (work around GIL of current CRuby implementation).
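A minimal sketch using the parallel gem (the worker count of 4 is just illustrative):

```ruby
require 'parallel'

# in_processes forks worker processes, sidestepping the GIL;
# in_threads would use threads instead (useful for IO-bound work).
squares = Parallel.map(1..8, in_processes: 4) { |n| n * n }
p squares  # => [1, 4, 9, 16, 25, 36, 49, 64]
```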
Due to the GIL in YARV, Ruby is not thread-friendly. If you want to write multithreaded Ruby, use JRuby or Rubinius. It would be even better to use a functional language with the actor model, such as Erlang or Elixir, and let the virtual machine handle the threads while you only manage the Erlang processes.
Threading
If you're going to want multi-core threading, you need to use an interpreter that actively uses multiple cores. MRI Ruby as of 2.1.3 is still only single-core; JRuby and Rubinius allow access to multiple cores.
Threading Alternatives
Alternatives to changing your interpreter include:
DRb with multiple Ruby processes (see the sketch after this list).
A queuing system with multiple workers.
Socket programming with multiple interpreters.
Forking processes, if the underlying platform supports the fork(2) system call.
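As a rough sketch of the DRb option (the port 8787 and the Worker class are hypothetical; this assumes two separate Ruby processes, each with its own core):

```ruby
# server.rb -- one Ruby process exposes an object over DRb.
require 'drb/drb'

class Worker
  def heavy_compute(n)
    (1..n).reduce(:+)
  end
end

DRb.start_service('druby://localhost:8787', Worker.new)
DRb.thread.join
```

```ruby
# client.rb -- another Ruby process calls it remotely; running several
# servers lets you spread CPU-bound work across processes and cores.
require 'drb/drb'

worker = DRbObject.new_with_uri('druby://localhost:8787')
puts worker.heavy_compute(1_000_000)
```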
I'm looking for a CPU architecture that is supported by GCC (and is still maintained) and for which it is easiest to implement a software simulator.
It should be something simple, with a flat memory model, a 16-bit or larger address space, and a 16-32 bit ALU; good code density is preferred, since it will be running programs under program-memory limitations.
Just a few words about the origin of these requirements: I need a virtual CPU for running 'sandboxed' programs on microcontrollers with ~5 KBytes of RAM and an ARM CPU at ~20 MHz clock speed.
Performance is not an issue at all; what I really need is to write C/C++ programs and then run them in a sandbox without the stdlib. GCC can help with writing the programs; I just need to implement a vCPU for one of its target architectures.
I've got acquainted with the ARMv7-M and AVR32 references and found them pretty acceptable, but somewhat more powerful than I need. The less/simpler code I need to write for the vCPU implementation, the sooner I will have what I need, and the fewer bugs there will be.
UPDATE:
Seems like I found what I need. It was already answered here: What is the smallest, simplest CPU that GCC can compile for?
Thank you all.
In Windows 7, is there a tool that will allow me to see the CPU/core to which a process was assigned for a recent timeslice? I need to demonstrate that a particular application's threads can, and do, land on different processors/cores in a multi-processor/core environment with default scheduling behavior.
Intel VTune for Windows may be what you're looking for.
As for the point you're trying to demonstrate, the answer is almost certainly yes, but it will depend on what else is happening in the system. You can, of course, take control of which core(s) a thread runs on using the core-affinity API routines, but you have to work really hard to beat the OS's own judgement.
Under Solaris there's DTrace, and Linux has a clone called FTrace. I've used FTrace and it does exactly what you want. It might be worth Googling around for a DTrace equivalent for Windows; the Windows Performance Toolkit might be just that.
Is it possible for a multicore processor to run tasks from two different processes simultaneously (at the same time)?
Yes, if the operating system supports it. The popular ones do and have done so for years.