How do two processes running in two different terminal tabs differ from two processes running in the same terminal tab? If both processes run on the same CPU core, how is CPU time assigned to them when they start together on a Linux system (Ubuntu)? (Consider both cases: same tab and different tabs.)
# ./process1.sh& # Running in a terminal tab
# ./process2.sh # Running in the same terminal tab where previous one is running in background
Assume both processes run long enough to observe their CPU usage.
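To the scheduler, which terminal tab a process was started from makes no difference: each tab is just a different controlling terminal, and both processes are ordinary schedulable tasks. A minimal sketch to observe this, where `cpu_burn` is a hypothetical stand-in for process1.sh/process2.sh:

```shell
# Stand-in for the CPU-bound scripts in the question: a busy loop.
cpu_burn() { while :; do :; done; }

cpu_burn &   # like ./process1.sh &
P1=$!
cpu_burn &   # second job (foreground in the question; backgrounded here so we can inspect both)
P2=$!

sleep 2
# Compare which core (psr) each runs on and its CPU share (%cpu):
ps -o pid,psr,%cpu,comm -p "$P1" -p "$P2"

kill "$P1" "$P2"
wait "$P1" "$P2" 2>/dev/null || true
```

If both end up pinned to the same core, the scheduler simply time-slices between them, giving each roughly 50% of that core.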
I use Debian. When I run the parent process as a regular user in a terminal, its forked child processes run on different processor cores. But when the parent process is started from rc.local with root permissions, all forked processes run on the same processor core as the parent. How can I make processes running under root be distributed across the processor cores just as they are when run by a standard user?
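One plausible cause (an assumption, not confirmed by the question) is that the rc.local environment starts the parent with a restricted CPU affinity mask, which children inherit across fork. A sketch using `taskset` to inspect and reset the mask; `sleep 1` stands in for the real parent program:

```shell
NCPUS=$(nproc)

# Show the affinity mask the current shell inherited
# (children started from rc.local inherit its mask the same way):
taskset -p $$

# Launch the parent with an explicit all-CPU mask so every forked
# child inherits the full set of cores:
taskset -c 0-$((NCPUS - 1)) sleep 1 &
CHILD=$!
taskset -p "$CHILD"
wait "$CHILD"
```

In rc.local this would mean prefixing the parent's command line with `taskset -c 0-N`.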
We have several jenkins pipeline jobs that are taking much longer to complete than we expect. Specific steps seem to "hang" for unwarranted periods of time. Running those same steps manually on another system runs significantly faster.
One example job is a step that uses Ruby to recurse through a bunch of directories and performs a shell command on each file in those directories. Running on our Ubuntu 14.04 Jenkins system takes about 50 minutes. Running the same command on my desktop Mac runs in about 10 seconds.
I did some experimentation on the Jenkins builder by running the Ruby command at the command prompt and got the same slow result that Jenkins had. I also removed Ruby from the equation by batching up each of the individual shell commands Ruby would have run and putting them in a shell script that runs them sequentially. That took a long time as well.
I've read some posts suggesting that STDERR blocking may be the reason. I then experimented with redirecting STDERR and STDOUT to /dev/null, and the commands finish in about 20 seconds. That is what I would expect.
My questions are:
1. Would these slowdowns in execution time be the result of some I/O blocking?
2. What is the best way to fix this? In some cases I may want the output, so redirecting to /dev/null probably won't work. Is there a kernel- or OS-level change I can make?
Running on Ubuntu 14.04 Amazon EC2 instance R3.Large.
Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-108-generic x86_64)
ruby 2.1.5p273 (2014-11-13 revision 48405) [x86_64-linux]
Yes. Transferring huge amounts of data between the slave and the master does indeed lead to performance problems. This applies to storing build artifacts as well as to massive amounts of console output.
For console output, the performance penalty is particularly big if you use the timestamper plugin. If enabled for your job, try disabling that first.
Otherwise, I'd avoid huge amounts of console output in general. Try to restrict console output to high-level job information that (in case of failure) provides links to further "secondary" logfile data.
Using I/O redirection (as you already did) is the proper way to accomplish that, e.g.
mycommand 2>mycommand.stderr.txt 1>mycommand.stdout.txt
This will always work (except for very special cases where you may need to select command-specific options to redirect a console output stream that a command explicitly creates on its own).
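For the case where you still want the output, one pattern (a sketch; `mycommand` here is a stand-in for the real build step) is to redirect the full output to files and surface only a short summary in the Jenkins console:

```shell
# Stand-in for the real build step:
mycommand() { echo "result line"; echo "warning: something" >&2; }

# Full output goes to files, not to the Jenkins console:
mycommand 2>mycommand.stderr.txt 1>mycommand.stdout.txt

# Surface only a short summary (e.g. the last few stderr lines) in the job log:
tail -n 5 mycommand.stderr.txt
```

The files can then be archived as build artifacts, so nothing is lost while the console stays small.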
I have parallelized my algorithm using pmap. The performance improvement on one machine using the -p option is great. Now I would like to run on multiple machines.
I used the --machinefile option when starting julia. It works, but it launches only one process per remote machine. I would like to have multiple processes running on each machine. The -p option enables multiple processes only on the local machine. Is there a way to specify the number of processes on remote machines?
On Julia 0.3 you have to list each remote machine multiple times to open multiple Julia workers on it.
On Julia 0.4 (unreleased) you can actually put a count next to each address, see this pull request.
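For illustration (the hostnames here are placeholders): on Julia 0.3, a machine file that launches three workers on node1 simply repeats the host, one line per worker:

```
node1
node1
node1
```

The 0.4 `count*host` form from that pull request collapses this to a single `3*node1` line. Either file is passed the same way, via `julia --machinefile <file> script.jl`.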
I have a computer with Windows XP Embedded on it. I run a command (from the prompt or by shelling out with the start command) of the form "(step 1) | (step 2) | (step 3) | (step 4)", where each step is a different program that pipes its stdout to the next step's stdin.
This works fine, but on a multicore machine (with 4 cores) it only uses 25% of the CPU across all of the steps, even though I think they should be able to run on separate cores. Am I missing something? Does piping through a command shell prevent the use of more than one core at a time?
I have tried explicitly changing the affinity of each of my steps, and, while that changes which core is reported to be doing the work, the total CPU usage never rises above 25%. If I just run (step 1) > NUL, then that program consumes one entire core.
Thanks.
We use employees' desktops for CPU-intensive simulation during the night. Desktops run Windows - usually Windows XP. Employees don't log off, they just lock the desktops, switch off their monitors and go.
Every employee has a configuration file he can edit to specify when he is most likely out of the office. When that time comes, a background program grabs data for simulation from the server, spawns worker processes, watches them, collects results, and sends them back to the server. When the time window specified by the employee elapses, the simulation stops so that normal desktop usage is not disturbed.
The problem is that the simulation consumes a lot of memory, so when the worker processes run they force other programs into the swap file. When the employee returns, all the programs he left open are sluggish and slow until he brings them up one by one so that their pages are swapped back in.
Is there a way the program can force other programs back out of the swap file when it stops the simulation, so that they run smoothly again?
Loop through the system and user processes, starting with the one that uses the most memory (aside from your background application) or the one the employee uses most, and send each process a WM_ACTIVATEAPP message. That should have the same effect as "clicking" that application's window icon in the taskbar.