Writing a bash script to run on the GPU

I am writing a script to run a sequence of bash scripts, and it would be ideal to run the programs embedded in them on the GPU to speed this up. I haven't found any obvious way to make that happen without resorting to Python. I am running this on Ubuntu 18, I don't have access to a network to run a grid system, and I haven't been able to set this up to run in parallel either... it would be ideal to cut some time off if possible.
Ideas?
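Bash itself has no way to push an arbitrary program onto the GPU; that only works if the individual programs are built to use it (CUDA, OpenCL, and so on). What can be sketched here is the CPU-parallel fallback mentioned above, assuming the scripts are independent of one another; run_step_*.sh is a placeholder pattern for the real script names.
# Run the per-step scripts four at a time instead of one after another.
# Adjust -P to the number of cores available.
printf '%s\n' run_step_*.sh | xargs -n1 -P4 bash
If GNU parallel is installed, parallel bash ::: run_step_*.sh is an equivalent one-liner with nicer output handling.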

Related

Is there a way to determine/specify which core an M1 macOS program is running on?

Scenario: I'm logging in remotely to my M1 Mini and trying to run a Perl script which launches a long stream of instances of my C++ program (compiled for arm64), one at a time, for testing purposes. Because it will surely take hours to run the full sequence, I nohup the Perl script.
I launched the first test run about 36 hours ago, and it is about one-third completed thus far. This is much slower than I'd expected -- in general my individual tests of the program have shown this machine to be faster than any other machine I have, and this isn't living up to that at all.
I got to thinking about this, and wondered if my code is getting treated as a "background" task, and run on one of the power-efficient "Icestorm" cores rather than one of the fast "Firestorm" cores. Does anyone know how to detect which core a process is running on and/or control which core a process is run on?
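One thing worth trying, sketched here under the assumption that the nohup'ed job really is being scheduled as a background task because it was launched from a non-interactive SSH session: macOS ships a taskpolicy utility that can change the scheduling designation of a running process. The process name and pid below are placeholders.
# Find the pid of the long-running test driver (name is a placeholder).
pgrep -lf myprog
# Ask the scheduler to stop treating pid 12345 as a background task;
# background tasks are the ones most likely to be confined to the efficiency cores.
taskpolicy -B -p 12345
If that flag combination is not available on your macOS version, man taskpolicy lists the supported options.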

How to detect or log when Ruby forks the process (Ubuntu 14.04)?

I'm trying to reduce the amount of forking our Ruby/Rails app does. We shell out a lot with backticks, and each of these forks the entire process, which can cause a huge memory bloat.
I'm going through, identifying the ones that get called the most, and trying to replace them with code which achieves the same thing without making a shell call. However, in some cases I suspect it might still be forking under the hood anyway.
Is there a way to detect or log whenever a process forks? I'm using Ubuntu 14.04. A log would be ideal, as I can then keep an eye on it when I run the amended code.
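One way to get exactly that kind of log, sketched under the assumption that you can attach to the running app server with root access: strace can follow a process and record every process-creation call it (or its children) makes. The pid below is a placeholder.
# Attach to the running Rails process and log every fork/clone/exec it performs.
# -f follows children, -tt adds timestamps, output goes to forks.log.
sudo strace -f -tt -e trace=fork,vfork,clone,execve -o forks.log -p 12345
Each backtick call should then show up as a clone/fork followed by an execve of /bin/sh -c '...', which makes the shell-outs easy to count and attribute.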

Jenkins jobs slow. Is it I/O related?

We have several jenkins pipeline jobs that are taking much longer to complete than we expect. Specific steps seem to "hang" for unwarranted periods of time. Running those same steps manually on another system runs significantly faster.
One example job is a step that uses Ruby to recurse through a bunch of directories and performs a shell command on each file in those directories. Running on our Ubuntu 14.04 Jenkins system takes about 50 minutes. Running the same command on my desktop Mac runs in about 10 seconds.
I did some experimentation on the Jenkins builder by running the Ruby command at the command prompt and had the same slow result that Jenkins had. I also removed Ruby from the equation by batching up each of the individual shell commands Ruby would have run and put them in a shell script to run each shell command sequentially. That took a long time as well.
I've read some posts suggesting that STDERR blocking may be the reason. I then experimented with redirecting STDERR and STDOUT to /dev/null, and the commands finish in about 20 seconds. That is what I would expect.
My questions are:
1. Would these slowdowns in execution time be the result of some I/O blocking?
2. What is the best way to fix this? In some cases I may want the output, so redirecting to /dev/null is probably not going to work. Is there a kernel- or OS-level change I can make?
Running on an Ubuntu 14.04 Amazon EC2 r3.large instance.
Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-108-generic x86_64)
ruby 2.1.5p273 (2014-11-13 revision 48405) [x86_64-linux]
Yes. Transferring huge amounts of data between the slave and the master does indeed lead to performance problems. This applies to storing build artifacts as well as to massive amounts of console output.
For console output, the performance penalty is particularly big if you use the timestamper plugin. If it is enabled for your job, try disabling that first.
Otherwise, I'd avoid huge amounts of console output in general. Try to restrict console output to high-level job information that (in case of failure) provides links to further "secondary" logfile data.
Using I/O redirection (as you already did) is the proper way to accomplish that, e.g.
mycommand 2>mycommand.stderr.txt 1>mycommand.stdout.txt
This will always work (except for very special cases where you may need to select command-specific options to redirect a console output stream that a command explicitly creates on its own).
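For the case where the output is still wanted (question 2 above), a hedged variation on the same idea is to send it to a file and only surface a short excerpt on the console; the file name is a placeholder.
# Keep the full output out of the Jenkins console; on failure, show the tail
# of the log and still fail the step.
mycommand >mycommand.log 2>&1 || { tail -n 50 mycommand.log; exit 1; }
The log file can then be archived or linked from the job as the "secondary" logfile data mentioned above.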

Concurrent pipelining on Windows with Cygwin

Let's say I have a series of operations I want to apply to some data. The programs implementing the operations are not necessarily written in the same language, but they all work by reading from STDIN and writing to STDOUT.
In a Unix environment this can be set up as a pipeline like:
cat data.txt | prog1.sh | prog2.pl | prog3.py | prog4 > out.txt
and it will execute the 4 operations concurrently on the stream of data.
Does the same happen on Windows?
I remember testing this out a few years ago with Cygwin on Windows XP, but I only saw a single program running in Task Manager.
Has anything changed with Cygwin, the later XP service packs, or Windows 7/8 that would allow for concurrent pipelining? Or has it always worked and I just made a silly mistake in my tests?
I don't have access to a Windows machine right now or I'd test it out myself. If someone knows what's going on, I'd appreciate any help.
While the Unix-like layer implemented by Cygwin has many flaws compared to a native POSIX system or to native Windows programming (especially where performance is concerned), the pipes it implements are quite "real." The programs in the pipeline will run concurrently and will process the data they receive in parallel.
However, as with any pipeline, the speed of the entire operation will be determined by the speed of the slowest component. So if one of the programs in the pipeline is markedly less efficient than the others, it will dominate the CPU usage in the process list.
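A quick way to convince yourself of this under Cygwin's bash (a toy sketch, not a benchmark): make each stage sleep one second per line, so a pipelined run takes roughly the number of lines plus the pipeline depth, while a strictly serial run would take their product.
# Toy stage: copy stdin to stdout at one line per second.
slow() { while read -r line; do sleep 1; echo "$line"; done; }
# Five lines through three stages: roughly 7 seconds if the stages overlap,
# roughly 15 seconds if they ran one after another.
time (seq 5 | slow | slow | slow > /dev/null)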

Restarting a Python script when memory usage is too high

I have a server written in Python that occasionally uses a lot of resident (RES) memory when certain input comes in. It'd be annoying to have that Python script continuously occupying that much RAM, because we have a lot of other things running on the same machine. I did some research and found that the only sure way to release that memory back to the OS is to exit the process. So I am hoping to restart the Python script whenever it detects, after processing an input, that it is using too much memory.
I tried the following trick to reload the process, but found that the process still uses as much RAM after reloading. No cleanup is done.
os.execl(sys.executable, sys.executable, * sys.argv)
Is there a clean way to restart a Python script without inheriting all this RAM usage?
I'm actually in a similar situation myself. I haven't come up with a solution yet, but what you might be able to do is make a bash script or .bat file that restarts the script when it finishes (sketched below). This would most likely not inherit all that RAM, because Python itself exits and then starts again.
os.execl(sys.executable, sys.executable, * sys.argv)
This didn't give you a clean slate, but for a different reason than you might think: os.execl replaces the current process image rather than spawning a child and waiting for it, so the old allocations should in principle be released once the exec happens. If RES looks just as high after the restart, the relaunched script has most likely rebuilt a similar footprint while handling the same kind of input. Letting the interpreter exit completely and having an outer wrapper relaunch it avoids that ambiguity.
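A minimal sketch of that wrapper approach, assuming the server can simply call sys.exit() once it notices its own memory use is too high (server.py and the threshold logic are placeholders):
#!/bin/bash
# Relaunch the server every time it exits. A fresh interpreter starts with a
# clean heap, so nothing from the previous run is carried over.
while true; do
    python server.py
    echo "server exited with status $?; restarting" >&2
    sleep 1
done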
