Uploading scripts to run on a non-local cluster using slurm? - bash

I am new to running on multiple cluster systems and I am stuck. I have a bash shell script (myjob.sh) and a secondary script to be executed (stuff.R). Once I am logged on to the non-local cluster, I am unsure how to upload these files so they can be run. "scp" is usually my go-to for this sort of thing, but I cannot figure out how to move the files to the cluster. If I go into an interactive shell I can create the files with "nano", but I really need to figure this out without the interactive shell. I'm weirdly stuck in limbo. Any help would be much appreciated, thank you!
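For reference, a minimal sketch of the usual workflow, run from the local machine, assuming a login node named cluster.example.edu and a remote user myuser (both placeholders):

# copy the job script and the R script to your home directory on the cluster
scp myjob.sh stuff.R myuser@cluster.example.edu:~/

# log in and submit the job script to Slurm
ssh myuser@cluster.example.edu
sbatch myjob.sh

# check the state of your job in the queue
squeue -u myuser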

Related

Bash Scripts (even trivial ones) stuck when invoked on the terminal

I have a server on which we execute multiple bash scripts to automate tasks (like copying files to other servers, kicking off backups, etc). It has been working for some months, but today it started to get erratic.
What is happening is that the script gets 'stuck' for a while, and after that it runs with no problem. If I copy and paste the commands one by one in the terminal, it works, so it is not something in the script itself; it seems to be something that is blocking the bash interpreter (if that makes sense).
Another weird behavior is that the same script will run with no issues eventually. However, as we use Jenkins for automation, the scripts are re-created every time a new job starts.
For example, I created a new script, tst.sh, which only contains an echo. If I try to run it directly, it gets stuck for a while. I tried to debug it with bash -xeav but it does not print my script code, which means that it is not even reading it. After a while, the script ran, with no changes. However, creating another script with the same content and a different name resurfaces the issue.
My hypothesis is that something prevents the script from being read, and bash just waits until whatever is blocking it finishes. However, I did not see any process holding the file, so that may not be the case.
Is there anything else I should try? My knowledge of bash is pretty basic, so I don't know if there is a flag that may help me debug this internally.
I am working on RHEL 8.85, the bash version is GNU bash, version 4.4.20(1)-release (x86_64-redhat-linux-gnu)
UPDATES BASED ON THE COMMENTS
Server resources are OK; there is barely any usage.
The server hardware also works fine; at least the ops team has not reported any known issue.
A reboot makes the issue disappear; however, it reappears after 5 minutes or so.
The issue does not seem to be related to bash profiles and the like.
Issue solved, posting this as an answer so people can find it quicker.
Turns out, as multiple users suggested in the comments (thanks to all!!), the problem was caused by a security monitor, which analyzed each of the scripts that were executed. The team changed some settings on that end to prevent it from happening, and so far it is working.
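For anyone debugging something similar, a hedged way to confirm that an external tool is delaying the script before bash even reads it is to trace the start-up; the output file name here is just an example:

# trace the shell while it starts the stuck script; long gaps between the
# timestamps show where the delay happens (e.g. while the file is being scanned)
strace -f -tt -o /tmp/tst-trace.log bash ./tst.sh

# find where the script file is opened and look at the time spent around it
grep -n 'tst.sh' /tmp/tst-trace.log | head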

writing a bash script to run in gpu

I am writing a script to run a sequence of bash scripts and it would be ideal to run the programs embedded in them on the GPU to speed this up. I haven't found any obvious way to make this happen without appealing to python. I am running this on Ubuntu 18 and don't have access to a network to run a grid system, and I haven't been able to successfully set this up to run in parallel either...it would be ideal to cut some time off if possible.
Ideas?
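Offloading an arbitrary bash script onto the GPU is not something bash can do by itself, but the parallel part is doable in plain shell; a minimal sketch, assuming the scripts are independent and named step1.sh, step2.sh and step3.sh (placeholders):

# run the scripts concurrently as background jobs and wait for all of them
./step1.sh &
./step2.sh &
./step3.sh &
wait

# or, if GNU parallel is installed:
# parallel ::: ./step1.sh ./step2.sh ./step3.sh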

CalDAV/CardDAV Radicale backup

Now that I am running Radicale on my own Linux server (to manage calendars and contacts), I am trying to figure out how to back up Addressbooks via a bash script (which I could then cron or manually launch).
The exporting part is not going to be so difficult thanks to Duplicity.
But where on earth is the Addressbook located?
There is no *.vcf related to Radicale anywhere on my system.
I've found it.
It is located in the home directory:
~/.config/radicale/collections/contact/AddressBook.vcf
The calendars are in ~/.config/radicale/collections/contact as well.
Hmm. This seems to me to be (remotely) a programming question, since its answer is a program for anyone who wants to write their own bash backup script.
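For reference, a minimal backup sketch along those lines, assuming the collections live under ~/.config/radicale/collections and with the destination path as a placeholder:

#!/bin/bash
# snapshot the whole Radicale collections directory (addressbooks and calendars)
BACKUP_DIR="$HOME/backups/radicale"
mkdir -p "$BACKUP_DIR"
tar czf "$BACKUP_DIR/radicale-$(date +%F).tar.gz" -C "$HOME/.config/radicale" collections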

Can the shell created by PHP shell_exec be reused?

I am working on a project where I need to execute multiple unix scripts from a PHP environment. Would it be possible to open a single unix shell and execute all the unix scripts in it?
Currently I am using shell_exec for each script's execution. This makes the application slow, as each time shell_exec is called a new shell is opened and the script is executed.
Thanks in advance,
No, the underlying shell is not accessible.
You could try a few things:
Optimise the scripts so you have to do fewer execs, for example by piping them together or wrapping them in one script (see the sketch below).
I am not sure if it will work, but you should be able to start a bash process and send commands to it (see proc_open). This way you could manually manage and reuse the shell. But I imagine it will be a nightmare, especially parsing the responses from the scripts (if you need that).
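On the first suggestion, a hedged option is to wrap all the scripts in one shell script, so PHP only has to spawn a single shell; the script names below are placeholders:

#!/bin/bash
# run_all.sh - run every script inside one shell so PHP spawns it only once
./script1.sh
./script2.sh
./script3.sh

PHP would then call shell_exec('./run_all.sh') once instead of once per script.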

Run a process each time a new file is created in a directory in linux

I'm developing an app. The operating system I'm using is Linux. Whenever a new file is created in a directory, I need to run a ruby script on that file, if possible. I need to keep this script always running. The first thing I thought about is inotify:
The inotify API provides a mechanism for monitoring file system events. Inotify can be used to monitor individual files, or to monitor directories.
It's exactly what I need; then I found "rb-inotify", a wrapper for inotify.
Do you think there is a better way of doing what I need than using inotify? Also, I don't really understand how I am supposed to use rb-inotify.
I just create, for example, an rb file with:
require 'rb-inotify'   # wrapper around the Linux inotify API

notifier = INotify::Notifier.new
notifier.watch("directory/to/check", :create) do |event|
  # do the task with the newly created file, available as event.name
end
notifier.run
Then I just run ruby myRBNotifier.rb, and it will keep looping forever. How do I stop it? Any ideas? Is this a good approach?
I'd recommend looking at god. It's designed for this sort of task, and makes it pretty easy to build a monitoring system for background and daemon apps.
As for the main code itself, inotify isn't cross-platform, so if there is any possibility you'll need to run on Windows or Mac OS, you'll need a different solution. It's not too hard to write a little piece of code that checks your target directory periodically for a change. If you need to know what changed, read and cache the directory entries, then compare them the next time your code runs. Use sleep between runs to wait some period of time before looping.
The old-school method of doing similar things is to use cron to fire off a job at regular intervals. That job can be your script that checks whether the file list changed by comparing it to the cached version, then acting as needed if something is different.
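A rough bash sketch of that cron-style approach, with the watched directory and cache file as placeholders:

#!/bin/bash
# compare the current directory listing with the cached one and act on new files
WATCH_DIR="directory/to/check"
CACHE_FILE="/tmp/watch-cache.txt"

ls -1 "$WATCH_DIR" > /tmp/watch-current.txt
touch "$CACHE_FILE"

# files present now but missing from the cache are new since the last run
comm -13 <(sort "$CACHE_FILE") <(sort /tmp/watch-current.txt) | while read -r new_file; do
    echo "new file: $new_file"    # do the task here, e.g. call the ruby script on it
done

mv /tmp/watch-current.txt "$CACHE_FILE"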
Just run your script in the background with
ruby myRBNotifier.rb &
When you need to stop it, find the process id and use kill on it:
ps ux
kill [whatever pid your process gets from the OS]
Does that answer your question?
If you're running on a mac/unix machine, look at the launchctl man page. You can set up a process to run and execute a ruby script whenever a file changes. It's highly configurable.
