How to see the current user's queue in SLURM - cluster-computing

On a cluster that is managed by SLURM, I want to check the queue of the current user (and cluster). Normally, I have to use this command:
squeue --user=username --clusters=clustername
The problem with this, apart from the fact that this is a rather long command to use frequently, is that it needs the username. I have created a script in which at some point I want to check the queue of the user, but I have to get the username first.
I have a workaround for all of this, but it would be great if I could use a command like the corresponding one for LoadLeveler:
llu
Is there anything like that? Or can I somehow specify the "current user" in the --user flag?

You can define an alias in /etc/bashrc (or in ~/.bashrc for individual users):
alias llu="squeue --user=$USER --clusters=clustername"
EDIT
You could also use one of these aliases, which do not depend on an environment variable:
alias llu="squeue --user=`whoami` --clusters=clustername"
or
alias llu="squeue --user=`logname` --clusters=clustername"

You can simply use squeue -u $LOGNAME. If you want to query the jobs on the current cluster, that should already be the default behavior, so you can drop the --clusters parameter and the squeue command becomes even simpler.
According to this answer https://unix.stackexchange.com/a/76369, $LOGNAME should always be defined in the environment, so this should be completely portable.

Newer versions of SLURM accept
squeue --me
as a shortcut.
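In a script that must work across versions, you could hedge by probing for the flag first; a sketch that assumes older squeue versions exit non-zero on the unknown option:
# Use --me where available, fall back to resolving the username otherwise
if squeue --me >/dev/null 2>&1; then
    squeue --me
else
    squeue --user="$(whoami)"
fi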

Related

How to use openssh-client in Cyberark environments with autocompletion and multiple servers?

I usually have what I need in my ~/.ssh/ folder (a config file and more) to connect to servers with ssh <tab><tab>. In an environment with Cyberark, the configuration seems to be a bit more intricate due to the three # signs.
I found this answer, but I struggled to find a way to enjoy autocompletion for many hosts, because the User field does not support tokens like %h for the host, so I would have to create the same entry again for every server where previously I could just add servers to the Host line. Is there a way this can be achieved?
After spending some time on it, I came up with the following solution, which is more of a workaround. I'm not really proud of it, but it gets the job done with the least amount of new or hard-to-understand code.
Create a wrapper script like this:
$ cat ~/bin/ssh-wrapper.sh
#!/bin/bash
# https://docs.cyberark.com/Product-Doc/OnlineHelp/PAS/latest/en/Content/PASIMP/PSSO-PMSP.htm#The
# Replace where appropriate:
# $1 = Server FQDN
# prefix = your administrative user may be different from your normal user
# internal.example.org = domain server (controller)
# pam.example.org = Cyberark jump host
ssh -t "${USERNAME,,}#prefix${USERNAME,,}#internal.example.org#${1}#pam.example.org"
Add the following to your bash startup file. Yours may be different from mine, because I'm hacking here in a customer environment with Tortoise Git-Bash. (Which works nicely, by the way, when you use it with Flux Terminal, k9s and jq.)
Create an alias for your wrapper script; I chose sshw here.
Create a variable with all the FQDNs of the servers you want to have in your autocompletion, or create a file which contains these FQDNs and read it into a variable.
Create a bash completion expression which applies this to your sshw alias.
$ cat ~/.bash_profile
alias sshw="$HOME/bin/ssh-wrapper.sh"
SSH_COMPLETE=$(cat "$HOME/.ssh/known_hosts_wrapper")
complete -o default -W "${SSH_COMPLETE[*]}" sshw
Now you can tab your way to servers.
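If you want to seed that known_hosts_wrapper file from hosts you already know, one hypothetical starting point is to extract the names from ~/.ssh/known_hosts (this only works for unhashed entries; hashed host lines cannot be recovered this way):
# Take the host field, split comma-separated aliases, strip [host]:port brackets
awk '{print $1}' ~/.ssh/known_hosts | tr ',' '\n' \
    | sed -e 's/^\[//' -e 's/\]:[0-9]*$//' | sort -u > ~/.ssh/known_hosts_wrapper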

How to execute a script in zsh and then become interactive?

I want to run a predetermined set of commands after the invocation of zsh (i.e. after .zshrc is executed) and then return the user to an interactive shell when they complete.
Something like this comes to mind:
urxvt -e zsh -c '. scriptname'
but instead of exiting zsh and the terminal once the script finishes, I want an interactive shell at the end. The idea is simply to save users from having to type ". scriptname" whenever they log in.
Application: several users are using the same account (strange but true) and I want to help with adjusting user-specific settings. Yes, I know that one could use different accounts for that :-)
Not really what you were asking for, but it should have the desired result: use an environment variable to pass the name of the user-specific script for .zshrc (or another appropriate startup file) to source.
urxvt -e env USERSCRIPT=scriptname zsh
Then in .zshrc for the actual user, include
. "$USERSCRIPT"
(All of this is not to say that there isn't an option to run a command then remain in interactive mode; I just can't find a way to do it, so I offer this workaround.)
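A slightly more defensive variant for .zshrc, so that plain interactive shells (where USERSCRIPT is unset) don't print an error; a sketch:
# Source the per-user script only when one was requested and is readable
if [[ -n "$USERSCRIPT" && -r "$USERSCRIPT" ]]; then
    . "$USERSCRIPT"
fi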

What does $VARIABLE$ stand for

...in the following shell script?
$USER1$=/usr/lib/nagios/plugins
As far as I know, defining a variable is done like this:
export USER1=/usr/lib/nagios/plugins
Source:
Ok, the command works. Now I have to implement it into Nagios. Because
all my "local" commands not installed by the package manager shall be
in /usr/lib/nagios/plugins_local, I define a $USER2$ variable for this
path:
# vim resource.cfg
...
# Sets $USER1$ to be the path to the plugins
$USER1$=/usr/lib/nagios/plugins
# my own check-commands live here:
$USER2$=/usr/lib/nagios/plugins_local
$USERn$ (more specifically, $USER1$ to $USER255$) is the way to declare a user-defined macro in Nagios.
See also "Understanding Macros and how they work."
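For context, this is how $USER1$ typically appears in a Nagios command definition; a sketch where check_local_disk and the thresholds are illustrative:
define command {
    command_name    check_local_disk
    command_line    $USER1$/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$
}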
More specifically, and more interestingly, this is a good way to hide the usernames/passwords needed within database/HTTP checks, for instance.
This means that you can attempt something like the following directly in your configuration files, without fear that you are committing or backing up usernames/passwords.
./nrpe -c check_http -H $IP -a $USER1$:$USER2$ -u $LINK
An aside: unfortunately, Nagios only supports up to 32 of the USER variables.
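Since this thread already uses $USER1$ and $USER2$ for plugin paths, a cleaner illustration of the credential-hiding pattern is to dedicate a fresh pair of macros to it; the names and values below are hypothetical:
# In resource.cfg, which can be made readable only by the nagios user:
$USER3$=web_user
$USER4$=s3cr3t
# Then reference them in a check instead of literal credentials:
./nrpe -c check_http -H $IP -a $USER3$:$USER4$ -u $LINK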

Ruby, Unicorn, and environment variables

While playing with Heroku, I found their approach of using environment variables for server-local configuration brilliant. Now, while setting up an application server of my own, I find myself wondering how hard that would be to replicate.
I'm deploying a Sinatra application, riding Unicorn and Nginx. I know Nginx doesn't like to play with the environment, so that one's out. I can probably put the vars somewhere in the Unicorn config file, but since that's under version control with the rest of the app, it sort of defeats the purpose of having the configuration sit in the server environment. There is no reason not to keep my app-specific configuration files together with the rest of the app, as far as I'm concerned.
The third, and last (to my knowledge), option is setting them in the spawning shell. That's where I got lost. I know that login and non-login shells use different rc files, and I'm not sure whether calling something with sudo -u http spawns a login shell or not. I did some homework and asked google and man, but I'm still not entirely sure how to approach it. Maybe I'm just being dumb... either way, I'd really appreciate it if someone could shed some light on the whole shell-environment deal.
I think your third possibility is on the right track. What you're missing is the idea of a wrapper script, whose only function is to set the environment and then call the main program with whatever options are required.
To make a wrapper script that can function as a control script (if prodEnv use DB=ProdDB, etc.), there is one more piece that simplifies this problem: Bash and ksh both support a feature called sourcing files. This is an operation the shell provides to open a file and execute what is in it, just as if it were in-lined in the main script, like #include in C and other languages.
ksh and bash will automatically source /etc/profile, /var/etc/profile.local (sometimes), and $HOME/.profile. There are other filenames that will also get picked up, but in this case you'll need to make your own env file and then explicitly load it.
As we're talking about wrapper-scripts, and you want to manage how your environment gets set up, you'll want to do the sourcing inside the wrapper script.
How do you source an environment file?
envFile=/path/to/my/envFile
. "$envFile"
where envFile will be filled with statements like
dbServer=DevDBServer
webServer=QAWebServer
....
You may discover that you need to export these variables for them to be visible:
export dbServer webServer
An alternate combined assignment/export form is also supported:
export dbServer=DevDBServer
export webServer=QAWebServer
Depending on how much your different environments diverge, you can have your wrapper script figure out which environment file to load.
case $( /bin/hostname ) in
    prodServerName )
        envFile=/path/2/prod/envFile ;;
    QAServerName )
        envFile=/path/2/qa/envFile ;;
    devServerName )
        envFile=/path/2/dev/envFile ;;
esac
. ${envFile}
# NOW call your program
myProgram -v -f inFile -o outFile ......
As you develop more and more scripts in your data-processing environment, you can always source your envFile at the top. When you eventually change the physical location of a server (or its name), you have only one place where you need to make the change.
IHTH
There are also a couple of gems dealing with this. Figaro works both with and without Heroku; it uses a YAML file (in config/ and git-ignored) to keep track of variables. Another option is dotenv, which reads variables from a .env file. There is also an article covering all of these options.
To spawn a login shell (one that reads the login startup files) you need to invoke sudo like this:
sudo -i -u <user> <command>
You may also use -E to preserve the environment. This will allow some variables to be passed from your current environment to the command invoked with sudo.
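A quick way to see the effect; MYVAR is illustrative, and sudoers policies (env_reset/env_keep, the SETENV tag) may still filter what gets through:
MYVAR=hello sudo -E -u http env | grep MYVAR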
I solved a similar problem by explicitly telling Unicorn to read a variables file as part of startup in its init.d script. First I created a file called variables in a directory above the application root. In this file I call export on all my environment variables, e.g. export VAR=value. Then I defined a variable GET_VARS="source /path/to/variables" in the /etc/init.d/unicorn file. Finally, I modified the start option to read su - $USER -c "$GET_VARS && $CMD", where $CMD is the startup command and $USER is the app user. Thus, the variables defined in the file are exported into the shell of Unicorn's app user on startup. Note that I used an init.d script almost identical to the one from this article.
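Pieced together, the relevant bits of such an init.d script might look like the sketch below; the paths, the app user, and the unicorn command line are all assumptions for illustration:
# Hypothetical excerpt from /etc/init.d/unicorn
USER=app_user
GET_VARS="source /path/to/variables"
CMD="bundle exec unicorn -c /path/to/unicorn.rb -E production -D"

start() {
    # Run the env file, then the server, in the app user's login shell
    su - $USER -c "$GET_VARS && $CMD"
}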

How to invoke a ruby script containing a system command with a cron job?

I have a ruby script containing a system command, like http://gist.github.com/235833. When I run this script from the shell it works correctly, but when I added it to my cron job list it doesn't work any more. The cron job is like:
10/* * * * * cd /home/hekin; /usr/bin/ruby my_script.rb
Any idea what's going wrong with what I've done? Thank you.
Thank you all for your answers.
It's my mistake.
Since I'm using ssh key forwarding on the local machine, the ssh-key-forwarding-related environment variables were all sitting there when I executed the script from the shell, but in the cron job context those environment variables are missing.
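If the job genuinely needs the agent, one hedged fix is to hand the cron job the agent socket explicitly; SSH_AUTH_SOCK is the variable the ssh tooling reads, but the socket path below is purely hypothetical and must match a live agent on your machine:
*/10 * * * * cd /home/hekin && SSH_AUTH_SOCK=/home/hekin/.ssh/agent.sock /usr/bin/ruby my_script.rb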
Try to separate the things that might go wrong. The ones I can think of are:
The cron syntax - is the given time value legal? (Note that */10 * * * * is the usual way to write "every ten minutes"; 10/* is not valid.)
Permissions - execute permissions and read permissions for the relevant directory and file
Quoting - what scope does cron cover? Does it run only the first command?
In order to dissect this, I suggest you first run a really simple cron job, like 'ls'. Next, run a single-line script. Next, embed your commands in a shell script file. Somewhere along these lines you should find the problem.
The problem is your environment. While testing in your shell, it is fully equipped and boosted by your shell environment. While running under cron, it is very, very stripped down.
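A common way to see exactly what cron strips is to diff the two environments; the dump path here is illustrative:
# Temporary crontab entry: dump cron's environment once
* * * * * env > /tmp/cron_env.txt
# Then, from your interactive shell, compare:
diff <(sort /tmp/cron_env.txt) <(env | sort)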
Where does the destination "." for your script point? Under cron I guess it will be "/" and not "$HOME", so your script won't be able to write at that location and fails. Try using an absolute path for the destination.
