Prevent bash from running user-defined binaries

In our cluster with PBS batch system (torque) installed, we want all the users to execute their jobs by qsub so that the CPU resources can be well managed. However, we found that users in our cluster can still run their programs directly in their bash shell.
I have noticed that some other cluster systems restrict users from running their own binaries. Their command prompt differs from a fully privileged prompt (it starts with ~>):
qczhan2@barrine1:~> echo $0
-bash
In their configuration, users can run basic commands such as ls, pwd, and cp, and can cd into system folders, but when they run their own binaries the system reports "permission not allowed." It is also worth mentioning that calling a user-owned binary through any MPI command is not allowed either.
For example:
qczhan2@barrine1:~> mpiexec -n 64 ./abc.out
permission denied
where abc.out is a user-defined binary file.
How can I configure a system to behave like that?

You want to change the default shell for all your users from /bin/bash to:
/bin/bash -r
so their shell becomes a restricted shell. Among other restrictions, users in a restricted shell are not allowed to cd, to set or unset PATH, or to run commands whose names contain a /. This locks them into running only the commands you give them access to.
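A minimal sketch of one way to apply this, assuming your system provides (or lets you create) rbash, the conventional restricted-mode name for bash; the shell field in /etc/passwd cannot carry the -r argument, and the account and directory names below are hypothetical:
# create the restricted-shell name if the distribution does not ship it
ln -s /bin/bash /bin/rbash
# make it the login shell of a given account
usermod -s /bin/rbash someuser
# then point PATH at a single admin-controlled directory from a
# root-owned startup file (e.g. /etc/profile), since the user cannot change it
export PATH=/usr/local/restricted-bin
The user can then run only the programs you place (or symlink) in /usr/local/restricted-bin.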

If you use Linux: mount filesystems where users have write permission (e.g. /home, /tmp, /var/tmp, /dev/shm) with option "noexec".
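For example, a hypothetical /etc/fstab line for a separate /home partition (the device name is a placeholder):
# noexec prevents executing binaries from this filesystem
/dev/sda3  /home  ext4  defaults,nodev,nosuid,noexec  0 2
An already-mounted filesystem can be switched on the fly with mount -o remount,noexec /home.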

Related

Running a script as another user while knowing who ran the script

In Linux, I need to write a shell script that can be run as another user (via doas permissions for this one script), but inside the script I need to know who ran it originally. How would I go about doing this?
From the man page
By default, a new environment is created. The variables HOME, LOGNAME, PATH, SHELL, and USER and the umask(2) are set to values appropriate for the target user. DOAS_USER is set to the name of the user executing doas.
So use $DOAS_USER to get the original username.
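A minimal sketch (the script path and log file are hypothetical):
#!/bin/sh
# invoked as: doas /usr/local/bin/restart-service.sh
# doas sets DOAS_USER to the name of the invoking user
echo "restart requested by: ${DOAS_USER:-unknown}" >> /var/tmp/restart.log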

shell script behaves differently when called using paramiko vs interactive shell [duplicate]

I am trying to run the sesu command on a Unix server from Python with the help of Paramiko's exec_command. However, when I run exec_command('sesu test'), I get
sh: sesu: not found
When I run a simple ls command it gives me the desired output; only the sesu command does not work.
This is what my code looks like:
import paramiko

# connection details elided
host = host
username = username
password = password
port = port

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, port, username, password)
stdin, stdout, stderr = ssh.exec_command('sesu test')
stdin.write('Password\n')  # a trailing newline is needed to submit the input
stdin.flush()
outlines = stdout.readlines()
resp = ''.join(outlines)
print(resp)
The SSHClient.exec_command by default does not run the shell in "login" mode and does not allocate a pseudo terminal for the session. As a consequence, a different set of startup scripts is sourced than in your regular interactive SSH session (in particular, for non-interactive sessions, .bash_profile is not sourced), and/or different branches in those scripts are taken, based on the absence or presence of the TERM environment variable.
Possible solutions (in preference order):
Fix the command not to rely on a specific environment. Use a full path to sesu in the command, e.g.:
/bin/sesu test
If you do not know the full path, on common *nix systems you can find it by running which sesu in your interactive SSH session.
Fix your startup scripts to set the same PATH for both interactive and non-interactive sessions, as sketched below.
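On systems whose bash sources ~/.bashrc for remote non-interactive commands (some Debian-style builds do), a hypothetical fix is to export the PATH near the top of ~/.bashrc, above the interactivity guard, so sessions like Paramiko's see it too (/path/to/sesu is a placeholder):
# in ~/.bashrc, BEFORE the guard that stops non-interactive shells
export PATH="$PATH:/path/to/sesu"

# typical guard found further down
case $- in
    *i*) ;;
    *) return ;;
esac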
Try running the script explicitly via login shell (use --login switch with common *nix shells):
bash --login -c "sesu test"
If the command itself relies on a specific environment setup and you cannot fix the startup scripts, you can change the environment in the command itself. The syntax for that depends on the remote system and/or the shell. On common *nix systems this works (note the colon, the *nix PATH separator):
PATH="$PATH:/path/to/sesu" && sesu test
Another (not recommended) approach is to force pseudo-terminal allocation for the "exec" channel using the get_pty parameter:
stdin,stdout,stderr = ssh.exec_command('sesu test', get_pty=True)
Using a pseudo terminal to automate command execution can bring nasty side effects. See, for example, Is there a simple way to get rid of junk values that come when you SSH using Python's Paramiko library and fetch output from CLI of a remote machine?
You may have a similar problem with LD_LIBRARY_PATH and locating shared objects.
See also:
Environment variable differences when using Paramiko
Certain Unix commands fail with "... not found", when executed through Java using JSch

ssh does not find /usr/local/bin path

If I log in to my remote Mac via ssh -p22 jenkins@192.168.2.220 and type docker, the executable is found, and /usr/local/bin is in the output when I check with echo $PATH. But if I do the same in a heredoc inside a file setup-mac.sh like
#!/bin/bash
ssh jenkins@192.168.2.220 '/bin/bash -s' << 'EOF'
echo $PATH
bash run-docker.sh
EOF
which I execute from my shell with bash setup-mac.sh, it does not find /usr/local/bin in PATH and consequently does not run docker, because the command is unknown.
On the remote Mac, there is a file run-docker.sh which is a bash file that calls docker commands, and it works if called locally.
To solve this issue, I enabled PermitUserEnvironment in sshd_config on the Mac, but this did not work (though I only restarted the ssh service, not the whole machine). As a workaround, I have changed all docker commands in the remote run-docker.sh script to a variable ${DOCKER}, which I initialize at the beginning of the script to DOCKER=/usr/local/bin/docker.
I guess the problem occurs because /usr/local/bin is added to the PATH by the 'jenkins' user's personal initialization file (~/.bashrc). That file is read only by interactive shells, and the shell run by ssh ... '/bin/bash -s' ... is not interactive.
You could try forcing the shell to be interactive by invoking it with /bin/bash -i -s, but that is likely to cause other problems. (The shell may try and fail to set up job control. The value of PS1 may appear in outputs. ...)
In general, you can't rely on the PATH being set correctly for programs. See Setting the PATH in Scripts - Scripting OS X for a thorough analysis of the problem. Correct way to use Linux commands in bash script is also relevant, but doesn't have much information.
A simple and reliable way to fix the problem permanently is to set the required PATH explicitly at the start of run-docker.sh. For example:
export PATH=/bin:/usr/bin:/usr/local/bin
You may need to add other directories to the path if run-docker.sh runs programs that are in other locations.
Another solution to the problem is to use full paths to commands in code. However, that makes the code more difficult to read, more difficult to maintain, and more difficult to test. Setting a safe PATH is usually a better option.
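Alternatively, if the PATH setup lives in login-shell startup files (on macOS, path_helper runs from /etc/profile and adds /usr/local/bin), a sketch like the following may work, invoking the remote shell as a login shell with -l:
#!/bin/bash
# -l makes the remote bash a login shell, so /etc/profile
# (and path_helper on macOS) runs and extends PATH
ssh jenkins@192.168.2.220 '/bin/bash -l -s' << 'EOF'
echo "$PATH"
bash run-docker.sh
EOF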

Is there a standard place that the PAGER environment variable is set by default for all users in Ubuntu Linux? [duplicate]

Can I have certain settings that are universal for all my users?
As well as /etc/profile which others have mentioned, some Linux systems now use a directory /etc/profile.d/; any .sh files in there will be sourced by /etc/profile. It's slightly neater to keep your custom environment stuff in these files than to just edit /etc/profile.
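For example, a drop-in file (the name is arbitrary) might look like this:
# /etc/profile.d/pager.sh, sourced by /etc/profile for login shells
export PAGER=less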
If your Linux OS has the file:
/etc/environment
you can use it to permanently set environment variables for all users.
Extracted from: http://www.sysadmit.com/2016/04/linux-variables-de-entorno-permanentes.html
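Note that /etc/environment is read by pam_env rather than by a shell, so it takes plain KEY=value lines, with no export keyword and no shell expansion. For example:
PAGER=less
EDITOR=vi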
man 8 pam_env
man 5 pam_env.conf
If all login services use PAM, and all login services have session required pam_env.so in their respective /etc/pam.d/* configuration files, then all login sessions will have some environment variables set as specified in pam_env's configuration file.
On most modern Linux distributions, this is all there by default -- just add your desired global environment variables to /etc/security/pam_env.conf.
This works regardless of the user's shell, and works for graphical logins too (if xdm/kdm/gdm/entrance/… is set up like this).
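For example, a single line in /etc/security/pam_env.conf sets a default PAGER for every PAM login session:
PAGER DEFAULT=less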
Amazingly, Unix and Linux do not actually have a place to set global environment variables. The best you can do is arrange for any specific shell to have a site-specific initialization.
If you put it in /etc/profile, that will take care of things for most posix-compatible shell users. This is probably "good enough" for non-critical purposes.
But anyone with a csh or tcsh shell won't see it; the csh family reads its own global files (such as /etc/csh.login, where the system provides one).
Some interesting excerpts from the bash manpage:
When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.
...
When an interactive shell that is not a login shell is started, bash reads and executes commands from /etc/bash.bashrc and ~/.bashrc, if these files exist. This may be inhibited by using the --norc option. The --rcfile file option will force bash to read and execute commands from file instead of /etc/bash.bashrc and ~/.bashrc.
So have a look at /etc/profile or /etc/bash.bashrc; these files are the right places for global settings. Put something like this in them to set up an environment variable:
export MY_VAR=xxx
Every process running under the Linux kernel receives its own unique environment, inherited from its parent. In this case, the parent will be either a shell itself (spawning a subshell) or the 'login' program (on a typical system).
As each process's environment is protected, there is no way to 'inject' an environment variable into every running process, so even if you modify the default shell .rc / profile, the change won't take effect until each process exits and reloads its startup settings.
Look in /etc/ to modify the default startup variables for any particular shell. Just realize that users can (and often do) change them in their individual settings.
Unix is designed to obey the user, within limits.
NB: Bash is not the only shell on your system. Pay careful attention to what the /bin/sh symbolic link actually points to. On many systems it points to dash, which is (by default, with no special invocation) POSIXly correct. Therefore, take care to modify both sets of defaults, or scripts that start with #!/bin/sh will not inherit your global settings. Similarly, avoid syntax that only bash understands (bashisms) when editing both.
Using PAM is excellent.
# example: set DISPLAY and a project variable via pam_env
$ cat /etc/security/pam_env.conf
# BEFORE: $ export DISPLAY=:0.0 && python /var/tmp/myproject/click.py &
# AFTER : $ python $abc/click.py &
DISPLAY DEFAULT=${REMOTEHOST}:0.0 OVERRIDE=${DISPLAY}
abc DEFAULT=/var/tmp/myproject

SSH to Debian server instantly logs out

I'm trying to help someone with their Debian server.
They have Plesk. I made myself a user with Plesk and enabled SSH access.
I can log on ... but only for one second. I see the MOTD, I see a Debian disclaimer, then I'm logged out again. "Connection closed".
The only thing I could think to try is to change the shell settings, Plesk has a dropdown list of bash, csh, tcsh and so on next to the "allow ssh using:" option. But none of them works.
Any ideas gratefully received.
The way I fixed this problem was, unfortunately, to manually change the last field in /etc/passwd for the users I want to give shell access: /bin/bash instead of /bin/false.
Plesk can get a bit quirky sometimes...
That behavior is similar to what you get when a user account has a 'nologin' shell selected in the Plesk config. I would try a few things:
Connect using ssh with the verbose option activated (ssh -v user@host) so you can get more detail.
Check the /etc/passwd file: look for your user and verify that the final field on that line points to a valid shell (something like /bin/bash instead of /bin/nologin or /bin/false); see the example entry after this list.
Check also in that line that the home directory for that user (configured in the field just before the shell) is valid, exists, and has proper permissions and ownership.
Finally, check your logs (in /var/log; I would check syslog, messages, and user), where you may find a meaningful message.
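For reference, a healthy /etc/passwd entry looks something like this (the account name and IDs are hypothetical):
jdoe:x:1001:1001:Jane Doe:/home/jdoe:/bin/bash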
When a user logs on, the shell takes them to their user directory and possibly runs a "startup" script.
Is the user directory on the local machine? Does it have to be mounted from a fileshare (this has happened to me on more than one occasion)? If that fileshare is not mounted you will get disconnected.
Take a look at the startup scripts for those shells. Bash uses various startup scripts depending on the circumstance, these include /etc/profile and ~/.bashrc. These scripts sometimes do wacky things that may disconnect you for any number of reasons.
