different ulimit when I run /bin/sh -c - shell

$ ulimit -n
1024
$ /bin/sh -c ulimit -n
unlimited
Even if I specify the shell I am using:
$ echo $SHELL
/bin/bash
$ /bin/bash -c ulimit -n
unlimited
Why is ulimit not giving me the same value?

This happens because you're running ulimit without arguments: sh -c takes only the next argument as the command string, so -n is not part of the command being executed and instead becomes $0. The value that's unlimited is therefore the maximum file size, which is what a bare ulimit reports.
Compare the output of:
bash -c 'echo hello' # says hello
bash -c echo hello # blank line
and then run:
bash -c 'ulimit -n'
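To see where the stray -n ends up, you can print $0 explicitly (a quick illustration, not part of the original answer):
bash -c 'echo $0' -n # prints -n
bash -c ulimit -n # same as running a bare ulimit with $0 set to -n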

Related

how to use trickle to limit upload bandwidth from a .sh file?

I want to limit the upload bandwidth of the Linux version of the 115.com web app. This app is actually run by "sh /usr/local/115/115.sh". If I do
trickle -s -u 5 sh /usr/local/115/115.sh, the upload limit is not in effect.
The inside of /usr/local/115/115.sh is
#!/bin/sh
export LD_LIBRARY_PATH=/usr/local/115/lib:$LD_LIBRARY_PATH
export PATH=/usr/local/115:$PATH
/bin/bash -c "exec -a $0 /usr/local/115/115 > /dev/null 2>&1" $0
I feel I need to put the trickle command inside the 115.sh. How exactly should I do it? Thanks
I tried
trickle -s -u 5 /bin/bash -c "exec -a $0 /usr/local/115/115 > /dev/null 2>&1" $0
/bin/bash -c "exec -a trickle -s -u 5 $0 /usr/local/115/115 > /dev/null 2>&1" $0
and
/bin/bash -c "exec -a $0 trickle -s -u 5 /usr/local/115/115 > /dev/null 2>&1" $0
but still the speed limit is not effective.
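One more arrangement that might be worth trying, offered only as an untested sketch of an edited 115.sh (it drops the exec -a $0 renaming so that trickle directly wraps the 115 binary rather than an intermediate shell):
#!/bin/sh
export LD_LIBRARY_PATH=/usr/local/115/lib:$LD_LIBRARY_PATH
export PATH=/usr/local/115:$PATH
# let trickle be the command that replaces the shell; it then starts the real binary
exec trickle -s -u 5 /usr/local/115/115 > /dev/null 2>&1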

Drop from root to user preserving ALL environment variables

The following bash command works to drop down to user privileges, and preserve an environment for the most part:
root@machine:/root# DOLPHIN=1 sudo -E -u someuser bash -c 'echo $DOLPHIN'
1
However, this does not work for all variables, such as PATH and LD_LIBRARY_PATH:
root@machine:/root# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
root@machine:/root# sudo -E -u someuser bash -c 'echo $PATH'
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
Notice the PATH is different ^
Why is this happening?
Must be some bash mechanics I don't understand...
Looks like this is a workable option:
root@machine:/root# sudo PATH=$PATH LD_LIBRARY_PATH=$LD_LIBRARY_PATH -E -u someuser bash -c 'echo $PATH'
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
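This is most likely not bash at all but sudo's env_reset/secure_path, which replaces PATH even with -E (the substituted value above matches a typical secure_path setting). Another option, sketched here rather than taken from the original post, is to set the variables with env after sudo has already sanitized the environment:
sudo -E -u someuser env PATH="$PATH" LD_LIBRARY_PATH="$LD_LIBRARY_PATH" bash -c 'echo $PATH'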

Direct group of commands into `docker exec`

I have the following command that works fine and prints foo before returning:
docker exec -i <id> /bin/sh < echo "echo 'foo'"
I want to direct multiple commands into the container with one pipe, for example echo 'foo' and ls /. I have tried the following:
This fails because it runs the commands on the host and pipes the output into the container:
{
echo "foo"
ls /
} | docker exec -i <id> /bin/sh
This one fails, but I would like to not use an array of strings anyway:
for COMMAND in 'echo "foo"' 'ls /'
do
docker exec -i <id> /bin/sh < echo $COMMAND
done
I've also tried several other methods like piping commands into tee or echo but haven't had any luck. If you would like to know why I want to do this seemingly ridiculous thing, it's because:
This is a short script that I would like to keep all in one place
I would like to use syntax highlighting, so I don't want to store it all in a list of strings
The container has the programs the script should run and the host does not
This is an automatic process that I would like to trigger with crontab on the host
You can run a group of commands as follows:
docker exec -i <id> /bin/sh -c 'echo "foo"; ls -l'
OR
docker exec -i 996eee5d121d /bin/sh -c 'echo 'foo'; ls -l'
OR
docker exec -i 996eee5d121d /bin/sh -c 'echo foo; ls -l'
If you want to run more than two commands, just append ; after each command, like:
docker exec -i 996eee5d121d /bin/sh -c 'echo "foo"; ls -l; ls -a'
Use a here document.
docker run -i --rm alpine /bin/sh <<EOF
echo abc
ls /
EOF
Note the difference between a quoted and an unquoted here-document delimiter.
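As an illustration of that point (these exact commands are mine, not the original answer's): with an unquoted delimiter the host shell expands variables before docker sees the text, while a quoted delimiter passes the text through for the container's shell to expand:
docker run -i --rm alpine /bin/sh <<EOF
echo $HOME
EOF
# prints the host's home directory

docker run -i --rm alpine /bin/sh <<'EOF'
echo $HOME
EOF
# prints the container's home directory, e.g. /root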
docker exec -i <id> /bin/sh < echo "echo 'foo'"
I think you meant to do:
docker exec -i <id> /bin/sh < <(echo "echo 'foo'")
which is just the same as:
docker exec -i <id> /bin/sh <<<"echo 'foo'"
Edit: there is a cool little trick. The idea is to pipe the script itself, except for the first lines, to another subprocess; it's sometimes used by installer scripts:
#!/bin/sh
# output this script except first 4 lines to docker
tail -n+5 "$0" | docker run -i --rm alpine /bin/sh -x
exit # we exit original script
#!/bin/sh
# inside docker now
echo abc
ls /
Execution:
$ bash -x ./script.sh
+ tail -n+5 ./script.sh
+ docker run -i --rm alpine /bin/sh -x
+ echo abc
+ ls /
abc
bin
...
var
+ exit
In a similar fashion you could use sed or another parsing tool to extract only the relevant part between some markers, for example.
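A minimal sketch of that sed variant (the #BEGIN/#END marker names are made up for illustration):
#!/bin/sh
# send only the marked region of this script to the container
sed -n '/^#BEGIN/,/^#END/p' "$0" | docker run -i --rm alpine /bin/sh -x
exit

#BEGIN
echo abc
ls /
#END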
I found a gist that explained how to pipe commands into docker exec:
echo "echo foo" | docker exec -i <id> /bin/sh -
Now we need a way to pipe multiple commands. Command groups won't work because they run on the host, and semicolon-separated commands can get messy. I thought of writing a function and getting just its body; it turns out you can do that with a simple declare and sed call.
You can combine all these pieces to pipe a command into the container:
function func {
    echo "foo"
    ls /
}
declare -f func | sed '1,2d;$d' | docker exec -i <id> /bin/bash -
Syntax highlighting still works in the function and it is easy to read.
If you want to use environment variables that are on the host in the container you have to list them manually in docker exec like so:
... | docker exec -i -e VAR=$VAR <id> /bin/bash -
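Putting the pieces together (MESSAGE is a hypothetical variable name used only for illustration):
function func {
    echo "$MESSAGE"
    ls /
}
declare -f func | sed '1,2d;$d' | docker exec -i -e MESSAGE="$MESSAGE" <id> /bin/bash -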
Edit: I'm leaving this here as a possible solution, but the accepted answer is the proper solution I am using.

Mac terminal: trying to add to /etc/shells

This one works
$ cat /etc/shells
# List of acceptable shells for chpass(1).
# Ftpd will not allow users to connect who are not using
# one of these shells.
/bin/bash
/bin/csh
/bin/ksh
/bin/sh
/bin/tcsh
/bin/zsh
But this one does not:
sudo -s 'echo /usr/local/bin/zsh >> /etc/shells'
/bin/bash: echo /usr/local/bin/zsh >> /etc/shells: No such file or directory
sudo takes the string as a complete command. You should use a shell to interpret your command, like this:
sudo sh -c 'echo /usr/local/bin/zsh >> /etc/shells'
This executes sh with root privileges, and sh interprets the string as a shell command including >> as output redirection.
The only thing you really need sudo for is to open the protected file for writing. You can use the tee command to append to the file.
echo /usr/local/bin/zsh | sudo tee -a /etc/shells > /dev/null

docker run -i -t image /bin/bash - source files first

This works:
# echo 1 and exit:
$ docker run -i -t image /bin/bash -c "echo 1"
1
# exit
# echo 1 and return shell in docker container:
$ docker run -i -t image /bin/bash -c "echo 1; /bin/bash"
1
root@4c064f2554de:/#
Question: How could I source a file into the shell? (this does not work)
$ docker run -i -t image /bin/bash -c "source <(curl -Ls git.io/apeepg) && /bin/bash"
# content from http://git.io/apeepg is sourced and shell is returned
root@4c064f2554de:/#
In my case, I use a RUN source command in a Dockerfile to install nvm for node.js; it runs under /bin/bash because the Dockerfile first points /bin/sh at /bin/bash (RUN uses /bin/sh by default).
Here is an example.
FROM ubuntu:14.04
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
...
...
RUN source ~/.nvm/nvm.sh && nvm install 0.11.14
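On Docker 1.12 and later, an alternative sketch (not from the original answer) is to switch the build shell with the SHELL instruction instead of re-linking /bin/sh:
FROM ubuntu:14.04
SHELL ["/bin/bash", "-c"]
# ... install nvm here as in the original Dockerfile ...
RUN source ~/.nvm/nvm.sh && nvm install 0.11.14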
I wanted something similar, and expanding a bit on your idea, came up with the following:
docker run -ti --rm ubuntu \
bash -c 'exec /bin/bash --rcfile /dev/fd/1001 \
1002<&0 \
<<<$(echo PS1=it_worked: ) \
1001<&0 \
0<&1002'
--rcfile /dev/fd/1001 will use that file descriptor's contents instead of .bashrc
1002<&0 saves stdin
<<<$(echo PS1=it_worked: ) puts PS1=it_worked: on stdin
1001<&0 moves this stdin to fd 1001, which we use as rcfile
0<&1002 restores the stdin that we saved initially
You can use .bashrc in interactive containers:
RUN curl -O git.io/apeepg.sh && \
echo 'source apeepg.sh' >> ~/.bashrc
Then just run as usual with docker run -it --rm some/image bash.
Note that this will only work with interactive containers.
I don't think you can do this, at least not right now. What you could do is modify your image, and add the file you want to source, like so:
FROM image
ADD my-file /my-file
# the JSON exec form bypasses the shell, so "source" and "&&" need an explicit bash -c
RUN ["/bin/bash", "-c", "source /my-file && /bin/bash"]
