For the past three days I have been learning to work with Docker.
Along the way I have run commands like
sudo docker run --rm -ti --net=example --name server ubuntu:14.04 bash
and
nc -lp 1234
I was wondering why some commands take options with a single - while others use --.
Is there any logic to it?
Regarding the topic of my question: I am aware that it is not a good title; I am sorry for that. The question came up while working with Docker, but I do not know whether the - versus -- distinction is a terminal topic or a Docker topic.
A single dash can be followed by multiple single-character flags. A double dash is followed by a single, multi-character flag.
In your case:
sudo docker run --rm -ti --net=example --name server ubuntu:14.04 bash
flags:
rm (multi-character)
t (single)
i (single)
net (multi-character)
name (multi-character)
and
nc -lp 1234
flags:
l (single)
p (single)
It depends on the command. There are conventions, but none of them are followed universally.
In the Old Days, options were single letters. If an option took an argument, it could follow the option letter with or without an intervening space (command -x foo or command -xfoo).
Options that don't take arguments can be bundled, so command -x -y can be written as command -xy. For many commands, even options that do take arguments can be bundled, with the last specified option taking the argument: command -x -y foo vs. command -xy foo. The nc -lp 1234 in your question is an example of this; l and p are two different options. That could also have been written as nc -l -p 1234.
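The bundling behaviour is easy to see with the shell's own getopts, which implements these same conventions (a minimal sketch; the option letters here are made up):

```shell
# "xyp:" means: -x and -y take no argument, -p takes one argument.
parse() {
  OPTIND=1   # reset so the function can be called more than once
  while getopts "xyp:" opt "$@"; do
    case $opt in
      x) echo "x set" ;;
      y) echo "y set" ;;
      p) echo "p = $OPTARG" ;;
    esac
  done
}

parse -x -y -p 1234   # separate options
parse -xyp 1234       # bundled: getopts parses them identically
```

Both invocations print the same three lines, which is exactly why nc -lp 1234 and nc -l -p 1234 behave the same.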
Commands from the GNU project can typically accept options either in the traditional short form or in a long form, where the option name is an entire word that can be abbreviated as long as the abbreviation is unique. For example, ls has a -F option to append a / to directory names and so forth. GNU ls lets this be specified as --classify, or abbreviated as --cl. To avoid ambiguity and for backwards compatibility, old-style single-letter options use a single -, while long-form options use --.
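For instance, assuming a GNU coreutils ls (BSD/macOS ls does not accept long options), the short form, the long form, and a unique abbreviation are interchangeable:

```shell
# assumes GNU coreutils ls; BSD/macOS ls has no long options
d=$(mktemp -d)
mkdir "$d/subdir"
ls -F "$d"          # short form: prints subdir/
ls --classify "$d"  # long form: same output
ls --cl "$d"        # unambiguous abbreviation, also accepted by GNU ls
```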
Finally some commands take options with long names introduced by a single -; the find command is an example of this.
The only real solution is to read the man page for the specific command you're running.
Related
I’m setting up a Docker container to be a simple environment for OCaml, since I don’t want to manage two OPAM toolchains on two computers. (Windows desktop, Linux laptop.) My goal is to have the container load into a bash prompt on docker-compose run with OCaml ready to go, and to do this I need to enter bash and then run eval $(opam env) on startup. This is my current Dockerfile:
FROM ocaml/opam:alpine-3.12
# Create folder and assign owner
USER root
RUN mkdir /code
WORKDIR /code
RUN chown opam:opam /code
USER opam
# Install ocaml
RUN opam init
RUN opam switch create 4.11.1
RUN opam install dune
# bash env
CMD [ "/bin/bash" ]
ENTRYPOINT [ "eval", "\$(opam env)" ]
Building and trying to run this gives me the error:
sh: $(opam env): unknown operand
ERROR: 2
I tried making a run.sh script but that ran into some chmod/permission issues that are probably harder to debug than this. What do I do to open this container in bash and then run the eval $(opam env) command? I don’t want to do this with command line arguments, I’d like to do this all in a dockerfile or docker-compose file
The trick is to use opam exec¹ as the entry point, e.g.,
ENTRYPOINT ["opam", "exec", "--"]
Then you can either run a command from the installed switch directly, or just start an interactive shell with run -it --rm <cont> sh, and you will have the switch fully activated, e.g.,
$ docker run -it --rm binaryanalysisplatform/bap:latest sh
$ which ocaml
/home/opam/.opam/4.09/bin/ocaml
As an aside, since we're talking about Docker and OCaml, let me share some more tricks. First of all, you can look into our collection of dockerfiles in BAP for some inspiration. Another important trick I would like to share is using multistage builds to shrink the size of the image; here's an example Dockerfile. In our case, it gives us a reduction from 7.5 GB to only 750 MB, while still preserving the ability to run and build OCaml programs.
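As a sketch of what such a multistage build can look like (all the names here, such as the app directory and main.exe, are illustrative, not taken from the BAP repository):

```dockerfile
# Build stage: the heavy opam/compiler image is only used to build
FROM ocaml/opam2:alpine AS build
WORKDIR /home/opam/app
COPY . .
RUN opam install . --deps-only \
 && opam exec -- dune build ./main.exe

# Runtime stage: copy only the built binary into a small base image
FROM alpine:latest
COPY --from=build /home/opam/app/_build/default/main.exe /usr/local/bin/main
ENTRYPOINT ["/usr/local/bin/main"]
```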
And another side note :) You should also run your installation in a single RUN entry, otherwise your layers will eventually diverge and you will get weird missing-package errors. Basically, here's the Dockerfile that you're looking for:
FROM ocaml/opam2:alpine
WORKDIR /home/opam
RUN opam switch 4.11.1 \
&& eval "$(opam env)" \
&& opam remote set-url default https://opam.ocaml.org \
&& opam update \
&& opam install dune \
&& opam clean -acrs
ENTRYPOINT ["opam", "exec", "--"]
¹ Or opam config exec, i.e., ENTRYPOINT ["opam", "config", "exec", "--"], for older versions of opam.
There's no way to tell Docker to do something after the main container process has started, or to send input to the main container process.
What you can do is to write a wrapper script that does some initial setup and then runs whatever the main container process is. Since that eval command will just set environment variables, those will carry through to the main shell.
#!/bin/sh
# entrypoint.sh
# Set up the version-manager environment
eval $(opam env)
# Run the main container command
exec "$@"
In the Dockerfile, make this script be the ENTRYPOINT:
COPY entrypoint.sh /usr/local/bin
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["/bin/bash"]
It also might work to put this setup in a shell dotfile and run bash -l as the main container command to force it to read dotfiles. However, the $HOME directory isn't usually well-defined in Docker, so you might need to set that variable. If you expand this setup to run a full application, the entrypoint-wrapper approach will work there too, but that sequence probably won't read shell dotfiles at all.
What you show looks like an extremely straightforward installation sequence and I might not change it, but be aware that there are complexities around using version managers in Docker. In particular every Dockerfile RUN command has a new shell environment and the eval command won't "stick". I'd ordinarily suggest picking a specific version of the toolchain and directly installing it, maybe in /usr/local, without a version manager, but that approach will be much more complex than what you have currently. For more mainstream languages you can also usually use e.g. a node:16.13 prebuilt image.
What's with the error you're getting? For ENTRYPOINT and CMD (and also RUN) Docker has two forms. If something is a JSON array then Docker runs the command as a sequence of words, with one word in the array translating to one word in the command, and no additional interpretation or escaping. If it isn't a JSON array – even if it's mostly a JSON array, but has a typo – Docker will interpret it as a shell command and run it with sh -c. Docker applies this rule separately to the ENTRYPOINT and CMD, and then combines them together into a single command.
In your ENTRYPOINT line in particular: RFC 8259 §7 defines the valid character escapes in JSON, so \n is a newline and so on, but \$ is not one of them. That makes the embedded string invalid, so the ENTRYPOINT line isn't valid JSON, and Docker runs it via a shell. The single main container command is then
sh -c '[ "eval", "\$(opam env)" ]' '/bin/bash'
which runs the shell command [, as in if [ "$1" = yes ]; then ...; fi. That command doesn't understand the $(...) string as an argument, which is the error you're getting.
The JSON array already has escaped the things that need to be escaped, so it looks like you could get around this immediate error by removing the erroneous backslash
ENTRYPOINT ["eval", "$(opam env)"] # won't actually work
Docker will run this as-is, combining it with the CMD, and you get
'eval' '$(opam env)' '/bin/bash'
But eval isn't a "real" command – there is no /bin/eval binary – and Docker will pass on the literal string $(opam env) without interpreting it at all. That's also not what you want.
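You can verify that eval exists only as a shell builtin and not as a program on disk:

```shell
# "type" reports how the shell resolves a name; eval is a builtin,
# so there is no separate program file for Docker to execute
type eval                          # prints something like: eval is a shell builtin
ls /bin/eval 2>/dev/null || echo "no /bin/eval binary"
```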
In principle it's possible to do this without writing a script, but you lose a lot of flexibility. For example, consider
# no ENTRYPOINT; shell-form CMD
CMD eval $(opam env) && exec /bin/bash
Again, though, if you replace this CMD with anything else you won't have done the initial setup step.
For some reason, running docker commands from bash scripts doesn't work when I use ordinary variables. For example:
c=$(date +"%x")
targets="www.example.com"
docker build -t amass https://github.com/OWASP/Amass.git
docker run amass --passive -d $targets > $c.txt
The error is as follows:
./main.sh: 13: ./main.sh: cannot create 12/29/2018.txt: Directory nonexistent
Running the same commands directly from a terminal works. How can I fix this?
In your situation, it is too risky to use the %x option of date, which stands for:
%x locale's date representation (e.g., 12/31/99)
You have no control over the result, and the behaviour may differ between your testing computer and the Docker host if their locales differ.
In any case, a date format containing slashes (/), which will be interpreted as directory separators, will lead to exactly this issue.
For both reasons, you should define the format of your date explicitly.
For instance:
#!/bin/bash
c=$(date +'%Y-%m-%d-%H-%M-%S')
targets="www.example.com"
docker build -t amass https://github.com/OWASP/Amass.git
docker run amass --passive -d "$targets" > "$c.txt"
You should include as many components (hour, minute, second, ...) in the date as needed for how often you expect to run the script; otherwise, the output of a previous run will be overwritten.
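To see the difference between the locale-dependent %x and an explicit format:

```shell
# %x is locale-dependent and may contain slashes -- unsafe in a filename
date +%x                           # e.g. 12/29/2018
# an explicit format is stable across locales and filesystem-safe
stamp=$(date +'%Y-%m-%d-%H-%M-%S')
echo "$stamp.txt"                  # e.g. 2018-12-29-14-05-33.txt
```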
Is there a way of running in windows command prompt, a "docker run" command over multiple lines e.g. instead of
"docker run -v .. --name .. --entrypoint .. <image_name>"
something like
"docker run
-v. ..
--name ..
--entrypoint ..
<image_name>"
It's becoming a pain to edit! Thanks, Jonny
Edit: I've tried adding in the ^ character, but it doesn't work. It just errors with "docker run" requires at least 1 argument.
I think the best tool for this is docker-compose, which uses a docker-compose.yml file to hold the run or build instructions.
It is a plain text file in yml or yaml format; see the reference documentation for the keywords used to prepare it.
https://docs.docker.com/compose/overview/
Compose is a tool for defining and running multi-container Docker applications.
With Compose, you use a YAML file to configure your application’s services.
Then, with a single command, you create and start all the services from your configuration.
To learn more about all the features of Compose, see the list of features.
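As a minimal sketch, the flags from the question map onto a docker-compose.yml roughly like this (the service name and paths are illustrative):

```yaml
version: "3"
services:
  app:
    image: <image_name>
    container_name: mycontainer   # replaces --name
    volumes:                      # replaces -v
      - ./data:/data
    entrypoint: /entrypoint.sh    # replaces --entrypoint
```

You can then start it with docker-compose run app instead of retyping the long docker run command.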
In case anyone ended up here instead of the possible duplicates above and is put off because ^ didn't work for them: if you are using cmd (i.e. not PowerShell), replace the space and backslash ( \) that end each line of the Linux command with a space and caret ( ^) at the end of every line except the last. That works for me:
docker run -v ^
more? --name ^
more? --entrypoint ^
more? <image_name>
The more? prompt is Command Prompt asking, by default, for the continuation of the command.
On Windows, you can use the caret (^), and on Mac/Linux the backslash (\), to split a docker command across multiple lines.
I have a bash script that uses docopts. It works beautifully on my Debian machine, but fails to set defaults on my Ubuntu laptop. Here's the docopts code:
eval "$(docopts -A args -V - -h - : "$#" <<EOF
Usage: cmus_select.sh [--list <tag>] [--random] [--structure=</alt/dir/structure>] [--dir=/path/to/music]
-h --help Show help
-r --random Randomize selection instead of using a dmenu
-l --list TAG List music by tag. Unless you use -s, genre, artist, album, and song are expected. [default: song]
-s --structure STRUCT Directory structure for your music. [default: genre/artist/album/song]
-d --dir DIR Location of music [default: $HOME/Music/]
----
cmus_select.sh 0.0.1
EOF
)"
I found the two spaces requirement and already checked that (not sure if stackoverflow formatting will eat the spaces.)
Both machines use docopts 0.6.1+fix. The Debian machine uses bash 4.2.37 and python 2.7.3. The Ubuntu machine is on 4.2.45 and 2.7.5+.
I tried a variety of ways to describe the options: a different order of -l/--list, an = sign between the option and its variable, the variable name in angle brackets, etc. It reliably works on Debian and not on Ubuntu.
-- follow-up --
I encountered the same problem on a Debian testing machine. Docopts is looking for a new maintainer, so I gave up. As an alternative I wrote https://raw.github.com/sagotsky/.dotfiles/612fe9e5c4aa7e1fae268810b24f8f80960a6d66/scripts/argh.sh, which is smaller than docopts but does what I need.
After developing a simple shell/bash backup script on my Ubuntu machine and getting it working, I uploaded it to my Debian server, which outputs a number of errors while executing it.
What can I do to turn on "error handling" on my Ubuntu machine to make this easier to debug?
ssh into the server
run the script by hand with either -v or -x or both
try to duplicate the user, group, and environment of the failing run in your terminal window; if necessary, run the program with something like su -c 'sh -v script' otheruser
You might also want to pipe the result of the bad command, particularly if run by cron(8), into /bin/logger, perhaps something like:
sh -v -x badscript 2>&1 | /bin/logger -t badscript
and then go look at /var/log/messages.
Bash lets you turn on debugging selectively, or completely with the set command. Here is a good reference on how to debug bash scripts.
The command set -x will turn on debugging anywhere in your script. Likewise, set +x will turn it off again. This is useful if you only want to see debug output from parts of your script.
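A minimal script showing selective tracing (the file name is illustrative):

```shell
cat > /tmp/trace_demo.sh <<'EOF'
echo "before: not traced"
set -x                 # start tracing; commands are echoed to stderr with a + prefix
answer=$((6 * 7))
set +x                 # stop tracing
echo "after: answer=$answer"
EOF
bash /tmp/trace_demo.sh
```

Only the commands between set -x and set +x appear in the trace on stderr; the echoes before and after run silently.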
Change your shebang line to include the trace option:
#!/bin/bash -x
You can also have Bash scan the file for errors without running it:
$ bash -n scriptname
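For example, given a script with a deliberate syntax error (the file name is illustrative):

```shell
# a script whose "if" is never closed
cat > /tmp/broken.sh <<'EOF'
if [ -f /etc/passwd ]; then
  echo "found"
EOF
# -n parses without executing; it exits non-zero on a syntax error
bash -n /tmp/broken.sh 2>/dev/null || echo "syntax check failed"
```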