[Updated1] I have a shell script which changes TCP kernel parameters in some functions, but now I need to make this script run in a Docker container. That means the script needs to know whether it is running inside a container and, if so, skip configuring the kernel.
Now I'm not sure how to achieve that. Here are the contents of /proc/self/cgroup inside the container:
9:hugetlb:/
8:perf_event:/
7:blkio:/
6:freezer:/
5:devices:/
4:memory:/
3:cpuacct:/
2:cpu:/docker/25ef774c390558ad8c4e9a8590b6a1956231aae404d6a7aba4dde320ff569b8b
1:cpuset:/
Are there any flags above that I can use to figure out whether this process is running inside a container?
[Updated2]: I have also noticed Determining if a process runs inside lxc/Docker, but it doesn't seem to work in this case; the content of /proc/1/cgroup in my container is:
8:perf_event:/
7:blkio:/
6:freezer:/
5:devices:/
4:memory:/
3:cpuacct:/
2:cpu:/docker/25ef774c390558ad8c4e9a8590b6a1956231aae404d6a7aba4dde320ff569b8b
1:cpuset:/
No /lxc/containerid
Docker creates .dockerenv and .dockerinit (the latter removed in v1.11) files at the top of the container's directory tree, so you might want to check if those exist.
Something like this should work:
#!/bin/bash
if [ -f /.dockerenv ]; then
    echo "I'm inside matrix ;(";
else
    echo "I'm living in real world!";
fi
Checking from inside a Docker container whether you are in a container or not can be done via /proc/1/cgroup. As this post suggests, you can do the following:
Outside a Docker container, all entries in /proc/1/cgroup end in /, as you can see here:
vagrant@ubuntu-13:~$ cat /proc/1/cgroup
11:name=systemd:/
10:hugetlb:/
9:perf_event:/
8:blkio:/
7:freezer:/
6:devices:/
5:memory:/
4:cpuacct:/
3:cpu:/
2:cpuset:/
Inside a Docker container some of the control groups will belong to Docker (or LXC):
vagrant@ubuntu-13:~$ docker run busybox cat /proc/1/cgroup
11:name=systemd:/
10:hugetlb:/
9:perf_event:/
8:blkio:/
7:freezer:/
6:devices:/docker/3601745b3bd54d9780436faa5f0e4f72bb46231663bb99a6bb892764917832c2
5:memory:/
4:cpuacct:/
3:cpu:/docker/3601745b3bd54d9780436faa5f0e4f72bb46231663bb99a6bb892764917832c2
2:cpuset:/
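For example, a minimal sketch of this check as a shell test (assuming cgroup v1 entries like the ones shown above):
#!/bin/bash
# Succeeds when any cgroup of PID 1 sits under a /docker/ hierarchy (cgroup v1 only).
if grep -q ':/docker/' /proc/1/cgroup 2>/dev/null; then
    echo "in docker"
else
    echo "not in docker"
fi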
We use proc's sched file (/proc/$PID/sched) to extract the PID of the process. The process's PID inside the container will differ from its PID on the host (a non-container system).
For example, the output of /proc/1/sched in a container will return:
root@33044d65037c:~# cat /proc/1/sched | head -n 1
bash (5276, #threads: 1)
While on a non-container host:
$ cat /proc/1/sched | head -n 1
init (1, #threads: 1)
This helps to differentiate whether you are in a container or not. E.g. you can do:
if [[ ! $(head -n 1 /proc/1/sched | grep init) ]]; then
    echo in docker
else
    echo not in docker
fi
Using Environment Variables
For my money, I prefer to set an environment variable inside the docker image that can then be detected by the application.
For example, this is the start of a demo Dockerfile config:
FROM node:12.20.1 as base
ENV DOCKER_RUNNING=true
RUN yarn install --production
RUN yarn build
The second line sets an env var called DOCKER_RUNNING that is then easy to detect. The issue with this is that in a multi-stage build, you will have to repeat the ENV line every time you FROM off of an external image. For example, you can see that I FROM off of node:12.20.1, which includes a lot of extra stuff (git, for example). Later on in my Dockerfile I then COPY things over to a new image based on node:12.20.1-slim, which is much smaller:
FROM node:12.20.1-slim as server
ENV DOCKER_RUNNING=true
EXPOSE 3000
COPY --from=base /build /build
CMD ["node", "server.js"]
Even though this image target, server, is in the same Dockerfile, it requires the ENV var to be defined again because it has a different base image.
If you make use of Docker Compose, you could instead easily define an env var there. For example, your docker-compose.yml file could look like this:
version: "3.8"
services:
nodeserver:
image: michaeloryl/stackdemo
environment:
- NODE_ENV=production
- DOCKER_RUNNING=true
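Either way, a script inside the container can then branch on the variable. A minimal sketch using the DOCKER_RUNNING variable defined above:
#!/bin/bash
# DOCKER_RUNNING is the variable set in the Dockerfile / compose file above.
if [ "${DOCKER_RUNNING:-}" = "true" ]; then
    echo "inside a container - skipping kernel tuning"
else
    echo "on the host - safe to configure the kernel"
fi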
Thomas' solution as code:
running_in_docker() {
(awk -F/ '$2 == "docker"' /proc/self/cgroup | read non_empty_input)
}
Note
The read with a dummy variable is a simple idiom for "does this produce any output?". It's a compact way of turning a possibly verbose grep or awk into a test of a pattern.
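For example, a hypothetical caller could use the function like this:
if running_in_docker; then
    echo "skipping TCP kernel tuning"
fi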
What works for me is to check the inode number of '/'.
Inside Docker, it's a very high number.
Outside Docker, it's a very low number like '2'.
I reckon this approach would also depend on the filesystem being used.
Example
Inside the docker:
# ls -ali / | sed '2!d' | awk '{print $1}'
1565265
Outside the docker:
$ ls -ali / | sed '2!d' | awk '{print $1}'
2
In a script:
#!/bin/bash
INODE_NUM=$(ls -ali / | sed '2!d' | awk '{print $1}')
if [ "$INODE_NUM" = '2' ]; then
    echo "Outside the docker"
else
    echo "Inside the docker"
fi
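If GNU coreutils is available, the same idea can be written without parsing ls output (a sketch assuming GNU stat, whose %i format prints the inode number):
#!/bin/bash
if [ "$(stat -c %i /)" -eq 2 ]; then
    echo "Outside the docker"
else
    echo "Inside the docker"
fi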
We needed to exclude processes running in containers, but instead of checking for just docker cgroups we decided to compare /proc/<pid>/ns/pid to the init system at /proc/1/ns/pid. Example:
pid=$(ps ax | grep "[r]edis-server \*:6379" | awk '{print $1}')
if [ "$(readlink "/proc/$pid/ns/pid")" = "$(readlink /proc/1/ns/pid)" ]; then
    echo "pid $pid is in the same namespace as the init system"
else
    echo "pid $pid is in a different namespace than the init system"
fi
Or, in our case, we wanted a one-liner that generates an error if the process is NOT in a container:
bash -c "test -h /proc/4129/ns/pid && test $(readlink /proc/4129/ns/pid) != $(readlink /proc/1/ns/pid)"
which we can execute from another process; if the exit code is zero, the specified PID is running in a different namespace.
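A reusable sketch of the same check, with the PID as a parameter (in_other_pid_ns is a hypothetical helper name):
in_other_pid_ns() {
    local pid=$1
    # Succeeds when the pid namespace of $pid differs from that of PID 1.
    [ -h "/proc/$pid/ns/pid" ] &&
        [ "$(readlink "/proc/$pid/ns/pid")" != "$(readlink /proc/1/ns/pid)" ]
}
in_other_pid_ns 4129 && echo "pid 4129 runs in a different pid namespace"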
Go code that reads /proc/<pid>/cgroup to check whether a process runs in Docker, including in a Kubernetes cluster:
import (
    "fmt"
    "io/ioutil"
    "strings"
)

func GetContainerID(pid int32) string {
    cgroupPath := fmt.Sprintf("/proc/%d/cgroup", pid)
    return getContainerID(cgroupPath)
}
// containerImage maps container IDs to image names; it must be populated elsewhere.
var containerImage = map[string]string{}

func GetImage(containerId string) string {
    if containerId == "" {
        return ""
    }
    if image, ok := containerImage[containerId]; ok {
        return image
    }
    return ""
}
func getContainerID(cgroupPath string) string {
    containerID := ""
    content, err := ioutil.ReadFile(cgroupPath)
    if err != nil {
        return containerID
    }
    lines := strings.Split(string(content), "\n")
    for _, line := range lines {
        field := strings.Split(line, ":")
        if len(field) < 3 {
            continue
        }
        path := field[2]
        if len(path) < 64 {
            continue
        }
        // Non-systemd Docker:
        // 5:net_prio,net_cls:/docker/de630f22746b9c06c412858f26ca286c6cdfed086d3b302998aa403d9dcedc42
        // 3:net_cls:/kubepods/burstable/pod5f399c1a-f9fc-11e8-bf65-246e9659ebfc/9170559b8aadd07d99978d9460cf8d1c71552f3c64fefc7e9906ab3fb7e18f69
        pos := strings.LastIndex(path, "/")
        if pos > 0 {
            // The last path component is a 64-character container ID.
            if len(path)-pos-1 == 64 {
                containerID = path[pos+1 : pos+1+64]
                return containerID
            }
        }
        // systemd Docker:
        // 5:net_cls:/system.slice/docker-afd862d2ed48ef5dc0ce8f1863e4475894e331098c9a512789233ca9ca06fc62.scope
        dockerStr := "docker-"
        pos = strings.Index(path, dockerStr)
        if pos > 0 {
            posScope := strings.Index(path, ".scope")
            if posScope > 0 && posScope-pos-len(dockerStr) == 64 {
                containerID = path[pos+len(dockerStr) : pos+len(dockerStr)+64]
                return containerID
            }
        }
    }
    return containerID
}
Based on Dan Walsh's comment about using SELinux (ps -eZ | grep container_t), but without requiring ps to be installed:
$ podman run --rm fedora:31 cat /proc/1/attr/current
system_u:system_r:container_t:s0:c56,c299
$ podman run --rm alpine cat /proc/1/attr/current
system_u:system_r:container_t:s0:c558,c813
$ docker run --rm fedora:31 cat /proc/1/attr/current
system_u:system_r:container_t:s0:c8,c583
$ cat /proc/1/attr/current
system_u:system_r:init_t:s0
This just tells you you're running in a container, but not which runtime.
I didn't check other container runtimes, but https://opensource.com/article/18/2/understanding-selinux-labels-container-runtimes provides more info and suggests this is widely used; it might also work for rkt and lxc.
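As a sketch, a shell script could branch on that label directly (this assumes SELinux is enabled on the host):
case "$(cat /proc/1/attr/current 2>/dev/null)" in
    *:container_t:*) echo "in a container" ;;
    *) echo "not detected as a container" ;;
esac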
What works for me, as long as I know which systems my programs/scripts will be running on, is confirming whether what's running with PID 1 is systemd (or equivalent). If not, that's a container.
And this should be true for any Linux container, not only Docker.
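A minimal sketch of that check (assumes ps is available; comm= prints the bare command name of PID 1):
#!/bin/bash
if [ "$(ps -p 1 -o comm=)" != "systemd" ]; then
    echo "PID 1 is not systemd - assuming we are in a container"
fi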
I needed this capability in 2022 on macOS, and of all the other options only the answer by @at0S still works:
/proc/1/cgroup only has the root directory in a container, unless configured otherwise.
/proc/1/sched showed the same in-container process number. The name was different (bash), but that's not very portable.
Environment variables work if you configure your container yourself, but none of the default environment variables helped.
I did find an option not listed in the other answers: /proc/1/mounts includes an overlay filesystem with "docker" in its path.
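A sketch of that last check (this assumes the overlay2 storage driver, whose backing directories live under /var/lib/docker):
# Succeeds when / is an overlay mount whose options mention docker.
if awk '$2 == "/" && $3 == "overlay"' /proc/1/mounts | grep -q docker; then
    echo "probably inside a docker container"
fi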
I have something like this in a Jenkinsfile (Groovy), and I want to record the stdout and the exit code in a variable in order to use the information later.
sh "ls -l"
How can I do this, especially as it seems that you cannot really run any kind of Groovy code inside the Jenkinsfile?
The latest version of the pipeline sh step allows you to do the following:
// Git committer email
GIT_COMMIT_EMAIL = sh(
    script: 'git --no-pager show -s --format=\'%ae\'',
    returnStdout: true
).trim()
echo "Git committer email: ${GIT_COMMIT_EMAIL}"
Another feature is the returnStatus option.
// Test commit message for flags
BUILD_FULL = sh(
    script: "git log -1 --pretty=%B | grep '\\[jenkins-full]'",
    returnStatus: true
) == 0
echo "Build full flag: ${BUILD_FULL}"
These options were added based on this issue.
See official documentation for the sh command.
For declarative pipelines (see comments), you need to wrap the code in a script step:
script {
    GIT_COMMIT_EMAIL = sh(
        script: 'git --no-pager show -s --format=\'%ae\'',
        returnStdout: true
    ).trim()
    echo "Git committer email: ${GIT_COMMIT_EMAIL}"
}
Current Pipeline version natively supports returnStdout and returnStatus, which make it possible to get output or status from sh/bat steps.
An example:
def ret = sh(script: 'uname', returnStdout: true)
println ret
See the official documentation.
The quick answer is this:
sh "ls -l > commandResult"
result = readFile('commandResult').trim()
I think there exists a feature request to be able to get the result of the sh step, but as far as I know there is currently no other option.
EDIT: JENKINS-26133
EDIT2: Not quite sure since which version, but sh/bat steps can now return the std output, simply:
def output = sh returnStdout: true, script: 'ls -l'
If you want to get the stdout AND know whether the command succeeded or not, just use returnStdout and wrap it in an exception handler:
Scripted pipeline:
try {
    // Fails with a non-zero exit if dir1 does not exist
    def dir1 = sh(script: 'ls -la dir1', returnStdout: true).trim()
} catch (Exception ex) {
    println("Unable to read dir1: ${ex}")
}
output:
[Pipeline] sh
[Test-Pipeline] Running shell script
+ ls -la dir1
ls: cannot access dir1: No such file or directory
[Pipeline] echo
Unable to read dir1: hudson.AbortException: script returned exit code 2
Unfortunately, hudson.AbortException is missing any useful method for obtaining that exit status, so if the actual value is required you'd need to parse it out of the message (ugh!).
Contrary to the Javadoc (https://javadoc.jenkins-ci.org/hudson/AbortException.html), the build is not failed when this exception is caught. It fails when it's not caught!
Update:
If you also want the STDERR output from the shell command, Jenkins unfortunately fails to properly support that common use-case. A 2017 ticket JENKINS-44930 is stuck in a state of opinionated ping-pong whilst making no progress towards a solution - please consider adding your upvote to it.
As to a solution now, there could be a couple of possible approaches:
a) Redirect STDERR to STDOUT with 2>&1 - but it's then up to you to parse it out of the main output, and you won't get the output if the command failed, because you're in the exception handler.
b) Redirect STDERR to a temporary file (the name of which you prepare earlier) with 2>filename (but remember to clean up the file afterwards) - i.e. the main code becomes:
def stderrfile = 'stderr.out'
try {
    def dir1 = sh(script: "ls -la dir1 2>${stderrfile}", returnStdout: true).trim()
} catch (Exception ex) {
    def errmsg = readFile(stderrfile)
    println("Unable to read dir1: ${ex} - ${errmsg}")
}
c) Go the other way: set returnStatus=true instead, dispense with the exception handler, and always capture output to a file, i.e.:
def outfile = 'stdout.out'
def status = sh(script: "ls -la dir1 >${outfile} 2>&1", returnStatus: true)
def output = readFile(outfile).trim()
if (status == 0) {
    // output is the directory listing from stdout
} else {
    // output is the error message from stderr
}
Caveat: the above code is Unix/Linux-specific - Windows requires completely different shell commands.
Here is a sample case, which I believe will make sense!
node('master') {
    stage('stage1') {
        def commit = sh(returnStdout: true, script: '''echo hi
echo bye | grep -o "e"
date
echo lol''').split()
        echo "${commit[-1]}"
    }
}
For those who need to use the output in subsequent shell commands rather than in Groovy, something like this example could be done:
stage('Show Files') {
    environment {
        MY_FILES = sh(script: 'cd mydir && ls -l', returnStdout: true)
    }
    steps {
        sh '''
            echo "$MY_FILES"
        '''
    }
}
I found the examples on code maven to be quite useful.
All the above methods will work, but to use the variable as an environment variable inside your code you need to export it first.
script {
    sh "'shell command here' > command"
    command_var = readFile('command').trim()
    sh "export command_var=$command_var"
}
Replace the shell command with the command of your choice. Now, if you are using Python code, you can just call os.getenv("command_var"), which will return the output of the shell command executed previously.
How to read a shell variable in Groovy / how to assign a shell return value to a Groovy variable.
Requirement: open a text file, read the lines using shell, store the values in Groovy, and get the parameters from each line.
Here ',' is the delimiter.
Ex: releaseModules.txt
./APP_TSBASE/app/team/i-home/deployments/ip-cc.war/cs_workflowReport.jar,configurable-wf-report,94,23crb1,artifact
./APP_TSBASE/app/team/i-home/deployments/ip.war/cs_workflowReport.jar,configurable-temppweb-report,394,rvu3crb1,artifact
========================
Here we want to get the module name (2nd parameter: configurable-wf-report), the build number (3rd parameter: 94), and the commit id (4th parameter: 23crb1).
def module = sh(script: """awk -F',' '{ print \$2 "," \$3 "," \$4 }' releaseModules.txt | sort -u """, returnStdout: true).trim()
echo module
List lines = module.split( '\n' ).findAll { !it.startsWith( ',' ) }
def buildid
def Modname
lines.each {
    List det1 = it.split(',')
    buildid = det1[1].trim()
    Modname = det1[0].trim()
    tag = det1[2].trim()
    echo Modname
    echo buildid
    echo tag
}
If you don't have a single sh command but a block of sh commands, returnStdout won't work then.
I had a similar issue, where I applied something that is not a clean way of doing this, but it eventually worked and served the purpose.
Solution -
In the shell block, echo the value and write it to some file.
Outside the shell block and inside the script block, read this file, trim it, and assign it to any local/params/environment variable.
Example -
steps {
    script {
        sh '''
            echo $PATH > path.txt
            # using '>' so that a new file is created every time with the newest value of PATH
        '''
        path = readFile(file: 'path.txt')
        path = path.trim() // local groovy variable assignment
        // One can assign these values to env and params as below -
        env.PATH = path // if you want to assign it to an env var
        params.PATH = path // if you want to assign it to a params var
    }
}
The easiest way is this:
my_var=`echo 2`
echo $my_var
output:
2
Note that this is not a simple single quote but a backquote ( ` ).
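The same idea with the modern $( ) form, which is usually preferred over backquotes because it nests cleanly:
my_var=$(echo 2)
echo "$my_var"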
I am trying to conditionally add arguments to a docker call in a bash script, but docker says the flag is unknown, even though I can run the command verbatim by hand and it works.
I have tried a few strategies to add the command, including using a string instead of an array, and using a substitution like the solution here (using ${array[@]/#/'--pull '}): https://stackoverflow.com/a/68675860/10542275
docker run --name application --pull "always" -p 3000:3000 -d private.docker.repository/group/application:version
This bash script:
run() {
    getDockerImageName "/group" "$PROJECT_NAME:$VERSION" "latest";
    local imageName=${imageName};
    local additionalRunParameters=${additionalRunParameters};
    cd "$BASE_PATH/$PROJECT_NAME" || exit 1;
    stopAnyRunning "$PROJECT_NAME";
    echo docker run --name "$PROJECT_NAME" \
        "${additionalRunParameters[@]}" \
        -p 3000:3000 \
        -d "$imageName";
    # docker run --name application --pull "always" -p 3000:3000 -d private.docker.repository/group/application:version
    docker run --name "$PROJECT_NAME" \
        "${additionalRunParameters[@]}" \
        -p 3000:3000 \
        -d "$imageName";
    # unknown flag: --pull "always"
}
The helper 'getDockerImageName':
# Gets the name of the docker image to use for deploy.
# $1 - The path to the image in the container registry
# $2 - The name of the image and the tag
# $3 - 'latest' if the deploy should use the container built by CI
export imageName="";
export additionalRunParameters=();
getDockerImageName() {
    imageName="group/$2";
    if [[ $3 == "latest" ]]; then
        echo "Using docker image from CI...";
        docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "https://$DOCKER_BASE_URL";
        imageName="${DOCKER_BASE_URL}${1}/$2";
        additionalRunParameters=('--pull "always"');
    fi
}
Don't put code (such as arguments) in a variable. Basically, using an array is good, and you are almost doing that. This line -
local additionalRunParameters=${additionalRunParameters};
is probably what's causing you trouble, along with
additionalRunParameters=('--pull "always"');
which is embedding the space between what you meant to be two operands (the option and its argument), turning them into a single string that is an unrecognized garble. unknown flag: --pull "always" is telling you that the flag it's parsing is --pull "always", which is NOT the --pull flag docker DOES know, followed by an argument.
Also,
export additionalRunParameters=(); # nope
arrays don't really export. Take that out, it will only confuse someone.
A much simplified set of examples:
$: declare -f a b c
a ()
{
    foo=('--pull "always"') # single value array
}
b ()
{
    echo "1: \${foo}='${foo}' <= scalar, returns first element of the array";
    echo "2: \"\${foo[@]}\"='${foo[@]}' <= returns entire array (be sure to put in quotes)";
    echo "3: \"\${foo[1]}\"='${foo[1]}' <= indexed, returns only second element of array"
}
c ()
{
    foo=(--pull "always") # two values in this array
}
$: a # sets ${foo[0]} to '--pull "always"'
$: b
1: ${foo}='--pull "always"' <= scalar, returns first element of the array
2: "${foo[@]}"='--pull "always"' <= returns entire array (be sure to put in quotes)
3: "${foo[1]}"='' <= indexed, returns only second element of array
$: c # sets ${foo[0]} to '--pull' and ${foo[1]} to "always"
$: b
1: ${foo}='--pull' <= scalar, returns first element of the array
2: "${foo[@]}"='--pull always' <= returns entire array (be sure to put in quotes)
3: "${foo[1]}"='always' <= indexed, returns only second element of array
So what you need is:
getDockerImageName() {
    . . . # stuff
    additionalRunParameters=( --pull "always" ); # no single quotes
}
and just take OUT
local additionalRunParameters=${additionalRunParameters}; # it's global, no need
You have one more issue though - "${additionalRunParameters[@]}" \ is good, as long as the array isn't empty. In your example it will apparently always be loaded with the same values, so I don't see why you are adding the extra complication of putting it into a global array that gets loaded incidentally in another function... it seems like an antipattern. Just put the arguments you are universally enforcing anyway on the command itself.
However, on the possibility that you simplified some details out: if this array is ever empty, it's going to pass a literal quoted empty string as an argument on the command line, and you're likely to get something like the following error:
$: docker run --name foo "" -p 3000:3000 -d bar # docker doesn't understand the empty string
docker: invalid reference format.
Maybe, rather than
additionalRunParameters=( --pull "always" ); # no single quotes
and
"${additionalRunParameters[#]}" \
what you really want is
pull_always=true
and
${pull_always:+ --pull always } \
...with no quotes, so that if the var has been set (with anything) it evaluates to the desired result, but if it's unset and unquoted it evaluates to nothing and gets ignored, as nothing actually gets passed in - not even a null string.
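A quick demonstration of that :+ expansion, using echo just to show what would be passed:
unset pull_always
echo docker run ${pull_always:+ --pull always } image
# -> docker run image (flag omitted, and no empty-string argument either)
pull_always=true
echo docker run ${pull_always:+ --pull always } image
# -> docker run --pull always image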
Good luck. Let us know if you need more help.
I am using the MATLAB Docker image to run MATLAB in a container and do calculations within it.
https://github.com/mathworks-ref-arch/matlab-dockerfile
When you start the container with docker run, it spins up a command line with MATLAB running, waiting for user input.
My goal is to use the Docker image as a build slave in Jenkins, so it spins up the container and kills it automatically after usage. The desired MATLAB commands should be implemented in the Jenkins job. So far it is all set up, and Jenkins starts MATLAB in a Docker slave environment and then exits:
[Pipeline] sh
+ chmod +x startmatlab.sh
[Pipeline] sh
+ ./startmatlab.sh
MATLAB is selecting SOFTWARE OPENGL rendering.
< M A T L A B (R) >
Copyright 1984-2019 The MathWorks, Inc.
R2019b (9.7.0.1190202) 64-bit (glnxa64)
August 21, 2019
To get started, type doc. For product information, visit www.mathworks.com.
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
[withMaven] pipelineGraphPublisher - triggerDownstreamPipelines
[withMaven] downstreamPipelineTriggerRunListener - completed in 58 ms
Finished: SUCCESS
startmatlab.sh is the script provided by MathWorks to start MATLAB in the container, and looks like this:
#!/bin/bash
#
# Copyright 2019 The MathWorks, Inc.
ECHO=echo
#=======================================================================
build_cmd () { # Takes the cmd input string and outputs the same
# string correctly quoted to be evaluated again.
#
# Always returns a 0
#
# usage: build_cmd
#
# Use version of echo here that will preserve
# backslashes within $cmd. - g490189
$ECHO "$1" | awk '
#----------------------------------------------------------------------------
BEGIN { squote = sprintf ("%c", 39) # set single quote
dquote = sprintf ("%c", 34) # set double quote
}
NF != 0 { newarg=dquote # initialize output string to
# double quote
lookquote=dquote # look for double quote
oldarg = $0
while ((i = index (oldarg, lookquote))) {
newarg = newarg substr (oldarg, 1, i - 1) lookquote
oldarg = substr (oldarg, i, length (oldarg) - i + 1)
if (lookquote == dquote)
lookquote = squote
else
lookquote = dquote
newarg = newarg lookquote
}
printf " %s", newarg oldarg lookquote }
#----------------------------------------------------------------------------
'
return 0
}
ARGLIST=""
while [ $# -gt 0 ]; do
case "$1" in
-r|-batch)
QUOTED_CMD=`build_cmd "$2"`
ARGLIST="${ARGLIST} $1 `$ECHO ${QUOTED_CMD}`"
shift
;;
*)
ARGLIST="${ARGLIST} $1"
esac
shift
done
eval exec "matlab ${ARGLIST}"
exit
In my Jenkins job I defined a stage that just executes that script and starts MATLAB, which works fine:
stage('test2') {
    sh 'chmod +x startmatlab.sh'
    sh './startmatlab.sh'
    sh '1+1'
}
The only thing is that I am not able to get any commands from the Jenkins pipeline executed within the container's command line. I tried to just do a simple '1+1' and put it under the './startmatlab.sh', without success.
/home/jenkins/workspace/DR/docker.MatlabR2019b@tmp/durable-f5af635b/script.sh: 1: /home/jenkins/workspace/DR/docker.MatlabR2019b@tmp/durable-f5af635b/script.sh: 1+1: not found
So my question is whether anyone can help me figure out how or where I have to put my commands to be executed here...
Many thanks in advance!
I was facing a similar issue: I wanted to execute MATLAB commands inside the container once it spins up. Below is the way suggested by MathWorks.
sh 'matlab -batch ver'
For more information, refer to the matlab -batch option.
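For example, a shell step can run an arbitrary MATLAB expression non-interactively (the expression here is just a hypothetical placeholder; -batch runs it and exits with MATLAB's status):
sh 'matlab -batch "disp(1+1)"'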
I have a simple bash script as follows that is part of a docker image.
test.sh:
#!/bin/bash
set -e
logit() {
    log_date=`date +"%F %T"`
    echo "[$log_date][INFO] $1"
}
waitForServerToStart() {
    while true; do
        logit "Testing .... 1"
        netstat -anpt
        logit "Testing .... 2"
        netstat -anpt | grep tcp
        logit "Testing .... 3"
        sleep 5
        logit "Testing .... 4"
    done
}
waitForServerToStart
run.sh:
#!/bin/sh
/test.sh &
# Run forever
while true; do sleep 5; done
Dockerfile:
FROM openjdk:8u191-jre-alpine3.9
COPY files/run.sh /
COPY files/test.sh /
CMD ["/run.sh"]
If I run this container, I only get the following output, which leads me to believe that somehow grep and the pipe get blocked.
[2019-03-06 11:10:45][INFO] Testing .... 1
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 172.17.0.2:58278 xxx.xxx.xx.xx:443 FIN_WAIT2 -
[2019-03-06 11:10:45][INFO] Testing .... 2
Can someone please shed some light on this?
It works fine if I comment out netstat -anpt | grep tcp. I then see the subsequent log lines, and it also continues in the loop.
[2019-03-06 11:25:36][INFO] Testing .... 3
[2019-03-06 11:25:41][INFO] Testing .... 4
This one has me puzzled! But I have a solution for you:
Use awk instead of grep
In test.sh use this instead:
netstat -anpt | awk '/tcp/'
So that the file looks like this:
#!/bin/bash
set -e
logit() {
    log_date=`date +"%F %T"`
    echo "[$log_date][INFO] $1"
}
waitForServerToStart() {
    while true; do
        logit "Testing .... 1"
        netstat -anpt
        logit "Testing .... 2"
        netstat -anpt | awk '/tcp/'
        logit "Testing .... 3"
        sleep 5
        logit "Testing .... 4"
    done
}
waitForServerToStart
For a reason that I cannot explain, grep will not return when reading from the pipe when invoked from the script. I created your container locally, ran it, and entered it - and the command netstat -anpt | grep tcp runs just fine and exits. If you replace it with netstat -anpt | cat in your test.sh script, then it will also pass just fine.
I looked all over the place for someone with an identical issue with grep in a container from the distro and version you are using, but came up empty-handed.
I believe it may have to do with grep waiting for an EOF character that never arrives, but I am not sure.