Docker logs, stderr - elasticsearch

Is it possible to separate Docker logs into stderr/stdout, e.g. via fluentd/logstash? The ultimate goal is to send the logs to Elasticsearch and filter them by stderr/stdout.

If you want to split Docker logs into stdout processing and stderr processing on the fluentd side, you can use the rewrite_tag_filter plugin keyed on the source field:
http://docs.fluentd.org/articles/out_rewrite_tag_filter
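With a recent version of that plugin, the split could be sketched roughly like this (the docker.** match tag and the .stdout/.stderr tag suffixes are illustrative assumptions, not prescribed names; the fluentd Docker logging driver sets the source field to "stdout" or "stderr"):

```
<match docker.**>
  @type rewrite_tag_filter
  <rule>
    key source
    pattern /^stdout$/
    tag ${tag}.stdout
  </rule>
  <rule>
    key source
    pattern /^stderr$/
    tag ${tag}.stderr
  </rule>
</match>
```

Downstream `<match *.stdout>` and `<match *.stderr>` sections can then route each stream to Elasticsearch separately.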

Maybe this is a duplicate of
https://github.com/docker/docker/issues/7440
Here is an example:
$ docker run -d --name foo busybox ls abcd
$ docker logs foo > stdout.log 2>stderr.log
$ cat stdout.log
$ cat stderr.log
ls: abcd: No such file or directory

See the latest example @Opal posted in that issue:
# stdout
docker logs container_name 2>/dev/null
# stderr
docker logs container_name >/dev/null

Related

Keep colored logs when piping output with docker-compose v2.5.0

In docker-compose 1.25.3 I used to pipe the output of logs. E.g.,
docker-compose logs -ft | cat
The output is colored as expected.
In docker-compose 2.5.0, this no longer happens. The native output of docker-compose logs -ft is still colored. However, when I run:
docker-compose logs -ft | cat
The piped output is not colored anymore. Why does this happen and how can I fix it?

Save output of bash command from Dockerfile after Docker container was launched

I have a Dockerfile with ubuntu image as a base.
FROM ubuntu
ARG var_name
ENV env_var_name=$var_name
ENTRYPOINT ["/bin/bash", "-c", "echo $env_var_name"]
I expect two things from this:
1. Running a simple bash script that takes an environment variable from user input and outputs its value after the container runs. This part works.
2. (the part where I have a problem) Saving the values of the environment variable to a file, so that after every run of docker run --rm -e env_var_name=%valueOfVar% IMAGE-NAME I can see the list of values entered so far.
My idea for part 2 was something like
docker run --rm -e env_var_name=%valueOfVar% IMAGE-NAME > /directory/tosave/values.txt. That works, but only the last value is saved, not a list of values.
How can I change the Dockerfile to save the values to a file that Docker can read back and output on later runs? Maybe I shouldn't use ENTRYPOINT?
I'd appreciate any help; I'm stuck.
To emphasize: both outputting and saving the environment variable values is expected.
Like @lojza hinted at, > overwrites files whereas >> appends to them, which is why your command is clobbering the file instead of adding to it. So you could fix it with this:
docker run --rm -e env_var_name=%valueOfVar% IMAGE-NAME >> /directory/tosave/values.txt
Or using tee(1):
docker run --rm -e env_var_name=%valueOfVar% IMAGE-NAME | tee -a /directory/tosave/values.txt
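The difference is easy to reproduce locally without docker at all (values.txt here is just a scratch file):

```shell
#!/bin/bash
cd "$(mktemp -d)"           # work in a throwaway directory

echo first  > values.txt    # > truncates: file contains "first"
echo second > values.txt    # > truncates again: "first" is gone
echo third >> values.txt    # >> appends
echo fourth | tee -a values.txt > /dev/null   # tee -a also appends

cat values.txt
# second
# third
# fourth
```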
To clarify though, the docker container is not writing to values.txt; your shell is redirecting the output of the docker run command to the file. If you want the file to be written to by docker, you should mount a file or directory into the container using -v and redirect the output of the echo there. Here's an example:
FROM ubuntu
ARG var_name
ENV env_var_name=$var_name
ENTRYPOINT ["/bin/bash", "-c", "echo $env_var_name | tee -a /data/values.txt"]
And then run it like so:
$ docker run --rm -e env_var_name=test1 -v "$(pwd):/data:rw" IMAGE-NAME
test1
$ docker run --rm -e env_var_name=test2 -v "$(pwd):/data:rw" IMAGE-NAME
test2
$ ls -l values.txt
-rw-r--r-- 1 root root 12 May 3 15:11 values.txt
$ cat values.txt
test1
test2
One more thing worth mentioning. echo $env_var_name is printing the value of the environment variable whose name is literally env_var_name. For example if you run the container with -e env_var_name=PATH it would print the literal string PATH and not the value of your $PATH environment variable. This does seem to be the desired outcome, but I thought it was worth explicitly spelling this out.
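That distinction can be demonstrated locally in plain bash; the ${!var} indirection shown here is a bash-only feature and an aside, not something the original Dockerfile uses:

```shell
#!/bin/bash
env_var_name=PATH

echo "$env_var_name"     # prints the literal string: PATH
echo "${!env_var_name}"  # bash indirection: prints the value of $PATH itself
```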

How to let Kubernetes pod run a local script

I want to run a local script within a Kubernetes pod and then assign the output to a shell variable.
Here is what I tried:
# if I directly run -c "netstat -pnt |grep ssh", I get output assigned to $result:
cat check_tcp_conn.sh
#!/bin/bash
result=$(kubectl exec -ti <pod_name> -- /bin/bash -c "netstat -pnt |grep ssh")
echo "result is $result"
What I want is something like this:
#script to be called:
cat netstat_tcp_conn.sh
#!/bin/bash
netstat -pnt |grep ssh
#script to call netstat_tcp_conn.sh:
cat check_tcp_conn.sh
#!/bin/bash
result=$(kubectl exec -ti <pod_name> -- /bin/bash -c "./netstat_tcp_conn.sh")
echo "result is $result"
the result showed result is /bin/bash: ./netstat_tcp_conn.sh: No such file or directory.
How can I let Kubernetes pod execute netstat_tcp_conn.sh which is at my local machine?
You can use following command to execute your script in your pod:
kubectl exec POD -- /bin/sh -c "`cat netstat_tcp_conn.sh`"
You can also copy local files into the pod using kubectl cp, like kubectl cp /tmp/foo <pod_name>:/tmp/.
Then you can change its permissions to make it executable and run it using kubectl exec.
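Putting those steps together, a sketch of the copy-then-run approach (with <pod_name> as a placeholder and /tmp/netstat_tcp_conn.sh as an assumed destination path):

```
# copy the local script into the pod
kubectl cp ./netstat_tcp_conn.sh <pod_name>:/tmp/netstat_tcp_conn.sh

# make it executable, then run it and capture the output
kubectl exec <pod_name> -- chmod +x /tmp/netstat_tcp_conn.sh
result=$(kubectl exec <pod_name> -- /tmp/netstat_tcp_conn.sh)
echo "result is $result"
```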

Run shell script in pod remotely openshift or kubernetes

I have a shell script I want to run remotely in a pod. How can I do that?
oc exec build-core-1-p4fr4 -- df -kh / <--- I want to use my script
Is there any way to do this remotely? Something like:
oc exec build-core-1-p4fr4 -- cat >> text << shell.sh <---- something like this
I checked oc rsh but didn't find anything specific there.
You can try the following command using the -i option, which allows passing stdin to the container.
$ oc exec -i your_pod_name -- /bin/bash -s <<EOF
#!/bin/bash
date > /tmp/time
EOF
$ oc exec your_pod_name -- cat /tmp/time
Fri Nov 13 10:00:19 UTC 2020
$
Use oc exec -i to take script from stdin.
oc exec -i your_pod_name -- bash -s < your_script.sh

Ignore command output when grep'ing

I am trying to count occurrences of the string "POST" in my docker logs.
I am doing it like that:
docker logs 2c02 | grep "POST" -c
But I am getting not only the count of "POST" but also the full output of docker logs. Can I somehow ignore docker logs output?
docker logs writes the container's stderr stream to your shell's stderr, which a plain pipe doesn't capture. In bash you can merge the two streams with |&:
docker logs 2c02 |& grep "POST" -c
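The effect is easy to see without docker; here emit is a stand-in for docker logs, writing one matching line to each stream (|& is bash shorthand for 2>&1 |, so the portable spelling is docker logs 2c02 2>&1 | grep -c "POST"):

```shell
#!/bin/bash
# stand-in for `docker logs`: one line on stdout, one on stderr
emit() {
  echo "POST /on-stdout"
  echo "POST /on-stderr" >&2
}

emit 2>/dev/null | grep -c "POST"   # only stdout reaches grep: prints 1
emit 2>&1 | grep -c "POST"          # streams merged first: prints 2
```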
