Correct way to use modules in Filebeat - elasticsearch

I'm slightly confused about the correct way to use Filebeat's modules while running Filebeat in a Docker container. It appears that the developers prefer the modules.d method, but their exact intentions aren't clear to me.
Here is the relevant part of my filebeat.yml:
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: true
    reload.period: 60s
And the filebeat container definition from my docker-compose.yml:
filebeat:
  build:
    context: filebeat/
    args:
      ELK_VERSION: $ELK_VERSION
  container_name: filebeat
  mem_limit: 2048m
  labels:
    co.elastic.logs/json.keys_under_root: true
    co.elastic.logs/json.add_error_key: true
    co.elastic.logs/json.overwrite_keys: true
  volumes:
    - type: bind
      source: ./filebeat/config/filebeat.docker.yml
      target: /usr/share/filebeat/filebeat.yml
      read_only: true
    - type: bind
      source: /volume1/#docker/containers
      target: /var/lib/docker/containers
      read_only: true
    - type: bind
      source: /var/run/docker.sock
      target: /var/run/docker.sock
      read_only: true
    - type: bind
      source: /var/log
      target: /host-log
      read_only: true
    - type: volume
      source: filebeat-data
      target: /usr/share/filebeat/data
  user: root
  depends_on:
    - elasticsearch
    - kibana
  command: filebeat -e -strict.perms=false
  networks:
    - elk
  depends_on:
    - elasticsearch
With this configuration, I can docker exec into my container, activate modules (pulling in their default configuration), and set up pipelines and dashboards like so:
filebeat modules enable elasticsearch kibana system nginx
filebeat setup -e --pipelines
This all works fine, until I come to recreate my container, at which point the enabled modules are (unsurprisingly) disabled and I have to run this stuff again.
I tried to mount the modules.d directory on my local filesystem, expecting this to build the default modules.d config files (with their .disabled suffix) on my local filesystem, such that recreation of the container would persist the installed modules. To do so, the following was added as a mount to docker-compose.yml:
- type: bind
  source: ./filebeat/config/modules.d
  target: /usr/share/filebeat/modules.d
  read_only: false
When I do this, and recreate the container, it spins up with an empty modules.d directory and filebeat modules list returns no modules at all, meaning none can be enabled.
My current workaround is to copy each individual module's config file and mount it specifically, like so:
- type: bind
  source: ./filebeat/config/modules.d/system.yml
  target: /usr/share/filebeat/modules.d/system.yml
  read_only: true
- type: bind
  source: ./filebeat/config/modules.d/nginx.yml
  target: /usr/share/filebeat/modules.d/nginx.yml
  read_only: true
This is suboptimal for a number of reasons, chiefly that if I want to enable a new module and have it persist across container recreation, I need to (roughly sketched below):
docker exec into the container
get the default config file for the module I want to use
create a file on the local filesystem for the module
edit the docker-compose.yml file with the new bind mounted module config
recreate the container with docker-compose up --detach
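For example, steps 1–3 boil down to copying the default (disabled) config out of the container, something like this (container name taken from the compose file above, nginx just as an example module):
docker cp filebeat:/usr/share/filebeat/modules.d/nginx.yml.disabled ./filebeat/config/modules.d/nginx.yml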
The way I feel this should work is:
I mount modules.d to my local filesystem
I recreate the container
modules.d gets populated with all the default module config files
I enable modules by filebeat modules enable blah or by renaming the module config file from my local filesystem (removing the .disabled suffix)
Enabled modules and their config survive container recreation
One way around this could be to copy (urgh) the whole modules.d directory from a running container to my local filesystem and mount that wholesale. That feels wrong too.
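If you do go that route, something like this copies the whole tree out of the running container in one go (container name from the compose file above):
docker cp filebeat:/usr/share/filebeat/modules.d ./filebeat/config/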
What am I misunderstanding or misconfiguring here? How are other people doing this?

I build a custom image for each type of beat and embed the .yml configuration in my image. Then I use the filebeat.modules property of the configuration file to set up my modules inside that file.
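As a rough sketch (the module names and options here are only examples, not my exact configuration), that embedded configuration looks something like this inside filebeat.yml:
filebeat.modules:
  - module: nginx
    access:
      enabled: true
      var.paths: ["/var/log/nginx/access.log*"]
  - module: system
    syslog:
      enabled: true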
I think the intention of the modules.d folder approach is that it makes it easier to understand the module configuration of a filebeat instance that is working with multiple modules/file types.
That is certainly my intention with the one-image-per-module/file-type approach that I use: all of my logic/configuration is stored alongside the service that I am monitoring, not in one central, monolithic location.
Another benefit of this approach is that each individual filebeat only has access to the log files it needs. In the case of collecting the Docker logs like you are, you need to run it in privileged mode to bind mount /var/run/docker.sock. If I want to run your compose file but do not want to run privileged (or if I am on Windows and cannot), then I lose all the other monitoring that you have built out.

Related

kubernetes pod start another while a job is running

I have a Flask API and I am trying to improve it by identifying which function calls in the API definition take the longest whenever it is called. For that I am using a profiler as highlighted in this repo. Whenever I make the API call, this profiler generates a .prof file which I can visualize with snakeviz.
Now I am trying to run this on an AWS cluster in the same region where my database is stored, to minimize network latency. I can get the API server running and make the API calls; my question is how I can transfer the .prof file from the Kubernetes pod without disturbing the API server. Is there a way to start a separate shell that transfers the file to, say, an S3 bucket whenever that file is created, without killing off the API server?
If you want to automate this process, or it's simply hard to figure out connectivity for running kubectl exec ..., one idea would be to use a sidecar container. Your pod then contains two containers with a single emptyDir volume mounted into both. emptyDir is perhaps the easiest way to create a folder shared between all containers in a pod.
The first container is your regular Flask API.
The second container watches for new files in the shared folder. Whenever it finds a file there, it uploads it to S3 (a rough sketch follows below).
You will need to configure the profiler so it dumps its output into the shared folder.
One benefit of this approach is that you don't have to make any major modifications to the existing container running Flask.
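As a rough sketch, the uploader sidecar could run a loop like the one below, assuming the shared emptyDir is mounted at /profiles and the AWS CLI plus credentials are available in the sidecar image (the bucket name is a placeholder):
#!/bin/sh
# Poll the shared folder and push any new .prof files to S3, then remove them.
while true; do
  for f in /profiles/*.prof; do
    [ -e "$f" ] || continue
    aws s3 cp "$f" "s3://my-profiling-bucket/$(basename "$f")" && rm -f "$f"
  done
  sleep 5
done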
The best option is the sidecar container.
Pods that run multiple containers that need to work together. A Pod can encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers form a single cohesive unit of service—for example, one container serving data stored in a shared volume to the public, while a separate sidecar container refreshes or updates those files. The Pod wraps these containers, storage resources, and an ephemeral network identity together as a single unit.
For example, you might have a container that acts as a web server for files in a shared volume, and a separate "sidecar" container that updates those files from a remote source.
Creating the sidecar is easy; look at this:
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - image: busybox
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) 'Hi I am from Sidecar container'; sleep 5;done"]
      name: sidecar-container
      resources: {}
      volumeMounts:
        - name: var-logs
          mountPath: /var/log
    - image: nginx
      name: main-container
      resources: {}
      ports:
        - containerPort: 80
      volumeMounts:
        - name: var-logs
          mountPath: /usr/share/nginx/html
  dnsPolicy: Default
  volumes:
    - name: var-logs
      emptyDir: {}
All you need to do is change the sidecar container's command to fit your needs.

Windows 10 bind mounts in docker-compose not working

I'm using docker-compose to manage a multi-container application. One of those containers needs access to the contents of a directory on the host.
This seems simple according to the various sources of documentation on docker and docker-compose but I'm struggling to get it working.
event_processor:
  environment:
    - COMPOSE_CONVERT_WINDOWS_PATHS=1
  build: ./Docker/event_processor
  ports:
    - "15672:15672"
  entrypoint: python -u /src/event_processor/event_processor.py
  networks:
    - app_network
  volumes:
    - C/path/to/interesting/directory:/interesting_directory
Running this I get the error message:
ERROR: Named volume
"C/path/to/interesting/directory:/interesting_directory:rw" is used in
service "event_processor" but no declaration was found in the
volumes section.
I understand from the docs that a top level declaration is only necessary if data is to be shared between containers
which isn't the case here.
The docs for docker-compose I linked above have an example which seems to do exactly what I need:
version: "3.2"
services:
web:
image: nginx:alpine
ports:
- "80:80"
volumes:
- type: volume
source: mydata
target: /data
volume:
nocopy: true
- type: bind
source: ./static
target: /opt/app/static
networks:
webnet:
volumes:
mydata:
However when I try, I get errors about the syntax:
ERROR: The Compose file '.\docker-compose.yaml' is invalid because:
services.audio_event_processor.volumes contains an invalid type, it
should be a string
So I tried to play along:
volumes:
  - type: "bind"
    source: "C/path/to/interesting/directory"
    target: "/interesting_directory"
ERROR: The Compose file '.\docker-compose.yaml' is invalid because:
services.audio_event_processor.volumes contains an invalid type, it should be a string
So again the same error.
I tried the following too:
volumes:
  - type=bind, source=C/path/to/interesting/directory,destination=/interesting_directory
No error, but attaching to the running container, I see the following two folders:
type=bind, source=C
So it seems that I am able to create a number of volumes with 1 string (though the forward slashes are cutting the string in this case) but I am not mapping it to the host directory.
I've read the docs but I think I'm missing something.
Can someone post an example of mounting a Windows directory from a host into a Linux container, so that the existing contents of the Windows dir are available from the container?
OK so there were multiple issues here:
1.
I had
version: '3'
at the top of my docker-compose.yml. The long syntax described here wasn't implemented until 3.4 so I stopped receiving the bizarre syntax error when I updated this to:
version: '3.6'
2.
I use my docker account on 2 Windows PCs. Following a hint from another Stack Overflow post, I reset Docker to the factory settings. I had to give Docker the computer's username and password, with the notice that this was necessary to access the contents of the local filesystem; at this point I remembered doing this on another PC, so I'm not sure whether the credentials were correct on this one. With the correct credentials for the current PC, I was able to bind-mount the volume with the expected results as follows:
version: '3.6'
services:
  event_processor:
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
    build: ./Docker/event_processor
    ports:
      - "15672:15672"
    entrypoint: python -u /src/event_processor/event_processor.py
    networks:
      - app_network
    volumes:
      - type: bind
        source: c:/path/to/interesting/directory
        target: /interesting_directory
Now it works as expected. I'm not sure if it was the factory reset or the updated credentials that fixed it. I'll find out tomorrow when I use another PC and update.

How to do a simple edit in elasticsearch.yml in a docker container?

I am using docker-compose as in https://github.com/davidefiocco/dockerized-elasticsearch-indexer/blob/master/docker-compose.yml to initialize a containerized elasticsearch index.
Now, I would like to set a larger value for indices.query.bool.max_clause_count than the default, using an elasticsearch.yml config file (this is to run some heavy queries as in Elasticsearch - set max_clause_count).
So far I tried to add in the docker-compose.yml a volume with:
services:
  elasticsearch:
    volumes:
      - ./elasticsearch/config/elasticsearch.yml
(and variations thereof), trying to point to an elasticsearch.yml file (which I would like to ship with the rest of the files) with the right max_clause_count setting, but to no avail.
Can someone point me in the right direction?
You can mount a custom elasticsearch.yml from the host into the container using
services:
  elasticsearch:
    volumes:
      - path_to/custom_elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
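The mounted custom_elasticsearch.yml then just needs to carry your override alongside the image's usual defaults; a minimal sketch (the value is only an example, and network.host is kept so the container stays reachable):
network.host: 0.0.0.0
indices.query.bool.max_clause_count: 1000000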
One workaround to perform that (trivial) modification to elasticsearch.yml in the container is to modify the relevant Dockerfile directly with the syntax
USER root
RUN echo "indices.query.bool.max_clause_count: 1000000" >> /usr/share/elasticsearch/config/elasticsearch.yml
so as to append the desired custom value.

"message":"No living connections","node_env":"production"

I'm trying to install Kibana 4 on my machine but it's giving the following errors.
{"#timestamp":"2015-04-15T06:25:50.688Z","level":"error","node_env":"production","error":"Request error, retrying -- connect ECONNREFUSED"}
{"#timestamp":"2015-04-15T06:25:50.693Z","level":"warn","message":"Unable to revive connection: http://0.0.0.0:9200/","node_env":"production"}
{"#timestamp":"2015-04-15T06:25:50.693Z","level":"warn","message":"No living connections","node_env":"production"}
{"#timestamp":"2015-04-15T06:25:50.698Z","level":"fatal","message":"No Living connections","node_env":"production","error":{"message":"No Living connections","name":"Error","stack":"Error: No Living connections\n at sendReqWithConnection (/home/kibana-4.0.0-rc1-linux-x64/src/node_modules/elasticsearch/src/lib/transport.js:174:15)\n
The ECONNREFUSED is telling you that it can't connect to Elasticsearch. The http://0.0.0.0:9200/ tells you what it's trying to connect to.
You need to modify the config/kibana.yml and change the elasticsearch_url setting to point to your cluster. If you are running Elasticsearch on the same box, the correct value is http://localhost:9200.
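For example, in Kibana 4's config/kibana.yml (value assumes Elasticsearch runs on the same host):
elasticsearch_url: "http://localhost:9200"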
Your Elasticsearch is down.
In my case it was because the environment variable JAVA_HOME was not set correctly. You have to set it manually. These are the guidelines to do it:
Go to your PC's environment variables.
Create a new variable with the name JAVA_HOME. The variable value should be the Java installation path.
Make sure your path has no spaces. If your Java is in Program Files (x86) you can use the shortcut progra~2 instead of Program Files (x86).
As a result you have something like this: C:\Progra~2\Java\jre1.8.0_131
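As a sketch, the variable can be set from a Windows command prompt (path taken from the example above; adjust it to your own Java install, and open a new terminal afterwards so the change is picked up):
setx JAVA_HOME "C:\Progra~2\Java\jre1.8.0_131"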
There is another reason why this might happen, in case you are using the AWS Elasticsearch service.
Not granting the right access policies for ES, or not loading the right AWS credentials, can be the root cause.
There is one more possibility: maybe your Elasticsearch is not running properly. Please check this link and try to dockerize Elasticsearch.
For me, this docker-compose.yml file dockerizes Elasticsearch:
services:
  elasticsearch:
    image: "${CREATED_IMAGE_NAME_PREFIX}:1"
    container_name: efk_elastic
    build:
      context: ./elasticsearch
      args:
        EFK_VERSION: $EFK_VERSION
        ELASTIC_PORT1: $ELASTIC_PORT1
        ELASTIC_PORT2: $ELASTIC_PORT2
    environment:
      # node.name: node
      # cluster.name: elasticsearch-default
      ES_JAVA_OPTS: -Xms1g -Xmx1g
      discovery.type: single-node
      ELASTIC_PASSWORD: changeme
      http.cors.enabled: "true"
      http.cors.allow-credentials: "true"
      http.cors.allow-headers: X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
      http.cors.allow-origin: /https?:\/\/localhost(:[0-9]+)?/
    hostname: elasticsearch
    ports:
      - "${ELASTIC_EXPOSED_PORT1}:$ELASTIC_PORT1"
      - "$ELASTIC_EXPOSED_PORT2:${ELASTIC_PORT2}"
    volumes:
      # - type: bind
      #   source: ./elasticsearch/config/elasticsearch.yml
      #   target: /usr/share/elasticsearch/config/elasticsearch.yml
      #   read_only: true
      - type: volume
        source: elasticsearch_data
        target: /usr/share/elasticsearch/data
    networks:
      - efk
Please note that this is not complete. For more details, please see my GitHub repository.

Docker, mount volumes as readonly

I am working with Docker, and I want to mount a dynamic folder that changes a lot (so I would not have to make a Docker image for each execution, which would be too costly), but I want that folder to be read-only. Changing the folder owner to someone else works. However, chown requires root access, which I would prefer not to expose to an application.
When I use the -v flag to mount, the files take on whatever user I give. I created a non-root user inside the Docker image, but all the files in the volume, originally owned by the user that ran docker, change ownership to the user I give from the command line, so I cannot make read-only files and folders. How can I prevent this?
I also added mustafa ALL=(docker) NOPASSWD: /usr/bin/docker, so I could switch to another user via the terminal, but the files still carry permissions for my user.
You can specify that a volume should be read-only by appending :ro to the -v switch:
docker run -v volume-name:/path/in/container:ro my/image
Note that the folder is then read-only in the container and read-write on the host.
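A quick way to sanity-check this from inside a running container (container name and path are placeholders):
# this should fail with a "Read-only file system" error
docker exec my-container touch /path/in/container/test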
2018 Edit
According to the Use volumes documentation, there is now another way to mount volumes by using the --mount switch. Here is how to utilize that with read-only:
$ docker run --mount source=volume-name,destination=/path/in/container,readonly my/image
docker-compose
Here is an example of how to specify read-only containers in docker-compose:
version: "3"
services:
redis:
image: redis:alpine
read_only: true
docker-compose
Here is a proper way to specify a read-only volume in docker-compose:
Long syntax
version: "3.2" # Use version 3.2 or above
services:
my_service:
image: my:image
volumes:
- type: volume
source: volume-name
target: /path/in/container
read_only: true
volumes:
volume-name:
https://docs.docker.com/compose/compose-file/compose-file-v3/#long-syntax-3
Short syntax
Add :ro to the volume mount definition:
version: "3.0" # Use version 3.0 or above
services:
my_service:
image: my:image
volumes:
- /path/on/host:/path/inside/container:ro
https://docs.docker.com/compose/compose-file/compose-file-v3/#short-syntax-3
