I am trying to run ElasticSearch on Docker through docker-compose up. Whenever I try to start up my containers, I am getting this error:
Running as non-root...
elasticsearch | 2017-01-06 00:08:23,861 main ERROR Could not register mbeans java.security.AccessControlException: access denied ("javax.management.MBeanTrustPermission" "register")
elasticsearch | at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
elasticsearch | at java.lang.SecurityManager.checkPermission(SecurityManager.java:585)
elasticsearch | at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.checkMBeanTrustPermission(DefaultMBeanServerInterceptor.java:1848)
elasticsearch | at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:322)
elasticsearch | at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
elasticsearch | at org.apache.logging.log4j.core.jmx.Server.register(Server.java:389)
elasticsearch | at org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:167)
elasticsearch | at org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:140)
elasticsearch | at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:541)
elasticsearch | at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:258)
elasticsearch | at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:206)
elasticsearch | at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:220)
elasticsearch | at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:197)
elasticsearch | at org.elasticsearch.common.logging.LogConfigurator.configureStatusLogger(LogConfigurator.java:125)
elasticsearch | at org.elasticsearch.common.logging.LogConfigurator.configureWithoutConfig(LogConfigurator.java:67)
elasticsearch | at org.elasticsearch.cli.Command.main(Command.java:59)
elasticsearch | at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:89)
elasticsearch | at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:82)
elasticsearch |
elasticsearch | 2017-01-06 00:08:25,813 main ERROR RollingFileManager (/data/elasticsearch.log) java.io.FileNotFoundException: /data/elasticsearch.log (Permission denied) java.io.FileNotFoundException: /data/elasticsearch.log (Permission denied)
This is my docker-compose.yml file:
elasticsearch:
  container_name: elasticsearch
  image: "itzg/elasticsearch:5.1.1"
  ports:
    - "9200:9200"
    - "9300:9300"
  volumes:
    - "./data/elasticsearch/:/data"
I have a Spring Boot app.
My Dockerfile is:
FROM openjdk:8-jdk-alpine
EXPOSE 8080
ARG JAR_FILE=target/demo-0.0.1-SNAPSHOT.jar
ADD ${JAR_FILE} demo.jar
ENTRYPOINT ["java","-jar","/demo.jar"]
My docker-compose file:
# Docker Compose file Reference (https://docs.docker.com/compose/compose-file/)
version: '3.7'

# Define services
services:
  # App backend service
  app-server:
    # Configuration for building the docker image for the backend service
    build:
      context: . # Use an image built from the specified dockerfile in the `polling-app-server` directory.
      dockerfile: ./Dockerfile
    container_name: empserver
    ports:
      - "3000:3000" # Forward the exposed port 8080 on the container to port 8080 on the host machine
    restart: always
    depends_on:
      - db # This service depends on mysql. Start that first.
    environment: # Pass environment variables to the service
      SPRING_DATASOURCE_URL: jdbc:mysql://db:3306/employee_entries?useSSL=false&serverTimezone=UTC&useLegacyDatetimeCode=false
      SPRING_DATASOURCE_USERNAME: root
      SPRING_DATASOURCE_PASSWORD: root

  # Database Service (Mysql)
  db:
    image: mysql:5.7
    ports:
      - "3306:3306"
    restart: always
    environment:
      MYSQL_DATABASE: employee_entries
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
My docker networks:
NETWORK ID NAME DRIVER SCOPE
b95e3d99b266 Default Switch ics local
7fff4f9713f8 demo_default nat local
fe8883b77d1d emp-mysql ics local
f464aab9064a nat nat local
a5bd5e8efe61 none null local
The app runs successfully using java -jar target\demo-0.0.1-SNAPSHOT.jar,
but when I do docker-compose up
I get the error below:
app-server_1 | Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure
app-server_1 |
app-server_1 | The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
app-server_1 | at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_212]
app-server_1 | at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_212]
app-server_1 | at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_212]
app-server_1 | at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[na:1.8.0_212]
app-server_1 | at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61) ~[mysql-connector-java-8.0.19.jar!/:8.0.19]
app-server_1 | at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:105) ~[mysql-connector-java-8.0.19.jar!/:8.0.19]
app-server_1 | at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:151) ~[mysql-connector-java-8.0.19.jar!/:8.0.19]
app-server_1 | at com.mysql.cj.exceptions.ExceptionFactory.createCommunicationsException(ExceptionFactory.java:167) ~[mysql-connector-java-8.0.19.jar!/:8.0.19]
app-server_1 | at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:91) ~[mysql-connector-java-8.0.19.jar!/:8.0.19]
app-server_1 | at com.mysql.cj.NativeSession.connect(NativeSession.java:144) ~[mysql-connector-java-8.0.19.jar!/:8.0.19]
app-server_1 | at com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:956) ~[mysql-connector-java-8.0.19.jar!/:8.0.19]
app-server_1 | at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:826) ~[mysql-connector-java-8.0.19.jar!/:8.0.19]
app-server_1 | ... 56 common frames omitted
app-server_1 | Caused by: java.net.UnknownHostException: db
app-server_1 | at java.net.InetAddress.getAllByName0(InetAddress.java:1281) ~[na:1.8.0_212]
app-server_1 | at java.net.InetAddress.getAllByName(InetAddress.java:1193) ~[na:1.8.0_212]
app-server_1 | at java.net.InetAddress.getAllByName(InetAddress.java:1127) ~[na:1.8.0_212]
app-server_1 | at com.mysql.cj.protocol.StandardSocketFactory.connect(StandardSocketFactory.java:132) ~[mysql-connector-java-8.0.19.jar!/:8.0.19]
app-server_1 | at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:65) ~[mysql-connector-java-8.0.19.jar!/:8.0.19]
app-server_1 | ... 59 common frames omitted
I am able to access the MySQL database and tables directly, but not from docker-compose.
Any suggestion would be really helpful.
You need to give the services container names and use those names when the services refer to each other. In your environment section for app-server, the database URL effectively points to 127.0.0.1, but the database is not running in the same container as app-server, so this will fail.
To make this work, give the services container names, e.g. my_mysql and my_app-server, and use them in the environment URL, e.g. jdbc:mysql://my_mysql:3306.
Please see the modified file below:
# Docker Compose file Reference (https://docs.docker.com/compose/compose-file/)
version: '3.7'

# Define services
services:
  # App backend service
  app-server:
    # Configuration for building the docker image for the backend service
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my_app-server
    ports:
      - "3000:3000"
    restart: always
    depends_on:
      - db # This service depends on mysql. Start that first.
    environment: # Pass environment variables to the service
      SPRING_DATASOURCE_URL: jdbc:mysql://my_mysql:3306/employee_entries?useSSL=false&serverTimezone=UTC&useLegacyDatetimeCode=false
      SPRING_DATASOURCE_USERNAME: root
      SPRING_DATASOURCE_PASSWORD: root

  # Database Service (Mysql)
  db:
    image: mysql:5.7
    container_name: my_mysql
    ports:
      - "3306:3306"
    restart: always
    environment:
      MYSQL_DATABASE: employee_entries
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root

networks:
  my-network:
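Because both services sit on the same Compose network, the new hostname should resolve from inside the app container. A quick way to verify (a sketch, assuming ping is available in the openjdk alpine image):

docker-compose up -d
docker-compose exec app-server ping -c 1 my_mysql
docker-compose exec app-server ping -c 1 db

Note that the service name db also resolves on the default Compose network, independently of the container_name.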
I am facing an issue with a Docker container. When I execute docker-compose up to start the application, the Postgres container is not starting.
The error I get after docker-compose up:
/usr/local/bundle/gems/activerecord-4.2.0/lib/active_record/connection_adapters/postgresql_adapter.rb:651:in `initialize': could not translate host name "db" to address: Name or service not known (PG::ConnectionBad)
This is now happening frequently. I tried a few things, such as adding ports for the db container (5432:5432), and I used to start/stop the specific db container so that the connection would get re-established, but it is not working.
Application details:
Rails version: 4.2.0, Ruby version: 2.2.0
docker-compose.yml
version: '3.7'
services:
  selenium:
    image: selenium/standalone-chrome-debug:3.141.59-krypton
    ports: ['4444:4444', '5900:5900']
    logging:
      driver: none
  redis:
    image: redis:3.0.0
  elastic:
    image: elasticsearch:1.5.2
  db:
    image: postgres:9.3.10
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
      - .:/home
  XYZ:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    stdin_open: true
    tty: true
    volumes:
      - XYZ-sync:/home:nocopy
    ports:
      - "3000:3000"
    depends_on:
      - db
      - redis
      - elastic
      - selenium
    environment:
      - REDIS_URL=redis://redis:6379/0
      - ELASTICSEARCH_URL=elastic://elastic:9200/0
      - SELENIUM_HOST=selenium
      - SELENIUM_PORT=4444
      - TEST_APP_HOST=XYZ
      - TEST_PORT=3000
db log
db_1 | LOG: database system was shut down at 2019-09-10 07:37:08 UTC
db_1 | LOG: MultiXact member wraparound protections are now enabled
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
db_1 | LOG: received smart shutdown request
db_1 | LOG: autovacuum launcher shutting down
db_1 | LOG: shutting down
db_1 | LOG: database system is shut down
db_1 | LOG: database system was shut down at 2019-09-10 07:37:50 UTC
db_1 | LOG: MultiXact member wraparound protections are now enabled
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
db_1 | LOG: database system was interrupted; last known up at 2019-09-10 07:38:31 UTC
db_1 | LOG: received smart shutdown request
db_1 | LOG: database system was interrupted; last known up at 2019-09-10 07:38:31 UTC
db_1 | LOG: database system was not properly shut down; automatic recovery in progress
db_1 | LOG: record with zero length at 0/1D8F0120
db_1 | LOG: redo is not required
db_1 | LOG: MultiXact member wraparound protections are now enabled
db_1 | LOG: autovacuum launcher started
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: stats_timestamp 2019-09-10 08:02:39.288642+00 is later than collector's time 2019-09-10 08:02:39.189551+00 for database 0
db_1 | LOG: database system was interrupted; last known up at 2019-09-10 08:18:02 UTC
db_1 | FATAL: the database system is starting up
docker-compose ps output
Name             Command                          State    Ports
xyz_db_1         /docker-entrypoint.sh postgres   Up       5432/tcp
xyz_elastic_1    /docker-entrypoint.sh elas ...   Up       9200/tcp, 9300/tcp
xyz_xyz_1        bash -c rm -f tmp/pids/ser ...   Exit 1
xyz_redis_1      /entrypoint.sh redis-server      Up       6379/tcp
xyz_selenium_1   /opt/bin/entry_point.sh          Up       0.0.0.0:4444->4444/tcp, 0.0.0.0:5900->5900/tcp
database.yml
default: &default
  adapter: postgresql
  encoding: unicode
  pool: 5
  username: postgres
  password:
  host: db

development:
  <<: *default
  database: XYZ_development

test:
  <<: *default
  database: XYZ_test

development_migrate:
  adapter: mysql2
  encoding: utf8
  database: xyz_ee
  username: root
  password:
  host: localhost
  pool: 5
Any help will be appreciated.
I resolved my issue with the help of @jayDosrsey's suggestion.
The db container is started before the main web container, but it is not yet ready to accept connections, so the web container always fails and I had to restart it again.
I resolved this by adding a wait-for-Postgres check before starting the Rails server:
XYZ:
  build: .
  command: bash -c "while !</dev/tcp/db/5432; do sleep 1; done; rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
  ...
Now I am able to start the container in sequence.
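For reference, roughly the same effect can be achieved with a Compose healthcheck instead of the inline wait loop. A minimal sketch, assuming a Docker Compose release that supports the long depends_on form with condition: service_healthy (it was dropped from the 3.x file format but is supported again by newer Compose versions); service and image names follow the example above:

db:
  image: postgres:9.3.10
  healthcheck:
    # pg_isready ships with postgres >= 9.3
    test: ["CMD-SHELL", "pg_isready -U postgres"]
    interval: 5s
    timeout: 5s
    retries: 10

XYZ:
  build: .
  command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
  depends_on:
    db:
      condition: service_healthy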
Some weeks ago I created an ELK stack (Elasticsearch, Logstash, Kibana) to handle the load of log files better.
It all worked perfectly. Today I added some new patterns to Logstash, and for some reason I restarted it via docker-compose down && docker-compose up -d.
Now elasticsearch doesn't start up anymore.
root#xyz:/srv/elk# docker-compose logs elasticsearch
Attaching to elk_elasticsearch_1
elasticsearch_1 | [2017-07-01T07:34:36,859][INFO ][o.e.n.Node ] [lw-e01] initializing ...
elasticsearch_1 | [2017-07-01T07:34:36,999][INFO ][o.e.e.NodeEnvironment ] [lw-e01] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/HDD-ELK)]], net usable_space [19.1gb], net total_space [49gb], spins? [possibly], types [ext4]
elasticsearch_1 | [2017-07-01T07:34:36,999][INFO ][o.e.e.NodeEnvironment ] [lw-e01] heap size [3.9gb], compressed ordinary object pointers [true]
elasticsearch_1 | [2017-07-01T07:34:37,635][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [lw-e01] uncaught exception in thread [main]
elasticsearch_1 | org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: Failed to created node environment
elasticsearch_1 | at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:127) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:114) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:67) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.cli.Command.main(Command.java:88) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | Caused by: java.lang.IllegalStateException: Failed to created node environment
elasticsearch_1 | at org.elasticsearch.node.Node.<init>(Node.java:265) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.node.Node.<init>(Node.java:242) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.bootstrap.Bootstrap$6.<init>(Bootstrap.java:242) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:242) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:360) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | ... 6 more
elasticsearch_1 | Caused by: java.io.IOException: failed to write in data directory [/usr/share/elasticsearch/data/nodes/0/indices/a94kXbSER2CE97qdPhgVLA/_state] write permission is required
elasticsearch_1 | at org.elasticsearch.env.NodeEnvironment.tryWriteTempFile(NodeEnvironment.java:1075) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.env.NodeEnvironment.assertCanWrite(NodeEnvironment.java:1047) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:277) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.node.Node.<init>(Node.java:262) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.node.Node.<init>(Node.java:242) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.bootstrap.Bootstrap$6.<init>(Bootstrap.java:242) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:242) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:360) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | ... 6 more
elasticsearch_1 | Caused by: java.nio.file.FileAlreadyExistsException: /usr/share/elasticsearch/data/nodes/0/indices/a94kXbSER2CE97qdPhgVLA/_state/.es_temp_file
elasticsearch_1 | at sun.nio.fs.UnixException.translateToIOException(UnixException.java:88) ~[?:?]
elasticsearch_1 | at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:?]
elasticsearch_1 | at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:?]
elasticsearch_1 | at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214) ~[?:?]
elasticsearch_1 | at java.nio.file.Files.newByteChannel(Files.java:361) ~[?:1.8.0_131]
elasticsearch_1 | at java.nio.file.Files.createFile(Files.java:632) ~[?:1.8.0_131]
elasticsearch_1 | at org.elasticsearch.env.NodeEnvironment.tryWriteTempFile(NodeEnvironment.java:1072) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.env.NodeEnvironment.assertCanWrite(NodeEnvironment.java:1047) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:277) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.node.Node.<init>(Node.java:262) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.node.Node.<init>(Node.java:242) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.bootstrap.Bootstrap$6.<init>(Bootstrap.java:242) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:242) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:360) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) ~[elasticsearch-5.4.0.jar:5.4.0]
elasticsearch_1 | ... 6 more
OK, it looks like a simple permissions problem, but it still crashes even after a chown -R 1000:1000 elasticsearch/ (and after setting other ownerships).
The setup: I set up a server with an LVM volume for the docker-compose project. In the docker-compose.yml I describe the three services.
version: '3'
services:
  elasticsearch:
    image: my/elasticsearch/image:5.4.0
    volumes:
      - ./elasticsearch/data:/usr/share/elasticsearch/data
      - ./elasticsearch/config:/usr/share/elasticsearch/config
      - /etc/localtime:/etc/localtime:ro
    environment:
      ES_JAVA_OPTS: "-Xmx4g -Xms1g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - nginx_net

  logstash:
    image: my/logstash/image:5.4.0
    command: ["logstash", "-f", "/etc/logstash.conf"]
    volumes:
      - ./logstash.conf:/etc/logstash.conf:ro
      - ./logstash.yml:/etc/logstash/logstash.yml:ro
      - ./GeoDb/GeoLite2-City.mmdb:/GeoLite2-City.mmdb:ro
      - ./patterns:/etc/logstash/patterns:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "5044:5044"
    environment:
      LS_JAVA_OPTS: "-Xmx1g -Xms512m"
    depends_on:
      - elasticsearch
    networks:
      - nginx_net

  kibana:
    image: my/kibana/image:5.4.0
    volumes:
      - ./kibana/config/:/usr/share/kibana/config
      - ./kibana/config/kibana.yml:/etc/kibana/kibana.yml
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - elasticsearch
    networks:
      - nginx_net

networks:
  nginx_net:
    external: true
As you can see, I do not use the official images directly; at the moment I also install X-Pack, so all three images look like this:
FROM elasticsearch:5.4.0
RUN bin/elasticsearch-plugin install x-pack --batch
The second thing I do differently is that I don't use named volumes. That's because I like to have one folder containing the whole project, which is also better for my LVM management.
root#xyz:/srv/elk# ls -l
total 43488
-rw-r--r-- 1 root root     1514 Jul  1 09:34 docker-compose.yml
drwxr-xr-x 4 1000 1000     4096 May 18 17:43 elasticsearch
drwxr-xr-x 3 root root     4096 May 21 12:49 GeoDb
-rw-r--r-- 1 root root 25398754 May 21 12:49 GeoLite2-City.tar.gz
-rw-r--r-- 1 root root 19074950 May 21 12:03 GeoLiteCity.dat
drwxr-xr-x 3 root root     4096 May 14 16:20 kibana
-rw-r--r-- 1 root root     5523 Jul  1 09:02 logstash.conf
-rw-r--r-- 1 root root     4708 Jun  3 11:25 logstash.yml
drwx------ 2 root root    16384 May 17 23:40 lost+found
drwxr-xr-x 2 root root     4096 Jun  7 22:08 patterns
-rwxr-xr-x 1 root root      168 May 21 12:49 update-geoip.sh
root#xyz:/srv/elk# du -hs elasticsearch/
28G elasticsearch/
I read about plugins like local-persist that let you use named volumes while still specifying the directory the files are saved to. But I also read that Docker recommends not using plugins in production.
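As a possible alternative to such plugins, a named volume can be pinned to a specific host directory with the built-in local driver. A rough sketch, assuming the data should live under /srv/elk/elasticsearch/data (the volume name esdata is made up for illustration):

volumes:
  esdata:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /srv/elk/elasticsearch/data   # the host directory must already exist

services:
  elasticsearch:
    volumes:
      - esdata:/usr/share/elasticsearch/data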
I would be happy about any idea or link.
OK, simple: running (in my case) rm elasticsearch/data/nodes/0/indices/a94kXbSER2CE97qdPhgVLA/_state/.es_temp_file in the main folder of the docker-compose project let me start Elasticsearch again...
To figure out the exact path, look at the java.nio.file.FileAlreadyExistsException.
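To find any leftover temp files without copying the path out of the stack trace by hand, something like this should work (a sketch, run from the Compose project folder):

# list stale Elasticsearch temp files under the bind-mounted data directory
find ./elasticsearch/data -name '.es_temp_file' -print
# remove them once you are sure they are leftovers
find ./elasticsearch/data -name '.es_temp_file' -delete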
The above answer didn't work for me. It ended up being a memory issue with Docker. Boosting Docker's memory allotment resolved it.
Filebeat worked well before I changed the Elasticsearch password. By the way, I use docker-compose to start the service; here is some information about my Filebeat setup.
Console log:
filebeat | 2017/05/11 05:21:33.020851 beat.go:285: INFO Home path: [/] Config path: [/] Data path: [//data] Logs path: [//logs]
filebeat | 2017/05/11 05:21:33.020903 beat.go:186: INFO Setup Beat: filebeat; Version: 5.3.0
filebeat | 2017/05/11 05:21:33.021019 logstash.go:90: INFO Max Retries set to: 3
filebeat | 2017/05/11 05:21:33.021097 outputs.go:108: INFO Activated logstash as output plugin.
filebeat | 2017/05/11 05:21:33.021908 publish.go:295: INFO Publisher name: fd2f326e51d9
filebeat | 2017/05/11 05:21:33.022092 async.go:63: INFO Flush Interval set to: 1s
filebeat | 2017/05/11 05:21:33.022104 async.go:64: INFO Max Bulk Size set to: 2048
filebeat | 2017/05/11 05:21:33.022220 modules.go:93: ERR Not loading modules. Module directory not found: /module
filebeat | 2017/05/11 05:21:33.022291 beat.go:221: INFO filebeat start running.
filebeat | 2017/05/11 05:21:33.022334 registrar.go:68: INFO No registry file found under: /data/registry. Creating a new registry file.
filebeat | 2017/05/11 05:21:33.022570 metrics.go:23: INFO Metrics logging every 30s
filebeat | 2017/05/11 05:21:33.025878 registrar.go:106: INFO Loading registrar data from /data/registry
filebeat | 2017/05/11 05:21:33.025918 registrar.go:123: INFO States Loaded from registrar: 0
filebeat | 2017/05/11 05:21:33.025970 crawler.go:38: INFO Loading Prospectors: 1
filebeat | 2017/05/11 05:21:33.026119 prospector_log.go:61: INFO Prospector with previous states loaded: 0
filebeat | 2017/05/11 05:21:33.026278 prospector.go:124: INFO Starting prospector of type: log; id: 5816422928785612348
filebeat | 2017/05/11 05:21:33.026299 crawler.go:58: INFO Loading and starting Prospectors completed. Enabled prospectors: 1
filebeat | 2017/05/11 05:21:33.026323 registrar.go:236: INFO Starting Registrar
filebeat | 2017/05/11 05:21:33.026364 sync.go:41: INFO Start sending events to output
filebeat | 2017/05/11 05:21:33.026394 spooler.go:63: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
filebeat | 2017/05/11 05:21:33.026731 log.go:91: INFO Harvester started for file: /data/logs/biz.log
filebeat | 2017/05/11 05:22:03.023313 metrics.go:39: INFO Non-zero metrics in the last 30s: filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 libbeat.publisher.published_events=98 registrar.writes=1
filebeat | 2017/05/11 05:22:08.028292 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 47.93.121.126:5044: i/o timeout
filebeat | 2017/05/11 05:22:33.023370 metrics.go:34: INFO No non-zero metrics in the last 30s
filebeat | 2017/05/11 05:22:39.028840 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 47.93.121.126:5044: i/o timeout
filebeat | 2017/05/11 05:23:03.022906 metrics.go:34: INFO No non-zero metrics in the last 30s
filebeat | 2017/05/11 05:23:11.029517 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 47.93.121.126:5044: i/o timeout
filebeat | 2017/05/11 05:23:33.023450 metrics.go:34: INFO No non-zero metrics in the last 30s
filebeat | 2017/05/11 05:23:45.030202 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 47.93.121.126:5044: i/o timeout
filebeat | 2017/05/11 05:24:03.022864 metrics.go:34: INFO No non-zero metrics in the last 30s
filebeat | 2017/05/11 05:24:23.030749 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 47.93.121.126:5044: i/o timeout
filebeat | 2017/05/11 05:24:33.024029 metrics.go:34: INFO No non-zero metrics in the last 30s
filebeat | 2017/05/11 05:25:03.023338 metrics.go:34: INFO No non-zero metrics in the last 30s
filebeat | 2017/05/11 05:25:09.031348 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 47.93.121.126:5044: i/o timeout
filebeat | 2017/05/11 05:25:33.023976 metrics.go:34: INFO No non-zero metrics in the last 30s
filebeat | 2017/05/11 05:26:03.022900 metrics.go:34: INFO No non-zero metrics in the last 30s
filebeat | 2017/05/11 05:26:11.032346 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 47.93.121.126:5044: i/o timeout
filebeat | 2017/05/11 05:26:33.022870 metrics.go:34: INFO No non-zero metrics in the last 30s
filebeat.yml:
filebeat:
  prospectors:
    -
      paths:
        - /data/logs/*.log
      input_type: log
      document_type: biz-log
  registry_file: /etc/registry/mark

output:
  logstash:
    enabled: true
    hosts: ["logstash:5044"]
docker-compose.yml:
version: '2'
services:
  filebeat:
    build: ./
    container_name: filebeat
    restart: always
    network_mode: "bridge"
    extra_hosts:
      - "logstash:47.93.121.126"
    volumes:
      - ./conf/filebeat.yml:/filebeat.yml
      - /mnt/logs/appserver/app/biz:/data/logs
      - ./registry:/data
Having had a similar issue, I eventually realised the culprit was not Filebeat but Logstash.
Logstash's SSL configuration didn't contain all required attributes. Setting it up using the following declaration solved the issue:
input {
  beats {
    port => "{{ logstash_port }}"
    ssl => true
    ssl_certificate_authorities => [ "{{ tls_certificate_authority_file }}" ]
    ssl_certificate => "{{ tls_certificate_file }}"
    ssl_key => "{{ tls_certificate_key_file }}"
    ssl_verify_mode => "force_peer"
  }
}
The above example is templated with Ansible; remember to replace the placeholders between {{ and }} with the correct values.
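For completeness, with ssl_verify_mode set to force_peer the Filebeat side also has to present a client certificate. A sketch of the corresponding Filebeat 5.x output section (the certificate paths are placeholders):

output:
  logstash:
    hosts: ["logstash:5044"]
    ssl.certificate_authorities: ["/etc/pki/tls/certs/ca.crt"]
    ssl.certificate: "/etc/pki/tls/certs/filebeat.crt"
    ssl.key: "/etc/pki/tls/private/filebeat.key"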
The registry file stores the state and location information that Filebeat uses to track where it was last reading.
So you can try moving or deleting the registry file:
cd /var/lib/filebeat
sudo mv registry registry.bak
sudo service filebeat restart
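In the docker-compose setup from the question, the startup log shows the registry at /data/registry and /data is bind-mounted from ./registry, so the equivalent steps would roughly be:

# back up the registry that is bind-mounted into the container at /data
mv ./registry/registry ./registry/registry.bak
docker-compose restart filebeat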
I'm trying to compose the ELK architecture by using docker compose. The following is the compose file:
version: '2'
services:
  elasticsearch_assets:
    image: elasticsearch
    volumes:
      - ./elasticsearch/config:/usr/share/elasticsearch/config
      - ./elasticsearch/data:/usr/share/elasticsearch/data
      - ./elasticsearch/logs:/usr/share/elasticsearch/logs
    command: /bin/true

  elasticsearch:
    image: elasticsearch
    volumes_from:
      - elasticsearch_assets:rw
    depends_on:
      - elasticsearch_assets
The project structure was shown in a screenshot (not reproduced here).
When I run docker-compose up I get the following error:
Starting elkdocker_elasticsearch_assets_1
Starting elkdocker_elasticsearch_1
Attaching to elkdocker_elasticsearch_assets_1, elkdocker_elasticsearch_1
elkdocker_elasticsearch_assets_1 exited with code 0
elasticsearch_1 | [2016-03-22 01:28:59,939][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: your kernel is buggy and you should upgrade
elasticsearch_1 | Exception in thread "main" java.lang.IllegalStateException: Unable to access 'path.scripts' (/usr/share/elasticsearch/config/scripts)
elasticsearch_1 | Likely root cause: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/config/scripts
elasticsearch_1 | at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
elasticsearch_1 | at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
elasticsearch_1 | at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
elasticsearch_1 | at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
elasticsearch_1 | at java.nio.file.Files.createDirectory(Files.java:674)
elasticsearch_1 | at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
elasticsearch_1 | at java.nio.file.Files.createDirectories(Files.java:767)
elasticsearch_1 | at org.elasticsearch.bootstrap.Security.ensureDirectoryExists(Security.java:337)
elasticsearch_1 | at org.elasticsearch.bootstrap.Security.addPath(Security.java:314)
elasticsearch_1 | at org.elasticsearch.bootstrap.Security.addFilePermissions(Security.java:248)
elasticsearch_1 | at org.elasticsearch.bootstrap.Security.createPermissions(Security.java:212)
elasticsearch_1 | at org.elasticsearch.bootstrap.Security.configure(Security.java:118)
elasticsearch_1 | at org.elasticsearch.bootstrap.Bootstrap.setupSecurity(Bootstrap.java:196)
elasticsearch_1 | at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:167)
elasticsearch_1 | at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:285)
elasticsearch_1 | at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
elasticsearch_1 | Refer to the log for complete error details.
elkdocker_elasticsearch_1 exited with code 1
Do you have any idea why?
Regarding this error:
Unable to access 'path.scripts'
just create a subfolder called scripts under the config folder; it will fix the error:
./elasticsearch/config  ==>  mkdir ./elasticsearch/config/scripts
Change the docker-compose volume mount to something like this:
-v /elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
instead of pointing it at an empty directory.
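In docker-compose syntax that suggestion would look roughly like this, assuming the file sits at ./elasticsearch/config/elasticsearch.yml on the host (this goes under services: in the version '2' file):

elasticsearch:
  image: elasticsearch
  volumes:
    - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    - ./elasticsearch/data:/usr/share/elasticsearch/data
    - ./elasticsearch/logs:/usr/share/elasticsearch/logs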