Cassandra: Cannot achieve consistency level QUORUM on a specific keyspace - elasticsearch
Actually, I'm using Elassandra, which is a combination of Cassandra and Elasticsearch, but the issue seems to come from Cassandra (judging by the logs).
I have two nodes joined as a single datacenter, DC1, and I'm trying to install Kibana on one of the nodes. My Kibana server always says "Kibana server is not ready yet", and I've found that the error is something around the Cassandra consistency level.
My Cassandra system_auth keyspace is set to

system_auth
WITH REPLICATION = {'class' : 'SimpleStrategy',
'DC1' : 2 };

and here is the log from manually triggering the Kibana service with /usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml:
FATAL [exception] org.apache.cassandra.exceptions.UnavailableException: Cannot achieve
consistency level QUORUM :: {"path":"/.kibana_1","query":{"include_type_name":true},"body":"
{\"mappings\":{\"doc\":{\"dynamic\":\"strict\",\"properties\":{\"config\":
{\"dynamic\":\"true\",\"properties\":{\"buildNum\":
{\"type\":\"keyword\"}}},\"migrationVersion\":
{\"dynamic\":\"true\",\"type\":\"object\"},\"type\":{\"type\":\"keyword\"},\"namespace\":
{\"type\":\"keyword\"},\"updated_at\":{\"type\":\"date\"},\"index-pattern\":{\"properties\":
{\"fieldFormatMap\":{\"type\":\"text\"},\"fields\":{\"type\":\"text\"},\"intervalName\":
{\"type\":\"keyword\"},\"notExpandable\":{\"type\":\"boolean\"},\"sourceFilters\":
{\"type\":\"text\"},\"timeFieldName\":{\"type\":\"keyword\"},\"title\":
{\"type\":\"text\"},\"type\":{\"type\":\"keyword\"},\"typeMeta\":
{\"type\":\"keyword\"}}},\"visualization\":{\"properties\":{\"description\":
{\"type\":\"text\"},\"kibanaSavedObjectMeta\":{\"properties\":{\"searchSourceJSON\":
{\"type\":\"text\"}}},\"savedSearchId\":{\"type\":\"keyword\"},\"title\":
{\"type\":\"text\"},\"uiStateJSON\":{\"type\":\"text\"},\"version\":
{\"type\":\"integer\"},\"visState\":{\"type\":\"text\"}}},\"search\":{\"properties\":
{\"columns\":{\"type\":\"keyword\"},\"description\":{\"type\":\"text\"},\"hits\":
{\"type\":\"integer\"},\"kibanaSavedObjectMeta\":{\"properties\":{\"searchSourceJSON\":
{\"type\":\"text\"}}},\"sort\":{\"type\":\"keyword\"},\"title\":{\"type\":\"text\"},\"version\":
{\"type\":\"integer\"}}},\"dashboard\":{\"properties\":{\"description\":
{\"type\":\"text\"},\"hits\":{\"type\":\"integer\"},\"kibanaSavedObjectMeta\":{\"properties\":
{\"searchSourceJSON\":{\"type\":\"text\"}}},\"optionsJSON\":{\"type\":\"text\"},\"panelsJSON\":
{\"type\":\"text\"},\"refreshInterval\":{\"properties\":{\"display\":
{\"type\":\"keyword\"},\"pause\":{\"type\":\"boolean\"},\"section\":
{\"type\":\"integer\"},\"value\":{\"type\":\"integer\"}}},\"timeFrom\":
{\"type\":\"keyword\"},\"timeRestore\":{\"type\":\"boolean\"},\"timeTo\":
{\"type\":\"keyword\"},\"title\":{\"type\":\"text\"},\"uiStateJSON\":
{\"type\":\"text\"},\"version\":{\"type\":\"integer\"}}},\"url\":{\"properties\":
{\"accessCount\":{\"type\":\"long\"},\"accessDate\":{\"type\":\"date\"},\"createDate\":
{\"type\":\"date\"},\"url\":{\"type\":\"text\",\"fields\":{\"keyword\":
{\"type\":\"keyword\",\"ignore_above\":2048}}}}},\"server\":{\"properties\":{\"uuid\":
{\"type\":\"keyword\"}}},\"kql-telemetry\":{\"properties\":{\"optInCount\":
{\"type\":\"long\"},\"optOutCount\":{\"type\":\"long\"}}},\"timelion-sheet\":{\"properties\":
{\"description\":{\"type\":\"text\"},\"hits\":{\"type\":\"integer\"},\"kibanaSavedObjectMeta\":
{\"properties\":{\"searchSourceJSON\":{\"type\":\"text\"}}},\"timelion_chart_height\":
{\"type\":\"integer\"},\"timelion_columns\":{\"type\":\"integer\"},\"timelion_interval\":
{\"type\":\"keyword\"},\"timelion_other_interval\":{\"type\":\"keyword\"},\"timelion_rows\":
{\"type\":\"integer\"},\"timelion_sheet\":{\"type\":\"text\"},\"title\":
{\"type\":\"text\"},\"version\":{\"type\":\"integer\"}}}}}},\"settings\":
{\"number_of_shards\":1,\"auto_expand_replicas\":\"0-1\"}}","statusCode":500,"response":"
{\"error\":{\"root_cause\":
[{\"type\":\"exception\",\"reason\":\"org.apache.cassandra.exceptions.UnavailableException:
Cannot achieve consistency level
QUORUM\"}],\"type\":\"exception\",\"reason\":\"org.apache.cassandra.exceptions.UnavailableExcept
ion: Cannot achieve consistency level QUORUM\",\"caused_by\":
{\"type\":\"unavailable_exception\",\"reason\":\"Cannot achieve consistency level
QUORUM\"}},\"status\":500}"}
There are no indices named 'kibana_1', nor any index containing the word 'kibana', but there are keyspaces named "_kibana_1" and "_kibana".
This causes the Kibana service to fail to start:
systemctl status kibana
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2020-09-10 16:26:14 CEST; 2s ago
Process: 16942 ExecStart=/usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml (code=exited, status=1
Main PID: 16942 (code=exited, status=1/FAILURE)
Sep 10 16:26:14 ns3053180 systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
Sep 10 16:26:14 ns3053180 systemd[1]: kibana.service: Scheduled restart job, restart counter is at 3.
Sep 10 16:26:14 ns3053180 systemd[1]: Stopped Kibana.
Sep 10 16:26:14 ns3053180 systemd[1]: kibana.service: Start request repeated too quickly.
Sep 10 16:26:14 ns3053180 systemd[1]: kibana.service: Failed with result 'exit-code'.
Sep 10 16:26:14 ns3053180 systemd[1]: Failed to start Kibana.
I think this is your problem:
system_auth WITH REPLICATION= {'class' : 'SimpleStrategy', 'DC1' :2 };
The SimpleStrategy class does not accept datacenter/RF pairs as parameters. It has one parameter, which is simply replication_factor:
ALTER KEYSPACE system_auth WITH REPLICATION= {'class' : 'SimpleStrategy', 'replication_factor' :2 };
By contrast, the NetworkTopologyStrategy takes the parameters you have provided above:
ALTER KEYSPACE system_auth WITH REPLICATION= {'class' : 'NetworkTopologyStrategy', 'DC1' :2 };
IMO, there really isn't much of a need for SimpleStrategy. I never use it.
Note: If you're going to query at LOCAL_QUORUM, you should have at least 3 replicas, or at the very least an odd number capable of computing a majority. A quorum of 2 is, well, 2, so querying at QUORUM with only 2 replicas doesn't really help you.
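As a concrete sketch of the fix (assuming cqlsh access on either node and the datacenter name DC1 from the question):

```sql
-- SimpleStrategy expects 'replication_factor', not datacenter names;
-- NetworkTopologyStrategy is the class that accepts datacenter/RF pairs.
ALTER KEYSPACE system_auth
  WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'DC1': 2};

-- Confirm the change took effect:
DESCRIBE KEYSPACE system_auth;
```

After changing replication on system_auth, running nodetool repair system_auth on each node is generally needed so the new replica placement actually holds the auth data.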
Related
Can't start minio in ubuntu due to "Variable MINIO_VOLUMES not set in /etc/default/minio"
I am installing the latest MinIO on Ubuntu 18.04, following the MinIO installation instructions from here. After the installation, I tried to run it with sudo systemctl start minio.service, but it didn't work, with this message:

...skipping...
● minio.service - MinIO
   Loaded: loaded (/etc/systemd/system/minio.service; disabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2022-12-08 17:03:45 CST; 2min 1s ago
     Docs: https://docs.min.io
  Process: 5072 ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES (code=exited, status=1/FAILURE)
  Process: 5050 ExecStartPre=/bin/bash -c if [ -z "${MINIO_VOLUMES}" ]; then echo "Variable MINIO_VOLUMES not set in /etc/default/minio"; exit 1; fi (code=exited, status=0/SUCCES
 Main PID: 5072 (code=exited, status=1/FAILURE)

Dec 08 17:03:45 nky systemd[1]: minio.service: Service hold-off time over, scheduling restart.
Dec 08 17:03:45 nky systemd[1]: minio.service: Scheduled restart job, restart counter is at 5.
Dec 08 17:03:45 nky systemd[1]: Stopped MinIO.
Dec 08 17:03:45 nky systemd[1]: minio.service: Start request repeated too quickly.
Dec 08 17:03:45 nky systemd[1]: minio.service: Failed with result 'exit-code'.
Dec 08 17:03:45 nky systemd[1]: Failed to start MinIO.

It notes that something is wrong with 'MINIO_VOLUMES', but I have set the variable in /etc/default/minio:

MINIO_ROOT_USER=myminioadmin
MINIO_ROOT_PASSWORD=minio-secret-key-change-me
# MINIO_VOLUMES sets the storage volume or path to use for the MinIO server.
MINIO_VOLUMES="/mnt/data"

What is wrong with my configuration?
There is nothing obviously wrong with your configuration, but you did not post your service file. This is almost always a permissions issue; you can change the systemd service user to root to test. Common issues after that are that the binary is not present in the location specified in the service file, or is not executable.
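A quick way to check those possibilities (a sketch; adjust the unit name and paths to match your setup):

```shell
# Read the actual startup error, not just the final systemd quit message
sudo journalctl -u minio.service -n 50 --no-pager

# Confirm the binary is where the unit's ExecStart points, and is executable
ls -l /usr/local/bin/minio

# Check who owns the volume path from MINIO_VOLUMES
ls -ld /mnt/data
```

If the volume path is owned by root while the unit runs as a dedicated service user, that mismatch alone will produce exactly this kind of exit-code failure.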
can't start minio server in ubuntu with systemctl start minio
I configured a MinIO server instance on Ubuntu 18.04 with the guide from https://www.digitalocean.com/community/tutorials/how-to-set-up-an-object-storage-server-using-minio-on-ubuntu-18-04. After the installation, the server failed to start with the command sudo systemctl start minio; the error says:

root@iZbp1icuzly3aac0dmjz9aZ:~# sudo systemctl status minio
● minio.service - MinIO
   Loaded: loaded (/etc/systemd/system/minio.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2021-12-23 17:11:56 CST; 4s ago
     Docs: https://docs.min.io
  Process: 9085 ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES (code=exited, status=1/FAILURE)
  Process: 9084 ExecStartPre=/bin/bash -c if [ -z "${MINIO_VOLUMES}" ]; then echo "Variable MINIO_VOLUMES not set in /etc/default/minio"; exit 1; fi (code=exited, status=0/SUCCESS)
 Main PID: 9085 (code=exited, status=1/FAILURE)

Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: minio.service: Main process exited, code=exited, status=1/FAILURE
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: minio.service: Failed with result 'exit-code'.
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: minio.service: Service hold-off time over, scheduling restart.
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: minio.service: Scheduled restart job, restart counter is at 5.
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: Stopped MinIO.
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: minio.service: Start request repeated too quickly.
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: minio.service: Failed with result 'exit-code'.
Dec 23 17:11:56 iZbp1icuzly3aac0dmjz9aZ systemd[1]: Failed to start MinIO.

It looks like the reason is "Variable MINIO_VOLUMES not set in /etc/default/minio".
However, I double-checked the file /etc/default/minio:

MINIO_ACCESS_KEY="minioadmin"
MINIO_VOLUMES="/usr/local/share/minio/"
MINIO_OPTS="-C /etc/minio --address localhost:9001"
MINIO_SECRET_KEY="minioadmin"

I have set the value MINIO_VOLUMES. When I tried to start it manually with minio server --address :9001 /usr/local/share/minio/, it works. Now I don't know what goes wrong when starting the MinIO server using systemctl start minio.
I'd recommend sticking to the official documentation wherever possible. It's intended for distributed deployments, but the only real change is that your MINIO_VOLUMES will be for a single node/drive. I would recommend trying a combination of things here:
- Review minio.service and ensure the user/group it references exists
- Review file path permissions on the MINIO_VOLUMES value
Now for the why: my guess without seeing further logs (journalctl -u minio would have been helpful here) is that this is a combination of two things:
- the minio.service user/group doesn't have rwx permissions on the /usr/local/share/minio path;
- you are missing an environment variable we recently introduced to prevent users from pointing at their root drive (this was intended as a safety measure, but it somewhat complicates these kinds of smaller setups).
Take a look at these lines in the minio.service file - I'm assuming that is what you are using, based on the instructions in the DO guide. If you ls -al /usr/local/share/minio, I would venture it has root permissions for user and group, and limited write access if any.
Hope this helps - for further troubleshooting, having at least 10-20 lines from journalctl is invaluable, as it would show the actual error and not just the final quit message.
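For example (a sketch; minio-user is the account name used in the DO guide, so substitute whatever your unit's User= line says):

```shell
# Inspect current ownership of the data path
ls -al /usr/local/share/minio

# Hand the path to the service account, then retry and read the journal
sudo chown -R minio-user:minio-user /usr/local/share/minio
sudo systemctl restart minio
sudo journalctl -u minio -n 20 --no-pager
```

If the journal then shows a different error, that confirms permissions were the first blocker.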
ELK configuration to forward my application logs to Elasticsearch using Logstash
I am new to ELK configuration. https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elastic-stack-on-ubuntu-16-04 I have configured it on my local machine and it works fine. I want to forward my application log file to Elasticsearch using Logstash or Filebeat. Everything works fine for system logs, but I am not able to store my application logs in Elasticsearch. Please help me. This is my log file, service.log:

{"name":"service name", "hostname":"abc", "pid":4474, "userId":"123", "school_id":"123", "role":"student", "username":"mahi123", "serviceName":"loginService", "level":40, "msg":"successFully fetch trail log", "time":"2019-06-01T10:55:46.482Z","v":0}
Some troubleshooting steps to take when logs do not reach Elasticsearch:
1. Check your log parsing configuration file (usually made with the extension .conf). Make sure it has the right path to scan logs from, the right set of filters, etc. To see if this .conf file is actually working, try: logstash -f <elasticsearch.conf file path>. If this doesn't throw any error on the console, you are good at this point and can move to the next step.
2. Check if Kibana indices are getting created. Run curl http://<host IP address or localhost>:9200/_cat/indices?v. If yes, go to Kibana Management and create index patterns. If not, check whether your system has enough available memory to serve Logstash and Elasticsearch; free -m would be helpful once you start the Logstash and Elasticsearch services. Many a time I have seen people trying an ELK setup on a machine with insufficient RAM (4 GB sounds good for a standalone setup).
3. Check that your Logstash and Elasticsearch services are up and running. If Elasticsearch is going down or getting restarted during log parsing or index creation, that is most probably due to a lack of system resources.

-bash-4.2# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-06-05 14:08:26 UTC; 1 weeks 0 days ago
     Docs: http://www.elastic.co
 Main PID: 1396 (java)
   CGroup: /system.slice/elasticsearch.service
           └─1396 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMS...

Jun 05 14:08:26 cue-bldsvr4 systemd[1]: Started Elasticsearch.
Jun 05 14:08:26 cue-bldsvr4 systemd[1]: Starting Elasticsearch...
-bash-4.2# systemctl status logstash
● logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-06-05 14:50:52 UTC; 1 weeks 0 days ago
 Main PID: 4320 (java)
   CGroup: /system.slice/logstash.service
           └─4320 /bin/java -Xms256m -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFrac...

Jun 05 14:50:52 cue-bldsvr4 systemd[1]: Started logstash.
Jun 05 14:50:52 cue-bldsvr4 systemd[1]: Starting logstash...
Jun 05 14:51:08 cue-bldsvr4 logstash[4320]: Sending Logstash's logs to /var/log/logstash which is now configur...rties
Hint: Some lines were ellipsized, use -l to show in full.
-bash-4.2#
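Since the service.log lines in the question are complete JSON objects, a minimal Logstash pipeline for them might look like the sketch below (the file path, index name, and host are assumptions, not taken from the question):

```
input {
  file {
    path => "/var/log/myapp/service.log"   # assumed location of the app log
    start_position => "beginning"
  }
}

filter {
  json {
    source => "message"   # parse each line's JSON into top-level fields
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "service-logs-%{+YYYY.MM.dd}"
  }
}
```

Save it as, say, service.conf and dry-run it with logstash -f service.conf before wiring it into the service; the parsed fields (userId, serviceName, level, ...) should then appear in the index.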
Issue with custom systemd service when starting Apache Gobblin
Running /opt/gobblin/bin/gobblin-standalone.sh start directly, everything works; the output in the logs is fine. Running it through a systemd service does not work, and nothing is output to the logs.

[vagrant@localhost ~]$ sudo systemctl start gobblin
[vagrant@localhost ~]$ sudo systemctl status gobblin
● gobblin.service - Gobblin Data Ingestion Framework
   Loaded: loaded (/usr/lib/systemd/system/gobblin.service; disabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Sun 2019-01-20 16:44:23 UTC; 693ms ago
     Docs: https://gobblin.readthedocs.io
  Process: 9673 ExecStop=/opt/gobblin/bin/gobblin-standalone.sh stop (code=exited, status=1/FAILURE)
  Process: 9671 ExecStart=/opt/gobblin/bin/gobblin-standalone.sh start (code=exited, status=1/FAILURE)
 Main PID: 9671 (code=exited, status=1/FAILURE)

Jan 20 16:44:23 localhost.localdomain systemd[1]: gobblin.service: control process exited, code=exited status=1
Jan 20 16:44:23 localhost.localdomain systemd[1]: Unit gobblin.service entered failed state.
Jan 20 16:44:23 localhost.localdomain systemd[1]: gobblin.service failed.
Jan 20 16:44:23 localhost.localdomain systemd[1]: gobblin.service holdoff time over, scheduling restart.
Jan 20 16:44:23 localhost.localdomain systemd[1]: Stopped Gobblin Data Ingestion Framework.
Jan 20 16:44:23 localhost.localdomain systemd[1]: start request repeated too quickly for gobblin.service
Jan 20 16:44:23 localhost.localdomain systemd[1]: Failed to start Gobblin Data Ingestion Framework.
Jan 20 16:44:23 localhost.localdomain systemd[1]: Unit gobblin.service entered failed state.
Jan 20 16:44:23 localhost.localdomain systemd[1]: gobblin.service failed.
The code of /usr/lib/systemd/system/gobblin.service is below:

[Unit]
Description=Gobblin Data Ingestion Framework
Documentation=https://gobblin.readthedocs.io
After=network.target

[Service]
Type=simple
User=gobblin
Group=gobblin
WorkingDirectory=/opt/gobblin
ExecStart=/opt/gobblin/bin/gobblin-standalone.sh start
ExecStop=/opt/gobblin/bin/gobblin-standalone.sh stop
Restart=on-failure

[Install]
WantedBy=multi-user.target
The trick is Type=oneshot with RemainAfterExit=true, and setting the environment variables:

[Unit]
Description=Gobblin Data Ingestion Framework
Documentation=https://gobblin.readthedocs.io
After=network.target

[Service]
Type=oneshot
User=gobblin
Group=gobblin
WorkingDirectory=/opt/gobblin
Environment=JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre
Environment=GOBBLIN_FWDIR=/opt/gobblin
Environment=GOBBLIN_JOB_CONFIG_DIR=/etc/gobblin
Environment=GOBBLIN_WORK_DIR=/var/lib/gobblin
Environment=GOBBLIN_LOG_DIR=/var/log/gobblin
ExecStart=/opt/gobblin/bin/gobblin-standalone.sh start
ExecStop=/opt/gobblin/bin/gobblin-standalone.sh stop
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
Elasticsearch won't start and no logs
I've been trying to start ES for hours and I can't seem to be able to do so. The command sudo service elasticsearch status prints out:

elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since ven. 2019-01-11 12:22:33 CET; 5min ago
     Docs: http://www.elastic.co
  Process: 16713 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -Des.pidfile=$PID_DIR/elasticsearch.pid -Des.default.path.home=$ES_HOME -Des.default.path.logs=$LOG_DIR -Des.default.path.data=$DATA_DIR -Des.default.confi
 Main PID: 16713 (code=exited, status=1/FAILURE)

janv. 11 12:22:33 glamuse systemd[1]: Started Elasticsearch.
janv. 11 12:22:33 glamuse systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
janv. 11 12:22:33 glamuse systemd[1]: elasticsearch.service: Unit entered failed state.
janv. 11 12:22:33 glamuse systemd[1]: elasticsearch.service: Failed with result 'exit-code'.

I've increased the memory and done all the fixes I could find on the internet, but I can't figure out what's going on; there's not even a single log generated today, so I don't have any trace of where the error could be. I'm using ES version 1.7.2 (yes, it's old, but that shouldn't be a problem as it does work, and no, I can't upgrade because my Elastica uses this version). I'm using a Vagrant machine, so it's a Unix-based system.
My config is as follows (with all the useless comments removed):

index.number_of_shards: 10
index.number_of_replicas: 1
bootstrap.mlockall: true
network.bind_host: 0
network.host: 0.0.0.0
indices.recovery.max_bytes_per_sec: 200mb
indices.store.throttle.max_bytes_per_sec: 200mb
script.engine.groovy.inline.search: on
script.engine.groovy.inline.aggs: on
script.engine.groovy.inline.update: on
index.query.bool.max_clause_count: 100000

I also have this conf:

ES_HEAP_SIZE=4g
MAX_OPEN_FILES=65535
MAX_LOCKED_MEMORY=unlimited
START_DAEMON=true
ES_USER=elasticsearch
ES_GROUP=elasticsearch
LOG_DIR=/var/log/elasticsearch
DATA_DIR=/var/lib/elasticsearch
WORK_DIR=/tmp/elasticsearch
CONF_DIR=/etc/elasticsearch
CONF_FILE=/etc/elasticsearch/elasticsearch.yml
RESTART_ON_UPGRADE=true

Any idea how I can debug this?
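When systemd prints nothing and no log file is written, running ES 1.x in the foreground as the service user usually surfaces the startup error on stderr. A sketch, reusing the paths visible in the unit's ExecStart and the defaults file above:

```shell
sudo -u elasticsearch ES_HEAP_SIZE=4g \
  /usr/share/elasticsearch/bin/elasticsearch \
  -Des.default.path.home=/usr/share/elasticsearch \
  -Des.default.path.logs=/var/log/elasticsearch \
  -Des.default.path.data=/var/lib/elasticsearch
```

If it dies immediately, the exception (often a YAML typo in elasticsearch.yml, or a memory-lock limit problem when bootstrap.mlockall is true) prints straight to the terminal instead of disappearing into a log that was never created.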