I'm running single-node Consul (v1.8.4) on Ubuntu 18.04. The consul service is up, and I have the UI enabled (the default).
But when I try to access http://192.168.37.128:8500/ui I get:
This site can't be reached. 192.168.37.128 took too long to respond.
ui.json
{
"addresses": {
"http": "0.0.0.0"
}
}
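As an aside, a roughly equivalent setting can be passed on the command line with Consul's -client flag, which sets the default bind address for the HTTP API (and therefore the UI):
consul agent -server -ui -client=0.0.0.0 -data-dir=/temp/consul -bootstrap-expect=1 -node=vault -bind=192.168.37.128 -config-dir=/etc/consul.d/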
consul.service file:
[Unit]
Description=Consul
Documentation=https://www.consul.io/
[Service]
ExecStart=/usr/bin/consul agent -server -ui -data-dir=/temp/consul -bootstrap-expect=1 -node=vault -bind=192.168.37.128 -config-dir=/etc/consul.d/
ExecReload=/bin/kill -HUP $MAINPID
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
systemctl status consul
● consul.service - Consul
Loaded: loaded (/etc/systemd/system/consul.service; disabled; vendor preset: enabled)
Active: active (running) since Sun 2020-10-04 19:19:08 CDT; 50min ago
Docs: https://www.consul.io/
Main PID: 9477 (consul)
Tasks: 9 (limit: 4980)
CGroup: /system.slice/consul.service
└─9477 /opt/consul/bin/consul agent -server -ui -data-dir=/temp/consul -bootstrap-expect=1 -node=vault -bind=1
agent.server.raft: heartbeat timeout reached, starting election: last-leader=
agent.server.raft: entering candidate state: node="Node at 192.168.37.128:8300 [Candid
agent.server.raft: election won: tally=1
agent.server.raft: entering leader state: leader="Node at 192.168.37.128:8300 [Leader]
agent.server: cluster leadership acquired
agent.server: New leader elected: payload=vault
agent.leader: started routine: routine="federation state anti-entropy"
agent.leader: started routine: routine="federation state pruning"
agent.leader: started routine: routine="CA root pruning"
agent: Synced node info
The log shows the agent bound at 192.168.37.128:8300.
The issue turned out to be the firewall; I had to open port 8500:
sudo ufw allow 8500/tcp
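To confirm the fix (assuming ufw is the active firewall), a quick check is:
sudo ufw status | grep 8500
sudo ss -tlnp | grep 8500
curl http://192.168.37.128:8500/v1/status/leader
The first command shows the allow rule, the second shows Consul listening on 0.0.0.0:8500, and the curl call against the standard status endpoint should return the leader address.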
Related
I have this systemd unit file for fluentd:
[Unit]
Description=Fluentd
Wants=network-online.target
After=network-online.target
[Service]
User=xxx
Group=users
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/rvm/gems/ruby-2.7.2/bin/fluentd --config /etc/fluent/fluent.conf
[Install]
WantedBy=multi-user.target
systemctl status outputs this:
xxx#test:/home/xxx # sudo systemctl status fluentd.service
● fluentd.service - Fluentd
Loaded: loaded (/etc/systemd/system/fluentd.service; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Wed 2023-02-08 20:51:05 UTC; 141ms ago
Process: 5286 ExecStart=/usr/local/rvm/gems/ruby-2.7.2/bin/fluentd --config /etc/fluent/fluent.conf (code=exited, status=127)
Main PID: 5286 (code=exited, status=127)
Feb 08 20:51:05 xenoss.io systemd[1]: Unit fluentd.service entered failed state.
Feb 08 20:51:05 xenoss.io systemd[1]: fluentd.service failed.
Warning: fluentd.service changed on disk. Run 'systemctl daemon-reload' to reload units.
But when I run it directly:
fluentd --config /etc/fluent/fluent.conf
it starts up successfully; it only fails under systemd.
Also, which fluentd outputs:
/usr/local/rvm/gems/ruby-2.7.2/bin/fluentd
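Exit status 127 conventionally means "command not found", so under systemd this usually points to the fluentd script's ruby interpreter not being on systemd's minimal PATH (RVM sets up its environment via shell profile scripts, which systemd does not source). A minimal sketch of one common workaround, assuming RVM is loaded from the system profile, is to launch fluentd through a login shell:
[Service]
ExecStart=/bin/bash -lc '/usr/local/rvm/gems/ruby-2.7.2/bin/fluentd --config /etc/fluent/fluent.conf'
After editing the unit, run systemctl daemon-reload (as the warning in the status output says) and restart the service.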
I have an AWS Linux 2 AMI EC2 instance.
When running systemctl --user status I get the message:
Failed to get D-Bus connection: No such file or directory
I then ran systemctl start dbus.socket, which gave me this message:
Failed to start dbus.socket: The name org.freedesktop.PolicyKit1 was not provided by any .service files. See system logs and 'systemctl status dbus.socket' for details.
I then ran systemctl status dbus.socket -l which returned this:
dbus.socket - D-Bus System Message Bus Socket
Loaded: loaded (/usr/lib/systemd/system/dbus.socket; static; vendor preset: disabled)
Active: active (running) since Thu 2022-03-31 21:26:42 UTC; 14h ago
Listen: /run/dbus/system_bus_socket (Stream)
Mar 31 21:26:42 ip-10-0-0-193.ec2.internal systemd[1]: Listening on D-Bus System Message Bus Socket.
Mar 31 21:26:42 ip-10-0-0-193.ec2.internal systemd[1]: Starting D-Bus System Message Bus Socket.
Running sudo systemctl --user status gives a different error:
Failed to get D-Bus connection: Connection refused
I'm unsure of what to investigate next or what steps to take to resolve the issue.
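systemctl --user requires a per-user systemd instance and a session bus, neither of which Amazon Linux 2 starts by default, so these errors do not necessarily indicate a broken system dbus.socket. Some hedged checks, assuming a sudo-capable login:
ps -u "$USER" -o pid,cmd | grep '[s]ystemd --user'
echo "$XDG_RUNTIME_DIR"
sudo loginctl enable-linger "$USER"
The first shows whether a user manager is running at all, the second whether the runtime directory (normally /run/user/<uid>) that hosts the session bus socket is set, and enable-linger asks systemd-logind to start a user manager for the account at boot.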
I am changing
path.data: /var/log/elasticsearch
to
path.data: /data/elasticsearchdata/log/elasticsearch/
in elasticsearch.yml, after creating the folder and moving the files/folders from ../elasticsearch to /data/elasticsearchdata/log/.
After making the change I ran:
sudo systemctl restart elasticsearch
but I am getting this error:
● elasticsearch.service - Elasticsearch
Loaded: loaded (/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2021-12-15 14:53:14 UTC; 7s ago
Docs: https://www.elastic.co
Process: 1678664 ExecStart=/usr/share/elasticsearch/bin/systemd-entrypoint -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILURE)
Main PID: 1678664 (code=exited, status=1/FAILURE)
Dec 15 14:53:14 ip-10-10-6-161 systemd-entrypoint[1678664]: path.logs: /data/elasticsearchda ...
Can anyone let me know what I am missing?
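As a first diagnostic step, note that systemctl status truncates the real exception; the full log usually shows the actual failure:
sudo journalctl -u elasticsearch.service -n 50 --no-pager
sudo tail -n 100 /var/log/elasticsearch/*.log
(the second command assumes logs are still written under the old default location).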
The only way to move your data is:
1. Set up a snapshot repository (snapshot/restore).
2. Create a snapshot of all indices.
3. Shut down the ELK cluster and edit path.data in elasticsearch.yml.
4. Start the ELK cluster.
5. Restore the snapshot.
The data should appear in the new location; a sketch of the API calls follows below.
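A minimal sketch of those steps against the snapshot REST API, with an illustrative repository name (my_backup), snapshot name (snap_1), and location (/mnt/es_backups, which must be listed under path.repo); none of these values come from the question:
curl -X PUT 'localhost:9200/_snapshot/my_backup' -H 'Content-Type: application/json' -d '{"type": "fs", "settings": {"location": "/mnt/es_backups"}}'
curl -X PUT 'localhost:9200/_snapshot/my_backup/snap_1?wait_for_completion=true'
# stop the cluster, edit path.data in elasticsearch.yml, start it again
curl -X POST 'localhost:9200/_snapshot/my_backup/snap_1/_restore'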
I am installing Tor on Ubuntu 18.04 as per this link. After completing all the steps, I am getting this:
$ sudo service tor status
● tor.service - Anonymizing overlay network for TCP (multi-instance-master)
Loaded: loaded (/lib/systemd/system/tor.service; enabled; vendor preset: enabled)
Active: active (exited) since Fri 2018-07-06 11:47:19 IST; 13min ago
Main PID: 10894 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 4554)
CGroup: /system.slice/tor.service
Jul 06 11:47:19 aks-Vostro-1550 systemd[1]: Starting Anonymizing overlay network for TCP (multi-instance-master)...
Jul 06 11:47:19 aks-Vostro-1550 systemd[1]: Started Anonymizing overlay network for TCP (multi-instance-master).
My /lib/systemd/system/tor.service file is:
# This service is actually a systemd target,
# but we are using a service since targets cannot be reloaded.
[Unit]
Description=Anonymizing overlay network for TCP (multi-instance-master)
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecReload=/bin/true
[Install]
WantedBy=multi-user.target
I will be thankful for your help and support.
I solved my problem on Ubuntu 18.04 using the suggestion given in this link.
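For anyone landing here: active (exited) is expected for this unit, since its ExecStart is /bin/true and it exists only as a reloadable stand-in for a target. On the stock Debian/Ubuntu packaging the actual daemon runs as an instance unit, so (assuming that layout) the real status lives in:
systemctl status tor@default.service
sudo journalctl -u tor@default.service -n 30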
I created a system service called cooltoo_storage on CentOS. I can start/stop/restart the service by running "service cooltoo_storage start/stop/restart". Now I want to manage it from an Ansible playbook. Below is my task for starting the service.
- name: start cooltoo_storage service
  sudo: yes
  service:
    name: cooltoo_storage
    state: started
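Aside: sudo: yes is a long-deprecated keyword; on current Ansible versions the same task would be written with become:
- name: start cooltoo_storage service
  become: yes
  service:
    name: cooltoo_storage
    state: started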
After running the ansible-playbook, I got the error below:
msg: Job for cooltoo_storage.service failed because the control process exited with error code. See "systemctl status cooltoo_storage.service" and "journalctl -xe" for details.
FATAL: all hosts have already failed -- aborting
Below is the command output of "systemctl status cooltoo_storage.service",
● cooltoo_storage.service - LSB: cooltoo storage provider
Loaded: loaded (/etc/rc.d/init.d/cooltoo_storage)
Active: failed (Result: exit-code) since Mon 2016-05-02 11:39:07 CST; 1min 5s ago
Docs: man:systemd-sysv-generator(8)
Process: 26661 ExecStart=/etc/rc.d/init.d/cooltoo_storage start (code=exited, status=203/EXEC)
May 02 11:39:07 Cool-Too systemd[1]: Starting LSB: cooltoo storage provider...
May 02 11:39:07 Cool-Too systemd[26661]: Failed at step EXEC spawning /etc/rc.d/init.d/cooltoo_storage: Exec format error
May 02 11:39:07 Cool-Too systemd[1]: cooltoo_storage.service: control process exited, code=exited status=203
May 02 11:39:07 Cool-Too systemd[1]: Failed to start LSB: cooltoo storage provider.
May 02 11:39:07 Cool-Too systemd[1]: Unit cooltoo_storage.service entered failed state.
May 02 11:39:07 Cool-Too systemd[1]: cooltoo_storage.service failed.
How should I fix this issue?
The problem is unrelated to Ansible.
Your cooltoo_storage service is failing to start. First, make sure it starts on its own:
sudo systemctl restart cooltoo_storage.service
sudo systemctl status cooltoo_storage.service
If it doesn't, fix the service itself. cooltoo_storage is probably a custom-written service, so start investigating by checking its startup config:
systemctl cat cooltoo_storage.service
and the contents of /etc/rc.d/init.d/cooltoo_storage.
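For the error above, status=203/EXEC with "Exec format error" means the kernel refused to execute the file directly; for an init script the usual suspects are a missing or mangled shebang line or a lost executable bit. A quick check:
head -n 1 /etc/rc.d/init.d/cooltoo_storage
ls -l /etc/rc.d/init.d/cooltoo_storage
The first command should print an interpreter line such as #!/bin/bash, and the second should show execute permissions.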