Unable to start Elasticsearch on Ubuntu 20

I am trying to start Elasticsearch after installation, but it throws this error:
Job for elasticsearch.service failed because a fatal signal was delivered to the control process.
See "systemctl status elasticsearch.service" and "journalctl -xe" for details.
Output of systemctl status elasticsearch.service:
● elasticsearch.service - Elasticsearch
Loaded: loaded (/lib/systemd/system/elasticsearch.service; disabled; vendor preset: enabled)
Active: failed (Result: signal) since Mon 2021-05-17 14:30:02 IST; 1min 56s ago
Docs: https://www.elastic.co
Process: 94558 ExecStart=/usr/share/elasticsearch/bin/systemd-entrypoint -p ${PID_DIR}/elasticsearch.pid --quiet (code=killed, signal=KILL)
Main PID: 94558 (code=killed, signal=KILL)
May 17 14:29:58 rohit-Lenovo-Legion-Y540-15IRH-PG0 systemd[1]: Starting Elasticsearch...
May 17 14:30:02 rohit-Lenovo-Legion-Y540-15IRH-PG0 systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
May 17 14:30:02 rohit-Lenovo-Legion-Y540-15IRH-PG0 systemd[1]: elasticsearch.service: Failed with result 'signal'.
May 17 14:30:02 rohit-Lenovo-Legion-Y540-15IRH-PG0 systemd[1]: Failed to start Elasticsearch.
In journalctl -xe I see this:
May 17 14:30:02 rohit-Lenovo-Legion-Y540-15IRH-PG0 kernel: Out of memory: Killed process 94558 (java) total-vm:9804148kB, anon-rss:5809744kB, file-rss:0kB, shmem-rss:0kB, UID:129 pgtables:11660kB oom_sc>
May 17 14:30:01 rohit-Lenovo-Legion-Y540-15IRH-PG0 CRON[94743]: pam_unix(cron:session): session opened for user root by (uid=0)
May 17 14:30:01 rohit-Lenovo-Legion-Y540-15IRH-PG0 CRON[94744]: (root) CMD ([ -x /etc/init.d/anacron ] && if [ ! -d /run/systemd/system ]; then /usr/sbin/invoke-rc.d anacron start >/dev/null; fi)
May 17 14:30:01 rohit-Lenovo-Legion-Y540-15IRH-PG0 CRON[94743]: pam_unix(cron:session): session closed for user root
May 17 14:30:02 rohit-Lenovo-Legion-Y540-15IRH-PG0 kernel: oom_reaper: reaped process 94558 (java), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
May 17 14:30:02 rohit-Lenovo-Legion-Y540-15IRH-PG0 systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- An ExecStart= process belonging to unit elasticsearch.service has exited.
--
-- The process' exit code is 'killed' and its exit status is 9.
May 17 14:30:02 rohit-Lenovo-Legion-Y540-15IRH-PG0 systemd[1]: elasticsearch.service: Failed with result 'signal'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit elasticsearch.service has entered the 'failed' state with result 'signal'.
May 17 14:30:02 rohit-Lenovo-Legion-Y540-15IRH-PG0 systemd[1]: Failed to start Elasticsearch.
-- Subject: A start job for unit elasticsearch.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit elasticsearch.service has finished with a failure.
--
-- The job identifier is 13124 and the job result is failed.
May 17 14:30:02 rohit-Lenovo-Legion-Y540-15IRH-PG0 sudo[94552]: pam_unix(sudo:session): session closed for user root
May 17 14:31:34 rohit-Lenovo-Legion-Y540-15IRH-PG0 kernel: [UFW BLOCK] IN=wlp0s20f3 OUT= MAC=90:78:41:e1:0c:67:ec:0d:e4:f9:4a:71:08:00 SRC=192.168.1.102 DST=192.168.1.108 LEN=390 TOS=0x00 PREC=0x00 TTL=>
May 17 14:31:35 rohit-Lenovo-Legion-Y540-15IRH-PG0 kernel: [UFW BLOCK] IN=wlp0s20f3 OUT= MAC=90:78:41:e1:0c:67:ec:0d:e4:f9:4a:71:08:00 SRC=192.168.1.102 DST=192.168.1.108 LEN=390 TOS=0x00 PREC=0x00 TTL=>
May 17 14:31:35 rohit-Lenovo-Legion-Y540-15IRH-PG0 kernel: [UFW BLOCK] IN=wlp0s20f3 OUT= MAC=90:78:41:e1:0c:67:ec:0d:e4:f9:4a:71:08:00 SRC=192.168.1.102 DST=192.168.1.108 LEN=390 TOS=0x00 PREC=0x00 TTL=>
May 17 14:31:37 rohit-Lenovo-Legion-Y540-15IRH-PG0 kernel: [UFW BLOCK] IN=wlp0s20f3 OUT= MAC=90:78:41:e1:0c:67:ec:0d:e4:f9:4a:71:08:00 SRC=192.168.1.102 DST=192.168.1.108 LEN=390 TOS=0x00 PREC=0x00 TTL=>
May 17 14:31:59 rohit-Lenovo-Legion-Y540-15IRH-PG0 systemd[1]: Started Run anacron jobs.
-- Subject: A start job for unit anacron.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit anacron.service has finished successfully.
--
-- The job identifier is 13197.
May 17 14:31:59 rohit-Lenovo-Legion-Y540-15IRH-PG0 anacron[94906]: Anacron 2.3 started on 2021-05-17
May 17 14:31:59 rohit-Lenovo-Legion-Y540-15IRH-PG0 anacron[94906]: Normal exit (0 jobs run)
May 17 14:31:59 rohit-Lenovo-Legion-Y540-15IRH-PG0 systemd[1]: anacron.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit anacron.service has successfully entered the 'dead' state.
My ES configuration:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
Please help me resolve this issue.
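The journalctl output above shows the kernel OOM killer terminating the Java process (signal 9): the JVM heap plus overhead exceeds the memory available at startup. Recent 7.x versions size the heap automatically, up to half of RAM. A minimal sketch of one common remedy, capping the heap with a jvm.options.d override (the file name heap.options and the 2g value are illustrative assumptions, not from this thread; pick a size that fits your free RAM):

# /etc/elasticsearch/jvm.options.d/heap.options (hypothetical file name)
# Pin the minimum and maximum heap so the JVM cannot outgrow available memory.
-Xms2g
-Xmx2g

Then restart and re-check the unit:
sudo systemctl restart elasticsearch
sudo systemctl status elasticsearch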

Related

Cannot start the squid server

When starting the Squid proxy server, this message appears:
Job for squid.service failed because the control process exited with error code. See "systemctl status squid.service" and "journalctl -xe" for details.
How do I solve it?
The output:
-- Logs begin at Mon 2022-10-10 14:56:31 +03. --
Oct 13 19:36:40 toj-mgt-uv-fp01..com.sa systemd[1]: Starting Squid Web Proxy Server...
-- Subject: A start job for unit squid.service has begun execution
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- A start job for unit squid.service has begun execution.
-- The job identifier is 62883.
Oct 13 19:36:40 toj-mgt-uv-fp01..com.sa squid[1051704]: 2022/10/13 19:36:40| ACL not found: fac
Oct 13 19:36:40 toj-mgt-uv-fp01..com.sa squid[1051704]: 2022/10/13 19:36:40| FATAL: Bungled /etc/squid/squid.conf line 6174: http_access allow fac
Oct 13 19:36:40 toj-mgt-uv-fp01..com.sa squid[1051704]: 2022/10/13 19:36:40| Squid Cache (Version 4.10): Terminated abnormally.
Oct 13 19:36:40 toj-mgt-uv-fp01..com.sa squid[1051704]: CPU Usage: 0.010 seconds = 0.000 user + 0.010 sys
Oct 13 19:36:40 toj-mgt-uv-fp01..com.sa squid[1051704]: Maximum Resident Size: 57888 KB
Oct 13 19:36:40 toj-mgt-uv-fp01..com.sa squid[1051704]: Page faults with physical i/o: 0
Oct 13 19:36:40 toj-mgt-uv-fp01..com.sa squid[1051704]: FATAL: Bungled /etc/squid/squid.conf line 6174: http_access allow fac
Oct 13 19:36:40 toj-mgt-uv-fp01..com.sa systemd[1]: squid.service: Control process exited, code=exited, status=1/FAILURE
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- An ExecStartPre= process belonging to unit squid.service has exited.
-- The process' exit code is 'exited' and its exit status is 1.
Oct 13 19:36:40 toj-mgt-uv-fp01..com.sa systemd[1]: squid.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- The unit squid.service has entered the 'failed' state with result 'exit-code'.
Oct 13 19:36:40 toj-mgt-uv-fp01.****.com.sa systemd[1]: Failed to start Squid Web Proxy Server.
-- Subject: A start job for unit squid.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- A start job for unit squid.service has finished with a failure.
-- The job identifier is 62883 and the job result is failed.
Thanks and regards.
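The two FATAL lines point at the real problem: http_access allow fac on line 6174 references an ACL named fac that is never defined, so Squid refuses to start. A minimal sketch of the shape of the fix, assuming fac is meant to match a client source network (the 10.0.0.0/8 range is a placeholder, not from the question):

# /etc/squid/squid.conf: define the ACL before any rule that uses it
acl fac src 10.0.0.0/8        # placeholder range; substitute your real one
http_access allow fac

Then validate the config and restart:
sudo squid -k parse
sudo systemctl restart squid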

Problem running Elasticsearch for Magento project on Ubuntu

I'm trying to set up Elasticsearch for a Magento 2.4.2 project. I have installed Elasticsearch 7.9.3 and OpenJDK 11.0.10.
Running sudo systemctl start elasticsearch gives the error:
Job for elasticsearch.service failed because the control process exited with error code.
See "systemctl status elasticsearch.service" and "journalctl -xe" for details.
In /etc/elasticsearch/elasticsearch.yml the network settings are:
network.host: 127.0.0.1
http.port: 9200
journalctl -xe command result:
--
-- An ExecStart= process belonging to unit elasticsearch.service has exited.
--
-- The process' exit code is 'exited' and its exit status is 1.
Mar 10 16:10:46 -ThinkPad-P15s-Gen-1 systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit elasticsearch.service has entered the 'failed' state with result 'exit-code'.
Mar 10 16:10:46 -ThinkPad-P15s-Gen-1 systemd[1]: Failed to start Elasticsearch.
-- Subject: A start job for unit elasticsearch.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit elasticsearch.service has finished with a failure.
--
-- The job identifier is 10277 and the job result is failed.
Mar 10 16:10:46 -ThinkPad-P15s-Gen-1 sudo[21081]: pam_unix(sudo:session): session closed for user root
Mar 10 16:15:06 -ThinkPad-P15s-Gen-1 sudo[21568]: : TTY=pts/0 ; PWD=/home/; USER=root ; COMMAND=/usr/bin/nano /etc/elasticsearch/>
Mar 10 16:15:06 -ThinkPad-P15s-Gen-1 sudo[21568]: pam_unix(sudo:session): session opened for user root by (uid=0)
Mar 10 16:15:25 -ThinkPad-P15s-Gen-1 sudo[21568]: pam_unix(sudo:session): session closed for user root
Mar 10 16:15:52 -ThinkPad-P15s-Gen-1 wpa_supplicant[853]: wlp0s20f3: WPA: Group rekeying completed with ac:cf:85:db:37:ce [GTK=CCMP]
I've looked through all the articles I could find, but none of them fixes the situation. Can someone suggest a solution to this problem?
The error data you shared only identifies the error as 'exited', so some guesswork is needed to help you with the issue.
From the elasticsearch.yml configuration, Elasticsearch exposes its API on port 9200. One possibility is that the port is already occupied; can you check the port status on your system and confirm?
The elasticsearch.service file also has configurations for memory and CPU; perhaps the configured memory is not sufficient for it to start.
Hope this helps. For detailed, straightforward instructions on installing Magento 2.4.2, a blog post I wrote might also help.
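A quick, hedged way to test both guesses from a shell (standard Ubuntu tools, nothing Elasticsearch-specific):

sudo ss -ltnp | grep ':9200'        # anything listed here already holds the port
free -h                             # compare available memory against the configured heap
sudo journalctl -u elasticsearch -n 50 --no-pager   # the concrete failure reason is usually here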

ElasticSearch xpack.security.enabled: true Error on start

I want to set a password for my Elasticsearch. I have not paid or started my free trial, so I guess I am using the Basic plan by default.
I followed the official guide to install Elasticsearch on an Ubuntu EC2 instance.
I don't think I have installed the OSS version, but when I run:
/usr/share/elasticsearch$ sudo bin/elasticsearch-plugin list --verbose
Plugins directory: /usr/share/elasticsearch/plugins
it does not print xpack.
I tried removing and reinstalling Elasticsearch clean, just in case I had set something wrong.
The only thing I did to my elasticsearch.yml is add: xpack.security.enabled:true
However, starting Elasticsearch with systemctl start elasticsearch.service outputs this error message:
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2019-11-13 02:53:44 UTC; 9min ago
Docs: http://www.elastic.co
Process: 20330 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILURE)
Main PID: 20330 (code=exited, status=1/FAILURE)
Nov 13 02:53:44 ip-172-31-47-37 elasticsearch[20330]: at org.yaml.snakeyaml.scanner.ScannerImpl.needMoreTokens(ScannerImpl.java:280)
Nov 13 02:53:44 ip-172-31-47-37 elasticsearch[20330]: at org.yaml.snakeyaml.scanner.ScannerImpl.checkToken(ScannerImpl.java:225)
Nov 13 02:53:44 ip-172-31-47-37 elasticsearch[20330]: at org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingKey.produce(ParserImpl.java:557)
Nov 13 02:53:44 ip-172-31-47-37 elasticsearch[20330]: at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:157)
Nov 13 02:53:44 ip-172-31-47-37 elasticsearch[20330]: at org.yaml.snakeyaml.parser.ParserImpl.getEvent(ParserImpl.java:167)
Nov 13 02:53:44 ip-172-31-47-37 elasticsearch[20330]: at com.fasterxml.jackson.dataformat.yaml.YAMLParser.nextToken(YAMLParser.java:340)
Nov 13 02:53:44 ip-172-31-47-37 elasticsearch[20330]: ... 13 more
Nov 13 02:53:44 ip-172-31-47-37 systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Nov 13 02:53:44 ip-172-31-47-37 systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
Nov 13 02:53:44 ip-172-31-47-37 systemd[1]: Failed to start Elasticsearch
Also, after I added xpack.security.enabled:true, listing plugins shows this error message:
/usr/share/elasticsearch$ sudo bin/elasticsearch-plugin list --verbose
Exception in thread "main" SettingsException[Failed to load settings from [elasticsearch.yml]]; nested: MarkedYAMLException[while scanning a simple key
in 'reader', line 90, column 1:
xpack.security.enabled:true
^
could not find expected ':'
in 'reader', line 91, column 1:
^
at [Source: sun.nio.ch.ChannelInputStream@6155d082; line: 37, column: 34]]; nested: ScannerException[while scanning a simple key
in 'reader', line 90, column 1:
xpack.security.enabled:true
^
could not find expected ':'
in 'reader', line 91, column 1:
^
];
at org.elasticsearch.common.settings.Settings$Builder.loadFromStream(Settings.java:1097)
at org.elasticsearch.common.settings.Settings$Builder.loadFromPath(Settings.java:1070)
at org.elasticsearch.node.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:83)
at org.elasticsearch.cli.EnvironmentAwareCommand.createEnv(EnvironmentAwareCommand.java:95)
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:125)
at org.elasticsearch.cli.MultiCommand.execute(MultiCommand.java:77)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:125)
at org.elasticsearch.cli.Command.main(Command.java:90)
at org.elasticsearch.plugins.PluginCli.main(PluginCli.java:47)
Caused by: com.fasterxml.jackson.dataformat.yaml.snakeyaml.error.MarkedYAMLException: while scanning a simple key
in 'reader', line 90, column 1:
xpack.security.enabled:true
^
could not find expected ':'
in 'reader', line 91, column 1:
^
at [Source: sun.nio.ch.ChannelInputStream@6155d082; line: 37, column: 34]
at com.fasterxml.jackson.dataformat.yaml.snakeyaml.error.MarkedYAMLException.from(MarkedYAMLException.java:27)
at com.fasterxml.jackson.dataformat.yaml.YAMLParser.nextToken(YAMLParser.java:343)
at org.elasticsearch.common.xcontent.json.JsonXContentParser.nextToken(JsonXContentParser.java:52)
at org.elasticsearch.common.settings.Settings.fromXContent(Settings.java:645)
at org.elasticsearch.common.settings.Settings.fromXContent(Settings.java:620)
at org.elasticsearch.common.settings.Settings.access$400(Settings.java:82)
at org.elasticsearch.common.settings.Settings$Builder.loadFromStream(Settings.java:1093)
... 9 more
Caused by: while scanning a simple key
in 'reader', line 90, column 1:
xpack.security.enabled:true
^
could not find expected ':'
in 'reader', line 91, column 1:
^
at org.yaml.snakeyaml.scanner.ScannerImpl.stalePossibleSimpleKeys(ScannerImpl.java:465)
at org.yaml.snakeyaml.scanner.ScannerImpl.needMoreTokens(ScannerImpl.java:280)
at org.yaml.snakeyaml.scanner.ScannerImpl.checkToken(ScannerImpl.java:225)
at org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingKey.produce(ParserImpl.java:557)
at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:157)
at org.yaml.snakeyaml.parser.ParserImpl.getEvent(ParserImpl.java:167)
at com.fasterxml.jackson.dataformat.yaml.YAMLParser.nextToken(YAMLParser.java:340)
... 14 more
Here's my elasticsearch.yml:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
xpack.security.enabled:true
What do I need to do to successfully launch Elasticsearch?
Thank you in advance.
Try adding a space after the colon in your xpack setting:
xpack.security.enabled: true
Syntax in YAML files can be pretty strict.
https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html
"A dictionary is represented in a simple key: value form (the colon must be followed by a space)"

Dante Socks5 proxy server doesn't start

I have installed the Dante proxy server using the methods from the website, but the server doesn't start and shows the following error. I have also tried the steps from other websites. I searched Stack Overflow and saw the same issue in one question, but it hasn't been solved yet. Can anyone solve it or suggest an alternative SOCKS5 proxy server?
Job for danted.service failed because the control process exited with error code. See "systemctl status danted.service" and "journalctl -xe" for details.
Error shown in systemctl status danted.service and journalctl -xe:
steven@steven-VirtualBox:~$ systemctl status danted.service
● danted.service - LSB: SOCKS (v4 and v5) proxy daemon (danted)
Loaded: loaded (/etc/init.d/danted; bad; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2019-03-10 18:12:42 IST; 2min 59s ago
Docs: man:systemd-sysv-generator(8)
Process: 3400 ExecStart=/etc/init.d/danted start (code=exited, status=1/FAILURE)
Mar 10 18:12:41 steven-VirtualBox systemd[1]: Starting LSB: SOCKS (v4 and v5) proxy daemon (danted)...
Mar 10 18:12:42 steven-VirtualBox danted[3405]: error: /etc/danted.conf: problem on line 11 near token "eth0": could not resolve hostname "eth0
Mar 10 18:12:42 steven-VirtualBox systemd[1]: danted.service: Control process exited, code=exited status=1
Mar 10 18:12:42 steven-VirtualBox danted[3400]: Starting Dante SOCKS daemon:
Mar 10 18:12:42 steven-VirtualBox systemd[1]: Failed to start LSB: SOCKS (v4 and v5) proxy daemon (danted).
Mar 10 18:12:42 steven-VirtualBox systemd[1]: danted.service: Unit entered failed state.
Mar 10 18:12:42 steven-VirtualBox systemd[1]: danted.service: Failed with result 'exit-code'.
steven@steven-VirtualBox:~$ journalctl -xe
-- The result is failed.
Mar 10 18:11:40 steven-VirtualBox systemd[1]: danted.service: Unit entered failed state.
Mar 10 18:11:40 steven-VirtualBox systemd[1]: danted.service: Failed with result 'exit-code'.
Mar 10 18:12:40 steven-VirtualBox sudo[3397]: steven : TTY=pts/18 ; PWD=/home/steven ; USER=root ; COMMAND=/bin/systemctl restart danted
Mar 10 18:12:41 steven-VirtualBox sudo[3397]: pam_unix(sudo:session): session opened for user root by (uid=0)
Mar 10 18:12:41 steven-VirtualBox systemd[1]: Stopped LSB: SOCKS (v4 and v5) proxy daemon (danted).
-- Subject: Unit danted.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit danted.service has finished shutting down.
Mar 10 18:12:41 steven-VirtualBox systemd[1]: Starting LSB: SOCKS (v4 and v5) proxy daemon (danted)...
-- Subject: Unit danted.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit danted.service has begun starting up.
Mar 10 18:12:42 steven-VirtualBox danted[3405]: error: /etc/danted.conf: problem on line 11 near token "eth0": could not resolve hostname "eth0
Mar 10 18:12:42 steven-VirtualBox danted[3405]: alert: mother[1/1]: shutting down
Mar 10 18:12:42 steven-VirtualBox systemd[1]: danted.service: Control process exited, code=exited status=1
Mar 10 18:12:42 steven-VirtualBox danted[3400]: Starting Dante SOCKS daemon:
Mar 10 18:12:42 steven-VirtualBox sudo[3397]: pam_unix(sudo:session): session closed for user root
Mar 10 18:12:42 steven-VirtualBox systemd[1]: Failed to start LSB: SOCKS (v4 and v5) proxy daemon (danted).
-- Subject: Unit danted.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit danted.service has failed.
--
-- The result is failed.
Mar 10 18:12:42 steven-VirtualBox systemd[1]: danted.service: Unit entered failed state.
Mar 10 18:12:42 steven-VirtualBox systemd[1]: danted.service: Failed with result 'exit-code'.
Mar 10 18:12:50 steven-VirtualBox sudo[3407]: steven : TTY=pts/18 ; PWD=/home/steven ; USER=root ; COMMAND=/bin/systemctl status danted
Mar 10 18:12:50 steven-VirtualBox sudo[3407]: pam_unix(sudo:session): session opened for user root by (uid=0)
Mar 10 18:14:38 steven-VirtualBox sudo[3407]: pam_unix(sudo:session): session closed for user root
I had the same issue and came across your question. I fixed it by adding a systemd dependency on network-online.target to danted.service, based on reading https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/
Here's how:
sudo systemctl edit danted.service
add this:
[Unit]
After=network-online.target
Wants=network-online.target
Save and exit, then run this for good measure:
sudo systemctl daemon-reload
sudo systemctl enable danted.service
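To confirm the override actually landed (a quick check, not part of the original answer), systemctl cat prints the unit together with any drop-ins:

systemctl cat danted.service    # the [Unit] override should appear below the main unit file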
This line is the telltale:
Mar 10 18:12:42 steven-VirtualBox danted[3405]: error: /etc/danted.conf: problem on line 11 near token "eth0": could not resolve hostname "eth0
It looks like there is no interface called eth0.
I had the same issue; I found out what the actual interface is called using ifconfig and swapped out eth0 for that.
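If ifconfig isn't installed, a sketch of the same lookup with the iproute2 tools that ship with modern Ubuntu:

ip -o -4 route show to default | awk '{print $5}'   # field 5 is the default-route interface name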
Find the interface of your device from the terminal with netstat -rn and look at the Iface column. Install netstat with sudo apt install net-tools if you don't have it. In /etc/danted.conf, change the setting external: eth0 to external: xxxx, where xxxx is your Iface value.
If you're just starting out and there are no saved rules yet in danted.conf, you can simply delete the file with sudo rm /etc/danted.conf and then create a new one with sudo nano /etc/danted.conf. If you are using a firewall, you must open port 1080 with sudo ufw allow 1080. In the new empty danted.conf, paste in:
logoutput: syslog
user.privileged: root
user.unprivileged: nobody
# The listening network interface or address.
internal: 0.0.0.0 port=1080
# The proxying network interface or address.
external: xxxx #Replace xxxx with the device's Iface
# socks-rules determine what is proxied through the external interface.
socksmethod: username
# client-rules determine who can connect to the internal interface.
clientmethod: none
client pass {
from: 0.0.0.0/0 to: 0.0.0.0/0
}
socks pass {
from: 0.0.0.0/0 to: 0.0.0.0/0
}
Save the file and run
sudo systemctl restart danted.service
sudo systemctl status danted.service
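Once the service is active, a hedged end-to-end test with curl (the user:pass pair must be a real system account, since socksmethod: username authenticates against system users; ifconfig.me is just a convenient echo service):

curl -x socks5://user:pass@127.0.0.1:1080 https://ifconfig.me   # should print the proxy host's public IP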

Kubernetes Installation with Vagrant & CoreOS behind proxy

I am behind a proxy server and following the "Kubernetes Installation with Vagrant & CoreOS" steps listed here: https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html
After finalizing the install, when I run
$ kubectl get nodes
I get the error:
Unable to connect to the server: Service Unavailable
e1, c1, and w1 are up, and I can vagrant ssh into each of them.
When I check w1, I see that the docker service is not running, with the error listed below.
----------------------------------------------------------------------------
-- Unit docker.service has failed.
--
-- The result is dependency.
Aug 19 04:09:25 w1 systemd[1]: docker.service: Job docker.service/start failed with result 'dependency'.
Aug 19 04:09:25 w1 systemd[1]: flanneld.service: Unit entered failed state.
Aug 19 04:09:25 w1 systemd[1]: flanneld.service: Failed with result 'exit-code'.
Aug 19 04:09:30 w1 systemd[1]: flanneld.service: Service hold-off time over, scheduling restart.
Aug 19 04:09:30 w1 systemd[1]: Stopped Network fabric for containers.
-- Subject: Unit flanneld.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit flanneld.service has finished shutting down.
Aug 19 04:09:30 w1 systemd[1]: Starting Network fabric for containers...
-- Subject: Unit flanneld.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit flanneld.service has begun starting up.
Aug 19 04:09:30 w1 rkt[6888]: image: using image from file /usr/lib/rkt/stage1-images/stage1-fly.aci
Aug 19 04:09:31 w1 rkt[6888]: image: searching for app image quay.io/coreos/flannel
Aug 19 04:09:31 w1 rkt[6888]: run: discovery failed
Aug 19 04:09:31 w1 systemd[1]: flanneld.service: Main process exited, code=exited, status=1/FAILURE
Aug 19 04:09:31 w1 systemd[1]: Failed to start Network fabric for containers.
-- Subject: Unit flanneld.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit flanneld.service has failed.
--
-- The result is failed.
Aug 19 04:09:31 w1 systemd[1]: flanneld.service: Unit entered failed state.
Aug 19 04:09:31 w1 systemd[1]: flanneld.service: Failed with result 'exit-code'.
----------------------------------------------------------------------------
I am guessing that the problem is because I am behind the proxy. Before running the install steps I issued these commands:
$export "HTTP_PROXY=http://http-proxy.xxxxxx.com:8080"
$export "HTTPS_PROXY=http://http-proxy.xxxxxx.com:8080"
$export "http_proxy=http://http-proxy.xxxxxx.com:8080"
$export "https_proxy=http://http-proxy.xxxxxx.com:8080"
Do you know if this is enough for installation behind a proxy, or do I need to add proxy settings somewhere else?
Thank you in advance,
turgos
The variables you're exporting are valid only in your current shell session; they are not available to your flanneld systemd unit.
Create the following drop-in inside the systemd unit directory, then reload the daemon with systemctl daemon-reload; it should fix your issue with flannel:
/etc/systemd/system/flanneld.service.d/proxy.conf:
[Service]
Environment="HTTP_PROXY=http://http-proxy.xxx:8080"
Environment="...
A similar example is available in the CoreOS documentation: Customizing Docker
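For completeness, a sketch of a full drop-in using the proxy address from the question (the NO_PROXY values are illustrative assumptions; extend them with your cluster's service and pod ranges):

/etc/systemd/system/flanneld.service.d/proxy.conf:
[Service]
# Proxy for the image discovery/download that failed in the log above
Environment="HTTP_PROXY=http://http-proxy.xxxxxx.com:8080"
Environment="HTTPS_PROXY=http://http-proxy.xxxxxx.com:8080"
# Keep local traffic off the proxy (illustrative; adjust to your network)
Environment="NO_PROXY=localhost,127.0.0.1"

Then apply it:
sudo systemctl daemon-reload
sudo systemctl restart flanneld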
