How does my Neo4j Browser still work even when I stop the neo4j service? - amazon-ec2

neo4j.service - Neo4j Graph Database
Loaded: loaded (/lib/systemd/system/neo4j.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Mar 06 13:26:43 ip-10-14-12-59 neo4j[12287]: 2020-03-06 13:26:43.564+0000 INFO ======== Neo4j 3.5.14 ========
Mar 06 13:26:43 ip-10-14-12-59 neo4j[12287]: 2020-03-06 13:26:43.572+0000 INFO Starting...
Mar 06 13:26:49 ip-10-14-12-59 neo4j[12287]: 2020-03-06 13:26:49.780+0000 INFO Bolt enabled on 0.0.0.0:7687.
Mar 06 13:26:51 ip-10-14-12-59 neo4j[12287]: 2020-03-06 13:26:51.153+0000 INFO Started.
Mar 06 13:26:52 ip-10-14-12-59 neo4j[12287]: 2020-03-06 13:26:52.131+0000 INFO Remote interface available at http://10.14.12.59:7474/
Mar 06 13:42:38 ip-10-14-12-59 systemd[1]: Stopping Neo4j Graph Database...
Mar 06 13:42:38 ip-10-14-12-59 neo4j[12287]: 2020-03-06 13:42:38.818+0000 INFO Neo4j Server shutdown initiated by request
Mar 06 13:42:38 ip-10-14-12-59 neo4j[12287]: 2020-03-06 13:42:38.832+0000 INFO Stopping...
Mar 06 13:42:38 ip-10-14-12-59 neo4j[12287]: 2020-03-06 13:42:38.884+0000 INFO Stopped.
Mar 06 13:42:39 ip-10-14-12-59 systemd[1]: Stopped Neo4j Graph Database.
Yet when I run sudo lsof -i -P -n | grep LISTEN, neither port 7474 nor port 7687 is listening.

Your neo4j Browser session is connected to a different (running) neo4j instance (probably on your local host). You can use this Browser command to see the URL it is currently using:
:server status
You can run these two Browser commands to disconnect, and then connect to the correct instance (the second command will display a form):
:server disconnect
:server connect
Based on your logs, it looks like you want to set the Connect URL to bolt://10.14.12.59:7687.
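If you want to double-check from a shell which host is actually serving Bolt, something like this can help (just a sketch; 10.14.12.59 comes from your logs, and the second check assumes a possible local instance on your own machine):
# Is Bolt reachable on the EC2 instance?
nc -zv 10.14.12.59 7687
# Or is a local instance answering instead?
nc -zv 127.0.0.1 7687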

Related

SonarQube on http://localhost:9000 cannot be reached

I'm running SonarQube on CentOS 7. It runs correctly from the terminal, but if I try to access it through the browser (http://localhost:9000 or http://localhost:9001) it can't be reached. Can someone help me?
[root@192 logs]# systemctl status sonarqube
● sonarqube.service - SonarQube service
Loaded: loaded (/etc/systemd/system/sonarqube.service; enabled; vendor preset: disabled)
Active: active (running) since ven 2022-10-07 12:48:37 CEST; 3s ago
Process: 23246 ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop (code=exited, status=0/SUCCESS)
Process: 23275 ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start (code=exited, status=0/SUCCESS)
Main PID: 23298 (java)
Tasks: 43
CGroup: /system.slice/sonarqube.service
├─23298 java -Xms8m -Xmx32m --add-exports=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens...
└─23321 /usr/lib/jvm/java-11-openjdk-11.0.16.1.1-1.el7_9.x86_64/bin/java -XX:+UseG1GC -Djava.io.tmpdir=/opt/sonarqube/temp -XX:ErrorFile...
ott 07 12:48:37 192.168.1.24 systemd[1]: Starting SonarQube service...
ott 07 12:48:37 192.168.1.24 sonar.sh[23275]: /usr/bin/java
ott 07 12:48:37 192.168.1.24 sonar.sh[23275]: Starting SonarQube...
ott 07 12:48:37 192.168.1.24 systemd[1]: Started SonarQube service.
Unable to connect to localhost
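A reasonable first diagnostic step (only a sketch; the paths assume the default /opt/sonarqube install shown in the unit file) is to check whether anything is listening on port 9000 and to look at SonarQube's own logs for bind or startup errors:
sudo ss -tlnp | grep 9000
tail -n 50 /opt/sonarqube/logs/sonar.log
tail -n 50 /opt/sonarqube/logs/web.log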

Manage Trackmania Server with systemd

Hi, I just set up a Trackmania server which works fine when started via the command line. Now I want to manage it with systemd, so that it starts on boot and gets restarted if it crashes.
Here is my systemd service file:
[Unit]
Description=Trackmania 2020 Server
After=network.target
[Service]
User=trackmania
Group=trackmania
Restart=always
RestartSec=30
WorkingDirectory=/home/trackmania/server
ExecStart=/home/trackmania/server/TrackmaniaServer /title=Trackmania /game_Settings=Matchsettings/tracklist.txt /dedicated_cfg=dedicated_cfg.txt
[Install]
WantedBy=multi-user.target
When starting the service, the status command returns:
* trackmania_server.service - Trackmania 2020 Server
Loaded: loaded (/etc/systemd/system/trackmania_server.service; disabled; vendor preset: enabled)
Active: activating (auto-restart) since Thu 2020-07-09 21:08:31 UTC; 29s ago
Process: 1759 ExecStart=/home/trackmania/server/TrackmaniaServer /title=Trackmania /game_Settings=Matchsettings/tracklist.txt /dedicated_cfg=dedicated_cfg.txt (code=exited, status=0/SUCCESS)
Main PID: 1759 (code=exited, status=0/SUCCESS)
When stopping the service this is returned:
* trackmania_server.service - Trackmania 2020 Server
Loaded: loaded (/etc/systemd/system/trackmania_server.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Jul 09 21:11:03 vps-zap558747-2 systemd[1]: Started Trackmania 2020 Server.
Jul 09 21:11:03 vps-zap558747-2 TrackmaniaServer[1847]: Starting Trackmania Date=2020-07-07_23_30 Svn=105917 GameVersion=3.3.0...
Jul 09 21:11:03 vps-zap558747-2 TrackmaniaServer[1847]: ManiaPlanet server daemon started with pid=1848 (parent=1847).
Jul 09 21:11:03 vps-zap558747-2 TrackmaniaServer[1847]: Configuration file : dedicated_cfg.txt
Jul 09 21:11:03 vps-zap558747-2 TrackmaniaServer[1847]: Loading system configuration...
Jul 09 21:11:03 vps-zap558747-2 TrackmaniaServer[1847]: ...system configuration loaded
Jul 09 21:11:04 vps-zap558747-2 TrackmaniaServer[1847]: Loading cache...
Jul 09 21:11:04 vps-zap558747-2 TrackmaniaServer[1847]: ...OK
Jul 09 21:11:04 vps-zap558747-2 systemd[1]: trackmania_server.service: Succeeded.
Jul 09 21:11:04 vps-zap558747-2 systemd[1]: Stopped Trackmania 2020 Server.
To me it looks like the server starts and is then immediately terminated again. What am I doing wrong? o.O
Try using the /nodaemon switch on the server command line.
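The log line "ManiaPlanet server daemon started with pid=1848 (parent=1847)" shows the server forking into a background daemon, so the process systemd launched exits immediately and the unit is considered finished. With /nodaemon the server stays in the foreground and systemd can track it. A sketch of the adjusted ExecStart (the rest of the unit file unchanged; run systemctl daemon-reload after editing):
ExecStart=/home/trackmania/server/TrackmaniaServer /nodaemon /title=Trackmania /game_Settings=Matchsettings/tracklist.txt /dedicated_cfg=dedicated_cfg.txt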

Kibana failed to start

Elasticsearch is working with no issues on http://localhost:9200, and the operating system is Ubuntu 18.04.
Here is the error log for Kibana
root@syed-MS-7B17:/var/log# journalctl -fu kibana.service
-- Logs begin at Sat 2020-01-04 18:30:58 IST. --
Apr 03 20:22:49 syed-MS-7B17 kibana[7165]: {"type":"log","@timestamp":"2020-04-03T14:52:49Z","tags":["fatal","root"],"pid":7165,"message":"{ Error: listen EADDRNOTAVAIL: address not available 7.0.0.1:5601\n at Server.setupListenHandle [as _listen2] (net.js:1263:19)\n at listenInCluster (net.js:1328:12)\n at GetAddrInfoReqWrap.doListen (net.js:1461:7)\n at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:61:10)\n code: 'EADDRNOTAVAIL',\n errno: 'EADDRNOTAVAIL',\n syscall: 'listen',\n address: '7.0.0.1',\n port: 5601 }"}
Apr 03 20:22:49 syed-MS-7B17 kibana[7165]: FATAL Error: listen EADDRNOTAVAIL: address not available 7.0.0.1:5601
Apr 03 20:22:50 syed-MS-7B17 systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
Apr 03 20:22:50 syed-MS-7B17 systemd[1]: kibana.service: Failed with result 'exit-code'.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: kibana.service: Scheduled restart job, restart counter is at 2.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: Stopped Kibana.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: kibana.service: Start request repeated too quickly.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: kibana.service: Failed with result 'exit-code'.
Apr 03 20:22:53 syed-MS-7B17 systemd[1]: Failed to start Kibana.
I resolved it myself after checking the /etc/hosts file.
It had been edited by mistake, like below:
7.0.0.1 localhost
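For reference, the loopback entry should point at 127.0.0.1, so the corrected line looks like this (standard /etc/hosts entry; restart Kibana afterwards with sudo systemctl restart kibana):
127.0.0.1 localhost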

Configure kibana with SSL

I want to configure Kibana so that I can access it over HTTPS.
I made the following changes in the Kibana config file (/etc/kibana/kibana.yml):
server.host: 0.0.0.0
server.ssl.enabled: true
server.ssl.key: /etc/elasticsearch/privkey.pem  # using the same SSL files that I created for Elasticsearch
server.ssl.certificate: /etc/elasticsearch/cert.pem  # using the same SSL files that I created for Elasticsearch
elasticsearch.url: https://127.0.0.1:9200
elasticsearch.ssl.verificationMode: none
elasticsearch.username: kibanaserver
elasticsearch.password: kibanaserver
elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]
opendistro_security.multitenancy.enabled: true
opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
opendistro_security.readonly_mode.roles: ["kibana_read_only"]
When I start/restart Kibana, it gives me the error below:
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; disabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Wed 2019-06-05 14:20:12 UTC; 382ms ago
Process: 32505 ExecStart=/usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml (code=exited, status=1/FAILURE)
Main PID: 32505 (code=exited, status=1/FAILURE)
Jun 05 14:20:11 mts-elk-test systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
Jun 05 14:20:11 mts-elk-test systemd[1]: kibana.service: Unit entered failed state.
Jun 05 14:20:11 mts-elk-test systemd[1]: kibana.service: Failed with result 'exit-code'.
Jun 05 14:20:12 mts-elk-test systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
Jun 05 14:20:12 mts-elk-test systemd[1]: Stopped Kibana.
Jun 05 14:20:12 mts-elk-test systemd[1]: kibana.service: Start request repeated too quickly.
Jun 05 14:20:12 mts-elk-test systemd[1]: Failed to start Kibana.
Jun 05 14:20:12 mts-elk-test systemd[1]: kibana.service: Unit entered failed state.
Jun 05 14:20:12 mts-elk-test systemd[1]: kibana.service: Failed with result 'start-limit-hit'.
root@mts-elk-test:/home/ronak# vi /etc/kibana/kibana.yml
I found the solution. There was a problem with file permissions.
I copied the cert.pem and privkey.pem files from the elasticsearch directory to the kibana directory and changed the owner to the kibana user:
chown kibana:kibana /etc/kibana/cert.pem
chown kibana:kibana /etc/kibana/privkey.pem
Changed the paths in the kibana.yml file:
server.ssl.key: /etc/kibana/privkey.pem
server.ssl.certificate: /etc/kibana/cert.pem
Restart Kibana: service kibana restart
And it worked!
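To verify the HTTPS setup afterwards, a quick check like this can help (a sketch; -k skips certificate validation, and the URL assumes Kibana's default port 5601 on the same host):
curl -k https://localhost:5601/api/status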

Kapacitor not running, status indicates failure

Help, my Kapacitor is not running. I'm running InfluxDB on the same server as Kapacitor and Telegraf, but Kapacitor doesn't work.
kapacitor.service - Time series data processing engine.
Loaded: loaded (/lib/systemd/system/kapacitor.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2019-01-03 17:56:38 UTC; 3s ago
Docs: https://github.com/influxdb/kapacitor
Process: 2502 ExecStart=/usr/bin/kapacitord -config /etc/kapacitor/kapacitor.conf $KAPACITOR_OPTS (code=exited, status=1/FAILURE)
Main PID: 2502 (code=exited, status=1/FAILURE)
Jan 03 17:56:38 ip-172-31-43-67 systemd[1]: kapacitor.service: Service hold-off time over, scheduling restart.
Jan 03 17:56:38 ip-172-31-43-67 systemd[1]: kapacitor.service: Scheduled restart job, restart counter is at 5.
Jan 03 17:56:38 ip-172-31-43-67 systemd[1]: Stopped Time series data processing engine..
Jan 03 17:56:38 ip-172-31-43-67 systemd[1]: kapacitor.service: Start request repeated too quickly.
Jan 03 17:56:38 ip-172-31-43-67 systemd[1]: kapacitor.service: Failed with result 'exit-code'.
Jan 03 17:56:38 ip-172-31-43-67 systemd[1]: Failed to start Time series data processing engine..
I did find the solution myself: in /etc/kapacitor/kapacitor.conf, configure the [[influxdb]] section like this:
[[influxdb]]
enabled = true
name = "localhost"
default = true
urls = ["http://localhost:8086"]
username = "user"
password = "password"
Take into account that you will need to have a user created in InfluxDB beforehand.
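A minimal sketch of creating that user with the influx CLI (the username and password mirror the placeholders in the config above; adjust the privileges to your needs):
influx
> CREATE USER "user" WITH PASSWORD 'password' WITH ALL PRIVILEGES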
