Manage Trackmania Server with systemd

Hi, I just set up a Trackmania server which works fine when started via the command line. Now I want to manage it with systemd, so that it starts on boot and gets restarted if it crashes.
Here is my systemd service file:
[Unit]
Description=Trackmania 2020 Server
After=network.target
[Service]
User=trackmania
Group=trackmania
Restart=always
RestartSec=30
WorkingDirectory=/home/trackmania/server
ExecStart=/home/trackmania/server/TrackmaniaServer /title=Trackmania /game_Settings=Matchsettings/tracklist.txt /dedicated_cfg=dedicated_cfg.txt
[Install]
WantedBy=multi-user.target
When starting the service, the status command returns:
* trackmania_server.service - Trackmania 2020 Server
Loaded: loaded (/etc/systemd/system/trackmania_server.service; disabled; vendor preset: enabled)
Active: activating (auto-restart) since Thu 2020-07-09 21:08:31 UTC; 29s ago
Process: 1759 ExecStart=/home/trackmania/server/TrackmaniaServer /title=Trackmania /game_Settings=Matchsettings/tracklist.txt /dedicated_cfg=dedicated_cfg.txt (code=exited, status=0/SUCCESS)
Main PID: 1759 (code=exited, status=0/SUCCESS)
When stopping the service this is returned:
* trackmania_server.service - Trackmania 2020 Server
Loaded: loaded (/etc/systemd/system/trackmania_server.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Jul 09 21:11:03 vps-zap558747-2 systemd[1]: Started Trackmania 2020 Server.
Jul 09 21:11:03 vps-zap558747-2 TrackmaniaServer[1847]: Starting Trackmania Date=2020-07-07_23_30 Svn=105917 GameVersion=3.3.0...
Jul 09 21:11:03 vps-zap558747-2 TrackmaniaServer[1847]: ManiaPlanet server daemon started with pid=1848 (parent=1847).
Jul 09 21:11:03 vps-zap558747-2 TrackmaniaServer[1847]: Configuration file : dedicated_cfg.txt
Jul 09 21:11:03 vps-zap558747-2 TrackmaniaServer[1847]: Loading system configuration...
Jul 09 21:11:03 vps-zap558747-2 TrackmaniaServer[1847]: ...system configuration loaded
Jul 09 21:11:04 vps-zap558747-2 TrackmaniaServer[1847]: Loading cache...
Jul 09 21:11:04 vps-zap558747-2 TrackmaniaServer[1847]: ...OK
Jul 09 21:11:04 vps-zap558747-2 systemd[1]: trackmania_server.service: Succeeded.
Jul 09 21:11:04 vps-zap558747-2 systemd[1]: Stopped Trackmania 2020 Server.
To me it looks like the server is started when I stop the service and then immediately terminated again. What am I doing wrong?

Try using the /nodaemon switch on the server command line. Your journal shows the server forking into a background daemon ("pid=1848 (parent=1847)"); with the default Type=simple, systemd only tracks the process it started, which exits successfully, so the unit is considered finished and Restart=always keeps spawning it again. Keeping the server in the foreground lets systemd supervise it properly.
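A minimal sketch of the adjusted [Service] section, assuming /nodaemon simply keeps the process in the foreground and everything else stays exactly as in the question:

[Service]
User=trackmania
Group=trackmania
Restart=always
RestartSec=30
WorkingDirectory=/home/trackmania/server
ExecStart=/home/trackmania/server/TrackmaniaServer /nodaemon /title=Trackmania /game_Settings=Matchsettings/tracklist.txt /dedicated_cfg=dedicated_cfg.txt

Alternatively, Type=forking together with a PIDFile could be used to track the daemonised child, but running the server in the foreground is the simpler option under systemd.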

Related

Failed to start Elasticsearch. Error opening log file '/gc.log': Permission denied

Dear StackOverflow community,
I was running Kibana/Elasticsearch without a problem until installing a Kibana plugin. Then the service failed, and I noticed that the problem is that Elasticsearch stopped. I tried several ways to fix it and then even reinstalled everything, but the problem still prevents Elasticsearch from launching, even with a fresh installation.
Installation on Debian 9 using apt install.
systemctl start elasticsearch.service
results on:
Exception in thread "main" java.lang.RuntimeException: starting java failed with [1]
[0.000s][error][logging] Error opening log file '/gc.log': Permission denied
Full log with journalctl -xe
-- Unit elasticsearch.service has begun starting up.
Feb 07 14:09:06 Debian-911-stretch-64-minimal kibana[576]: {"type":"log","#timestamp":"2020-02-07T13:09:06Z","tags":["warning","elasticsearch","admin"],"pid":576,"message":"Unable to revive connection: http://localhost:9200/"}
Feb 07 14:09:06 Debian-911-stretch-64-minimal kibana[576]: {"type":"log","#timestamp":"2020-02-07T13:09:06Z","tags":["warning","elasticsearch","admin"],"pid":576,"message":"No living connections"}
Feb 07 14:09:06 Debian-911-stretch-64-minimal kibana[576]: {"type":"log","#timestamp":"2020-02-07T13:09:06Z","tags":["warning","elasticsearch","admin"],"pid":576,"message":"Unable to revive connection: http://localhost:9200/"}
Feb 07 14:09:06 Debian-911-stretch-64-minimal kibana[576]: {"type":"log","#timestamp":"2020-02-07T13:09:06Z","tags":["warning","elasticsearch","admin"],"pid":576,"message":"No living connections"}
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: Exception in thread "main" java.lang.RuntimeException: starting java failed with [1]
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: output:
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: [0.000s][error][logging] Error opening log file '/gc.log': Permission denied
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: [0.000s][error][logging] Initialization of output 'file=/var/log/elasticsearch/gc.log' using options 'filecount=32,filesize=64m' failed.
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: error:
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: Invalid -Xlog option '-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m', see error log for details.
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: Error: Could not create the Java Virtual Machine.
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: Error: A fatal exception has occurred. Program will exit.
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: at org.elasticsearch.tools.launchers.JvmErgonomics.flagsFinal(JvmErgonomics.java:118)
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: at org.elasticsearch.tools.launchers.JvmErgonomics.finalJvmOptions(JvmErgonomics.java:86)
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: at org.elasticsearch.tools.launchers.JvmErgonomics.choose(JvmErgonomics.java:59)
Feb 07 14:09:06 Debian-911-stretch-64-minimal elasticsearch[2312]: at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:92)
Feb 07 14:09:06 Debian-911-stretch-64-minimal systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Feb 07 14:09:06 Debian-911-stretch-64-minimal systemd[1]: Failed to start Elasticsearch.
-- Subject: Unit elasticsearch.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit elasticsearch.service has failed.
The mentioned gc.log file was not in that folder, and the directory permissions were:
drwxr-s--- 2 elasticsearch elasticsearch 4096 Jan 15 13:20 elasticsearch
I created the file and also played with permissions until having these:
-rwxrwxrwx 1 root elasticsearch 0 Feb 7 15:19 gc.log
...and even changed the ownership:
-rwxrwxrwx 1 root root 0 Feb 7 15:19 gc.log
But no success; I'm still having the same issue.
Thanks
Make sure you are running CMD as Administrator.
This error also happens if you are using Docker and running the container as a different user. You have to add the --group-add flag to the docker command or set the TAKE_FILE_OWNERSHIP environment variable as mentioned here
Using docker-compose:
user: 1007:1007
group_add:
- 0
Using docker:
--group-add 0
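As an illustration only (the image tag and port mapping are placeholders, not taken from the original post), running the official Elasticsearch image with the extra group might look like:

docker run --group-add 0 -p 9200:9200 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.6.0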
Firstly, I don't know why the gc.log file was not present. Have you changed the logs folder path or something? The gc.log path can be set in the jvm.options file. By default, ES logs and Java garbage collection logs go into the logs folder inside the $ES_HOME directory.
From the user perspective, Elasticsearch can't be run as the root user. The ES directory listing shows that you have an elasticsearch user created and are trying to run the cluster as that user.
The problem can be solved by changing the ownership of the files inside the ES directory. Right now the gc.log file is owned by the root user, so it cannot be accessed by the elasticsearch user.
Try this: sudo chown <user> <path/to/es/directory> -R
Here it becomes: sudo chown elasticsearch elasticsearch/ -R
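Since the failing path in the error is /var/log/elasticsearch/gc.log, the same ownership fix applied to the log directory might look like this (assuming the stock Debian package layout):

sudo chown -R elasticsearch:elasticsearch /var/log/elasticsearch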
If the issue still persists, check whether the jvm.options file is configured correctly. Unless you have changed the default -Xloggc:logs/gc.log option, gc.log won't be written to /var/log.
Feb 09 17:09:02 server elasticsearch[2199]: Invalid -Xlog option '-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m', see error log for details.
Your log says the option is given as file=/var/log/elasticsearch/gc.log. Correct any wrong configuration as per the documentation: https://www.elastic.co/guide/en/elasticsearch/reference/master/jvm-options.html
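For reference, the GC logging line in jvm.options that produces the option shown in the error would look something like the string quoted in the log itself (reconstructed from the error output above, not copied from an actual config file):

-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m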
sudo systemctl -l status elasticsearch.service
Returns this log:
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/elasticsearch.service.d
└─override.conf
Active: failed (Result: exit-code) since Sun 2020-02-09 17:09:02 CET; 2min 48s ago
Docs: http://www.elastic.co
Process: 2199 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILURE)
Main PID: 2199 (code=exited, status=1/FAILURE)
Feb 09 17:09:02 server elasticsearch[2199]: Invalid -Xlog option '-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m', see error log for details.
Feb 09 17:09:02 server elasticsearch[2199]: Error: Could not create the Java Virtual Machine.
Feb 09 17:09:02 server elasticsearch[2199]: Error: A fatal exception has occurred. Program will exit.
Feb 09 17:09:02 server elasticsearch[2199]: at org.elasticsearch.tools.launchers.JvmErgonomics.flagsFinal(JvmErgonomics.java:118)
Feb 09 17:09:02 server elasticsearch[2199]: at org.elasticsearch.tools.launchers.JvmErgonomics.finalJvmOptions(JvmErgonomics.java:86)
Feb 09 17:09:02 server elasticsearch[2199]: at org.elasticsearch.tools.launchers.JvmErgonomics.choose(JvmErgonomics.java:59)
Feb 09 17:09:02 server elasticsearch[2199]: at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:92)
Feb 09 17:09:02 server systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Feb 09 17:09:02 server systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
Feb 09 17:09:02 server systemd[1]: Failed to start Elasticsearch.
At this point I'm doing a fresh install. I was not able to find the solution, and I need to continue working...

Configure kibana with SSL

I want to configure Kibana so that I can access it over HTTPS.
I made the following changes in the Kibana config file (/etc/kibana/kibana.yml):
server.host: 0.0.0.0
server.ssl.enabled: true
server.ssl.key: /etc/elasticsearch/privkey.pem    # Using same SSL that I created for elasticsearch
server.ssl.certificate: /etc/elasticsearch/cert.pem    # Using same SSL that I created for elasticsearch
elasticsearch.url: https://127.0.0.1:9200
elasticsearch.ssl.verificationMode: none
elasticsearch.username: kibanaserver
elasticsearch.password: kibanaserver
elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]
opendistro_security.multitenancy.enabled: true
opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
opendistro_security.readonly_mode.roles: ["kibana_read_only"]
When I restart/start Kibana, it's giving me below error:
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; disabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Wed 2019-06-05 14:20:12 UTC; 382ms ago
Process: 32505 ExecStart=/usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml (code=exited, status=1/FAILURE)
Main PID: 32505 (code=exited, status=1/FAILURE)
Jun 05 14:20:11 mts-elk-test systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
Jun 05 14:20:11 mts-elk-test systemd[1]: kibana.service: Unit entered failed state.
Jun 05 14:20:11 mts-elk-test systemd[1]: kibana.service: Failed with result 'exit-code'.
Jun 05 14:20:12 mts-elk-test systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
Jun 05 14:20:12 mts-elk-test systemd[1]: Stopped Kibana.
Jun 05 14:20:12 mts-elk-test systemd[1]: kibana.service: Start request repeated too quickly.
Jun 05 14:20:12 mts-elk-test systemd[1]: Failed to start Kibana.
Jun 05 14:20:12 mts-elk-test systemd[1]: kibana.service: Unit entered failed state.
Jun 05 14:20:12 mts-elk-test systemd[1]: kibana.service: Failed with result 'start-limit-hit'.
root@mts-elk-test:/home/ronak# vi /etc/kibana/kibana.yml
I found the solution. There was a problem with file permissions.
I copied the cert.pem and privkey.pem files from the elasticsearch directory to the kibana directory and changed their owner to the kibana user:
chown kibana:kibana /etc/kibana/cert.pem
chown kibana:kibana /etc/kibana/privkey.pem
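For completeness, the copy step described above might look like this (assuming the certificate files sit directly under /etc/elasticsearch, as in the original config):

cp /etc/elasticsearch/cert.pem /etc/elasticsearch/privkey.pem /etc/kibana/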
Changed path in kibana.yml file:
server.ssl.key: /etc/kibana/privkey.pem
server.ssl.certificate: /etc/kibana/cert.pem
Restart Kibana: service kibana restart
And it worked!

How to resolve the starting error in SonarQube?

While running sudo update-rc.d -f sonar remove I'm getting the below error:
insserv: warning: script 'K01sonarqube' missing LSB tags and overrides
insserv: warning: script 'sonarqube' missing LSB tags and overrides
insserv: warning: script 'sonar' missing LSB tags and overrides
While starting SonarQube I'm getting:
● sonarqube.service - SonarQube service
Loaded: loaded (/etc/systemd/system/sonarqube.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Fri 2018-10-19 12:48:24 IST; 14s ago
Process: 11420 ExecStop=/opt/sonarqube/sonarqube-6.7.5/bin/linux-x86-64/sonar.sh stop (code=exited, status=0/SUCCESS)
Process: 11372 ExecStart=/opt/sonarqube/sonarqube-6.7.5/bin/linux-x86-64/sonar.sh start (code=exited, status=0/SUCCESS)
Main PID: 984 (code=exited, status=1/FAILURE)
Oct 19 12:48:24 master-VB systemd[1]: Started SonarQube service.
Oct 19 12:48:24 master-VB systemd[1]: sonarqube.service: Service hold-off time over, scheduling restart.
Oct 19 12:48:24 master-VB systemd[1]: Stopped SonarQube service.
Oct 19 12:48:24 master-VB systemd[1]: sonarqube.service: Start request repeated too quickly.
Oct 19 12:48:24 master-VB systemd[1]: Failed to start SonarQube service.

Fail2Ban: Service failed when log files symlink to another device

I am using a Raspberry Pi. To reduce I/O on my SD card, I symlink all important log files to an external USB-mounted hard drive.
Example:
ln -s /media/usb-device/logs/auth.log /var/log/auth.log
The logging works fine, but fail2ban does not seem to like that. When I enable SSH monitoring in my /etc/fail2ban/jail.local file,
# [sshd]
enabled = true
bantime = 3600
fail2ban crashes when executing this command: systemctl restart fail2ban.service
I have tried to hardcode the path:
# logpath = %(sshd_log)s
logpath = /media/usb-devive/logs/auth.log
But fail2ban throws the same error:
fail2ban.service - Fail2Ban Service
Loaded: loaded (/lib/systemd/system/fail2ban.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2018-04-28 20:42:33 CEST; 45s ago
Docs: man:fail2ban(1)
Process: 3014 ExecStop=/usr/bin/fail2ban-client stop (code=exited, status=0/SUCCESS)
Process: 3045 ExecStart=/usr/bin/fail2ban-client -x start (code=exited, status=255)
Main PID: 658 (code=killed, signal=TERM)
Apr 28 20:42:33 raspberrypi systemd[1]: fail2ban.service: Service hold-off time over, scheduling restart.
Apr 28 20:42:33 raspberrypi systemd[1]: Stopped Fail2Ban Service.
Apr 28 20:42:33 raspberrypi systemd[1]: fail2ban.service: Start request repeated too quickly.
Apr 28 20:42:33 raspberrypi systemd[1]: Failed to start Fail2Ban Service.
Apr 28 20:42:33 raspberrypi systemd[1]: fail2ban.service: Unit entered failed state.
Apr 28 20:42:33 raspberrypi systemd[1]: fail2ban.service: Failed with result 'exit-code'.
Any ideas?
"devive" in the logpath is spelt incorrectly

Running Docker on a virtual server - possible or not?

I'm trying to run/install Docker on my vServer and can't find information on whether it's even possible. I have tried CentOS (6 & 7), Ubuntu, Debian, and Fedora, and I'm just not able to get the Docker daemon to run.
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled)
Active: failed (Result: exit-code) since So 2015-04-05 17:12:23 EDT; 16s ago
Docs: http://docs.docker.com
Process: 956 ExecStart=/usr/bin/docker -d $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $INSECURE_REGISTRY (code=exited, status=1/FAILURE)
Main PID: 956 (code=exited, status=1/FAILURE)
Apr 05 17:12:23 vvs.valentinsavenko.com systemd[1]: Starting Docker Applicati...
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: time="2015-04-05T17:12:2...
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: time="2015-04-05T17:12:2...
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: time="2015-04-05T17:12:2...
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: inappropriate ioctl for ...
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: time="2015-04-05T17:12:2...
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: time="2015-04-05T17:12:2...
Apr 05 17:12:23 vvs.valentinsavenko.com systemd[1]: docker.service: main proc...
Apr 05 17:12:23 vvs.valentinsavenko.com systemd[1]: Failed to start Docker Ap...
Apr 05 17:12:23 vvs.valentinsavenko.com systemd[1]: Unit docker.service enter...
Hint: Some lines were ellipsized, use -l to show in full.
[root@vvs ~]# systemctl status docker.service -l
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled)
Active: failed (Result: exit-code) since So 2015-04-05 17:12:23 EDT; 33s ago
Docs: http://docs.docker.com
Process: 956 ExecStart=/usr/bin/docker -d $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $INSECURE_REGISTRY (code=exited, status=1/FAILURE)
Main PID: 956 (code=exited, status=1/FAILURE)
Apr 05 17:12:23 vvs.valentinsavenko.com systemd[1]: Starting Docker Application Container Engine...
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: time="2015-04-05T17:12:23-04:00" level="info" msg="+job serveapi(unix:///var/run/docker.sock)"
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: time="2015-04-05T17:12:23-04:00" level="info" msg="WARNING: You are running linux kernel version 2.6.32-042stab094.8, which might be unstable running docker. Please upgrade your kernel to 3.8.0."
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: time="2015-04-05T17:12:23-04:00" level="info" msg="+job init_networkdriver()"
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: inappropriate ioctl for device
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: time="2015-04-05T17:12:23-04:00" level="info" msg="-job init_networkdriver() = ERR (1)"
Apr 05 17:12:23 vvs.valentinsavenko.com docker[956]: time="2015-04-05T17:12:23-04:00" level="fatal" msg="inappropriate ioctl for device"
Apr 05 17:12:23 vvs.valentinsavenko.com systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Apr 05 17:12:23 vvs.valentinsavenko.com systemd[1]: Failed to start Docker Application Container Engine.
Apr 05 17:12:23 vvs.valentinsavenko.com systemd[1]: Unit docker.service entered failed state.
On every system there is a different problem, and I'm wasting hours and hours not solving them...
http://kb.odin.com/en/125115
This post suggests that it might not work at all on a vServer with an old kernel, as in my case.
Did anybody actually manage to use Docker on a vServer, and if yes, which kernel does your host system have?
I have a cheap server at https://www.netcix.de if that's important.
The installation page has a section "Check kernel dependencies" which clearly mentions the minimum kernel level to be expected for Docker to run:
Docker in daemon mode has specific kernel requirements. For details, check your distribution in Installation.
A 3.10 Linux kernel is the minimum requirement for Docker. Kernels older than 3.10 lack some of the features required to run Docker containers. These older versions are known to have bugs which cause data loss and frequently panic under certain conditions.
The latest minor version (3.x.y) of the 3.10 (or a newer maintained version) Linux kernel is recommended. Keeping the kernel up to date with the latest minor version will ensure critical kernel bugs get fixed
So if your distro's kernel is too old, or some other requirement is not met (as listed in Installation), that would explain why the Docker daemon fails.
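A quick way to check whether the host kernel meets that requirement (the version string in the comment below is the one reported in the warning in your own Docker log):

uname -r
# prints e.g. 2.6.32-042stab094.8 on this vServer, which is well below the required 3.10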
