How to Set the Correct Permissions to Launch Neo4J on AWS EC2 via Its Bash Script? - bash

I'm trying to launch the Neo4j graph database on AWS using their AMI (Enterprise 3.3.9).
However, the instance fails to launch Neo4j automatically at startup, the way it's supposed to.
When I try to relaunch it using
systemctl restart neo4j
It also fails.
When I do
systemctl cat neo4j
I find the /etc/neo4j/pre-neo4j.sh file, which is apparently run at the instance's startup and which, in turn, launches Neo4j (when everything works as intended):
[Unit]
Description=Neo4j Graph Database
After=network-online.target
Wants=network-online.target
[Service]
ExecStart=/etc/neo4j/pre-neo4j.sh
Restart=on-failure
User=neo4j
Group=neo4j
Environment="NEO4J_CONF=/etc/neo4j" "NEO4J_HOME=/var/lib/neo4j"
LimitNOFILE=60000
TimeoutSec=120
SuccessExitStatus=143
[Install]
WantedBy=multi-user.target
So I launch it manually via the bash script, prefixed with sudo, and then it starts up fine.
sudo /etc/neo4j/pre-neo4j.sh
The documentation on deploying Neo4J on an AWS server doesn't mention anything about permissions if you use their image. So what can be the problem?
I don't want to have to launch the DB manually with sudo. Is it possible to resolve this problem by modifying the bash script itself?
The file /etc/neo4j/pre-neo4j.sh sets some environment variables and then launches Neo4j via:
/usr/share/neo4j/bin/neo4j console
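As a general sanity check (not part of the original thread), you can verify whether the unit's User=neo4j account is actually able to execute that script, using the paths from the unit file above:
ls -l /etc/neo4j/pre-neo4j.sh
# Run the script as the service user to reproduce the failure systemd sees
sudo -u neo4j /etc/neo4j/pre-neo4j.sh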

Based on the comments.
The solution was to use
journalctl -u neo4j
to inspect the logs associated with the failed start of neo4j. This made it possible to identify the root cause and, subsequently, fix the issue.
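For anyone hitting the same thing, a typical way to pull the relevant logs looks like this (standard systemctl and journalctl options; the actual error will of course depend on your instance):
sudo systemctl status neo4j              # last few log lines plus the exit status
sudo journalctl -u neo4j -b --no-pager   # full unit log since the current boot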

Related

User systemd service restarting only when SSH-ing into the machine

I have a strange situation with a web service hosted on a Debian instance: it sometimes stops and does not restart automatically. However, when I SSH into the machine, the service seems to restart on its own.
I originally wanted the service to always be up and to restart automatically. Could you help me figure out what's wrong? I may have misunderstood how systemctl --user services are meant to run.
The service in question is a Rails application running with passenger standalone, but I believe the problem might just be a misconfiguration in the systemd file.
My systemd file
# .config/systemd/user/my_service.service
[Unit]
Description=passenger with rails server for my_service (production)
After=syslog.target network.target
[Service]
Type=forking
PrivateTmp=yes
WorkingDirectory=/websites/xxx/current
PIDFile=/websites/xxx/shared/tmp/pids/passenger.8080.pid
ExecStart=/home/outscale/.asdf/shims/bundle exec passenger start /websites/xxx/current
ExecStop=/home/outscale/.asdf/shims/bundle exec passenger stop /websites/xxx/current
MemoryAccounting=true
MemoryLimit=3584M
Restart=always
RestartSec=1
TimeoutStopSec=30
KillMode=mixed
StandardInput=null
SyslogIdentifier=%p
# Environment
Environment="RAILS_ENV=production"
Environment="NODE_ENV=production"
[Install]
WantedBy=default.target
I have copied this file and installed the service using
systemctl --user daemon-reload
systemctl --user enable my_service
Was I meant to use something else, like systemctl --global enable unit? I want my service to run as the "outscale" user that installs the service (otherwise my version manager asdf does not work as expected).
I found the solution to my problem there. I had misunderstood the behavior of the --user flag (vs. using the User= property in the service file).
I was running Debian 11 and, as stated in the mentioned answer, my service would not necessarily shut down right after logging out of SSH, but only at some later point (it's not clear whether that happened when my service crashed for the first time or through some sort of garbage collection).
The service would then magically boot up again when SSHing into the instance, as a reaction to the user login starting all of that user's services.
So the fix was to reimplement the service as a system-wide unit using User=, without the --user flag.
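For illustration, a system-wide unit along those lines might look like the sketch below. It reuses the paths from the question's unit file; the unit location and the Group= value are assumptions:
# /etc/systemd/system/my_service.service (system unit, managed without --user)
[Unit]
Description=passenger with rails server for my_service (production)
After=syslog.target network.target
[Service]
Type=forking
User=outscale
Group=outscale
WorkingDirectory=/websites/xxx/current
PIDFile=/websites/xxx/shared/tmp/pids/passenger.8080.pid
ExecStart=/home/outscale/.asdf/shims/bundle exec passenger start /websites/xxx/current
ExecStop=/home/outscale/.asdf/shims/bundle exec passenger stop /websites/xxx/current
Restart=always
RestartSec=1
[Install]
WantedBy=multi-user.target
It is then enabled with sudo systemctl daemon-reload and sudo systemctl enable --now my_service, with no --user flag.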

How to properly override generated systemd unit file to start after a ZFS mount has mounted

I'm using Ubuntu 18.04.4 LTS, which uses systemd, but the squid package shipped with this version of Ubuntu is configured to start via init.d. It starts and runs via systemctl start squid.service if I start it manually after the system has booted.
However, I'm using a ZFS mount point ("/media") to store the cache data, and during the boot process squid is starting before this mount point is active. Consequently I'm getting the error "Failed to verify one of the swap directories". Full output of systemctl status squid is here
I'd like to tell systemd to wait until after media.mount has completed in the most minimally invasive way possible (e.g. without modifying the /etc/init.d/squid file that is maintained by the package). To that end I created the /etc/systemd/system/squid.service.d/override.conf file like so:
% cat /etc/systemd/system/squid.service.d/override.conf
[Unit]
Wants=network.target network-online.target nss-lookup.target media.mount
After=network.target network-online.target nss-lookup.target media.mount
[Install]
WantedBy=multi-user.target
But squid is still starting too early.
Is what I want to do possible? Or do I have to bite the bullet and define a native /etc/systemd/system/squid.service file and remove the /etc/init.d/squid init script?
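As a general note (not from the original thread), after creating a drop-in for a generated unit you normally have to reload systemd, and you can then check whether the extra ordering was actually picked up; these are standard systemctl and systemd-analyze invocations:
sudo systemctl daemon-reload
# Confirm the drop-in is listed together with the generated unit
systemctl cat squid.service
# Check that media.mount now appears in the ordering dependencies
systemctl show squid.service -p After -p Wants
# Show the chain of units the squid start waited on during boot
systemd-analyze critical-chain squid.service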

WebLogic in Docker: how to start managed servers automatically

I am learning Docker and creating an image for an Oracle WebLogic 12.2.1.4 server.
My image is ready and works fine. It contains
an admin server
two managed servers
When I run my image with docker run -d -p 7001:7001 --name WL oracle/weblogic-12.2.1.4.0:1.0 the admin server starts automatically because I added the following line at the end of my Dockerfile:
CMD /u01/oracle/user_projects/domains/$DOMAIN_NAME/startWebLogic.sh
But I need to start the managed servers manually: I have to log into the container and start them by hand:
docker exec -it WL /bin/bash
./startManagedWebLogic.sh MANAGED_SERVER_1 http://localhost:7001 &
./startManagedWebLogic.sh MANAGED_SERVER_2 http://localhost:7001 &
This is not what I want. I want the managed servers to start automatically once the admin server is up and running.
I was thinking about creating a new bash script, copying it into the image, and using it to boot up the admin and managed servers, like this:
start-wls-domain.sh
#!/bin/bash
/u01/oracle/user_projects/domains/$DOMAIN_NAME/startWebLogic.sh &
# there are more sophisticated ways to check the status of the admin server, but this is okay for a test
sleep 60
./startManagedWebLogic.sh MANAGED_SERVER_1 http://localhost:7001 &
./startManagedWebLogic.sh MANAGED_SERVER_2 http://localhost:7001 &
This script can be called from Dockerfile with CMD command.
But with this solution I lose the ability to see the output in the default Docker log; docker logs WL -f will display nothing.
Another issue with this bash script solution is that once the script finishes, the container stops running. Do I need an infinite loop at the end of the script?
If possible I would like to have a solution without start-wls-domain.sh.
What is the best and easiest way to start Weblogic managed servers automatically within a Docker container?
I followed the suggestions and ran the different servers in different containers. That way I was able to start the servers properly.
I published the solution on GitHub, here.
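For reference, the one-container-per-server approach can be sketched with plain docker run commands like the ones below. The image name, domain path, and server names come from the question; the container names, the user-defined network, and the exact location of startManagedWebLogic.sh are assumptions:
# Shared user-defined network so the managed servers can reach the admin server by name
docker network create wlnet
# Admin server; its stdout is this container's `docker logs` stream.
# Single quotes let $DOMAIN_NAME resolve from the image's environment, as in the Dockerfile CMD.
docker run -d --name wl-admin --network wlnet -p 7001:7001 \
  oracle/weblogic-12.2.1.4.0:1.0 \
  sh -c '/u01/oracle/user_projects/domains/$DOMAIN_NAME/startWebLogic.sh'
# Managed servers point at the admin server via its container name
docker run -d --name wl-ms1 --network wlnet \
  oracle/weblogic-12.2.1.4.0:1.0 \
  sh -c '/u01/oracle/user_projects/domains/$DOMAIN_NAME/bin/startManagedWebLogic.sh MANAGED_SERVER_1 http://wl-admin:7001'
docker run -d --name wl-ms2 --network wlnet \
  oracle/weblogic-12.2.1.4.0:1.0 \
  sh -c '/u01/oracle/user_projects/domains/$DOMAIN_NAME/bin/startManagedWebLogic.sh MANAGED_SERVER_2 http://wl-admin:7001'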

Running terminal and entering Commands on Startup (Raspberry Pi)

On startup I am trying to make the Pi open a terminal, run a source command ("source env/bin/activate"), and then run the command "google-assistant-demo", all while the terminal stays open. This part is crucial, as the Google Assistant development software I am using requires the console to remain open.
This is for a personal assistant product I am working on. I have tried creating an executable sh script that runs on startup, but it can only run one command and the terminal closes afterwards.
source env/bin/activate
google-assistant-demo
When I try to edit the startup config file, the terminal opens for a second and instantly closes.
Execute script on start-up
Here you can find a page full of wonderful solutions for running a script at system boot. Within a script you can do almost anything you want (for instance, run the command you were speaking about, source env/bin/activate).
Here is another useful link.
How to run a Linux Program on Startup
Here are the steps to have a program or script start on boot on a Linux machine using systemctl. I'm currently using this to start several services on my Raspberry Pi. DigitalOcean wrote an article that goes into more detail on systemctl.
Run this command
sudo nano /etc/systemd/system/YOUR_SERVICE_NAME.service
Paste in the configuration below, then press Ctrl+X followed by Y to save and exit.
[Unit]
Description=GIVE_YOUR_SERVICE_A_DESCRIPTION
Wants=network.target
After=syslog.target network-online.target
[Service]
Type=simple
ExecStart=YOUR_COMMAND_HERE
Restart=on-failure
RestartSec=10
KillMode=process
[Install]
WantedBy=multi-user.target
Reload services
sudo systemctl daemon-reload
Enable the service
sudo systemctl enable YOUR_SERVICE_NAME
Start the service
sudo systemctl start YOUR_SERVICE_NAME
Check the status of your service
systemctl status YOUR_SERVICE_NAME
Reboot your device and the program/script should be running. If it crashes it will attempt to restart.
Here is the link to the original post. However, it seems that you did not search on Google (or elsewhere) first: the web is full of such information, and much of it is excellent!
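Applied to this question, a unit along those lines might look like the sketch below. The user name, home directory, and virtualenv location are assumptions and need to be adjusted; calling the binary from the virtualenv's bin directory has the same effect as running source env/bin/activate first:
# /etc/systemd/system/google-assistant.service (hypothetical paths)
[Unit]
Description=Google Assistant demo
Wants=network-online.target
After=network-online.target
[Service]
Type=simple
User=pi
WorkingDirectory=/home/pi
# /home/pi/env is assumed to be the virtualenv; its bin/ contains google-assistant-demo
ExecStart=/home/pi/env/bin/google-assistant-demo
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target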

EC2 user-data not starting my application

I am using the user-data of an EC2 instance to bring up my auto-scaling instances and run the application. I am running a Node.js application.
But it is not working properly. I have debugged it and checked the instance's cloud monitor output, which says
pm2 command not found
After a lot of reading and investigating, I found that the command is not on root's PATH.
When EC2 user-data runs, the PATH it finds is
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
After SSHing in as ec2-user, it is
/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/ec2-user/.local/bin:/home/ec2-user/bin
After sudo su, it is
/root/.nvm/versions/node/v10.15.3/bin:/sbin:/bin:/usr/sbin:/usr/bin
It works only with the last PATH.
So what is the way, or the script, to run the command as root during instance launch via user-data?
First of all, starting your application with user-data is not recommended: per the AWS documentation, there is no guarantee that the instance will only come up after the user data has executed successfully. Even if the user data fails, the instance will still spin up.
For your problem, I assume that if you give the complete absolute path of the binary, it will work:
/root/.nvm/versions/node/v10.15.3/bin/pm2
A better solution than this approach: create a service file for your application startup and start the application with systemd or service.
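As an illustration of the absolute-path idea, a user-data script could look roughly like this. The node/pm2 directory is the one reported in the question and may differ on your instance, and the application directory is hypothetical:
#!/bin/bash
# user-data runs as root with a minimal PATH, so add the directory that holds pm2
# (or call pm2 by its absolute path) before starting the application
export PATH=/root/.nvm/versions/node/v10.15.3/bin:$PATH
cd /home/ec2-user/app   # hypothetical application directory
pm2 start app.js --name my-app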
