I am using the EC2 user-data script to bring up my Auto Scaling instances and start my application, which is a Node.js app.
But it is not working properly. I debugged and checked the instance's console/cloud monitor output, and it says
pm2 command not found
After a lot of reading and investigating, I found that the command is not on root's PATH at that point.
When the EC2 user-data script runs, the PATH it sees is
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
After SSHing in as ec2-user it is
/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/ec2-user/.local/bin:/home/ec2-user/bin
After running sudo su (as root) it is
/root/.nvm/versions/node/v10.15.3/bin:/sbin:/bin:/usr/sbin:/usr/bin
pm2 is only found with that last PATH.
So what is the way, or the script, to run the command as root during instance launch via user-data?
Although, starting your application with user-data is not recommended: per the AWS documentation, there is no guarantee that the instance only comes up after user-data executes successfully. Even if user-data fails, the instance will still spin up.
For your immediate problem, I assume it will work if you give the complete absolute path of the binary:
/root/.nvm/versions/node/v10.15.3/bin/pm2
A better solution than this approach: create a service file for your application and start it with systemd (or the service command).
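A minimal sketch of such a unit, reusing the Node binary path from the question; the unit name, the user, and the application entry point /home/ec2-user/app/index.js are hypothetical (pm2 users can also generate one with pm2 startup systemd):

```ini
# /etc/systemd/system/myapp.service  (hypothetical name and paths)
[Unit]
Description=Node.js application
After=network-online.target
Wants=network-online.target

[Service]
# Absolute paths avoid the user-data PATH problem entirely
ExecStart=/root/.nvm/versions/node/v10.15.3/bin/node /home/ec2-user/app/index.js
Restart=on-failure
User=root

[Install]
WantedBy=multi-user.target
```

After systemctl daemon-reload and systemctl enable myapp, the application starts on every boot without depending on the user-data environment.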
Related
I have got an assignment: "Write a shell script to install and configure Docker Swarm (one master/leader and one node) and automate the process using Jenkins." I am new to this technology and finding it difficult to proceed. Can anyone explain step by step how to proceed?
#Rajnish Kumar Singh, have you tried checking resources online? I understand you are very new to this technology, but googling some keywords like
what is docker swarm
what is jenkins, etc. would definitely help.
Having said that, you basically need to do the below set of steps to complete your assignment.
Pre-requisites
2 or more Ubuntu 20.04 servers
(You can use any Linux distro such as Ubuntu, Red Hat, etc., but make sure you change your install and execution commands accordingly.
We need two nodes here mainly to configure the manager and worker node cluster.)
Eg :
manager --- 132.92.41.4
worker --- 132.92.41.5
You can create these nodes with any of the public cloud providers, e.g. AWS EC2 instances or GCP VMs.
Next, you need to do the below set of steps:
Configure Hosts
Install Docker-ce
Docker Swarm Initialization
You can refer to this article for more info: https://www.howtoforge.com/tutorial/ubuntu-docker-swarm-cluster/
This completes the first part of your assignment.
Next, you can create one small shell script and include all those install and configuration commands in it. A shell script is basically a collection of Linux commands: instead of running each command separately, you run the script alone and all the setup is done for you.
You can create the script file using the touch command:
touch docker-swarm-install.sh
Give the script execute permission:
chmod +x docker-swarm-install.sh
Next, include in the script all the install and configure commands you used earlier for the Docker Swarm setup (you can refer to the link shared above).
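As a hedged sketch (not a tested installer), the script body might look like the following. The block below only writes the file locally; the manager IP is the example address from above, and the inner commands of course need the actual Ubuntu nodes to run:

```shell
# Write docker-swarm-install.sh containing the install + configure steps.
cat > docker-swarm-install.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

# 1. Install Docker CE via Docker's convenience script (Ubuntu)
curl -fsSL https://get.docker.com | sh

# 2. Initialise the swarm on the manager node (example IP from above)
docker swarm init --advertise-addr 132.92.41.4

# 3. Print the join command that the worker node must run
docker swarm join-token worker
EOF
chmod +x docker-swarm-install.sh
bash -n docker-swarm-install.sh   # syntax check only; does not run Docker
```

The outer block just creates the executable script that the Jenkins job will later call as its build step.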
Now, when your script is ready, you can configure it in a Jenkins job; whenever the Jenkins job is run, the script gets executed and the Docker Swarm cluster is created.
You need a Jenkins server. Jenkins is open-source software; you can install it on any public cloud instance (e.g. AWS EC2).
Reference : https://devopsarticle.com/how-to-install-jenkins-on-aws-ec2-ubuntu-20-04/
Next, once the installation is completed, you need to configure a job in Jenkins.
Reference : https://www.toolsqa.com/jenkins/jenkins-build-jobs/
Add your 'docker-swarm-install.sh' as a build step in the created job.
Reference : https://faun.pub/jenkins-jobs-hands-on-for-the-different-use-cases-devops-b153efb483c7
If all the setup is successful, then when you run your Jenkins job, your Docker Swarm cluster should get created.
I'm trying to launch the Neo4j graph database on AWS using their AMI image (Enterprise 3.3.9).
However, the server fails to launch automatically at instance startup the way it's supposed to.
When I try to relaunch it using
systemctl restart neo4j
It also fails.
When I do
systemctl cat neo4j
I find the /etc/neo4j/pre-neo4j.sh file, which is apparently launched at the instance's startup and which, in turn, launches Neo4j (when it works as it's supposed to):
[Unit]
Description=Neo4j Graph Database
After=network-online.target
Wants=network-online.target
[Service]
ExecStart=/etc/neo4j/pre-neo4j.sh
Restart=on-failure
User=neo4j
Group=neo4j
Environment="NEO4J_CONF=/etc/neo4j" "NEO4J_HOME=/var/lib/neo4j"
LimitNOFILE=60000
TimeoutSec=120
SuccessExitStatus=143
[Install]
WantedBy=multi-user.target
But when I launch it manually via the bash script with the sudo prefix, it starts up fine:
sudo /etc/neo4j/pre-neo4j.sh
The documentation on deploying Neo4j on an AWS server doesn't mention anything about permissions if you use their image. So what can the problem be?
I don't want to have to launch the DB manually using sudo; is it possible to resolve this problem by modifying the bash script itself?
The file /etc/neo4j/pre-neo4j.sh sets some environment variables and then launches neo4j via:
/usr/share/neo4j/bin/neo4j console
Based on the comments.
The solution was to use
journalctl -u neo4j
to inspect the logs associated with the failed start of neo4j. This made it possible to identify the root cause and, subsequently, fix the issue.
I am using CentOS 7 and I want to make the Hadoop services start automatically;
I mean I want to automate the process of issuing the start-all.sh command
so that it is executed automatically after CentOS boots.
I made the following script, called start-hadoop.sh (/root/start-hadoop.sh):
#!/bin/bash
# source the init helper functions and the Hadoop environment, then start HDFS
. /etc/rc.d/init.d/functions
. /usr/local/hadoop/etc/hadoop/hadoop-env.sh
/usr/local/hadoop/sbin/start-dfs.sh
which should start the Hadoop services.
As the root user, I gave this script execute permissions and put its path in the file /etc/rc.d/rc.local,
of course after giving rc.local execute permissions as well, but it is not executed at boot.
Any help please, and are there any better ways?
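On CentOS 7, which uses systemd, a unit file is usually more reliable than rc.local. A minimal sketch, assuming the script path from the question (the unit name hadoop.service is hypothetical):

```ini
# /etc/systemd/system/hadoop.service  (hypothetical unit name)
[Unit]
Description=Start Hadoop HDFS services at boot
After=network-online.target
Wants=network-online.target

[Service]
# start-dfs.sh forks the daemons and exits, so treat this as a oneshot
Type=oneshot
RemainAfterExit=yes
ExecStart=/root/start-hadoop.sh
User=root

[Install]
WantedBy=multi-user.target
```

Enable it once with systemctl enable hadoop, and if it fails you can check journalctl -u hadoop instead of guessing why rc.local never ran.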
Short version:
How do I make init.d scripts run after cloud-init has run the userdata script on an EC2?
Long version:
Our deployment process is to construct AMIs with everything installed on them (tomcat, nginx, application etc), but with certain configuration values missed out. At boot time, the userdata script adds in the missing configuration values, and then the application stack can start up
Our current EC2s are based on an old version of the official Debian AMIs, which have the script ec2-run-user-data. This script runs at boot, and downloads and runs the EC2's userdata. When constructing the AMI, I simply edit the init.d scripts for tomcat, nginx etc. to include ec2-run-user-data in their "Required-Start:" line, so they start up after the userdata has been run.
Unfortunately that approach is no longer viable, as we want to start using the hvm base AMIs, which have cloud-init installed rather than ec2-run-user-data. But I can't figure out how cloud-init works well enough to work out how to make the process work.
As far as I can tell, the userdata script is run by the cloud-final step, but cloud-final has $all in its "Required-Start:" line. I could remove it, but I don't know what consequences that might have.
I've tried making tomcat etc run after cloud-init or cloud-config, but the userdata hasn't run by then. Also, it looks like cloud-init and cloud-config start processes then exit, which might explain why cloud-final needs to have $all in Required-Start
More Info:
We use the 'baked AMI' approach, where we create an AMI with all the packages/applications installed, then tell the existing Autoscaling Groups to replace their EC2s with new ones based on the new AMI (via CloudFormation). Some configuration information isn't known at baking time, but must be inserted via the userdata script.
When our tomcat app starts up it expects to read in the file /etc/appname/application.conf. That file has the text <<REPLACE_TIME>> in it. Tomcat will fail to start up if it tries to run before <<REPLACE_TIME>> has been replaced.
The userdata script is something like:
#!/bin/bash
sed -i 's!<<REPLACE_TIME>>!{New value to use, determined at deploy time}!' /etc/appname/application.conf
The default Required-Start for tomcat is "$local_fs $remote_fs $network". At baking time, I change that to "$local_fs $remote_fs $network ec2-run-user-data"
By doing all that, the text in /etc/appname/application.conf gets replaced before tomcat runs. But as I said above, I want to change to using cloud-init, and I can't figure out what I need to do to make tomcat start after cloud-init has run the userdata. I get the impression that cloud-init doesn't run the userdata until very late in the process. I could change the userdata script to contain "/etc/init.d/tomcat restart" at the end, but it seems a bit dumb to have tomcat fail to start then get restarted.
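To make the replacement step concrete, here is a self-contained sketch of what the userdata does; the /tmp path and the db.host line are stand-ins for the real /etc/appname/application.conf contents:

```shell
#!/bin/bash
set -e
conf=/tmp/application.conf                 # stand-in for /etc/appname/application.conf
echo 'db.host=<<REPLACE_TIME>>' > "$conf"  # state as baked into the AMI
# the value determined at deploy time replaces the placeholder
sed -i 's!<<REPLACE_TIME>>!db.internal.example!' "$conf"
cat "$conf"   # prints db.host=db.internal.example
```

Only once this substitution has happened is it safe for tomcat to start, which is the whole ordering problem described above.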
I need a pretty trivial task:
I have a server whose crontab, every night, will run "something" that launches a new EC2 instance, deploys code there (a Ruby script), runs it, and upon completion of the script shuts down the instance.
What is the best way to do this?
Thanks.
Here's an approach that can accomplish this without any external computer/cron job:
EC2 AutoScaling supports schedules for running instances. You could use this to start an instance at a particular time each night.
The instance could use an AMI that has a startup script that does the setup and runs the job. Or, you could specify a user-data script to be passed to the instance that does this job for you.
The script could terminate the instance when it has completed running.
If you are running an EBS-boot instance, then shutdown -h now in your script will terminate the instance, provided you specify an instance-initiated-shutdown-behavior of terminate.
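Putting those pieces together, a hedged sketch of such a user-data script; the S3 location and script name are illustrative assumptions, and the block below only writes the file locally so it could be passed along when launching the instance (e.g. via --user-data file://nightly-job.sh):

```shell
# Write the nightly job's user-data to a local file (paths are hypothetical).
cat > nightly-job.sh <<'EOF'
#!/bin/bash
# Fetch and run the Ruby job, then power off. With
# instance-initiated-shutdown-behavior set to terminate,
# this shutdown terminates the instance.
aws s3 cp s3://my-bucket/job.rb /tmp/job.rb
ruby /tmp/job.rb
shutdown -h now
EOF
chmod +x nightly-job.sh
```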