How to configure docker swarm using jenkins? - shell

I have got an assignment. The assignment is "Write a shell script to install and configure docker swarm(one master/leader and one node) and automate the process using Jenkins." I am new to this technology and finding it difficult to proceed. Can anyone help me in explaining step-by-step process of how to proceed?

#Rajnish Kumar Singh, have you tried to check resources online? I understand you are very new to this technology, but googling some keywords like
what is docker swarm
what is jenkins, etc. would definitely help.
Having said that, you basically need to do the below set of steps to complete your assignment.
Pre-requisites
2 or more Ubuntu 20.04 servers
(You can use any Linux distro like Ubuntu, Red Hat etc., but make sure your install and execute commands change accordingly.
Here we need two nodes, mainly to configure the master and worker node cluster.)
E.g.:
manager --- 132.92.41.4
worker --- 132.92.41.5
You can create these nodes with any public cloud provider, e.g. AWS EC2 instances or GCP VMs.
Next, you need to do the below set of steps:
Configure Hosts
Install Docker-ce
Docker Swarm Initialization
You can refer to this article for more info: https://www.howtoforge.com/tutorial/ubuntu-docker-swarm-cluster/
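For example, the swarm initialization step usually boils down to commands like these (a minimal sketch using the example IPs above; the join token below is a placeholder, since the real one is printed by the init command):
On the manager (132.92.41.4):
docker swarm init --advertise-addr 132.92.41.4
On the worker (132.92.41.5), run the join command printed by the init step, e.g.:
docker swarm join --token <TOKEN_PRINTED_BY_INIT> 132.92.41.4:2377
Back on the manager, verify that both nodes are listed:
docker node ls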
This completes the first part of your assignment.
Next, you can create one small shell script and include all those install and configuration commands in it. Basically, a shell script is a collection of Linux commands: instead of running each command separately, you run the script alone and all the setup is done for you.
You can create the script using the touch command:
touch docker-swarm-install.sh
Set the proper permissions to make the script executable:
chmod +x docker-swarm-install.sh
Next, include in the script all the install + configure commands which you used earlier to do the Docker Swarm setup (you can refer to the link shared above); a rough sketch follows below.
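A rough sketch of such a script for the manager node could look like this (assuming Ubuntu; the install method and the advertise address are just one possible choice, so adapt them to the article and to your own IPs):
#!/bin/bash
set -e
# Install Docker CE using Docker's convenience script (one of several ways to get docker-ce)
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo systemctl enable --now docker
# Initialize the swarm on this (manager) node, advertising its own IP
sudo docker swarm init --advertise-addr 132.92.41.4
# Print the join command that the worker node (132.92.41.5) must run
sudo docker swarm join-token worker
The worker node would need a similar script that installs Docker the same way and then runs the printed docker swarm join command instead of the init.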
Now, when your script is ready, you can configure it in a Jenkins job; whenever the Jenkins job is run, the script gets executed and the Docker Swarm cluster is created.
You need a Jenkins server. Jenkins is open-source software; you can install it on any public cloud instance (e.g. AWS EC2).
Reference : https://devopsarticle.com/how-to-install-jenkins-on-aws-ec2-ubuntu-20-04/
Next, once the installation is completed, you need to configure a job in Jenkins.
Reference : https://www.toolsqa.com/jenkins/jenkins-build-jobs/
Add your 'docker-swarm-install.sh' as a build step in the created job (see the sketch below).
Reference : https://faun.pub/jenkins-jobs-hands-on-for-the-different-use-cases-devops-b153efb483c7
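For illustration only, the "Execute shell" build step in that job could look roughly like this (the user, IP and key path are placeholders; in practice you would store the key as a Jenkins credential or use an SSH agent):
# Copy the script to the manager node and run it there
scp -i /var/lib/jenkins/.ssh/id_rsa docker-swarm-install.sh ubuntu@132.92.41.4:/tmp/
ssh -i /var/lib/jenkins/.ssh/id_rsa ubuntu@132.92.41.4 'bash /tmp/docker-swarm-install.sh'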
If all the setup is successful, then when you run your Jenkins job your Docker Swarm cluster should get created.

Related

Run a PowerShell script on Azure AKS nodes

I have a PowerShell script that I want to run on some Azure AKS nodes (running Windows) to deploy a security tool. There is no daemon set for this by the software vendor. How would I get it done?
Thanks a million
Abdel
A similar question has been asked here. User philipwelz wrote:
Hey,
although there could be ways to do this, I would recommend that you don't. The reason is that your AKS setup should not allow executing scripts inside containers directly on AKS nodes. This would imply a huge security issue IMO.
I suggest finding a way to execute your script directly on your nodes, for example with PowerShell remoting or any other way that suits you.
BR,
Philip
This user is right. You should avoid executing scripts on your AKS nodes. In your situation, if you want to deploy Prisma Cloud, you need to go with the following doc. You are right that the install scripts work only on Linux:
Install scripts work on Linux hosts only.
But for the Windows and macOS software you have specific YAML files:
For macOS and Windows hosts, use twistcli to generate Defender DaemonSet YAML configuration files, and then deploy it with kubectl, as described in the following procedure.
The entire procedure is described in detail in the document I have quoted. Pay attention to step 3 and step 4. As you can see, there is no need to run any PowerShell script:
STEP 3:
Generate a defender.yaml file, where:
The following command connects to Console (specified in [--address](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/install/install_kubernetes.html#)) as user <ADMIN> (specified in [--user](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/install/install_kubernetes.html#)), and generates a Defender DaemonSet YAML config file according to the configuration options passed to [twistcli](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/install/install_kubernetes.html#). The [--cluster-address](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/install/install_kubernetes.html#) option specifies the address Defender uses to connect to Console.
$ <PLATFORM>/twistcli defender export kubernetes \
--user <ADMIN_USER> \
--address <PRISMA_CLOUD_COMPUTE_CONSOLE_URL> \
--cluster-address <PRISMA_CLOUD_COMPUTE_HOSTNAME>
- <PLATFORM> can be linux, osx, or windows.
- <ADMIN_USER> is the name of a Prisma Cloud user with the System Admin role.
and then STEP 4:
kubectl create -f ./defender.yaml
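As a quick sanity check (not part of the quoted procedure, just a common follow-up), you can confirm that the Defender DaemonSet and its pods were created; twistlock is the usual default namespace, but it may differ in your setup:
kubectl get daemonset -n twistlock
kubectl get pods -n twistlock -o wide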
I think that the above answer is not completely correct.
The twistcli command does not export a DaemonSet for Windows nodes. The "PLATFORM" option is for choosing the OS of the computer on which the command will run.
After testing, I have come to the conclusion that there is no Docker image of Prisma Cloud for Windows Kubernetes nodes, as on Windows it is deployed as an OS service and not as a container (as on Linux). Wrapping up, the DaemonSet does not work on Windows hosts.
I believe the only solution is this -> Windows
This is the PowerShell script that Wytrzymały Wiktor mentioned.
Unfortunately this cannot be automated easily, as you have to deploy an Azure VM per AKS cluster (on the same network), RDP to the AKS Windows node, and run the script.
If anyone has another suggestion or solution, feel free to share.

EC2 user-data not starting my application

I am using the user data of an EC2 instance to power up my auto scaling instances and run the application. I am running a Node.js application.
But it is not working properly. I have debugged it and checked the instance's cloud monitor output, which says:
pm2 command not found
After reading and investigating a lot, I have found that the command is not on root's PATH.
When the EC2 user data runs, it finds the PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
After ssh as ec2-user it is
/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/ec2-user/.local/bin:/home/ec2-user/bin
After ssh and sudo su it is
/root/.nvm/versions/node/v10.15.3/bin:/sbin:/bin:/usr/sbin:/usr/bin
It only works with the last PATH.
So what is the way, or the script, to run the command as root during launch of the instance via user data?
Starting your application with user data is not recommended, because as per the AWS documentation there is no assurance that the instance will only come up after successful execution of the user data; even if the user data fails, the instance will still spin up.
For your problem, I assume that if you give the complete absolute path of the binary, it will work:
/root/.nvm/versions/node/v10.15.3/bin/pm2
A better solution to this approach: create a service file for your application startup and start the application with systemd or service.
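For example, the user-data script could reference the binary by its absolute path (a rough sketch; the node version comes from the paths in the question, and /opt/myapp/app.js is a placeholder for your actual entry point):
#!/bin/bash
# pm2 lives under root's nvm install, so it is not on the default user-data PATH
NODE_BIN=/root/.nvm/versions/node/v10.15.3/bin
export PATH="$NODE_BIN:$PATH"
# Start the Node.js application; replace the path and name with your own
"$NODE_BIN/pm2" start /opt/myapp/app.js --name myapp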

Run build of Machine A from Machine B using Jenkins

I have 2 computers: Com-A and Com-B.
I have a build automation functional script using Selenium WebDriver, TestNG and Maven on Com-A.
Com-A has everything installed along with Jenkins, but Com-B has only Jenkins. Can I run the build which is deployed on Com-A from Com-B? Or will I have to install all the necessary software on Com-B as well?
Your current setup is good enough to kick off the build remotely from Com-B on Com-A.
Please make sure the Jenkins server deployed on Com-B is properly configured as the master and the other nodes (e.g. Com-A) as slaves.
To ensure the configuration is correct, please follow the steps given below:
Step 1: Go to the Manage Jenkins page and select the Manage Nodes link.
Step 2: On the Manage Nodes page, you can see a list of nodes if any are already configured. Otherwise there will only be one node, named Master by default, which represents the host.
Step 3: To add a new node, give it a name (e.g. selenium-slave1) and select the Dumb Slave option to add the node as a customized slave.
Step 4(a): After adding the node, configure it as shown below.
Step 4(b): While setting the Launch Method field, make sure Launch slave agents on Unix machines via ssh is selected (master and slave nodes will communicate via SSH; see the sketch after these steps).
Step 4(c): Configure the highlighted advanced fields as per your settings and click Save.
Step 5: Finally, the new node has been added as a slave and configured successfully.
Step 6(a): Now configure a new job and schedule it whenever it needs to run.
Step 6(b): Add a new Maven job, since your project is configured using Maven.
Note: Will add the job config soon.
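The SSH launch method assumes the master (Com-B) can log in to the slave (Com-A) over SSH and that a Java runtime is available there. A rough sketch of preparing a Unix-like Com-A (the user name, package name and key are placeholders for your environment):
# On Com-A: create a user for the Jenkins agent and install a Java runtime
sudo useradd -m jenkins
sudo apt-get install -y openjdk-11-jre   # use the JRE package appropriate to your distro
# Authorize the master's SSH public key for that user
sudo mkdir -p /home/jenkins/.ssh
echo '<public key of the Jenkins master>' | sudo tee -a /home/jenkins/.ssh/authorized_keys
sudo chown -R jenkins:jenkins /home/jenkins/.ssh
sudo chmod 700 /home/jenkins/.ssh
sudo chmod 600 /home/jenkins/.ssh/authorized_keys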
You can make Com-A a slave machine in Jenkins. Com-B will become the master, and you can mark the build to always run on the Com-A node.
Refer https://wiki.jenkins-ci.org/display/JENKINS/Step+by+step+guide+to+set+up+master+and+slave+machines
First of all, you don't need to install everything on Com-B.
Connect to Com-A with the ssh (secure shell) command and then execute your project using a shell or bash script. In Jenkins, you will find all the build steps under the Build option.
I use the command below to run my project via a shell script:
ssh -l user 192.192.192.192 sh ScriptLocationInComA.sh
(Here 192.192.192.192 stands for Com-A's IP address and ScriptLocationInComA.sh for the path of the script on Com-A.) This command first connects to the other machine and then executes the shell script to run the project.
Running a Java project using a shell or bash script is quite easy..... :)

How do I make things (e.g. tomcat) run after cloud-init has run the userdata script?

Short version:
How do I make init.d scripts run after cloud-init has run the userdata script on an EC2 instance?
Long version:
Our deployment process is to construct AMIs with everything installed on them (tomcat, nginx, the application, etc.), but with certain configuration values missed out. At boot time, the userdata script adds in the missing configuration values, and then the application stack can start up.
Our current EC2s are based on an old version of the official Debian AMIs, which have the script ec2-run-user-data. This script runs at boot, and downloads and runs the EC2's userdata. When constructing the AMI, I simply edit the init.d scripts for tomcat, nginx etc. to include ec2-run-user-data in their "Required-Start:" line, so they start up after the userdata has been run.
Unfortunately that approach is no longer viable, as we want to start using the hvm base AMIs, which have cloud-init installed rather than ec2-run-user-data. But I can't figure out how cloud-init works well enough to work out how to make the process work.
As far as I can tell, the userdata script is run by the cloud-final step, but cloud-final has $all in its "Required-Start:" line. I could remove it, but I don't know what consequences that might have.
I've tried making tomcat etc. run after cloud-init or cloud-config, but the userdata hasn't run by then. Also, it looks like cloud-init and cloud-config start processes and then exit, which might explain why cloud-final needs to have $all in Required-Start.
More Info:
We use the 'baked AMI' approach, where we create an AMI with all the packages/applications installed, then tell the existing Autoscaling Groups to replace their EC2s with new ones based on the new AMI (via CloudFormation). Some configuration information isn't known at baking time, but must be inserted via the userdata script.
When our tomcat app starts up it expects to read in the file /etc/appname/application.conf. That file has the text <<REPLACE_TIME>> in it. Tomcat will fail to start up if it tries to run before <<REPLACE_TIME>> has been replaced.
The userdata script is something like:
#!/bin/bash
sed -i 's!<<REPLACE_TIME>>!{New value to use, determined at deploy time}!' /etc/appname/application.conf
The default Required-Start for tomcat is "$local_fs $remote_fs $network". At baking time, I change that to "$local_fs $remote_fs $network ec2-run-user-data"
By doing all that, the text in /etc/appname/application.conf gets replaced before tomcat runs. But as I said above, I want to change to using cloud-init, and I can't figure out what I need to do to make tomcat start after cloud-init has run the userdata. I get the impression that cloud-init doesn't run the userdata until very late in the process. I could change the userdata script to end with "/etc/init.d/tomcat restart", but it seems a bit dumb to have tomcat fail to start and then get restarted.
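For reference, the workaround mentioned in the last paragraph would look roughly like this (a sketch only; it still means tomcat fails its first start and is then restarted):
#!/bin/bash
# Fill in the deploy-time value, then bounce tomcat so it picks it up
sed -i 's!<<REPLACE_TIME>>!{New value to use, determined at deploy time}!' /etc/appname/application.conf
/etc/init.d/tomcat restart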

Smartfoxserver 2X linux 64 running on EC2 via dotcloud - how to install?

I am currently trying to deploy SmartFoxServer 2X on EC2 using dotCloud. I have been able to detect the private IP of the Amazon Web Services instance, and using the dotCloud tools I have been able to determine the correct port. However, I have difficulty installing the server itself via the command line so that I can log into it using the AdminTool.
My postinstall is fairly straightforward:
./SFS2X/sfs2x-service start-launchd
I find that on 'dotcloud push' there is a fair amount of promising output in my Cygwin terminal, but the push hangs after saying that the sfs2x-service has been launched correctly, until it times out.
Consequently, my question is: has anyone found a way to install SFS2X on EC2 via dotCloud successfully? I managed to have partial success with SFS Pro, with a complete push to dotCloud, by calling ./jre/bin/java -jar installer.jar in my postinstall. Do I need to do extra legwork and build an installer jar for SFS2X? What would be the best way to do this?
I do understand that there is a standard approach to deployment of SFS2X using RightScale on EC2, but I am interested in deployment on the dotCloud platform.
Thanks in advance.
The reason why it is hanging is that you are trying to start your process in the postinstall script, and that is not the correct place to do it. The postinstall script is supposed to finish; if it doesn't, the deployment will time out and then get cancelled.
Once the postinstall script has finished, the rest of your deployment will complete.
See this page for more information about dotCloud postinstall script:
http://docs.dotcloud.com/0.9/guides/hooks/#post-install
Pay attention to this warning at the end:
Warning:
If your post-install script returns an error (non-zero exit code), or if it runs for more than 10 minutes, the platform will consider that your build has failed, and the new version of your code will not be deployed.
Instead of putting this in the postinstall script, you should add it as a background process, so that it starts up once the deployment process is complete.
See this page for more information on adding background processes to dotCloud services:
http://docs.dotcloud.com/0.9/guides/daemons/
TL;DR: You need to create a supervisord.conf file, add it to the root of your project, and add your service to it.
Example (you will need to change it to fit your situation):
[program:smartfoxserver]
command = /home/dotcloud/current/SFS2X/sfs2x-service start-launchd
Also, make sure you have the correct dotCloud service specified in your dotcloud.yml so that the correct binary and libraries are installed for your smartfoxserver application.
