Post deployment bash script in Bicep file does not execute - bash

I want to deploy an Ubuntu VM on Azure and automatically execute a few lines of Bash code right after the VM is deployed. The Bash code is supposed to install PowerShell on the VM. To do this, I use this Bicep file. Below you can see an extract of that Bicep file where I specify what Bash code I want to be executed post deployment.
resource deploymentscript 'Microsoft.Compute/virtualMachines/runCommands@2022-08-01' = {
parent: virtualMachine
name: 'postDeploymentPSInstall'
location: location
properties: {
source: {
script: '''sudo apt-get update &&\
sudo apt-get install -y wget apt-transport-https software-properties-common &&\
wget -q "https://packages.microsoft.com/config/ubuntu/$(lsb_release -rs)/packages-microsoft-prod.deb" &&\
sudo dpkg -i packages-microsoft-prod.deb &&\
sudo apt-get update &&\
sudo apt-get install -y powershell &&\
pwsh'''
}
}
}
I searched for solutions on the web but only found conflicting explanations. I made the code above with the help of this tutorial. The only difference I see is that I'm using Bash and not PowerShell like the blog post author. Thanks for your help.

To deploy an Ubuntu VM on Azure and automatically execute a few lines of Bash code right after the VM is deployed:
I tried to create a Linux VM and used run command to install PowerShell inside the VM during deployment, and was able to achieve the desired results by running the Bicep file below.
@description('Name of the Network Security Group')
param networkSecurityGroupName string = 'SecGroupNet'
var publicIPAddressName = '${vmName}PublicIP'
var networkInterfaceName = '${vmName}NetInt'
var osDiskType = 'Standard_LRS'
var subnetAddressPrefix = '10.1.0.0/24'
var addressPrefix = '10.1.0.0/16'
var linuxConfiguration = {
disablePasswordAuthentication: true
ssh: {
publicKeys: [
{
path: '/home/${adminUsername}/.ssh/authorized_keys'
keyData: adminPassword
}
]
}
}
resource nic 'Microsoft.Network/networkInterfaces@2021-05-01' = {
name: networkInterfaceName
location: location
properties: {
ipConfigurations: [
{
name: 'ipconfig1'
properties: {
subnet: {
id: subnet.id
}
privateIPAllocationMethod: 'Dynamic'
publicIPAddress: {
id: publicIP.id
}
}
}
]
networkSecurityGroup: {
id: nsg.id
}
}
}
resource nsg 'Microsoft.Network/networkSecurityGroups@2021-05-01' = {
name: networkSecurityGroupName
location: location
properties: {
securityRules: [
{
name: 'SSH'
properties: {
priority: 1000
protocol: 'Tcp'
access: 'Allow'
direction: 'Inbound'
sourceAddressPrefix: '*'
sourcePortRange: '*'
destinationAddressPrefix: '*'
destinationPortRange: '22'
}
}
]
}
}
resource vnet 'Microsoft.Network/virtualNetworks@2021-05-01' = {
name: virtualNetworkName
location: location
properties: {
addressSpace: {
addressPrefixes: [
addressPrefix
]
}
}
}
resource subnet 'Microsoft.Network/virtualNetworks/subnets@2021-05-01' = {
parent: vnet
name: subnetName
properties: {
addressPrefix: subnetAddressPrefix
privateEndpointNetworkPolicies: 'Enabled'
privateLinkServiceNetworkPolicies: 'Enabled'
}
}
resource publicIP 'Microsoft.Network/publicIPAddresses@2021-05-01' = {
name: publicIPAddressName
location: location
sku: {
name: 'Basic'
}
properties: {
publicIPAllocationMethod: 'Dynamic'
publicIPAddressVersion: 'IPv4'
dnsSettings: {
domainNameLabel: dnsLabelPrefix
}
idleTimeoutInMinutes: 4
}
}
resource vm 'Microsoft.Compute/virtualMachines@2021-11-01' = {
name: vmName
location: location
properties: {
hardwareProfile: {
vmSize: vmSize
}
storageProfile: {
osDisk: {
createOption: 'FromImage'
managedDisk: {
storageAccountType: osDiskType
}
}
imageReference: {
publisher: 'Canonical'
offer: 'UbuntuServer'
sku: ubuntuOSVersion
version: 'latest'
}
}
networkProfile: {
networkInterfaces: [
{
id: nic.id
}
]
}
osProfile: {
computerName: vmName
adminUsername: adminUsername
adminPassword: adminPassword
linuxConfiguration: ((authenticationType == 'password') ? null : linuxConfiguration)
}
}
}
resource deploymentscript 'Microsoft.Compute/virtualMachines/runCommands@2022-03-01' = {
parent: vm
name: 'linuxscript'
location: location
properties: {
source: {
script: '''# Update the list of packages
sudo apt-get update;
#Install pre-requisite packages.
sudo apt-get install -y wget apt-transport-https software-properties-common;
#Download the Microsoft repository GPG keys
wget -q "https://packages.microsoft.com/config/ubuntu/$(lsb_release -rs)/packages-microsoft-prod.deb";
#Register the Microsoft repository GPG keys
sudo dpkg -i packages-microsoft-prod.deb;
#Update the list of packages after we added packages.microsoft.com
sudo apt-get update;
#Install PowerShell
sudo apt-get install -y powershell;
#Start PowerShell
pwsh'''
}
}
}
output adminUsername string = adminUsername
output hostname string = publicIP.properties.dnsSettings.fqdn
output sshCommand string = 'ssh ${adminUsername}@${publicIP.properties.dnsSettings.fqdn}'
Deployed Successfully:
From Azure Portal:
After the deployment, I ssh'd into my VM and ran pwsh to check if PowerShell was installed:
Installed successfully:
Refer to MSDoc, run command template-MSDoc

Your problem is misunderstanding what the && does.
The shell will attempt to run semi-simultaneously all parts, some possibly clobbering others or not having necessary preconditions in place before starting!
Replace all instances of "&&\" with ";\" and your script should work, meaning the commands will run sequentially, waiting for the previous line to complete before attempting the subsequent lines.
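
How the two separators discussed in this thread behave when one step fails, as an added example (not part of the original posts). The step names are constructed stand-ins for the apt-get, wget and dpkg commands, and the second one is made to fail.

```shell
#!/bin/sh
# Constructed stand-ins for the provisioning steps (no packages are touched).
# Each step prints its name; "register-repo" fails, as a bad download would.
step() {
    printf '%s ' "$1"
    [ "$1" != "register-repo" ]
}

# '&&' chain: steps run one after another; the first failure stops the chain
# and its non-zero status becomes the status of the whole script.
chain_and() (
    set +e   # default shell behaviour: no errexit
    step update && step register-repo && step install-powershell
    echo "exit=$?"
)

# ';' chain: steps also run one after another, but a failure is ignored and
# only the status of the last command is reported.
chain_semicolon() (
    set +e   # default shell behaviour: no errexit
    step update; step register-repo; step install-powershell
    echo "exit=$?"
)

chain_and        # update register-repo exit=1
chain_semicolon  # update register-repo install-powershell exit=0
```

With `;` only the last command's status is reported, so a failed repository step goes unnoticed unless the script checks each step or uses `set -e`.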

Related

Permission denied error while running Cypress UI automation scripts with Jenkins in docker linux containers

We have a UI automation script created using Cypress/JavaScript. Scripts run perfectly fine on the local machine. We created a Jenkins job and are trying to run scripts in linux docker container. Pls see below Jenkinsfile for same.
@Library('jenkins-shared-libraries@v2') _
pipeline {
agent {
kubernetes {
yaml podYamlLinux(
customContainerYamls: [
'''
- name: nodeimg
image: node
options: --user 1001
imagePullPolicy: Always
resources:
requests:
memory: "100Mi"
cpu: "100m"
limits:
memory: "1Gi"
cpu: "500m"
tty: true
command:
- cat
securityContext:
privileged: true
'''
]
)
}}
stages {
// Install and verify Cypress
stage('installation') {
steps {
container('nodeimg') {
sh 'npm i'
sh 'npm install cypress --save-dev'
}
}
}
stage('Cypress Test') {
steps {
echo "Running Tests"
container('nodeimg') {
sh 'npm run cypressVerify'
}
}
}
}
post {
// shutdown the server running in the background
always {
echo 'Stopping local server'
publishHTML([allowMissing: false, alwaysLinkToLastBuild: false, keepAll: true,
reportDir: 'cypress/report', reportFiles: 'index.html', reportName: 'HTML Report', reportTitles: ''])
}
}
}
I have attached pic of package.json file as well.
I have tried different configurations to resolve this but currently getting error message "/tmp/cypressVerify-b91242e3.sh: 1: cypress: Permission denied script returned exit code 126".
It would be great if community help me resolve this.

Run a set of linux commands using Jenkinsfile in Jenkins

I had to create a Jenkins job to automate certain tasks that perform operations like updating the public site, changing the public version to the latest public release, updating software on the public site and restarting the server. These include operations such as copying files to a tmp folder, logging in to an on-prem server, going to the folder and unzipping the file, etc.
I have created the jenkinsfile as follows:
pipeline {
options {
skipDefaultCheckout()
timestamps()
}
parameters {
string(name: 'filename', defaultValue: 'abc', description: 'Enter the file name that needs to be copied')
string(database: 'database', defaultValue: 'abc', description: 'Enter the database that needs to be created')
choice(name: 'Run', choices: '', description: 'Data migration')
}
agent {
node { label 'aws && build && linux && ubuntu' }
}
triggers {
pollSCM('H/5 * * * *')
}
stages {
stage('Clean & Clone') {
steps {
cleanWs()
checkout scm
}
}
stage('Updating the public site'){
steps{
sh "scp ./${filename}.zip <user>@<server name>:/tmp"
sh "ssh <user>@<server name>"
sh "cp ./tmp/${filename}.zip ./projects/xyz/xyz-site/"
sh "cd ./projects/xyz/xyz-site/ "
sh "unzip ./${filename}.zip"
sh "cp -R ./${filename}/* ./"
}
stage('Changing public version to latest public release') {
steps {
sh "scp ./${filename}.sql.gz <user>@<server name>:/tmp"
sh "ssh <user>@<server name>"
sh "mysql -u root -p<PASSWORD>"
sh "show databases;"
sh "create database ${params.database};"
sh "GRANT ALL PRIVILEGES ON <newdb>.* TO 'ixxyz'@'localhost' WITH GRANT OPTION;"
sh "exit;"
sh "zcat tmp/${filename}.sql.gz | mysql -u root -p<PASSWORD> <newdb>"
sh "db.default.url="jdbc:mysql://localhost:3306/<newdb>""
sh "ps aux|grep monitor.sh|awk '{print "kill "$2}' |bash"
}
}
stage('Updating Software on public site') {
steps {
sh "scp <user>@<server>:/tmp/abc<version>_empty_h2.zip"
sh "ssh <user>@<server name>"
sh "su <user>"
sh "mv tmp/<version>_empty_h2.zip ./xyz/projects/xyz"
sh "cd xyz/projects/xyz"
sh "cp latest/conf/local.conf <version>_empty_h2/conf/"
}
}
stage('Restarting Server') {
steps {
sh "rm latest/RUNNING_PID"
sh "bash reload.sh"
sh "nohup bash monitor.sh &"
}
}
}
}
Is there a way I can dynamically obtain the zip filename in the root folder? I used ${filename}.zip , but it doesn't seem to work.
Also, is there a better way to perform these operations using jenkins? Any help is much appreciated.
You could write all your steps in one shell script for each stage and execute under one stage.
Regarding filename.zip: either you can take this as a parameter and pass this value to your stages, or you can also use the find command as a shell command or shell script to find .zip files in the current directory:
find <dir> -iname \*.zip
find . -iname \*.zip
Example:
pipeline {
options {
skipDefaultCheckout()
timestamps()
}
parameters {
string(name: 'filename', defaultValue: 'abc', description: 'Enter the file name that needs to be copied')
choice(name: 'Run', choices: '', description: 'Data migration')
}
stage('Updating the public site'){
steps{
sh "scp ./${params.filename}.zip <user>@<server name>:/tmp"
...
}
}
}
For executing a script at a certain location, based on your question, you could use dir with the path where your scripts are placed.
Or you can also give the path directly: sh label: 'execute script', script:"C:\\Data\\build.sh"
stage('Your stage name'){
steps{
script {
// Give path where your scripts are placed
dir ("C:\\Data") {
sh label: 'execute script', script:"build.sh <Your Arguments> "
...
}
}
}
}

How to run ansible script from terraform

Hi, below is my requirement.
Using a Terraform script I create a Linux VM, and after that I install some software using an Ansible playbook. I have these scripts separately and they work fine.
What I want to do is that as soon as the Terraform script creates the VM, I want to invoke the Ansible script from the Terraform script and install the software from the Ansible script.
I tried the below code but it did not work:
provisioner "remote-exec" {
inline = ["sudo dnf -y install python"]
connection {
type = "ssh"
user = "fedora"
private_key = "${file(var.ssh_key_private)}"
}
}
provisioner "local-exec" {
command = "ansible-playbook -u fedora -i '${self.public_ip},' --private-key ${var.ssh_key_private} provision.yml"
}
So here I am not sure how Ansible gets installed in the VM (currently I am doing it manually) and how this Ansible script will get invoked from Terraform.
Error: Unknown root level key: provisioner
provisioner can only be used within a resource, e.g.:
resource "aws_instance" "web" {
# ...
provisioner "remote-exec" {
inline = ["sudo dnf -y install python"]
connection {
type = "ssh"
user = "fedora"
private_key = "${file(var.ssh_key_private)}"
}
}
provisioner "local-exec" {
command = "ansible-playbook -u fedora -i '${self.public_ip},' --private-key ${var.ssh_key_private} provision.yml"
}
}

Jenkins/ MacOS - dial unix /var/run/docker.sock: connect:permission denied

I am new to using Jenkins and Docker. Currently I ran into an error where my Jenkinsfile doesn't have permission to docker.sock. Is there a way to fix this? Dried out of ideas.
Things I've tried:
-sudo usermod -aG docker $USER //usermod not found
-sudo setfacl --modify user:******:rw /var/run/docker.sock //setfacl not found
-chmod 777 /var/run/docker.sock //still receiving this error after reboot
-chown -R jenkins:jenkins /var/run/docker.sock //changing ownership of '/var/run/docker.sock': Operation not permitted
error image:
def gv
pipeline {
agent any
environment {
CI = 'true'
VERSION = "$BUILD_NUMBER"
PROJECT = "foodcore"
IMAGE = "$PROJECT:$VERSION"
}
tools {
nodejs "node"
'org.jenkinsci.plugins.docker.commons.tools.DockerTool' 'docker'
}
parameters {
choice(name: 'VERSION', choices: ['1.1.0', '1.2.0', '1.3.0'], description: '')
booleanParam(name: 'executeTests', defaultValue: true, description: '')
}
stages {
stage("init") {
steps {
script {
gv = load "script.groovy"
CODE_CHANGES = gv.getGitChanges()
}
}
}
stage("build frontend") {
steps {
dir("client") {
sh 'npm install'
echo 'building client'
}
}
}
stage("build backend") {
steps {
dir("server") {
sh 'npm install'
echo 'building server...'
}
}
}
stage("build docker image") {
steps {
sh 'docker build -t $IMAGE .'
}
}
// stage("deploy") {
// steps {
// script {
// docker.withRegistry(ECURL, ECRCRED) {
// docker.image(IMAGE).push()
// }
// }
// }
// }
}
// post {
// always {
// sh "docker rmi $IMAGE | true"
// }
// }
}
docker.sock permissions will be lost if you restart the system or the Docker service.
To make it persistent, set up a cron to change ownership after each reboot:
@reboot chmod 777 /var/run/docker.sock
and when you restart Docker, make sure to run the below command:
chmod 777 /var/run/docker.sock
Or you can also put a cron for it, which will execute every 5 minutes.
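
To see what the permission changes discussed in this thread grant, the following added example (not part of the original posts) reports which class of local users can write to a socket path. Write access to the Docker socket lets a user control the Docker daemon, so mode 777 extends that to every local account; the function names are hypothetical and the demo uses a constructed temporary file in place of /var/run/docker.sock.

```shell
#!/bin/sh
# Added example: report who can write to a Unix socket such as /var/run/docker.sock.
# Function names are hypothetical; the demo below uses a constructed temp file.

# Print the octal permission bits of a path (GNU stat first, then BSD/macOS stat).
perm_bits() {
    stat -c '%a' "$1" 2>/dev/null || stat -f '%Lp' "$1" 2>/dev/null
}

# Echo the widest class of users that may write to the path.
# Returns 2 if the path cannot be inspected.
socket_write_scope() {
    mode=$(perm_bits "$1") || return 2
    # The leading 0 makes the shell read the mode as an octal constant.
    if [ $(( 0$mode & 2 )) -ne 0 ]; then
        echo "world-writable"      # e.g. 777 or 666: any local user
    elif [ $(( 0$mode & 16 )) -ne 0 ]; then
        echo "group-writable"      # e.g. 660: owner plus one group
    else
        echo "owner-only"
    fi
}

# Constructed stand-in for the socket, so the demo runs without Docker.
demo=$(mktemp)
chmod 777 "$demo"; echo "777 -> $(socket_write_scope "$demo")"
chmod 660 "$demo"; echo "660 -> $(socket_write_scope "$demo")"
rm -f "$demo"
```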

Terraform Remote Exec Host IP from EC2 resource built

Looooong time lurker and first time poster here o/
I am currently trying to build an AWS EC2 instance with an EBS block device attached, which then needs MongoDB installed.
So I have gone the route of building the EC2 instance and attaching the EBS volume, but the remote-exec I need to run on the instance needs a host IP to connect to, to run the MongoDB install commands.
It just keeps timing out on the SSH, no matter what I try. Now I am probably just missing a step or going about this the wrong way, but hopefully you can help.
Any help would be GREATLY appreciated. :D
Below is the code sample I have slapped together:
provider "aws" {
region = "eu-west-1"
access_key = "xxxxxxx"
secret_key = "xxxxxxxx"
}
resource "tls_private_key" "mongo" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "generated_key" {
key_name = "MongoKey"
public_key = "${tls_private_key.mongo.public_key_openssh}"
}
data "aws_ami" "ubuntu" {
most_recent = true
owners = ["099720109477"]
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
}
}
resource "aws_instance" "web" {
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t2.micro"
key_name = "MongoKey"
monitoring = true
associate_public_ip_address = true
root_block_device {
volume_size = 40
}
ebs_block_device {
volume_size = 100
device_name = "xvda"
}
tags = {
Name = "MongoDB"
}
provisioner "remote-exec" {
connection {
type = "ssh"
user = "ubuntu"
host = "MongoDB"
}
inline = [
"sudo apt-get install gnupg",
"wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | sudo apt-key add -",
"echo deb [ arch=amd64,arm64,s390x ] http://repo.mongodb.com/apt/ubuntu xenial/mongodb-enterprise/4.2 multiverse | sudo tee /etc/apt/sources.list.d/mongodb-enterprise.list",
"sudo apt-get update",
"sudo apt-get install -y mongodb-enterprise",
"sudo service mongod start",
"sudo service mongod status"
]
}
}
Can you try changing your connection section as below
connection {
type = "ssh"
user = "ubuntu"
host = "${aws_instance.web.private_ip}"
private_key = "${tls_private_key.mongo.private_key_pem}"
}
If you continue to face connection issues/difficulties with your remote-exec approach, I would recommend considering the user-data parameter as a replacement.
The user-data script will run during the initialization of your instance, so you do not have to open any SSH sessions to provision your resource.
You can accomplish this by updating your aws_instance resource to something like this:
resource "aws_instance" "web" {
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t2.micro"
key_name = "MongoKey"
monitoring = true
associate_public_ip_address = true
root_block_device {
volume_size = 40
}
ebs_block_device {
volume_size = 100
device_name = "xvda"
}
tags = {
Name = "MongoDB"
}
user_data = <<EOF
#!/bin/bash
sudo apt-get install gnupg
wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | sudo apt-key add -
echo deb [ arch=amd64,arm64,s390x ] http://repo.mongodb.com/apt/ubuntu xenial/mongodb-enterprise/4.2 multiverse | sudo tee /etc/apt/sources.list.d/mongodb-enterprise.list
sudo apt-get update
sudo apt-get install -y mongodb-enterprise
sudo service mongod start
sudo service mongod status
EOF
}
Hope this helps.
more userdata examples
