Nomad job using exec fails when running any bash command

I have tried everything and I just can't get an exec-type job to run. I tried it on 3 different clusters and it fails on all of them.
The job prunes Docker containers by simply running docker system prune -a.
This is the config section:
driver = "exec"
config {
command = "bash"
args = ["-c",
" docker system prune -a "]
}
There are no logs, and containers are not pruned. Here is the full job:
job "docker-cleanup" {
type = "system"
constraint {
attribute = "${attr.kernel.name}"
operator = "="
value = "linux"
}
datacenters = ["dc1"]
group "docker-cleanup" {
restart {
interval = "24h"
attempts = 0
mode = "fail"
}
task "docker-system-prune" {
driver = "exec"
config {
command = "bash"
args = ["-c",
" docker system prune -a "]
}
resources {
cpu = 100
memory = 50
network {
mbits = 1
}
}
}
}
}
What am I doing wrong?

I would suggest providing the job's output to make this easier to analyze.
One thing you can try is adding the full path to the bash executable:
driver = "exec"
config {
command = "/bin/bash"
args = ["-c",
" docker system prune -a "]
}
Also, you are missing the --force flag on system prune; without it, docker system prune asks for confirmation, which a non-interactive task can never answer:
docker system prune --all --force
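For reference, this is how the two invocations behave in a terminal (prompt abbreviated):

# Without --force, prune stops at an interactive confirmation prompt:
docker system prune -a
# WARNING! This will remove ... Are you sure you want to continue? [y/N]

# With --force (or -f), the prompt is skipped, which is what a
# non-interactive Nomad task needs:
docker system prune -a --force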

Alternatively, you can drop the shell wrapper entirely and invoke docker directly; in that case each argument must be provided separately:
driver = "exec"
config {
  command = "docker"
  args    = ["system", "prune", "-a", "--force"]
}

Related

Templatefile and Bash script

I need to be able to run a bash script as user data for a launch template, and this is how I am trying to do it:
resource "aws_launch_template" "ec2_launch_template" {
name = "ec2_launch_template"
image_id = data.aws_ami.latest_airbus_ami.id
instance_type = var.instance_type[terraform.workspace]
iam_instance_profile {
name = aws_iam_instance_profile.ec2_profile.name
}
vpc_security_group_ids = [data.aws_security_group.default-sg.id, aws_security_group.allow-local.id] # the second parameter should be according to the user
monitoring {
enabled = true
}
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 30
encrypted = true
volume_type = "standard"
}
}
tags = {
Name = "${var.app_name}-${terraform.workspace}-ec2-launch-template"
}
#user_data = base64encode(file("${path.module}/${terraform.workspace}-script.sh")) # change the base encoder as well
user_data = base64encode(templatefile("${path.module}/script.sh", {app_name = var.app_name, env = terraform.workspace, high_threshold = var.high_threshold, low_threshold = var.low_threshold})) # change the base encoder as well
}
As you can see, I pass the parameters as a map to the templatefile function, and I retrieve them like this:
#!/bin/bash -xe
# Activate logs for everything
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
# Retrieve variables from Terraform
app_name = ${app_name}
environment = ${env}
max_memory_perc= ${high_threshold}
min_memory_perc= ${low_threshold}
instance_id=$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)
ami_id=$(wget -q -O - http://169.254.169.254/latest/meta-data/ami-id)
instance_type=$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-type)
scale_up_name=$${app_name}"-"$${environment}"-scale-up"
scale_down_name=$${app_name}"-"$${environment}"-scale-down"
Then, when I look at the launch template in the AWS console, I can see that the parameter values are filled in:
app_name = test-app
environment = prod
max_memory_perc= 80
min_memory_perc= 40
The problem I have is that when I run it, I get this error:
+ app_name = test-app
/var/lib/cloud/instances/scripts/part-001: line 7: app_name: command not found
I assume there is a problem with how the script is interpreted, but I can't put my finger on it.
Any ideas?
Thanks
As others pointed out, it was a problem with the spaces around = in the assignments; it's fixed now.
Thanks!
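For future readers: bash parses app_name = test-app as a command named app_name with the arguments = and test-app, hence the "command not found" error. Shell assignments must have no spaces around =. A minimal sketch of the corrected template lines:

#!/bin/bash
# No spaces around '=': these are shell assignments, and the ${...}
# placeholders are rendered by Terraform's templatefile before boot.
app_name="${app_name}"
environment="${env}"
max_memory_perc="${high_threshold}"
min_memory_perc="${low_threshold}"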

Jenkins build output set as current build description

I want to set the current build description in my Jenkins job from bash output.
For example, to set the revision and branch from string and choice parameters, I do it like this:
parameters {
  string(defaultValue: "", description: '11.00', name: 'REVISION')
  choice(name: 'BRANCH', choices: 'trunk\nupdate', description: 'Branch')
}

stage('Set build') {
  steps {
    script {
      // Set build parameters
      currentBuild.description = "$REVISION $BRANCH"
    }
  }
}
Let's say I want to get my disk-usage percentage from a bash execution and put it in the description:
stage('bash') {
  steps {
    script {
      sh '''
        DISK_SIZE="$(df -h --output='pcent' /mnt | grep -v "Use%")"
      '''
      currentBuild.description = "$DISK_SIZE"
    }
  }
}
For example, I want the build description to show my disk usage; in this case I would expect it to read 30%.
Or to put other stuff generated by the current build there.
You can tell your sh command to return its stdout using the returnStdout option.
myOutput = sh(script: 'df -h --output=pcent /mnt | grep -v "Use%"', returnStdout: true).trim()
currentBuild.description = myOutput
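If you want just the bare percentage with no padding, you can also strip the whitespace on the shell side (a sketch; adjust the mount point to yours):

# df prints a "Use%" header line plus a space-padded value; grep drops
# the header and tr removes the padding, leaving e.g. "30%"
df -h --output=pcent /mnt | grep -v "Use%" | tr -d ' '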

Env variable value got reset to original even after assigning the pom version number in jenkins script

I have a scenario where I have to read the Maven POM versions of different components and assign each version to a Docker image tag. But after I read the POM and assign the version to a global variable, it resets to its original value later in the Groovy Jenkins script. Below is a sample: HMAP_VERSION's value will be 1.2.1, but by the time it is used around the line sh "docker login -u ${ART_USERNAME} -p ${ART_PASSWORD} test.com", the value will be UNINITIALISED.
Can somebody tell me what might have gone wrong? This works when a single Maven file is read in the environment block, as below:
environment {
  CLOADER_VERSION = readMavenPom().getVersion()
}
Below is a sample of what I'm trying to do.
#! groovy
environment {
  HMAP_VERSION = "UNINITIALISED"
  CLOADER_VERSION = "UNINITIALISED"
}
stages {
  stage('Build Cloader') {
    steps {
      checkout([$class: 'GitSCM' "rest is removed")
      dir('isa-casloader') {
        script {
          CLOADER_VERSION = readMavenPom().getVersion()
        }
        container('build') {
          sh '/opt/apache-maven/bin/mvn -s settings.xml -B clean install -DskipTests=true'
        }
      }
    }
  }
  stage('Build Casloader Docker Image') {
    steps {
      dir('isa-casloader') {
        container('tools') {
          echo("CLOADER_VERSION=${CLOADER_VERSION}")
          withCredentials() {
            sh "docker login -u ${ART_USERNAME} -p ${ART_PASSWORD} testing.com"
            sh 'docker build -t testing.com:${CLOADER_VERSION} .'
            sh 'docker push testing.com:${CLOADER_VERSION}'
          }
        }
      }
    }
  }
  stage('Build Heat Map Docker Image') {
    steps {
      checkout([$class: 'GitSCM', "rest is commented"])
      dir('apps') {
        container('tools') {
          script {
            def pom = readMavenPom file: 'pom-docker.xml'
            HMAP_VERSION = pom.version
          }
          echo("HMAP_VERSION=${HMAP_VERSION}")
          withCredentials() {
            sh "docker login -u ${ART_USERNAME} -p ${ART_PASSWORD} test.com"
            sh 'docker build -t test.com:${HMAP_VERSION} .'
            sh 'docker push test.com:${HMAP_VERSION}'
          }
        }
      }
    }
  }
}
By my read of your code, you're mixing environment variables with variables within the Groovy context.
These lines create environment variables, which are accessible in the shell as $HMAP_VERSION and $CLOADER_VERSION:
environment {
  HMAP_VERSION = "UNINITIALISED"
  CLOADER_VERSION = "UNINITIALISED"
}
However, you're populating a Groovy variable here:
script {
  CLOADER_VERSION = readMavenPom().getVersion()
}
To populate the environment variable instead, you'd want to use env.CLOADER_VERSION.
This determines which context the variables are evaluated in when you call out to the shell using the sh step:
1-> sh "docker login -u ${ART_USERNAME} -p ${ART_PASSWORD} testing.com"
2-> sh 'docker build -t testing.com:${CLOADER_VERSION} .'
3-> sh 'docker push testing.com:${CLOADER_VERSION}'
In line 1 above, the command is quoted using double quotes ("), which means the variables ART_USERNAME and ART_PASSWORD are evaluated in the context of the Groovy script.
However, in lines 2 and 3 the commands are quoted using single quotes ('), which means those variables are evaluated by the shell (likely /bin/sh) and therefore use the values from the environment.
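You can reproduce the effect in a plain shell: a single-quoted Jenkins string reaches /bin/sh verbatim, so the expansion uses whatever happens to be in the environment (a sketch with hypothetical values):

# Simulate the stale value set by the environment {} block
export CLOADER_VERSION=UNINITIALISED
# This is effectively what sh 'docker build -t testing.com:${CLOADER_VERSION} .' runs:
/bin/sh -c 'echo docker build -t testing.com:${CLOADER_VERSION} .'
# -> docker build -t testing.com:UNINITIALISED .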
The easiest fix would be to ensure that values you want exposed in the shell are always accessed using the env. prefix in the Groovy context:
// set environment for CLOADER_VERSION
env.CLOADER_VERSION = readMavenPom().getVersion()
// print value of environment variable CLOADER_VERSION
echo("CLOADER_VERSION=${env.CLOADER_VERSION}")
// set environment for HMAP_VERSION
env.HMAP_VERSION = pom.version
// print value of environment variable HMAP_VERSION
echo("HMAP_VERSION=${env.HMAP_VERSION}")
Cheers.
Thanks for the response. My issue got resolved. In the Docker section shown below,
withCredentials() {
  sh "docker login -u ${ART_USERNAME} -p ${ART_PASSWORD} testing.com"
  sh 'docker build -t testing.com:${CLOADER_VERSION} .'
  sh 'docker push testing.com:${CLOADER_VERSION}'
}
the login command inside double quotes was fine, but the next statements were in single quotes, so the variables' latest values were not being resolved. When I changed those statements to double quotes, it worked!
Below is the proper version:
withCredentials() {
  sh "docker login -u ${ART_USERNAME} -p ${ART_PASSWORD} testing.com"
  sh "docker build -t testing.com:${CLOADER_VERSION} ."
  sh "docker push testing.com:${CLOADER_VERSION}"
}
Thank you.

Issue using Terraform EC2 Userdata

I am deploying a number of EC2 instances that require a mount called /data on a separate disk, which I attach using a volume attachment in AWS.
When I run the following manually it works fine, so the script itself is sound; however, when it runs via user data, the mkfs command never happens.
Here is my Terraform config:
resource "aws_instance" "riak" {
count = 5
ami = "${var.aws_ami}"
vpc_security_group_ids = ["${aws_security_group.bastion01_sg.id}","${aws_security_group.riak_sg.id}","${aws_security_group.outbound_access_sg.id}"]
subnet_id = "${element(module.vpc.database_subnets, 0)}"
instance_type = "m4.xlarge"
tags {
Name = "x_riak_${count.index}"
Role = "riak"
}
root_block_device {
volume_size = 20
}
user_data = "${file("datapartition.sh")}"
}
resource "aws_volume_attachment" "riak_data" {
count = 5
device_name = "/dev/sdh"
volume_id = "${element(aws_ebs_volume.riak_data.*.id, count.index)}"
instance_id = "${element(aws_instance.riak.*.id, count.index)}"
}
And then the partition script is as follows:
#!/bin/bash
if [ ! -d /data ]; then
  mkdir /data
fi
/sbin/mkfs -t ext4 /dev/xvdh
while [ -e /dev/xvdh ]; do sleep 1; done
mount /dev/xvdh /data
echo "/dev/xvdh /data ext4 defaults 0 2" >> /etc/fstab
Now, when I apply this with Terraform, the mkfs doesn't appear to happen, and I see no obvious errors in the syslog. If I copy the script over manually and just run bash script.sh, the mount is created and works as expected.
Does anyone have any suggestions?
Edit: it's worth noting that adding the same script under user data in the AWS GUI also works fine.
You could try remote-exec instead of user_data.
user_data relies on cloud-init, which can behave differently depending on your cloud provider's images.
Also, I'm not sure it's a good idea to have a script sit and wait during the cloud-init phase: depending on your cloud provider, the VM may be considered failed to launch because of a timeout.
remote-exec may serve you better here, because you can wait until /dev/xvdh is actually attached before running the script.
See here:
resource "aws_instance" "web" {
# ...
provisioner "file" {
source = "script.sh"
destination = "/tmp/script.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/script.sh",
"/tmp/script.sh args",
]
}
}
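If you keep the script, note that its wait loop is inverted: [ -e /dev/xvdh ] is true once the device exists, so after attachment the script sleeps forever and never reaches the mount. A sketch of a wait-then-format version suited to remote-exec (the blkid guard is my addition, to avoid reformatting a disk that already holds data):

#!/bin/bash
# Create the mount point
mkdir -p /data
# Wait until the EBS attachment shows up as a device node
while [ ! -e /dev/xvdh ]; do sleep 1; done
# Format only if the device has no filesystem yet (mkfs would wipe it)
if ! blkid /dev/xvdh >/dev/null 2>&1; then
  /sbin/mkfs -t ext4 /dev/xvdh
fi
mount /dev/xvdh /data
echo "/dev/xvdh /data ext4 defaults 0 2" >> /etc/fstab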

Executing shell script and using its output as input to next Gradle task

I am using Gradle for build and release, and my Gradle script executes a shell script. The shell script outputs an IP address, which has to be provided as input to my next Gradle SSH task. I am able to get the output and print it on the console, but I am not able to use it as input to the next task.
remotes {
  web01 {
    def ip = exec {
      commandLine './returnid.sh'
    }
    println ip // I am able to see the IP address on the console
    role 'webServers'
    host = ip // I tried referring to it as $ip and '$ip'; both result in syntax errors
    user = 'ubuntu'
    password = 'ubuntu'
  }
}

task checkWebServers1 << {
  ssh.run {
    session(remotes.web01) {
      execute 'mkdir -p /home/ubuntu/abc3'
    }
  }
}
but it results in this error:
What went wrong:
Execution failed for task ':checkWebServers1'.
java.net.UnknownHostException: {exitValue=0, failure=null}
Can anyone please help me use the output variable with the proper syntax, or provide some hints?
Thanks in advance
The reason it's not working is that the exec call returns an ExecResult (here is its JavaDoc description), not the text output of the execution.
If you need the text output, you have to capture the standardOutput of the exec task. This can be done like so:
remotes {
  web01 {
    def ip = new ByteArrayOutputStream()
    exec {
      commandLine './returnid.sh'
      standardOutput = ip
    }
    println ip
    role 'webServers'
    host = ip.toString().split("\n")[2].trim()
    user = 'ubuntu'
    password = 'ubuntu'
  }
}
Just note that the captured ip value may be multiline (on my Windows machine it includes the command itself), so it has to be parsed to get the address. That is what ip.toString().split("\n")[2].trim() does: it takes the third line of the output and strips the surrounding whitespace. Adjust the index to wherever your script prints the address.
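For completeness, a hypothetical returnid.sh that prints nothing but the address would keep the Gradle side trivial (ip.toString().trim() with no line indexing); this script is an assumption, not from the original question:

#!/bin/bash
# Hypothetical returnid.sh: emit a single line containing only the
# address (here, the machine's first IP), so the caller needs no parsing.
hostname -I | awk '{print $1}'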
