How should I run a mysql command with nomad and arguments?

I am interested in using Nomad to initialize a MySQL database on a Windows server that does not have Docker. I tried to create a job using the "exec" driver, the "mysql" command, and an args list with the host, username, password, etc.:
job "gavin-setup" {
datacenters = ["dc1"]
group "gavin-setup-group" {
task "setup" {
driver = "exec"
config = {
command = "mysql"
args = [
"-hlocalhost",
"-ugavin",
"-pgavin_secret",
"gavin_database",
"<",
"C:\\gavin\\config\\create.sql"
]
}
}
}
}
The arguments aren't being passed along. I also tried removing the args and just using the command, which did not work either:
mysql -hlocalhost -ugavin -pgavin_secret gavin_database < C:\\gavin\\config\\create.sql
Is it possible to do this kind of database initialization on a first-time run of the application? Should Nomad be capable of doing this? If not, should I be using some other process for this kind of setup?

I ended up using the full path to mysql.exe and discovered that nomad logs -stderr <alloc-id> was really helpful.
job "gavin-setup" {
datacenters = ["dc1"]
group "gavin-setup-group" {
task "create_db" {
driver = "exec"
config = {
command = "C:/Program Files (x86)/MySQL/MySQL Server 5.6/bin/mysql.exe"
args = [
"-uroot",
"-pgavin_secret",
"-e",
"create database if not exists gavin_db;"
]
}
}
}
}
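A note on the original attempt: the exec driver hands args straight to the binary with no shell in between, so the "<" redirection is never interpreted. If the goal is to load a whole SQL file rather than create a single database, a minimal sketch (assuming the same paths and credentials as above, and using the mysql client's built-in source command instead of shell redirection) could look like this:

job "gavin-setup" {
  datacenters = ["dc1"]

  group "gavin-setup-group" {
    task "load_schema" {
      driver = "exec"

      config = {
        command = "C:/Program Files (x86)/MySQL/MySQL Server 5.6/bin/mysql.exe"
        args = [
          "-uroot",
          "-pgavin_secret",
          "gavin_db",
          # "source" executes the file inside the mysql client, so no shell redirection is needed
          "-e",
          "source C:/gavin/config/create.sql"
        ]
      }
    }
  }
}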

Related

Jenkins function of shared library executed on wrong node

I have a simple shared library and want to call a function which creates a file.
My pipeline script looks like this:
@Library('jenkins-shared-library') _

pipeline {
  agent {
    node {
      label 'node1'
    }
  }
  stages {
    stage('test') {
      steps {
        script {
          def host = sh(script: 'hostname', returnStdout: true).trim()
          echo "Hostname is: ${host}"

          def testlib = new TestLibrary(script: this)
          testlib.createFile("/tmp/file1")
        }
      }
    }
  }
}
This pipeline job is triggered by another job which runs on the built-in master node.
The stage 'test' is correctly executed on 'node1'.
Problem: the file "/tmp/file1" is created on the Jenkins master instead of on "node1".
I also tried it without the shared library, loading a Groovy script directly in a step:
pipeline {
  agent {
    node {
      label 'node1'
    }
  }
  stages {
    stage('test') {
      steps {
        script {
          def script = load "/path/to/script.groovy"
          script.createFile("/tmp/file1")
        }
      }
    }
  }
}
This also creates the file on the master node instead of on "node1".
Is there no way of loading external libs or classes and executing them on the node where the stage is running? I don't want to place all the code directly into the step.
OK, I found it myself: Groovy pipeline code, including shared-library classes, by definition always runs on the master node, so there is no way to run scripts or shared-library code on a node other than master. Only pipeline steps called from that code (for example sh, bat, or writeFile) are executed on the agent the stage is running on.

Terraform template_cloudinit_config multiple part execution order is wrong

I am using Terraform to build my EC2 instances, and as part of the instance bootstrap I added a cloud-init config that runs multiple user-data scripts. But the content_type = "text/x-shellscript" part is always executed first. I verified this in the /var/log/cloud-init-output.log file; it shows the shell script is invoked first. How do I configure the shell script to run last?
data "template_cloudinit_config" "myapp_cloudinit_config" {
gzip = false
base64_encode = false
# Main cloud-config configuration file.
part {
content_type = "text/cloud-config"
content = "${data.template_file.base_bootstrap_file.rendered}"
merge_type = "list(append)+dict(recurse_array)+str()"
}
part {
content_type = "text/cloud-config"
content = "${module.template_file_appsec_init.appsec_user_data_rendered}"
merge_type = "list(append)+dict(recurse_array)+str()"
}
part {
content_type = "text/x-shellscript"
content = "${module.template_file_beat_init.beat_user_data_rendered}"
}
}
The shell script looks like this:

module "template_file_beat_init" {
  source = "url" # the source URL contains the zip file which includes the shell script below
}

#!/bin/sh
deploy_the_app() {
  # invoke ansible playbook execution
}
deploy_the_app
Cloud provider: AWS
OS: RHEL 8.3
cloud-init --version: /usr/bin/cloud-init 19.4
Terraform v0.11.8
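No accepted answer was captured here, but one direction that is often suggested (a sketch only, not verified against this exact setup): cloud-init decides when each part runs from the part type and its own module ordering, not from the order of the parts in the MIME document, so a text/x-shellscript part won't be reordered just by moving it around. If the beat setup can be expressed as cloud-config, appending it to the merged runcmd list keeps it after the commands contributed by the earlier parts. The file path and playbook invocation below are hypothetical placeholders:

  # Replaces the text/x-shellscript part; with list(append) merging, these
  # runcmd entries are appended after any runcmd entries from the earlier parts.
  part {
    content_type = "text/cloud-config"
    merge_type   = "list(append)+dict(recurse_array)+str()"

    content = <<EOF
#cloud-config
write_files:
  - path: /var/tmp/beat_init.sh   # hypothetical location for the beat script
    permissions: '0755'
    content: |
      #!/bin/sh
      deploy_the_app() {
        ansible-playbook /opt/app/deploy.yml   # hypothetical playbook path
      }
      deploy_the_app
runcmd:
  - /var/tmp/beat_init.sh
EOF
  }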

How can I run a shell script on multiple VMware VMs created by a Terraform module?

I am using this module to spin up multiple VMs on my VMware cluster, https://registry.terraform.io/modules/Terraform-VMWare-Modules/vm/vsphere/1.6.0, and I want to run a shell script on all of the VMs afterwards using a null_resource. With what I currently have, it complains that the host was not given a string, which makes sense. Here is my null resource:
# main.tf
module "jenkins-linuxvm-centos7" {
  source = "Terraform-VMWare-Modules/vm/vsphere"
  ...
}

resource "null_resource" "vm" {
  triggers = {
    vm_ips = join(",", module.jenkins-linuxvm-centos7.Linux-ip)
  }

  # export TF_VAR_root_password=<pass>
  connection {
    type     = "ssh"
    host     = module.jenkins-linuxvm-centos7.Linux-ip
    user     = "root"
    password = var.vm_root_password
    port     = "22"
    agent    = false
  }

  provisioner "file" {
    source      = "resize_disk.sh"
    destination = "/tmp/resize_disk.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/resize_disk.sh",
      "/tmp/resize_disk.sh"
    ]
  }
}
Do I need to use a dynamic block somehow? Or how can I modify host = module.jenkins-linuxvm-centos7.Linux-ip to include all the hosts I want to run it on?
You have to run it in a for_each loop. Below is an example where I am looping over the sql_var map variable; you would do the same against the output of IPs (module.jenkins-linuxvm-centos7.Linux-ip) and should then be able to reference the IP of each machine as each.value. I don't know what your output looks like, so this is a guess; a sketch applying it to the IP output follows the example. If you are new to loops, here is a nice tutorial:
https://blog.boltops.com/2020/10/04/terraform-hcl-loops-with-count-and-for-each
resource "null_resource" "instance" {
for_each = var.sql_var
provisioner "local-exec" {
command = "echo ${each.key} >> hello.txt"
}
}
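Applied to the question itself, a sketch over the module's IP output (assuming Linux-ip is a list of strings; toset() turns it into a set that for_each accepts, and each.value is then one IP):

resource "null_resource" "vm" {
  # One instance of this resource per VM IP returned by the module.
  for_each = toset(module.jenkins-linuxvm-centos7.Linux-ip)

  connection {
    type     = "ssh"
    host     = each.value   # the IP of this particular VM
    user     = "root"
    password = var.vm_root_password
    port     = "22"
    agent    = false
  }

  provisioner "file" {
    source      = "resize_disk.sh"
    destination = "/tmp/resize_disk.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/resize_disk.sh",
      "/tmp/resize_disk.sh"
    ]
  }
}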

Unable to connect to AWS Docdb

We are trying to connect to AWS DocumentDB from Errbit, with no luck. This is the connection string from DocDB:
mongodb://user:<insertYourPassword>@dev-docdb-cluster.cluster-xxxx.us-east-1.docdb.amazonaws.com:27017/?ssl=true&ssl_ca_certs=rds-combined-ca-bundle.pem&replicaSet=rs0
We are able to connect to Atlas DB, though; the connection string format we are using for Atlas is something like this:
mongodb://user:pass@cluster-shard-00-00-xxx.mongodb.net:27017,cluster-shard-00-01-xxx.mongodb.net:27017,cluster-shard-00-02-xxx.mongodb.net:27017/errbit?ssl=true&replicaSet=Cluster-shard-0&authSource=admin&w=majority
You will need to change the config/mongo.rb file to be as follows:
log_level = Logger.const_get Errbit::Config.log_level.upcase
Mongoid.logger.level = log_level
Mongo::Logger.level = log_level

Mongoid.configure do |config|
  uri = if Errbit::Config.mongo_url == 'mongodb://localhost'
          "mongodb://localhost/errbit_#{Rails.env}"
        else
          Errbit::Config.mongo_url
        end

  config.load_configuration(
    clients: {
      default: {
        uri: uri,
        options: { ssl_ca_cert: Rails.root.join('rds-combined-ca-bundle.pem') }
      }
    },
    options: {
      use_activesupport_time_zone: true
    }
  )
end
You'll notice that this is exactly the same as the current file, except that I added:
options: { ssl_ca_cert: Rails.root.join('rds-combined-ca-bundle.pem') }
It worked for me after doing this :) Of course, you need the rds-combined-ca-bundle.pem file to be present in your Rails root folder:
wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
Yes, I had to create a Docker image with the following Dockerfile:
FROM errbit/errbit:latest
LABEL maintainer="Tarek N. Elsamni <tarek.samni+stackoverflow@gmail.com>"
WORKDIR /app
RUN wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
COPY ["mongo.rb", "/app/config/"]

Terraform execute script before lambda creation

I have a Terraform configuration that correctly creates a Lambda function on AWS from a provided zip file.
My problem is that I always have to package the Lambda first (I use the serverless package command for this), so I would like to execute a script that packages my function and moves the zip to the right directory before Terraform creates the Lambda function.
Is that possible? Maybe using a combination of null_resource and local-exec?
You already proposed the best answer :)
When you add depends_on = ["null_resource.serverless_execution"] to your Lambda resource, you ensure that packaging is done before the zip file is uploaded.
Example:
resource "null_resource" "serverless_execution" {
provisioner "local-exec" {
command = "serverless package ..."
}
}
resource "aws_lambda_function" "update_lambda" {
depends_on = ["null_resource.serverless_execution"]
filename = "${path.module}/path/to/package.zip"
[...]
}
https://www.terraform.io/docs/provisioners/local-exec.html
The answer is already given, but I was looking for a way to install NPM modules on the fly, zip the result, and then deploy the Lambda function, together with a longer timeout in case the Lambda package is large. Here is what I found; it may help someone else.
# Install NPM modules before creating the ZIP
resource "null_resource" "npm" {
  provisioner "local-exec" {
    command = "cd ../lambda-functions/loadbalancer-to-es/ && npm install --prod=only"
  }
}

# Zip the Lambda function on the fly
data "archive_file" "source" {
  type        = "zip"
  source_dir  = "../lambda-functions/loadbalancer-to-es"
  output_path = "../lambda-functions/loadbalancer-to-es.zip"
  depends_on  = ["null_resource.npm"]
}
# Create the AWS Lambda function: memory size, Node.js version, handler, endpoint, doctype and environment settings
resource "aws_lambda_function" "elb_logs_to_elasticsearch" {
  filename      = "${data.archive_file.source.output_path}"
  function_name = "someprefix-alb-logs-to-elk"
  description   = "elb-logs-to-elasticsearch"
  memory_size   = 1024
  timeout       = 900

  timeouts {
    create = "30m"
  }

  runtime          = "nodejs8.10"
  role             = "${aws_iam_role.role.arn}"
  source_code_hash = "${base64sha256(data.archive_file.source.output_path)}"
  handler          = "index.handler"
  # source_code_hash = "${base64sha256(file("/elb-logs-to-elasticsearch.zip"))}"

  environment {
    variables = {
      ELK_ENDPOINT = "someprefix-elk.dns.co"
      ELK_INDEX    = "test-web-server-"
      ELK_REGION   = "us-west-2"
      ELK_DOCKTYPE = "elb-access-logs"
    }
  }
}
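One caveat with the snippet above: base64sha256(data.archive_file.source.output_path) hashes the path string, not the zip contents, so code changes alone won't trigger a redeploy; the archive_file data source exposes output_base64sha256 for that. A sketch of the same wiring in Terraform 0.12+ syntax, assuming the same paths:

data "archive_file" "source" {
  type        = "zip"
  source_dir  = "../lambda-functions/loadbalancer-to-es"
  output_path = "../lambda-functions/loadbalancer-to-es.zip"
  depends_on  = [null_resource.npm]
}

resource "aws_lambda_function" "elb_logs_to_elasticsearch" {
  filename         = data.archive_file.source.output_path
  # Hash of the zip contents, so the function is updated whenever the code changes.
  source_code_hash = data.archive_file.source.output_base64sha256
  # ...remaining arguments as above...
}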
