Can we get the IP address of a newly created EC2 instance dynamically in a declarative Jenkins pipeline job? - jenkins-pipeline

I have created multiple EC2 instances with Terraform from a Jenkins pipeline (an infra pipeline). After the instances were created, I set up the pipeline below (a build pipeline), which has to copy some files to one particular instance, the one whose Name tag is "main". Is there any way to get the IP address of the "main" instance dynamically into the Jenkins pipeline?
pipeline {
    agent any
    stages {
        stage('Test Copy') {
            steps {
                sshagent(['test']) {
                    sh "scp -o StrictHostKeyChecking=no symfony-deploy.yaml ubuntu@<target server public IP>:/home/ubuntu/test"
                }
            }
        }
    }
}
I hope my question is clear. Can someone please help me with this?
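One way to do this is to look the address up by tag from inside the pipeline's sh step. A minimal sketch, assuming the AWS CLI is installed and credentialed on the Jenkins agent and that Terraform tagged the instance with Name=main (shown here as plain shell):

# Query EC2 for the running instance tagged Name=main and grab its public IP.
MAIN_IP=$(aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=main" "Name=instance-state-name,Values=running" \
    --query "Reservations[].Instances[].PublicIpAddress" \
    --output text)

scp -o StrictHostKeyChecking=no symfony-deploy.yaml "ubuntu@${MAIN_IP}:/home/ubuntu/test"

Alternatively, the infra pipeline could run terraform output after apply and archive the IP as an artifact for the build pipeline to read.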

Related

How to execute some automation scripts after provisioning resources via terraform

Consider this use case: as part of our test framework, we have to deploy some resources and then execute some scripts before we can start using the resources for testing. A typical example is the AirView RDS module. The RDS is often provisioned with the flyway module, which has an SSM document for creating the DB. What we had been doing is call the RDS module and the flyway module, then apply them in a Terraform workspace. Once they are successfully deployed (i.e. applied), a human needs to go through the AWS console and execute the script that creates the NGCS database (as an example). After that, it is ready to be used for testing. I would like to find a way to avoid this human-interaction step. The order of creation and actions should be:
1. Provision the DB cluster
2. Provision a utility EC2 instance (where the flyway script can run)
3. Execute flyway
How can that be done in an automated way? Further, if I have a few resources that also need a similar setup (maybe not flyway, but some kind of script), how can I control the sequence of activities, from creating the resources to running scripts on them?
Try Terraform provisioners. The aws_instance resource, which I suppose you are using, fully supports this feature. With a provisioner you can run any command you want right after instance creation.
Don't forget to configure the connection settings; the Terraform docs on provisioners and connections cover this.
Finally, you should end up with something close to this:
resource "aws_instance" "my_instance" {
ami = "${var.instance_ami}"
instance_type = "${var.instance_type}"
subnet_id = "${aws_subnet.my_subnet.id}"
vpc_security_group_ids = ["${aws_security_group.my_sg.id}"]
key_name = "${aws_key_pair.ec2key.key_name}"
provisioner "remote-exec" {
inline = [
"my commands",
]
}
connection {
type = "ssh"
user = "ec2-user"
password = ""
private_key = "${file("~/.ssh/id_rsa")}"
}
}
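To control the sequence across several resources (create the DB cluster, then the utility instance, then run the script), one option is a null_resource that depends on both. A minimal sketch, with hypothetical resource names and a hypothetical flyway command:

resource "null_resource" "run_flyway" {
  # Wait until both the DB and the utility instance exist.
  depends_on = ["aws_db_instance.my_db", "aws_instance.my_instance"]

  provisioner "remote-exec" {
    inline = [
      "flyway migrate",
    ]

    connection {
      type        = "ssh"
      user        = "ec2-user"
      host        = "${aws_instance.my_instance.public_ip}"
      private_key = "${file("~/.ssh/id_rsa")}"
    }
  }
}

The same pattern repeats for any resource that needs a post-creation script: give each script its own null_resource and express the ordering with depends_on.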
You need to remember that provisioners are a last resort.
From the docs, have you tried user_data?
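For first-boot setup, user_data is often the simpler route, since it needs no SSH connection at all. A minimal sketch, assuming a cloud-init-capable AMI:

resource "aws_instance" "my_instance" {
  ami           = "${var.instance_ami}"
  instance_type = "${var.instance_type}"

  # Executed by cloud-init on first boot; replace with your real commands.
  user_data = <<-EOF
              #!/bin/bash
              echo "my commands"
              EOF
}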

Terraform, how to ignore failed servers while provisioning?

I am running Terraform against a private OpenStack cloud to bootstrap new servers. When I try to create new servers (using any method) during the busiest times of operation (weekday afternoons), usually half of the servers fail, and this has nothing to do with Terraform. The issue is that when one of the servers I try to provision fails to complete a provisioner "remote-exec" block without errors (because of my private cloud), my whole terraform apply stops.
I want Terraform to totally ignore these failed servers when I run terraform apply, so that if I try to provision 20 servers and only 1 of them launches successfully, that one server will run through all the commands I specify in my resource block.
Is there something like an ignore_failed_resources = true line I can add to my resources so that Terraform will ignore the servers that fail and run the successful ones to completion?
There's no simple config switch that you can enable to achieve this. Could you be more specific about the problem that's causing the "remote-exec" to fail?
If it's because the server is simply refusing connections, you could switch out your "remote-exec" for a "local-exec" and wrap your command in a script, passing in the hostname of your server as a parameter. The script would then handle initiating the SSH connection and running the required commands. Make sure the script fails gracefully with an exit code of 0 when the error occurs, so that Terraform thinks the script worked correctly (a sketch of such a script follows the resource example below).
resource "aws_instance" "web" {
provisioner "local-exec" {
command = "./myremotecommand.sh ${aws_instance.web.private_ip}"
}
}
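A minimal sketch of such a wrapper (the script name and the remote setup command are hypothetical):

#!/bin/bash
# myremotecommand.sh -- provision a host, but never fail the terraform run.
HOST="$1"

# Try the real provisioning over SSH; a short timeout keeps dead hosts cheap.
if ssh -o StrictHostKeyChecking=no -o ConnectTimeout=10 "ec2-user@${HOST}" 'run-my-setup'; then
  echo "provisioned ${HOST}"
else
  echo "WARNING: skipping unreachable host ${HOST}" >&2
fi

# Always exit 0 so Terraform treats the resource as successfully provisioned.
exit 0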
I'm guessing you already figured out a solution, but I'm adding mine here for anyone who encounters this in the future.
Why not simply do this in the command part?
provisioner "local-exec" {
command = "my command || true"
}
This way it always returns exit code 0, so since the shell ignores the failure, Terraform ignores it as well.
If you look at the Terraform provisioner docs, you can set on_failure = continue, so it will continue on failure.
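A minimal sketch of that setting:

provisioner "local-exec" {
  command = "my command"

  # Warn instead of aborting the apply when this provisioner fails.
  on_failure = "continue"
}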

How to SSH into the Jenkins host executing code

I have a Jenkins job set up and I'm having some issues getting around my company's proxy. I'd like to SSH into the Jenkins slave executing this job.
Where could I find the hostname in Jenkins?
That way I can add the proper proxy settings to the settings.xml file, which should be located at {home}/.m2/settings.xml.
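For reference, the proxy entry in that settings.xml looks something like this (host and port here are placeholders):

<settings>
  <proxies>
    <proxy>
      <id>corp-proxy</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>proxy.example.com</host>
      <port>8080</port>
    </proxy>
  </proxies>
</settings>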
Running this Groovy script in the Jenkins script console lists each slave with its IP address and hostname:
import hudson.util.RemotingDiagnostics

// Snippets evaluated on each slave over its remoting channel.
def print_ip = 'println InetAddress.localHost.hostAddress'
def print_hostname = 'println InetAddress.localHost.canonicalHostName'
// Example of running a shell command, uname here.
def uname = 'def proc = "uname -a".execute(); proc.waitFor(); println proc.in.text'

for (slave in hudson.model.Hudson.instance.slaves) {
    println slave.name
    println RemotingDiagnostics.executeGroovy(print_ip, slave.getChannel())
    println RemotingDiagnostics.executeGroovy(print_hostname, slave.getChannel())
}
Manage Jenkins > Manage Nodes

Kafka on EC2 instance for integration testing

I'm trying to set up some integration tests for the part of our project that makes use of Kafka. I've chosen to use the spotify/kafka Docker image, which contains both Kafka and ZooKeeper.
I can run my tests (and they pass!) on a local machine if I run the Kafka container as described at the project site. When I try to run it on my EC2 build server, however, the container dies. The final fatal error is "INFO gave up: kafka entered FATAL state, too many start retries too quickly".
My suspicion is that it doesn't like the address passed in. I've tried using both the public and the private IP address that EC2 provides, but the results are the same either way, just as with localhost.
Any help would be appreciated. Thanks!
It magically works now, even though I'm still doing exactly what I was doing before. However, to help others who might come along, I will post what I did to get it to work.
I created the following shell script and have Jenkins run it as a build step.
#!/bin/bash
# Start the container only if one named test_kafka does not already exist.
if ! docker inspect -f 1 test_kafka &>/dev/null; then
    docker run -d --name test_kafka \
        -p 2181:2181 -p 9092:9092 \
        --env ADVERTISED_HOST=localhost --env ADVERTISED_PORT=9092 \
        spotify/kafka
fi
Even though localhost resolves to the private IP address, it seems to accept it now. The if block just tests whether the container already exists, so it can be reused.
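To sanity-check that the broker is reachable before the tests run, something like this works (assuming kafkacat is installed on the build server):

# List broker metadata; fails fast if Kafka is not accepting connections.
kafkacat -b localhost:9092 -L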

How to read the Elastic IP of an AWS instance created through Vagrant and chef-solo

I am using Vagrant with chef-solo to create and provision a test environment with an Elastic IP assigned. I want to read the Elastic IP of the test environment and return it to Jenkins, where Jenkins uses this IP and deploys the WAR onto the machine for functional testing.
Is this possible to do?
Yes, it's possible to get the instance's public IP by querying the instance metadata. Here is a request that returns the public IPs associated with one of the instance's network interfaces:
curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:29:96:8f:6a:2d/public-ipv4s
Change the MAC address to yours.
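If the instance has a single network interface, the shorter top-level endpoint avoids hard-coding the MAC address (run this on the instance itself):

# Returns the instance's public IPv4 address from the metadata service.
curl -s http://169.254.169.254/latest/meta-data/public-ipv4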
Thanks linuxnewbee, but I found another way:
$ vagrant ssh
$ ec2metadata | grep public-hostname
This returns the public hostname, e.g. ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com.
Now I have to pass this to Jenkins, which is yet to be done.
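One way to wire that up from a script (the Jenkins job name and credentials here are hypothetical, and the job is assumed to be parameterized):

# Grab the hostname non-interactively instead of an interactive vagrant ssh.
TARGET_HOST=$(vagrant ssh -c "ec2metadata | grep public-hostname" | awk '{print $2}')

# Hand it to Jenkins by triggering a parameterized build with the value.
curl -X POST "https://jenkins.example.com/job/deploy-war/buildWithParameters" \
    --user "user:api-token" \
    --data-urlencode "TARGET_HOST=${TARGET_HOST}"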
