Is it possible to use existing LetsEncrypt certificates (.pem format) in Traefik?
I have Traefik/Docker set up to generate acme.json - can I import my existing certificates for a set of domains?
Eventually I found the correct solution - not to use Traefik's ACME integration but instead to simply mount a network volume (EFS) containing certificates as issued by certbot in manual mode.
Why did I choose this method? Because I'm mounting that certificate-holding NFS volume on two servers (blue and green), which act as the live and staging web servers. At any time one is "live" and the other is either running a release candidate or sitting in a "hot standby" role.
For this reason, it's better to separate concerns and have a third server act as a dedicated "certificate manager". This t2.nano server is basically never touched and has the sole responsibility of running certbot once a week, writing the certificates into an NFS mount that is shared (read-only) with the two web servers.
In this way, Traefik runs on both the blue and green servers, takes care of its primary concern of proxying web traffic, and simply points to the certbot-issued certificate files. For anyone who found this page and could benefit from the same solution, here is the relevant extract from my traefik.toml file:
defaultEntryPoints = ["https","http"]

[docker]
watch = true
exposedbydefault = false
swarmMode = true

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      certFile = "/cert.pem"
      keyFile = "/privkey.pem"
Here is the relevant section from my Docker swarm stack file:
version: '3.2'

volumes:
  composer:

networks:
  traefik:
    external: true

services:
  proxy:
    image: traefik:latest
    command: --docker --web --docker.swarmmode --logLevel=DEBUG
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.toml:/traefik.toml
      - "./certs/live/example.com/fullchain.pem:/cert.pem"
      - "./certs/live/example.com/privkey.pem:/privkey.pem"
    networks:
      - traefik
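For completeness, the stack is deployed/updated with something like the following (the file name proxy-stack.yml and stack name proxy are placeholders, not necessarily what I use):

docker stack deploy --compose-file proxy-stack.yml proxy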
And finally here is the command that cron runs once a week on the dedicated cert server, configured to use ACME v2 for wildcard certs and Route 53 integration for challenge automation:
sudo docker run -it --rm --name certbot \
  -v `pwd`/certs:/etc/letsencrypt \
  -v `pwd`/lib:/var/lib/letsencrypt \
  -v `pwd`/log:/var/log/letsencrypt \
  --env-file ./env \
  certbot/dns-route53 \
  certonly --dns-route53 \
  --server https://acme-v02.api.letsencrypt.org/directory \
  -d example.com \
  -d example.net \
  -d '*.example.com' \
  -d '*.example.net' \
  --non-interactive \
  -m me@example.org \
  --agree-tos
The folder certs is the NFS volume shared between the three servers.
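For reference, the weekly run is just a regular crontab entry on the cert server; something like this sketch (the schedule, paths and the renew-certs.sh wrapper are illustrative, and you will likely want to drop the -it flags from the command above since cron has no TTY):

# m h dom mon dow   command
0 3 * * 1 /home/certmanager/renew-certs.sh >> /var/log/certbot-renew.log 2>&1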
Although presumably everyone wonders why you would actually want/need this, and everyone will advise you against it because Traefik handles the automatic re-challenge very well, here's what the acme.json looks like:
{
  "Account": {
    "Email": "acme@example.com",
    "Registration": {
      "body": {
        "status": "valid",
        "contact": [
          "mailto:acme@example.com"
        ]
      },
      "uri": "https://acme-v02.api.letsencrypt.org/acme/acct/12345678"
    },
    "PrivateKey": "ABCD...EFG="
  },
  "Certificates": [
    {
      "Domain": {
        "Main": "example.com",
        "SANs": null
      },
      "Certificate": "ABC...DEF=",
      "Key": "ABC...DEF"
    },
    {
      "Domain": {
        "Main": "anotherexample.com",
        "SANs": null
      },
      "Certificate": "ABC...DEF==",
      "Key": "ABC...DEF=="
    }
  ],
  "HTTPChallenges": {}
}
Now it's up to you to write an import script or template parser that loops through your certificates/keys and fills in the values, or simply generate your own JSON file in that format.
Notice that it's a slightly different format from classic PEM style (i.e. base64 without line breaks).
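If it helps, here is a minimal bash sketch of such an import step (the domain, paths and output are assumptions; it only prints one Certificates entry, which you still have to merge into acme.json yourself):

#!/usr/bin/env bash
# Build one acme.json "Certificates" entry from certbot-issued PEM files.
set -euo pipefail

DOMAIN="example.com"                      # assumption: adjust per domain
CERT_DIR="/etc/letsencrypt/live/$DOMAIN"  # assumption: certbot's default layout

# acme.json stores the PEM content base64-encoded without line breaks
cert=$(base64 -w0 < "$CERT_DIR/fullchain.pem")
key=$(base64 -w0 < "$CERT_DIR/privkey.pem")

cat <<EOF
{
  "Domain": { "Main": "$DOMAIN", "SANs": null },
  "Certificate": "$cert",
  "Key": "$key"
}
EOF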
I just want to add that I have since found another approach, for a case where I needed to use HAProxy:
run one Traefik instance only for ACME, on port 81 behind HAProxy
acl acme path_beg -i /.well-known/acme-challenge/
use_backend acme if acme
have a small webserver providing the acme.json (e.g. shell2http)
have a cronjob (I use GitLab CI) which downloads the acme.json and extracts the certs (https://github.com/containous/traefik/blob/821ad31cf6e7721ba231164ab468ee554983560f/contrib/scripts/dumpcerts.sh)
append the certs to a traefik.toml template
build a docker image and push it to a private registry
run this private Traefik instance as the main Traefik on 80/443 in read-only mode on any backend servers behind HAProxy
write a file-watcher script which restarts all read-only Traefiks when acme.json changes and triggers the dumpcerts script (a rough sketch follows below)
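A rough sketch of that watcher (assumptions: inotify-tools is installed, the read-only Traefik containers are named ro-traefik-*, and the paths are illustrative; check the dumpcerts.sh usage header for its exact arguments):

#!/usr/bin/env bash
# Watch acme.json; on change, re-dump the certs and restart the read-only Traefiks.
set -euo pipefail

ACME_JSON=/data/acme.json                  # assumption: where the cronjob puts acme.json
DUMP_SCRIPT=/usr/local/bin/dumpcerts.sh    # the contrib script linked above
CERT_DIR=/data/certs                       # assumption: where the dumped certs should go

while inotifywait -e close_write "$ACME_JSON"; do
  "$DUMP_SCRIPT" "$ACME_JSON" "$CERT_DIR"
  # restart every read-only Traefik container (the name filter is an assumption)
  docker ps --filter name=ro-traefik --format '{{.Names}}' | xargs -r docker restart
done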
Completing sgohl's reply:
IMPORTANT: make sure that the private key is 4096 bits long. A 2048-bit key will NOT work, and Traefik will try to request a new certificate from Let's Encrypt.
Identify your certificates (in this example, issued by certbot). Example:
/etc/letsencrypt/live
├── example.com
│ ├── cert.pem -> ../../archive/example.com/cert6.pem
│ ├── chain.pem -> ../../archive/example.com/chain6.pem
│ ├── fullchain.pem -> ../../archive/example.com/fullchain6.pem
│ ├── privkey.pem -> ../../archive/example.com/privkey6.pem
│ └── README
Check that the private key is 4096 bits long:
openssl rsa -text -in privkey.pem | grep bit
Expected result similar to:
writing RSA key
RSA Private-Key: (4096 bit, 2 primes)
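If the key turns out to be 2048 bits, you can reissue it with a 4096-bit key; roughly (check the flags your certbot version supports, and note that --force-renewal counts against Let's Encrypt rate limits):

# plus whatever challenge options you normally use (DNS plugin, webroot, etc.)
certbot certonly --force-renewal --rsa-key-size 4096 -d example.com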
Generate strings containing the desired certificate and key.
3.1. Certificate
_IN=/etc/letsencrypt/live/example.com/fullchain.pem
_OUT=~/traefik_certificate
cat $_IN | base64 | tr '\n' ' ' | sed --expression='s/\ //g' > $_OUT
3.2. Private key
_IN=/etc/letsencrypt/live/example.com/privkey.pem
_OUT=~/traefik_key
cat $_IN | base64 | tr '\n' ' ' | sed --expression='s/\ //g' > $_OUT
Prepare a code snippet containing the above strings:
...
"Certificates": [
  {
    "domain": {
      "main": "example.com"
    },
    "certificate": "CONTENT_OF_traefik_certificate",
    "key": "CONTENT_OF_traefik_key",
    "Store": "default"
  },
...
Place the above snippet in the right place (under the 'Certificates' JSON key) in Traefik's acme.json file:
vim /path/to/acme.json
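If you prefer to script this step rather than edit by hand, a jq sketch along these lines should work (it assumes the Certificates array sits at the top level of your acme.json, exactly as in the snippet above; back up the file first):

cert=$(cat ~/traefik_certificate)
key=$(cat ~/traefik_key)
jq --arg cert "$cert" --arg key "$key" \
  '.Certificates += [{"domain": {"main": "example.com"}, "certificate": $cert, "key": $key, "Store": "default"}]' \
  /path/to/acme.json > /tmp/acme.json.new && mv /tmp/acme.json.new /path/to/acme.json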
No restart of the Traefik container is needed to pick up the new certificate.
Related
I am trying to copy a directory to a new EC2 instance using Terraform
provisioner "local-exec" {
  command = "scp -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r ../ansible ubuntu@${self.public_ip}:~/playbook_dir"
}
But after the instance is created I get an error:
Error running command 'sleep 5; scp -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no -o
│ UserKnownHostsFile=/dev/null -r ../ansible ubuntu@54.93.82.73:~/playbook_dir': exit status 1. Output:
│ ssh: connect to host 54.93.82.73 port 22: Connection refused
│ lost connection
The main thing is that if I copy the command to a terminal and replace the IP, it works. Why does that happen? Please help me figure it out.
I read in the documentation that the sshd service may not work correctly right after creation, so I added a sleep 5 command before the scp, but it didn't work.
I have tried the same in my local environment and, unfortunately, when using the local-exec provisioner in aws_instance directly I also got the same error message; I am honestly not sure of the details of why.
However, to work around the issue you can use a null_resource with the local-exec provisioner and the same command (including the sleep), and it works.
Terraform code
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_key_pair" "stackoverflow" {
key_name = "stackoverflow-key"
public_key = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKml4tkIVsa1JSZ0OSqSBnF+0rTMWC5y7it4y4F/cMz6"
}
resource "aws_instance" "stackoverflow" {
ami = data.aws_ami.ubuntu.id
instance_type = "t2.micro"
subnet_id = var.subnet_id
vpc_security_group_ids = var.vpc_security_group_ids ## Must allow SSH inbound
key_name = aws_key_pair.stackoverflow.key_name
tags = {
Name = "stackoverflow"
}
}
resource "aws_eip" "stackoverflow" {
instance = aws_instance.stackoverflow.id
vpc = true
}
output "public_ip" {
value = aws_eip.stackoverflow.public_ip
}
resource "null_resource" "scp" {
provisioner "local-exec" {
command = "sleep 10 ;scp -i ~/.ssh/aws-stackoverflow -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r ~/test/sub-test-dir ubuntu#${aws_eip.stackoverflow.public_ip}:~/playbook_dir"
}
}
Code In Action
aws_key_pair.stackoverflow: Creating...
aws_key_pair.stackoverflow: Creation complete after 0s [id=stackoverflow-key]
aws_instance.stackoverflow: Creating...
aws_instance.stackoverflow: Still creating... [10s elapsed]
aws_instance.stackoverflow: Still creating... [20s elapsed]
aws_instance.stackoverflow: Still creating... [30s elapsed]
aws_instance.stackoverflow: Still creating... [40s elapsed]
aws_instance.stackoverflow: Creation complete after 42s [id=i-006c17b995b9b7bd6]
aws_eip.stackoverflow: Creating...
aws_eip.stackoverflow: Creation complete after 1s [id=eipalloc-0019932a06ccbb425]
null_resource.scp: Creating...
null_resource.scp: Provisioning with 'local-exec'...
null_resource.scp (local-exec): Executing: ["/bin/sh" "-c" "sleep 10 ;scp -i ~/.ssh/aws-stackoverflow -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r ~/test/sub-test-dir ubuntu@3.76.153.108:~/playbook_dir"]
null_resource.scp: Still creating... [10s elapsed]
null_resource.scp (local-exec): Warning: Permanently added '3.76.153.108' (ED25519) to the list of known hosts.
null_resource.scp: Creation complete after 13s [id=3541365434265352801]
Verification Process
Local directory and files
$ ls ~/test/sub-test-dir
some_test_file
$ cat ~/test/sub-test-dir/some_test_file
local exec is not nice !!
Files and directory on Created instance
$ ssh -i ~/.ssh/aws-stackoverflow ubuntu@$(terraform output -raw public_ip)
The authenticity of host '3.76.153.108 (3.76.153.108)' can't be established.
ED25519 key fingerprint is SHA256:8dgDXB/wjePQ+HkRC61hTNnwaSBQetcQ/10E5HLZSwc.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '3.76.153.108' (ED25519) to the list of known hosts.
Welcome to Ubuntu 20.04.5 LTS (GNU/Linux 5.15.0-1028-aws x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Sat Feb 11 00:25:13 UTC 2023
System load: 0.0 Processes: 98
Usage of /: 20.8% of 7.57GB Users logged in: 0
Memory usage: 24% IPv4 address for eth0: 172.31.6.219
Swap usage: 0%
* Ubuntu Pro delivers the most comprehensive open source security and
compliance features.
https://ubuntu.com/aws/pro
* Introducing Expanded Security Maintenance for Applications.
Receive updates to over 25,000 software packages with your
Ubuntu Pro subscription. Free for personal use.
https://ubuntu.com/aws/pro
Expanded Security Maintenance for Applications is not enabled.
0 updates can be applied immediately.
Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@ip-172-31-6-219:~$ cat ~/playbook_dir/some_test_file
local exec is not nice !!
I want to create a simple k8s CronJob to keep a GCP identity token fresh in a secret.
It's a relatively simple problem that I am not able to solve.
Given
json=$(cat << END
[
  {
    "op": "replace",
    "path": "/data/token.jwt",
    "value": "$(gcloud auth print-identity-token | base64 )"
  }
]
END
)
I want this to be applied with kubectl patch secret token-test --type json -p=$(printf "'%s'" "$json")
I have tried many variants; the weird thing is that if I paste in the result of the heredoc instead of the printf, it works. But all my efforts fail with the following (I also tried a JSON doc on a single line):
$ kubectl patch secret token-test --type json -p=$(printf "'%s'" "$json")
error: unable to parse "'[{": yaml: found unexpected end of stream
Whereas this actually works:
printf "'%s'" "$json"|pbcopy
kubectl patch secret sudo-token-test --type json -p '[{ "op": "replace","path": "/data/token.jwt","value": "ZX...Zwo="}]'
secret/token-test patched
I cannot understand what is different when it fails. I understand bash is a bit tricky when it comes to string handling, but I am not sure if this is a bash issue or an issue in kubectl.
It's a slightly different approach, but how about:
printf "foo" > test
kubectl create secret generic freddie \
--namespace=default \
--from-file=./test
kubectl get secret/freddie \
--namespace=default \
--output=jsonpath="{.data.test}" \
| base64 --decode
foo
X="$(printf "bar" | base64)"
kubectl patch secret/freddie \
--namespace=default \
--patch="{\"data\":{\"test\":\"${X}\"}}"
kubectl get secret/freddie \
--namespace=default \
--output=jsonpath="{.data.test}" \
| base64 --decode
bar
NOTE: it's not best practice to use your user credentials (gcloud auth print-identity-token) in this way. Service Accounts are preferred; they are intended for machine (rather than human) auth and can be more easily revoked.
User credentials grant the bearer all the powers of the user account (and this is likely extensive).
There's a portable alternative in which you create a Kubernetes secret from a Service Account key:
kubectl create secret generic your-key \
--from-file=your-key.json=/path/to/your-key.json
And, for the cool kids who run mostly on GKE, there's an approach called Workload Identity.
$ json='[{ "op": "replace","path": "/data/token.jwt","value": "'"$(gcloud auth print-identity-token | base64 )"'"}]'
$ kubectl patch secret token-test --type json -p="$json"
secret/token-test patched
By appending the string instead of interpolating it in the heredoc, this was solved.
I still don't fully understand why the other approach failed, though; my best guess is word splitting, illustrated below.
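A small illustration of what I believe the shell is doing (the unquoted command substitution is word-split, so kubectl only receives '[{ as its -p value, and the single quotes produced by printf are literal characters that the shell never removes):

json='[{ "op": "replace","path": "/data/token.jwt","value": "abc"}]'
printf '<%s>\n' -p=$(printf "'%s'" "$json")   # split into many arguments: <-p='[{> <"op":> ...
printf '<%s>\n' -p="$json"                    # one argument, which is what kubectl expects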
EDIT:
The end result was to drop kubectl and use curl instead, as it fit better inside the Docker image.
# Point to the internal API server hostname
APISERVER=https://kubernetes.default.svc
# Path to ServiceAccount token
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
# Read this Pod's namespace
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICEACCOUNT}/token)
# Reference the internal certificate authority (CA)
CACERT=${SERVICEACCOUNT}/ca.crt
SECRET_NAME=$(cat /etc/scripts/secret-name)
# Explore the API with TOKEN
curl --fail --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" --request PATCH ${APISERVER}/api/v1/namespaces/${NAMESPACE}/secrets/${SECRET_NAME} \
-H 'Accept: application/json' \
-H "Content-Type: application/json-patch+json" \
-d '[{ "op": "replace","path": "/data/token.jwt","value": "'"$(gcloud auth print-identity-token | base64 -w0 )"'"}]' \
--output /dev/null
As I also commented on DazWilkin's answer, the user that calls gcloud auth print-identity-token is actually a Kubernetes service account, authenticated via Workload Identity on GKE to a GCP service account.
I need the token to be able to call AWS without storing actual credentials, by calling AWS via Workload Identity Federation.
I have an Elasticsearch cluster running on Azure AKS. I want to connect to a different ES cluster running on a separate AKS, for which I need to export the certificate from one cluster and add it to the other. I am following the official documentation from here.
However, I am not able to export the certificate and am getting an error when executing the following command:
kubectl get secret europecluster-es-transport-certs-public -o go-template='{{index .data "ca.crt"}}'
The error I am getting is:
error: error parsing template {{index .data ca.crt}}, template: output:1: function "ca" not defined
I am a novice in the Elastic and Kubernetes space, and I'm not able to find a solution for this on the internet.
If you are okay with manually extracting the ca.crt value and decoding it, then you can try the following:
Extract the ca.crt value without quotes [copy to clipboard]:
kubectl get secret europecluster-es-transport-certs-public -o json | grep ca.crt
Perform a base64 decode and redirect it to a file:
echo -n <paste clipboard content> | base64 -d -w 0 > remote.ca.crt
The above procedure performs the same operation as the Go template in your command.
Example:
kubectl get secret default-token-h8w57 -o json | grep -i ca.crt
"ca.crt": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01UQXlPVEV4TkRVeU9Gb1hEVE13TVRBeU56RXhORFV5T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTEl4CmpjMCttcGVXWm5TL3NnenloZ1Ftd1ZaN3A1b2hoUktqY0gvNkVIbDBRbzljOTkyZVBTcmEzNEU0cHpmZXRBUE8Kdm1Ia0Q2Z0dCb1FyVUI3NHFMOFZpaUs4c0hZQXcyWElxMERTZHhHb3VqcUVabUM4SnpSK3gxVE1CaUZ2YUR4dQpaZVpTT3JTc1R2dGN6TjNnMG5XK0xPY1Q2UCtGQlRLbzh1RXBjbXY5cll1ZytOR25xZ0l3L0VNRXlQenM4RGk1CkhzYVJma0FwSmloeERUdTBTY1Z5MkpaakxZZ2RBMUlaSkRScjV6Unc1U3RlWlltTm5rVTY5cEtVVlNlQ2lQWnUKMFdlY3ZaTXE1NDhKWWtmUStWY3pFMjFtUTBJMSs4NXpOUUFvQmZ4aG5tYjNhcW5yL2hEdUZETm9PelIrdCtUSApteTU2ajRWTUtzY3RvNUxkOFFFQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZNZVlQcGVuYmV3RUg4bFFKdDlxaUs4bG5QWmFNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFDbFpIZGQrZDlWWElobTdsdhskhdjshdjsahdjkasdhkasdhasXOUhQNC9HMXRScTVLUWtZSlJjVHdreGZWNUlhMS8zNW1vRwpyeU5SOVZzYnRZeDF6aFNsRy91NWRGOWFYYjI3M2J4bWNEOVY0UUQvamNXMWRsdnJ6NlFWMGg3dEcwcUd6UG1xClUveC9saXJaTWMrTmVKSXJXZGo5ZjM5dXFuR2VCZnF6ZWN4QXBoRG5xY1dUNWZTVjlSVjdqaE5sNnhSZUVlRGMKUmZQMnFlb3g4d0xyYXBiVDVOSG9PK1FjS3NoUHhPL0FTNXhVVE9yOTZ2YTZkSFhzZFdsQWdaTUtva1lldlN1SApBdjVrYml3ODJBVzlaOHZrS0QrQXdFSWFwdzNNQnEvOUFxQjZBZm93RTJCckZVcTdwVzk3ZHUvRC81NWxQbTN5CllmVFo3ZVZnQUF4Yk1lTDRDdlhSZ1FJWHB5NmROTFN0SGJCSAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
echo -n LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01UQXlPVEV4TkRVeU9Gb1hEVE13TVRBeU56RXhORFV5T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTEl4CmpjMCttcGVXWm5TL3NnenloZ1Ftd1ZaN3A1b2hoUktqY0gvNkVIbDBRbzljOTkyZVBTcmEzNEU0cHpmZXRBUE8Kdm1Ia0Q2Z0dCb1FyVUI3NHFMOFZpaUs4c0hZQXcyWElxMERTZHhHb3VqcUVabUM4SnpSK3gxVE1CaUZ2YUR4dQpaZVpTT3JTc1R2dGN6TjNnMG5XK0xPY1Q2UCtGQlRLbzh1RXBjbXY5cll1ZytOR25xZ0l3L0VNRXlQenM4RGk1CkhzYVJma0FwSmloeERUdTBTY1Z5MkpaakxZZ2RBMUlaSkRScjV6Unc1U3RlWlltTm5rVTY5cEtVVlNlQ2lQWnUKMFdlY3ZaTXE1NDhKWWtmUStWY3pFMjFtUTBJMSs4NXpOUUFvQmZ4aG5tYjNhcW5yL2hEdUZETm9PelIrdCtUSApteTU2ajRWTUtzY3RvNUxkOFFFQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZNZVlQcGVuYmV3RUg4bFFKdDlxaUs4bG5QWmFNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFDbFpIZGQrZDlWWElobTdsdhskhdjshdjsahdjkasdhkasdhasXOUhQNC9HMXRScTVLUWtZSlJjVHdreGZWNUlhMS8zNW1vRwpyeU5SOVZzYnRZeDF6aFNsRy91NWRGOWFYYjI3M2J4bWNEOVY0UUQvamNXMWRsdnJ6NlFWMGg3dEcwcUd6UG1xClUveC9saXJaTWMrTmVKSXJXZGo5ZjM5dXFuR2VCZnF6ZWN4QXBoRG5xY1dUNWZTVjlSVjdqaE5sNnhSZUVlRGMKUmZQMnFlb3g4d0xyYXBiVDVOSG9PK1FjS3NoUHhPL0FTNXhVVE9yOTZ2YTZkSFhzZFdsQWdaTUtva1lldlN1SApBdjVrYml3ODJBVzlaOHZrS0QrQXdFSWFwdzNNQnEvOUFxQjZBZm93RTJCckZVcTdwVzk3ZHUvRC81NWxQbTN5CllmVFo3ZVZnQUF4Yk1lTDRDdlhSZ1FJWHB5NmROTFN0SGJCSAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== | base64 -d -w 0 > remote.ca.crt
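For reference, a jsonpath variant avoids the grep/clipboard step entirely (the dot inside the key has to be escaped):

kubectl get secret europecluster-es-transport-certs-public \
  -o jsonpath='{.data.ca\.crt}' | base64 -d > remote.ca.crt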
I currently have 2 services running on a single node (EC2 instance) with a consul client. I would like to health check both of these by hitting a single endpoint, namely: http://localhost:8500/v1/agent/health/service/id/AliasService based on the information Consul provides from https://www.consul.io/api/agent/service.html.
The issue is that I can't seem to find any sort of documentation regarding this AliasService, just that I can use it to run health checks. I've tried putting it into my service definitions but to no avail. It just seems to ignore it altogether.
It seems that what you need is to manually define both services and then attach a regular health check (TCP in the example below) to one of them and an alias health check to the other. What is being aliased here is not the service but the health check.
For example:
$ consul services register -name ssh -port 22
$ consul services register -name ssh-alias -port 22 -address 172.17.0.1
$ cat >ssh-check.json
{
  "ID": "ssh",
  "Name": "SSH TCP on port 22",
  "ServiceID": "ssh",
  "TCP": "localhost:22",
  "Interval": "10s",
  "Timeout": "1s"
}
$ curl --request PUT --data @ssh-check.json http://127.0.0.1:8500/v1/agent/check/register
$ cat >ssh-alias-check.json
{
  "ID": "ssh-alias",
  "Name": "SSH TCP on port 22 - alias",
  "ServiceID": "ssh-alias",
  "AliasService": "ssh"
}
$ curl --request PUT --data @ssh-alias-check.json http://127.0.0.1:8500/v1/agent/check/register
Here I have defined two separate services and two health checks. But only the first health check is doing actual work; the second is aliasing the health status from one service to the other.
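With those registered, you can hit the agent health endpoint from your question for either service ID, e.g. (a sketch using the IDs above):

curl -s http://127.0.0.1:8500/v1/agent/health/service/id/ssh-alias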
Is there any way to run knife vsphere for unattended execution? I have a deploy shell script which I am using to help me:
cat deploy-production-20-vm.sh
#!/bin/bash
##############################################
# These are machine dependent variables (need to change)
##############################################
HOST_NAME=$1
IP_ADDRESS="$2/24"
CHEF_BOOTSTRAP_IP_ADDRESS="$2"
RUNLIST=\"$3\"
CHEF_HOST=$HOSTNAME.my.lan
##############################################
# These are pseudo-environment independent variables (could change)
##############################################
DATASTORE="dcesxds04"
##############################################
# These are environment dependent variables (should not change per env)
##############################################
TEMPLATE="\"CentOS\""
NETWORK="\"VM Network\""
CLUSTER="ProdCluster01" #knife-vsphere calls this a resource pool
GATEWAY="10.7.20.1"
DNS="\"10.7.20.11,10.8.20.11,10.6.20.11\""
##############################################
# the magic
##############################################
VM_CLONE_CMD="knife vsphere vm clone $HOST_NAME \
--template $TEMPLATE \
--cips $IP_ADDRESS \
--vsdc MarkleyDC\
--datastore $DATASTORE \
--cvlan $NETWORK\
--resource-pool $CLUSTER \
--cgw $GATEWAY \
--cdnsips $DNS \
--start true \
--bootstrap true \
--fqdn $CHEF_BOOTSTRAP_IP_ADDRESS \
--chost $HOST_NAME\
--cdomain my.lan \
--run-list=$RUNLIST"
echo $VM_CLONE_CMD
eval $VM_CLONE_CMD
Which echoes (as a single line):
knife vsphere vm clone dcbsmtest --template "CentOS" --cips 10.7.20.84/24
--vsdc MarkleyDC --datastore dcesxds04 --cvlan "VM Network"
--resource-pool ProdCluster01 --cgw 10.7.20.1
--cdnsips "10.7.20.11,10.8.20.11,10.6.20.11" --start true
--bootstrap true --fqdn 10.7.20.84 --chost dcbsmtest --cdomain my.lan
--run-list="role[my-env-prod-server]"
When it runs it outputs:
Cloning template CentOS Template to new VM dcbsmtest
Finished creating virtual machine dcbsmtest
Powered on virtual machine dcbsmtest
Waiting for sshd...done
Doing old-style registration with the validation key at /home/me/chef-repo/.chef/our-validator.pem...
Delete your validation key in order to use your user credentials instead
Connecting to 10.7.20.84
root@10.7.20.84's password:
If I step away from my desk and it prompts for a password, then sometimes it times out, the connection is lost and Chef doesn't bootstrap. Also, I would like to be able to automate all of this to be elastic based on system needs, which won't work with attended execution.
The idea I am going to run with, unless a better solution is provided, is to have a default password in the template and pass it on the command line to knife, then have Chef change the password once the build is complete, minimizing the exposure of a hard-coded password in the bash script controlling knife...
Update: I wanted to add that this is working like a charm. Ideally we could have changed the CentOS template we were deploying - but it wasn't possible here - so this is a fine alternative (as we changed the root password after deploy anyhow).
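In script terms, the idea looks roughly like this sketch (the option names here are an assumption on my part, since they vary between knife-vsphere versions; verify with knife vsphere vm clone --help):

# passed in as a fourth argument rather than hard-coded in the script
DEFAULT_SSH_PWD="$4"
# appended to the existing VM_CLONE_CMD before the echo/eval
VM_CLONE_CMD="$VM_CLONE_CMD --ssh-user root --ssh-password $DEFAULT_SSH_PWD"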