I'm using this command to create an instance on AWS:
docker-machine create \
-d amazonec2 \
--amazonec2-region ap-northeast-1 \
--amazonec2-zone a \
--amazonec2-ami ami-XXXXXX \
--amazonec2-keypair-name my_key_pair \
--amazonec2-ssh-keypath ~/.ssh/id_rsa \
my_instance
I can't connect to it over SSH.
my_key_pair is the name of a key pair that exists on AWS, and ~/.ssh/id_rsa is my local SSH private key. How do I set the right values?
I have read the documentation but didn't find an example that uses both --amazonec2-keypair-name and --amazonec2-ssh-keypath.
Download the key file from "Key Pairs" in the AWS Console and place it in ~/.ssh.
Then run:
docker-machine create \
-d amazonec2 \
--amazonec2-region ap-northeast-1 \
--amazonec2-zone a \
--amazonec2-ami ami-XXXXXX \
--amazonec2-ssh-keypath ~/.ssh/keypairfile \
my_instance
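If the machine comes up, a quick sanity check is to let docker-machine open the SSH session itself, since it remembers which key it was given (a small sketch; my_instance is the name from the command above):
docker-machine ssh my_instance
docker-machine env my_instance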
In gitlab-runner with the Docker+machine executor, you have to provide both "amazonec2-keypair-name=XXX" and
"amazonec2-ssh-keypath=XXX".
The keypath should be something like /home/gitlab-runner/.ssh/id_rsa, and the same directory must also contain the matching id_rsa.pub next to id_rsa. These two files should not be your locally generated key pair; they should be the id_rsa and id_rsa.pub derived from the PEM file created on AWS.
The following commands will do the trick:
cat faregate-test.pem > /home/gitlab-runner/.ssh/id_rsa
ssh-keygen -y -f faregate-test.pem > /home/gitlab-runner/.ssh/id_rsa.pub
This will allow the runner manager instance to connect to the runners it provisions using your existing AWS key pair...
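Since SSH refuses keys with loose permissions, it may also help to tighten ownership and mode on the two files (a minimal sketch, assuming the runner manager runs as the gitlab-runner user):
chown gitlab-runner:gitlab-runner /home/gitlab-runner/.ssh/id_rsa /home/gitlab-runner/.ssh/id_rsa.pub
chmod 600 /home/gitlab-runner/.ssh/id_rsa
chmod 644 /home/gitlab-runner/.ssh/id_rsa.pub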
I have an OpenStack server on which I want to create an instance with a user-data file, for example:
openstack server create --flavor 2 --image 34bf1632-86ed-46ca-909e-c6ace830f91f --nic net-id=d444145e-3ccb-4685-88ee --security-group default --key-name Adeel --user-data ./adeel/script.sh m3
script.sh contains:
#cloud-config
password: mypasswd
chpasswd: { expire: False }
ssh_pwauth: True
#!/bin/sh
wget https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-7.17.7-linux-x86_64.tar.gz && tar -xzf elastic-agent-7.17.7-linux-x86_64.tar.gz
cd elastic-agent-7.17.7-linux-x86_64
sudo ./elastic-agent install \
--fleet-server-es=http://localhost:9200 \
--fleet-server-service-token=AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2Njc0MDM1 \
--fleet-server-policy=499b5aa7-d214-5b5d \
--fleet-server-insecure-http
When I add this script, nothing gets executed. I want to run the above script when my instance boots for the first time.
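For what it's worth, cloud-init decides how to handle user-data from its first line, so a file starting with #cloud-config is parsed purely as YAML and the trailing #!/bin/sh part is not executed; combining both in one file needs MIME multi-part user-data. A script-only user-data file that runs once at first boot would start with the shebang instead (a sketch that only reuses the URL, token and policy from the question):
#!/bin/sh
# user-data scripts run as root, so sudo is not needed
wget https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-7.17.7-linux-x86_64.tar.gz
tar -xzf elastic-agent-7.17.7-linux-x86_64.tar.gz
cd elastic-agent-7.17.7-linux-x86_64
./elastic-agent install \
  --fleet-server-es=http://localhost:9200 \
  --fleet-server-service-token=AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2Njc0MDM1 \
  --fleet-server-policy=499b5aa7-d214-5b5d \
  --fleet-server-insecure-http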
I installed the Elasticsearch agent (Metricbeat) on a VM of my AKS cluster. No data is being sent to Kibana in Elastic Cloud (which is hosted outside AKS, not inside the cluster). When I tested the Metricbeat modules, there was an error:
azureuser@aks-agentpool-yyyyyy-0:/var/log/elasticsearch$ sudo metricbeat test modules
Error getting metricbeat modules:
module initialization error:
5 errors:
reading bearer token file:
open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory; reading bearer token file:
open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory; reading bearer token file:
open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory; reading bearer token file:
open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory; reading bearer token file:
open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
I checked, and the error seems to come from the following settings in kubernetes.yml:
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl.certificate_authorities:
- /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
How can I create the secret on the Azure VM to solve this problem?
Supplements:
I followed https://learn.microsoft.com/en-us/azure/aks/ssh to connect to the VMSS/VM and then installed Metricbeat, but there is still no data being sent to Kibana.
Below are the steps that I performed:
# CLUSTER_RESOURCE_GROUP=$(az aks show --resource-group XXX --name YYY --query nodeResourceGroup -o tsv)
# SCALE_SET_NAME=$(az vmss list --resource-group $CLUSTER_RESOURCE_GROUP --query [0].name -o tsv)
# az vmss extension set \
--resource-group $CLUSTER_RESOURCE_GROUP \
--vmss-name $SCALE_SET_NAME \
--name VMAccessForLinux \
--publisher Microsoft.OSTCExtensions \
--version 1.4 \
--protected-settings "{\"username\":\"azureuser\", \"ssh_key\":\"$(cat ~/.ssh/id_rsa.pub)\"}"
# az vmss update-instances --instance-ids '*' \
--resource-group $CLUSTER_RESOURCE_GROUP \
--name $SCALE_SET_NAME
# kubectl get nodes -o wide
# az vm list --resource-group $CLUSTER_RESOURCE_GROUP -o table
# az vm list-ip-addresses --resource-group $CLUSTER_RESOURCE_GROUP -o table
# kubectl run --generator=run-pod/v1 -it --rm aks-ssh --image=debian
// Inside aks-ssh
apt-get update && apt-get install openssh-client -y
// Open another terminal then copy SSH key
# kubectl cp ~/.ssh/id_rsa $(kubectl get pod -l run=aks-ssh -o jsonpath='{.items[0].metadata.name}'):/id_rsa
// Inside aks-ssh again
#chmod 0600 id_rsa
// Connect to vmss/VM:
#ssh -i id_rsa azureuser@10.240.0.4
// -- in VMSS --
// Download Metricbeat(use deb)
# curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.5.0-amd64.deb
# sudo dpkg -i metricbeat-7.5.0-amd64.deb
// Modify metricbeat.yml:
# sudo nano /etc/metricbeat/metricbeat.yml
// Add below in “Elastic Cloud” section
cloud.id: "<--id-->"
cloud.auth: "<--auth-->"
// Enable Kubernetes
# sudo metricbeat modules enable kubernetes
// Modify kubernetes.yml
# sudo nano /etc/metricbeat/modules.d/kubernetes.yml
// Start Metricbeat
# sudo metricbeat setup
# sudo service metricbeat start
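One thing worth noting: /var/run/secrets/kubernetes.io/serviceaccount/ only exists inside pods, so a Metricbeat installed directly on the VM has no bearer token or CA certificate at those paths. A hedged sketch of supplying a token instead, run from a machine with kubectl access (the service-account name "metricbeat" and the target path on the VM are assumptions, not from the question):
# create a service account and extract its token (older clusters create the token secret automatically)
kubectl create serviceaccount metricbeat
kubectl get secret "$(kubectl get serviceaccount metricbeat -o jsonpath='{.secrets[0].name}')" \
  -o jsonpath='{.data.token}' | base64 -d > metricbeat.token
# copy metricbeat.token to the VM (e.g. with scp) and point kubernetes.yml at it:
#   bearer_token_file: /etc/metricbeat/metricbeat.token
# the ssl.certificate_authorities entry likewise needs a CA file that actually exists on the VM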
I'm running this command to get the public IP of an EC2 instance. It works as expected!
aws ec2 describe-instances --filters "Name=tag:Name,Values=EC2" --query 'Reservations[*].Instances[*].PublicIpAddress' --output text
I'm trying to use this value in the following command, so instead of XX.XX.XX.XX it has to be the value I get from ec2 describe-instances:
ansible-playbook provisioning/site.yml -i inventory --ssh-common-args '-o "proxycommand ssh -W %h:%p -i Key.pem ubuntu@XX.XX.XX.XX"'
So I have my commands like this:
IP=aws ec2 describe-instances --filters "Name=tag:Name,Values=$NAME_PREFIX*" --query 'Reservations[*].Instances[*].PublicIpAddress' --output text
export BASTION="ubuntu@${IP}"
ansible-playbook provisioning/site.yml -i inventory --ssh-common-args '-o "proxycommand ssh -W %h:%p -i Key.pem ${BASTION}"'
So I'm not sure how to use that output; could you help me, please?
It is better to add a "Name=instance-state-name,Values=running" clause to the filters to make sure you only get running instances, so your aws command would be:
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" "Name=tag:Name,Values=$NAME_PREFIX*" --query 'Reservations[*].Instances[*].PublicIpAddress' --output text
Now, to store the output IP in a variable, you can use command substitution:
ip=$(aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" "Name=tag:Name,Values=$NAME_PREFIX*" --query 'Reservations[*].Instances[*].PublicIpAddress' --output text)
Then to use it:
export BASTION="ubuntu@$ip"
ansible-playbook provisioning/site.yml -i inventory --ssh-common-args '-o "proxycommand ssh -W %h:%p -i Key.pem '"$BASTION"'"'
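The trailing '"$BASTION"' closes the single-quoted string, splices in the expanded variable, and reopens the quoting. A quick echo shows what Ansible will actually receive (a sketch):
echo '-o "proxycommand ssh -W %h:%p -i Key.pem '"$BASTION"'"'
# prints the -o option with the real address substituted in place of ubuntu@XX.XX.XX.XX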
I have created an Oracle 12c Docker instance on my Mac (Sierra). I can do everything outlined in this link (bring it up, connect to it, create a table, insert data):
https://www.toadworld.com/platforms/oracle/b/weblog/archive/2017/06/21/modularization-by-using-oracle-database-containers-and-pdbs-on-docker-engine
In the docker toolkit I have mapped a shared drive /Users/user/projects/database.
I am executing this command:
docker run --name oraclecdb \
-p 1521:1521 -p 5500:5500 \
-e ORACLE_SID=ORCLCDB \
-e ORACLE_PDB=ORCLPDB1 \
-e ORACLE_PWD=oracle \
-v /Users/user/projects/database/oradata:/home/oracle/oradata \
oracle/database:12.2.0.1-ee
"oradata" gets created, but the pluggable database never gets persisted to the shared volume. So what am I missing?
Turns out that /home/oracle/oradata should be /opt/oracle/oradata, since that is where the image keeps its datafiles.
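Putting the fix together, only the container-side path in the -v mapping changes (a sketch based on the command from the question):
docker run --name oraclecdb \
-p 1521:1521 -p 5500:5500 \
-e ORACLE_SID=ORCLCDB \
-e ORACLE_PDB=ORCLPDB1 \
-e ORACLE_PWD=oracle \
-v /Users/user/projects/database/oradata:/opt/oracle/oradata \
oracle/database:12.2.0.1-ee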
I'm unable to bootstrap my server because "knife ec2 server create" keeps expanding my runlist to "roles".
knife ec2 server create \
-V \
--run-list 'role[pgs]' \
--environment $1 \
--image $AMI \
--region $REGION \
--flavor $PGS_INSTANCE_TYPE \
--identity-file $SSH_KEY \
--security-group-ids $PGS_SECURITY_GROUP \
--subnet $PRIVATE_SUBNET \
--ssh-user ubuntu \
--server-connect-attribute private_ip_address \
--availability-zone $AZ \
--node-name pgs \
--tags VPC=$VPC
It consistently fails because 'role[pgs]' is expanded to 'roles'. Why is this? Is there some escaping or alternative method I can use?
I'm currently working around this by bootstrapping with an empty run-list and then overriding the run-list by running chef-client once the node is registered.
This is a feature of bash: [] is a wildcard (glob) pattern. You can escape the brackets using "\".
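For example, escaping the brackets keeps the shell from treating them as a glob pattern (a sketch showing only the affected option; the remaining options stay as in the original command):
knife ec2 server create --run-list role\[pgs\] ...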