I'm trying to run GitLab Runner locally, but ...
This works ...
❯ docker pull registry.gitlab.com/{MY_PROJECT}
❯ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.gitlab.com/{MY_PRIVATE_IMAGE} latest XXXX 2 days ago 605MB
❯ gitlab-runner verify
WARNING: Running in user-mode.
WARNING: Use sudo for system-mode:
WARNING: $ sudo gitlab-runner...
Verifying runner... is alive runner={XXXX}
❯ cat ~/.gitlab-runner/config.toml
concurrent = 1
check_interval = 0

[[runners]]
  name = "macbook-{XXXX}"
  url = "https://gitlab.com/"
  token = "XXXXXXX"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "registry.gitlab.com/{MY_PRIVATE_IMAGE}:latest"
    privileged = true
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
    pull_policy = "if-not-present"
  [runners.cache]
❯ cat ../../../.docker/config.json
{
  "auths": {
    "https://index.docker.io/v1/": {},
    "https://registry.gitlab.com": {},
    "registry.gitlab.com": {}
  },
  "credsStore": "osxkeychain"
}
In my project, when I try to execute the runner ...
❯ gitlab-runner exec docker lint
WARNING: You most probably have uncommitted changes.
WARNING: These changes will not be tested.
Running with gitlab-ci-multi-runner 9.4.0 (ef0b1a6)
on ()
Using Docker executor with image registry.gitlab.com/{MY_PRIVATE_IMAGE} ...
map[]
Using docker image sha256:XXXX for predefined container...
Pulling docker image registry.gitlab.com/{MY_PRIVATE_IMAGE} ...
ERROR: Preparation failed: Error response from daemon: Get https://registry.gitlab.com/v2/{MY_PRIVATE_IMAGE}/manifests/latest: denied: access forbidden
Will be retried in 3s ...
Using Docker executor with image registry.gitlab.com/{MY_PRIVATE_IMAGE} ...
map[]
Using docker image sha256:XXX for predefined container...
ERROR: Preparation failed: Error response from daemon: Get {MY_PRIVATE_IMAGE}/manifests/latest: denied: access forbidden
Open your ~/.docker/config.json file and replace the credsStore entry with an empty string, then run docker login <your-registry> again and it should work.
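For example, a minimal sketch using the registry host from the question (the login needs a GitLab account or token that can read the registry):

# after emptying or removing the "credsStore": "osxkeychain" entry in ~/.docker/config.json
docker login registry.gitlab.com
docker pull registry.gitlab.com/{MY_PRIVATE_IMAGE}:latest
gitlab-runner exec docker lint

After the login, the auth entry is stored in ~/.docker/config.json itself rather than in the macOS keychain, which is where the runner's Docker executor can read it.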
I have a Terraform+Ansible combination that sets up an OVH cloud instance, and then runs an Ansible playbook on it using provisioners. When I run this locally, I can supply the public and private keys directly via the command line (not using file paths), and the terraform apply works perfectly.
On Terraform Cloud, I create the keys as variables. When I run the Terraform plan, the remote-exec provisioner works, and connects to the instance as it should. However, the local-exec fails with a Permission denied (publickey). What am I missing?
My provisioner blocks:
# Dummy resource to hold the provisioner that runs ansible
resource "null_resource" "run_ansible" {
  provisioner "remote-exec" {
    inline = ["sudo apt update", "sudo apt install python3 -y", "echo Done!"]

    connection {
      host        = openstack_compute_instance_v2.test_instance.network[0].fixed_ip_v4
      type        = "ssh"
      user        = "ubuntu"
      private_key = var.pvt_key
    }
  }

  provisioner "local-exec" {
    command = "python3 -m pip install --no-input ansible; ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u ubuntu -i '${openstack_compute_instance_v2.test_instance.network[0].fixed_ip_v4},' '--private-key=${var.pvt_key}' -e 'pub_key=${var.pub_key}' ansible/setup.yml"
  }
}
Terraform Cloud run error:
TASK [Gathering Facts] *********************************************************
fatal: [xx.xxx.xxx.xx]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added 'xx.xxx.xxx.xx' (ECDSA) to the list of known hosts.\r\nno such identity: /home/tfc-agent/.tfc-agent/component/terraform/runs/run-AhaANkduM9YXJVoC/config/<<EOT\n-----BEGIN OPENSSH PRIVATE KEY-----<private-key>-----END OPENSSH PRIVATE KEY-----\nEOT: No such file or directory\r\nubuntu@xx.xxx.xxx.xx: Permission denied (publickey).", "unreachable": true}
I solved the problem by creating (sensitive) key files on the Terraform Cloud host and passing their paths to Ansible instead.
The variables are still supplied via Terraform Cloud, but without the heredoc syntax.
I had to add an extra newline (\n) at the end of each key to work around it being stripped. See the following issue: https://github.com/ansible/awx/issues/9082.
resource "local_sensitive_file" "key_file" {
content = "${var.pvt_key}\n"
filename = "${path.root}/.ssh/key"
file_permission = "600"
directory_permission = "700"
}
resource "local_sensitive_file" "pubkey_file" {
content = "${var.pub_key}\n"
filename = "${path.root}/.ssh/key.pub"
file_permission = "644"
directory_permission = "700"
}
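With those files in place, the local-exec command can point Ansible at the paths instead of the raw key material. A rough sketch of the adjusted command, where <instance-ip> stands in for the openstack_compute_instance_v2 interpolation used above and the paths are relative to path.root:

python3 -m pip install --no-input ansible
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u ubuntu \
  -i '<instance-ip>,' \
  --private-key=.ssh/key \
  -e "pub_key=$(cat .ssh/key.pub)" \
  ansible/setup.yml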
I created 3 Jenkins jobs linked to the same GitHub project. I'm using WDIO v5 and Cucumber, and I want to run each job on a different port, which is why I'm trying to pass the port from the Jenkins post-build task: execute shell.
I tried this
-- --seleniumArgs.seleniumArgs= ['-port', '7777']
then this
-- --seleniumArgs.seleniumArgs= ["-port", "7777"]
then
-- --seleniumArgs.seleniumArgs= '-port: 7777'
but nothing works.
I found a solution.
This is the wdio.conf.js file:
var myArgs = process.argv.slice(2);
var Port = myArgs[1];

exports.config = {
  ////////////////////////
  services: ['selenium-standalone'],
  seleniumArgs: {
    seleniumArgs: ['-port', Port]
  },
  //////////////////////
}
myArgs will receive an array of the arguments passed on the command line,
and this is the command:
npm test 7777 -- --port 7777
The 7777 is argument number 2, and thus index 1 in the array;
index 0 is wdio.conf.js, which comes from the "test" script in package.json
===> "test": "wdio wdio.conf.js"
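Each of the three Jenkins jobs can then pass its own port in its execute-shell step, along the lines of the following sketch (it just reuses the command shape above; the port values are arbitrary):

npm test 7777 -- --port 7777
npm test 7778 -- --port 7778
npm test 7779 -- --port 7779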
I have the following launch config for an auto-scaling group:
resource "aws_launch_configuration" "ASG-launch-config" {
#name = "ASG-launch-config" # see: https://github.com/hashicorp/terraform/issues/3665
name_prefix = "ASG-launch-config-"
image_id = "ami-a4dc46db" #Ubuntu 16.04 LTS
#image_id = "ami-b70554c8" #Amazon Linux 2
instance_type = "t2.micro"
security_groups = ["${aws_security_group.WEB-DMZ.id}"]
key_name = "MyEC2KeyPair"
#user_data = <<-EOF
# #!/bin/bash
# echo "Hello, World" > index.html
# nohup busybox httpd -f -p "${var.server_port}" &
# EOF
provisioner "file" {
source="script.sh"
destination="/tmp/script.sh"
}
provisioner "remote-exec" {
inline=[
"chmod +x /tmp/script.sh",
"sudo /tmp/script.sh"
]
}
connection {
user="ubuntu"
private_key="${file("MyEC2KeyPair.pem")}"
}
lifecycle {
create_before_destroy = true
}
}
Error: Error applying plan:
1 error(s) occurred:
aws_launch_configuration.ASG-launch-config: timeout - last error: dial tcp :22: connectex: No connection could be made because the target machine actively refused it.
I want to run a bash script to basically install WordPress on the instances created.
The script runs fine in a resource of type "aws_instance" "example".
How do I troubleshoot this?
Sounds like the instance is denying your traffic. Start up an instance without the provisioning script and see if you can SSH to it using the key you provided. You may want to add verbose logging to the SSH command with -v.
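A minimal sketch of that check, assuming the instance's public IP is filled in and the key pair from the config is available locally:

# bring up an instance without the provisioners, then try to reach it directly;
# -v prints verbose output showing where the connection fails
ssh -v -i MyEC2KeyPair.pem ubuntu@<instance-public-ip>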
When I run terraform plan -var-file=../variables.tfvars,
everything passes fine.
But when I then run terraform apply -var-file=../variables.tfvars,
it gives me this error, and I don't know how to solve it because the directory path is correct.
Error: Error applying plan:
1 error(s) occurred:
* aws_instance.mongodb_server: 1 error(s) occurred:
* Error running command 'sleep 60 && export ANSIBLE_HOST_KEY_CHECKING=False && echo "[mongodb]
54.193.20.170" > /tmp/inventory.ws && ansible-playbook -i /tmp/inventory.ws -e "mongodb_password=blahblah" -e "mongodb_user=admin" -u ec2-user -b --private-key=../BASE/files/joujou.pem ../DATABASE/files/ansible-mongodb-standalone/mongodb.yml': exit status 127. Output: /bin/sh: 2: ansible-playbook: not found
The code looks like this:
resource "aws_instance" "mongodb_server" {
instance_type = "${lookup(var.mongodb_instance_type_control,
var.target_env)}"
vpc_security_group_ids =
["${aws_security_group.default_internal.id}"]
ami = "${lookup(var.amazon_ami_by_location, var.aws_region)}"
key_name = "${var.key_name}"
subnet_id = "${data.aws_subnet.subnet_a.id}"
tags {
Name = "tf-mongodb-${lookup(var.environment, var.target_env)}"
}
associate_public_ip_address = true
provisioner "local-exec" {
command = "sleep 60 && export ANSIBLE_HOST_KEY_CHECKING=False && echo \"[mongodb]\n${aws_instance.mongodb_server.public_ip}\" > /tmp/inventory.ws && ansible-playbook -i /tmp/inventory.ws -e \"mongodb_password=${var.mongodb_default_password}\" -e \"mongodb_user=${var.mongodb_default_username}\" -u ec2-user -b --private-key=../BASE/files/joujou.pem ../DATABASE/files/ansible-mongodb-standalone/mongodb.yml"
}
Output: /bin/sh: 2: ansible-playbook: not found
This is your actual error. terraform plan does not catch it, because local-exec commands are not evaluated by terraform plan.
Do you have Ansible installed on the machine where you are running the above Terraform? And if it is installed, is it on the PATH?
Try installing Ansible if it is not installed already. If it is already installed, do an echo $PATH in your local-exec command and confirm that ansible is present in the given path.
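For example, something along these lines on the machine that runs terraform apply (just a sketch; pip is only one way to install Ansible):

# is ansible-playbook visible on the PATH that local-exec's /bin/sh will use?
which ansible-playbook
echo $PATH

# install it if it is missing (a distro package or virtualenv works too)
pip install --user ansible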
I have a problem using xinetd; the error message is 'xinetd[20126]: execv( /home/fulu/download/mysqlchk_status2.sh ) failed: Exec format error (errno = 8)'.
The operating system is CentOS release 6.2.
I installed xinetd with the command 'sudo yum install xinetd'.
I edited /etc/services and added port 6033 for my service named 'mysqlchk'.
The 'mysqlchk' service definition in /etc/xinetd.d/mysqlchk is:
service mysqlchk
{
        disable         = no
        flags           = REUSE
        socket_type     = stream
        port            = 6033
        wait            = no
        user            = fulu
        server          = /home/fulu/download/mysqlchk_status2.sh
        log_on_failure  += USERID
}
The content of the shell file /home/fulu/download/mysqlchk_status2.sh is
echo 'test'
I can run /home/fulu/download/mysqlchk_status2.sh directly from the command line and get the result 'test'.
When I telnet 127.0.0.1 6033, I get the output:
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Connection closed by foreign host.
Then I tail the log file /var/log/messages, and it shows:
Apr 22 22:01:47 AY1304111122016 xinetd[20001]: START: mysqlchk pid=20126 from=127.0.0.1
Apr 22 22:01:47 AY1304111122016 xinetd[20126]: execv( /home/fulu/download/mysqlchk_status2.sh ) failed: Exec format error (errno = 8)
Apr 22 22:01:47 AY1304111122016 xinetd[20001]: EXIT: mysqlchk status=0 pid=20126 duration=0(sec)
I don't know why. Can anybody help me?
I'm sorry, shortly after asking I suddenly found the answer myself. If you want the shell script to be run by another program, you need to add a shebang line such as '#!/bin/echo' as the first line of the shell file (of course the echo can be changed to another interpreter).
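A minimal sketch of the fix, using #!/bin/bash as the interpreter (any valid interpreter line works; the paths are the ones from the question):

# put a shebang on the first line of the script and make sure it is executable
printf '#!/bin/bash\necho "test"\n' > /home/fulu/download/mysqlchk_status2.sh
chmod +x /home/fulu/download/mysqlchk_status2.sh

# execv() can only run the file if it starts with an interpreter line
head -1 /home/fulu/download/mysqlchk_status2.sh

# restart xinetd and test the port again
sudo service xinetd restart
telnet 127.0.0.1 6033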