I am looking for ways to execute commands on a remote server over SSH from Cloud Build.
Below is my current cloudbuild.yaml:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - kms
  - decrypt
  - --ciphertext-file=build.pem.encrypted
  - --plaintext-file=build.pem
  - --location=asia-southeast1
  - --keyring=keyring
  - --key=build-key
- name: 'ubuntu'
  args: ['chmod', '400', './build.pem']
- name: 'ubuntu'
  args: ['bash', './deploy.bash']
And my deploy.bash looks like this:
#! /bin/bash
apt update
apt install -y openssh-client
mkdir ~/.ssh
touch ~/.ssh/known_hosts
ssh-keyscan -H somedomain.com >> ~/.ssh/known_hosts
ssh -i build.pem -T -v somedomain.com 'bash -s deploy1.bash'
And my deploy1.bash looks like:
#! /bin/bash
echo "Hello World!"
echo "It works"
I have been trying out different ways to make it work, but could not.
If anybody could recommend how to make it work, I would be very grateful.
Currently I am stuck at this step:
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
I managed to resolve my issue.
The issue was actually from sshguard, which was blocking the SSH session.
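Note also that bash -s reads its script from stdin, so the usual pattern redirects the local file into ssh rather than passing its name as an argument. A minimal sketch, reusing the key and host from the question:

# pipe the local deploy1.bash to the remote host's bash over ssh
ssh -i build.pem -T somedomain.com 'bash -s' < deploy1.bash

If sshguard is the blocker, whitelisting the build machine's address on the server side (for example in /etc/sshguard/whitelist, wherever your distribution keeps it) stops it from dropping the session.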
I'm a beginner with Ansible, and I need to run some basic tasks on a remote server.
The procedure is as follows:
I log in as some user (osadmin)
I run su - to become root
I then do the tasks I need to.
So, I wrote my playbook as follows:
---
- hosts: qualif
  vars:
    - ansible_user: osadmin
    - ansible_password: H1g2.D6#
  tasks:
    - name: Copy stuff from here to over there
      copy:
        src: /home/osadmin/file.txt
        dest: /home/osadmin/file-changed.txt
        owner: osadmin
        group: osadmin
        mode: 0777
Also, I have the following in vars/main.yml:
ansible_user: osadmin
ansible_password: password1
ansible_become_password: password2
[ some other values ]
However, when running my tasks, Ansible / the host returns the following:
"Incorrect sudo password"
I then changed my tasks so that, instead of becoming root and copying the file somewhere my osadmin user has no access, I just copy the file into /home/osadmin. So, theoretically, there is no need to become sudo for a simple copy.
The problem now is that not only does it keep saying "Incorrect sudo password", but if I remove the password, Ansible asks for it.
I then re-ran the command with -vvv at the end, and it showed me the following:
ESTABLISH SSH CONNECTION FOR USER: osadmin
SSH: EXEC sshpass -d10 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o User=osadmin -o ConnectTimeout=10 -o ControlPath=/home/osadmin/.ansible/cp/b9489e2193 -tt HOST-ADDRESS '/bin/sh -c '"'"'sudo -H -S -n -u
root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-ewujwywrqhcqfdrkaglvrouhmuiefwlj; /usr/bin/python /home/osadmin/.ansible/tmp/ansible-tmp-1550076004.1888492-11284794413477/AnsiballZ_setup.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
(1, b'sudo: a password is required\r\n', b'Shared connection to HOST-ADDRESS closed.\r\n')
As you can see, it somehow uses root, while I never told it to.
Does anyone know why Ansible keeps trying to use sudo, and how I can disable this?
Thank you in advance
There is a difference between 'su' and 'sudo'. If you have 'su' access, that usually means you can log in as root directly (maybe not, but it looks that way). Try ansible_ssh_user=root, ansible_password=password2.
If this doesn't work, try to configure sudo on the server: you should be able to run sudo whoami and get the answer root. After that your code should run.
One more thing: you are using the 'copy' module incorrectly. It uses src as a path on the local machine (where Ansible runs) and dest as a path on the remote machine.
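If the login account really must switch to root with su, Ansible's become support can express that directly. A minimal sketch (untested, reusing the placeholders from vars/main.yml):

- hosts: qualif
  tasks:
    - name: Copy stuff from here to over there
      become: true
      become_method: su      # switch with su instead of the default sudo
      copy:
        src: /home/osadmin/file.txt
        dest: /home/osadmin/file-changed.txt

With become_method: su, ansible_become_password is the root password rather than the user's own sudo password.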
I want to make an SSH connection automatically and install a package on the connected machine. I can establish the SSH connection automatically, and I can even run commands that do not require sudo. But I have not found a way to automatically enter the password for commands that require sudo. How can I enter the sudo password automatically?
asd.sh
/usr/bin/expect -c 'spawn ssh -t usr@ip bash "pwd; sudo apt-get update"; expect "password:"; send "12345\r"; interact;'
asd.sh output
spawn ssh -t usr@ip bash pwd; sudo apt-get update
usr@ip's password:
/bin/pwd: /bin/pwd: cannot execute binary file
[sudo] password for usr:
You need the -c argument to pass a command string to Bash. Also, try to have the pattern match the full line. Try with (note the added -c and the leading * in the expect pattern):
/usr/bin/expect -c 'spawn ssh -t usr@ip bash -c "pwd; sudo apt-get update"; expect "*password:"; send "12345\r"; interact;'
Note that for this kind of task, Ansible can be very helpful, as it takes care of all the boilerplate related to SSH and sudo and offers high-level modules to carry out any task easily.
The Ansible script ('playbook') would look like this (untested):
- hosts: ip
  tasks:
    - name: Update and upgrade apt packages
      become: true
      apt:
        upgrade: yes
You can store the SUDO password in a file, and that file can be encrypted.
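For example, with Ansible Vault (a sketch; secrets.yml is a hypothetical vars file holding ansible_become_password, and playbook.yml stands for your playbook):

# encrypt the vars file that holds the sudo password
ansible-vault encrypt secrets.yml
# load it into the run and unlock it interactively
ansible-playbook playbook.yml -e @secrets.yml --ask-vault-pass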
I'm trying to make a job for deployment on GitLab, and I'm writing the YAML file for it.
image: maven:3-jdk-8

testjob:
  script:
    - "apt-get update"
    - "apt-get install sshpass -y"
    - "echo installed"
    - "sshpass -p 'pass' ssh user@host"
    - "echo login successful"
    - "touch testfile.txt"
    - "echo finished"
But when I try to log in with sshpass, I get an error:
Pseudo-terminal will not be allocated because stdin is not a terminal
What is the problem?
You can try disabling pseudo-tty allocation with the -T option.
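For example, a sketch with the placeholder credentials from the question (StrictHostKeyChecking=no is only for illustration, since it skips host-key verification):

sshpass -p 'pass' ssh -T -o StrictHostKeyChecking=no user@host 'touch testfile.txt && echo finished'

Note that in the original job the touch and echo lines after the ssh step run on the CI runner, not on the remote host; passing the command to ssh, as above, runs it remotely.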
I'm using LFTP on GitLab CI to deploy a set of files. I've got this working nicely on one server that I've set up (a staging server using SFTP). However, on my client's server, I can't seem to connect. The server is set up using FTP and I have to use plain/insecure mode to connect via FileZilla - it does connect and work fine (although I'll be giving some advice to use SFTP in the future).
When I try to do the same using LFTP through the .gitlab-ci.yml file I get the following error:
Unknown command `ftp.example.com'.
mirror: Not connected
ERROR: Build failed: exit code 1
I suspect that this is because of using plain FTP, but I've tried changing hosts, putting ftp:// in front of the host, and a few other commands using set, with no luck.
Here's (an edited version of) my .gitlab-ci.yml file:
stages:
  - build-staging
  - build-production

variables:
  EXCLUDE: "--exclude '.htaccess' --exclude-glob .git* --exclude '.git/' --exclude 'wp-config.php'"
  SOURCE_DIR: "./"
  # STAGING
  DEST_DIR: "/"
  HOST_STAGING: "sftp://123.456.789"
  USERNAME_STAGING: "user"
  PASSWORD_STAGING: "password"
  # PRODUCTION
  DEST_DIR_PROD: "/"
  HOST_PROD: "ftp.example.com"
  USERNAME_PROD: "user"
  PASSWORD_PROD: "password"

job1:
  stage: build-staging
  environment: staging
  script:
    - apt-get update -qq && apt-get install -y -qq lftp
    - echo "Deploying"
    - lftp -c "set ftp:ssl-allow no; set sftp:auto-confirm yes; open -u $USERNAME_STAGING,$PASSWORD_STAGING $HOST_STAGING; mirror -Rv --ignore-time --parallel=10 $EXCLUDE $SOURCE_DIR $DEST_DIR_STAGING"
  only:
    - staging
  tags:
    - 2gb

job2:
  stage: build-production
  environment: production
  when: manual
  script:
    - apt-get update -qq && apt-get install -y -qq lftp
    - echo "Deploying"
    - lftp -c "set ftp:ssl-allow no; open -u $USERNAME_PROD,$PASSWORD_PROD $HOST_PROD; mirror -Rv --ignore-time --parallel=10 $EXCLUDE $SOURCE_DIR $DEST_DIR_PROD"
  only:
    - production
  tags:
    - 2gb
Any help would be great, thanks!
This was due to a special character in the password - my password ended with &, which caused lftp to expect a different command. To fix this, I removed the quotes and escaped the & with a backslash, like so:
PASSWORD_PROD: password\&
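An alternative sketch, assuming lftp's shell-like parser honours single quotes, is to keep the variable quoted and quote the password inside the lftp command string instead:

- lftp -c "set ftp:ssl-allow no; open -u $USERNAME_PROD,'$PASSWORD_PROD' $HOST_PROD; mirror -Rv --ignore-time --parallel=10 $EXCLUDE $SOURCE_DIR $DEST_DIR_PROD"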
I need to run playbooks on Vagrant boxes and on AWS when I set up the environment with CloudFormation.
In the Vagrantfile I use ansible-local and everything works fine:
- name: Setup Unified Catalog Webserver
  hosts: 127.0.0.1
  connection: local
  become: yes
  become_user: root
  roles:
    - generic
However, when I create an instance in AWS, the Ansible playbook fails with the error:
sudo: sorry, you must have a tty to run sudo
This happens because it is run as root and it doesn't have a tty. But I don't know how to fix it without changing /etc/sudoers to allow !requiretty.
Are there any flags I can set in ansible.cfg or in my CloudFormation template?
"#!/bin/bash\n", "\n", "
echo 'Installing Git'\n","
yum --nogpgcheck -y install git ansible htop nano wget\n",
"wget https://s3.eu-central-1.amazonaws.com/XXX -O /root/.ssh/id_rsa\n",
"chmod 600 /root/.ssh/id_rsa\n",
"ssh-keyscan 172.31.7.235 >> /root/.ssh/known_hosts\n",
"git clone git#172.31.7.235:something/repo.git /root/repo\n",
"ansible-playbook /root/env/ansible/test.yml\n
I was able to fix this by setting the transport = paramiko configuration in ansible.cfg.
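That is, in the [defaults] section of ansible.cfg:

[defaults]
transport = paramiko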
I have found the following solutions for myself:
1. Change requiretty in /etc/sudoers with sed, run the playbooks, and change it back:
"#!/bin/bash\n", "\n", "
echo 'Installing Git'\n","
yum --nogpgcheck -y install git ansible htop nano wget\n",
"wget https://s3.eu-central-1.amazonaws.com/xx/ansible -O /root/.ssh/id_rsa\n",
"chmod 600 /root/.ssh/id_rsa\n",
"ssh-keyscan 172.31.9.231 >> /root/.ssh/known_hosts\n",
"git clone git#172.31.5.254:somerepo/dev.git /root/dev\n",
"sed -i 's/Defaults requiretty/Defaults !requiretty/g' /etc/sudoers\n",
"\n",
"ansible-playbook /root/dev/env/ansible/uk.yml\n",
"\n",
"sed -i 's/Defaults !requiretty/Defaults requiretty/g' /etc/sudoers\n"
OR
2. In the Ansible playbook, specify a variable:
- name: Setup
  hosts: 127.0.0.1
  connection: local
  sudo: "{{ require_sudo }}"
  roles:
    - generic
The run in the AWS CloudFormation template would then be:
"ansible-playbook -e require_sudo=False /root/dev/env/ansible/uk.yml\n"
And for Vagrant it can be specified in ansible.cfg:
require_sudo=True
Also, the CF template may identify who is running and pass the variable accordingly:
ansible-playbook -e$(id -u |egrep '^0$' > /dev/null && require_sudo=False || require_sudo=True; echo "require_sudo=$require_sudo") /apps/ansible/uk.yml
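Spelled out, that one-liner is equivalent to this sketch:

# root (uid 0) does not need sudo; everyone else does
if [ "$(id -u)" -eq 0 ]; then require_sudo=False; else require_sudo=True; fi
ansible-playbook -e "require_sudo=$require_sudo" /apps/ansible/uk.yml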
If you need to specify connection: paramiko within just one playbook, versus a global configuration in ansible.cfg, you can add connection: paramiko to the play, for example:
- name: Run checks after deployments
  hosts: all
  # https://github.com/paramiko/paramiko/issues/1369
  connection: paramiko
  gather_facts: True