So I have the following in my vagrant file:
config.ssh.forward_agent = true
And the following salt state:
git+ssh://git@bitbucket.org/xxx/repo.git:
  git.latest:
    - rev: rest
    - target: /home/vagrant/src
However I get a public-key error when this salt state is executed.
The annoying thing is that if I manually run git clone git+ssh://git@bitbucket.org/xxx/repo.git from within my instance, everything works fine. Any ideas?
Is bitbucket.org in the known_hosts file?
git+ssh://git@bitbucket.org/xxx/repo.git:
  git.latest:
    - rev: rest
    - target: /home/vagrant/src
    - require:
      - ssh_known_hosts: bitbucket.org
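The require line references an ssh_known_hosts state that has to be declared somewhere as well. A minimal sketch, assuming it should manage the known_hosts of the vagrant user (Salt's ssh_known_hosts.present also accepts fingerprint/key arguments if you want to pin Bitbucket's published host key):

```yaml
bitbucket.org:
  ssh_known_hosts:
    - present
    - user: vagrant
```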
I had a similar requirement with Capistrano. I used SSH agent forwarding to check out a repo from GitHub onto the remote server. I had to add the host to the ~/.ssh/config file on my machine as below.
vim ~/.ssh/config
Content
Host <some host or IP>
  ForwardAgent yes
I used * as the host so that it works with any server.
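As a sketch, the wildcard variant of that entry looks like this (note that enabling agent forwarding for every host forwards your agent to any server you connect to, so scope it down if you can):

```text
Host *
  ForwardAgent yes
```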
I'm using Ubuntu Linux
I have created an inventory file and I have put my own system IP address there.
I have written a playbook to install the nginx package.
I'm getting the following error:
"unreachable": true, "msg": "Failed to connect to the host via ssh: connect to host myip: Connection refused"
How can I solve this?
You could use the hosts keyword with the value localhost:
- name: Install nginx package
  hosts: localhost
  tasks:
    - name: Install nginx package
      apt:
        name: nginx
        state: latest
Putting your host IP directly in your inventory treats your local machine like any other remote target. Although this can work, Ansible will use the ssh connection plugin by default to reach that IP. If an SSH server is not installed/configured/running on your host it will fail (as you have experienced), as it will if you did not configure the needed credentials (SSH keys, etc.).
You don't need to (and in most common situations you don't want to) declare localhost in your inventory to use it as it is implicit by default. The implicit localhost uses the local connection plugin which does not need ssh at all and will use the same user to run the tasks as the one running the playbook.
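If you do want to keep your machine's IP in the inventory (for instance to mirror a multi-host layout), you can force the local connection plugin for that entry instead. A sketch, assuming an INI-style inventory; the group name is arbitrary:

```ini
[local]
127.0.0.1 ansible_connection=local
```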
For more information on connection plugins, see the current list in the Ansible documentation.
See @gary lopez's answer for an example playbook that uses localhost as the target.
I used the following command to try to clone a repo, but unfortunately the following error pops up and I cannot go further:
ubuntu@ip-add-rr-ee-ss:~$ git clone https://github.com/repo/file.git
Cloning into 'file'...
fatal: unable to access 'https://github.com/repo/file.git/': Could not resolve host: github.com
Could not resolve host
This is most likely a DNS issue on your EC2 instance (I can see that you're using Ubuntu here).
You can use curl to test the connection to that URL first.
Check the DNS configuration: cat /etc/resolv.conf
If possible, you should replace your current DNS setting with other DNS servers, like Google's (8.8.8.8 & 8.8.4.4).
Try to edit that file: vi /etc/resolv.conf
You should insert/edit the following:
nameserver 8.8.8.8
nameserver 8.8.4.4
Save the file by pressing [Esc] and typing :wq
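One caveat: on recent Ubuntu releases /etc/resolv.conf is generated by systemd-resolved, so direct edits may be overwritten. If that applies to your AMI, setting the servers in resolved.conf is more durable; a sketch:

```ini
# /etc/systemd/resolved.conf
[Resolve]
DNS=8.8.8.8 8.8.4.4
```

followed by sudo systemctl restart systemd-resolved.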
I fixed the problem once I added the following outbound rule to my security group settings:
Type: All traffic
Protocol: All
ip: 0.0.0.0/0
This fixed my sudo yum install issues too.
I'm using a CloudFormation template in YAML with embedded cloud-init UserData to set the hostname, install packages, and so on. I've found that once I include the write_files directive it breaks the default SSH key on the EC2 instance, i.e. it seems to interfere with whatever process AWS uses to manage authorized_keys: in the EC2 logs I can see fingerprints of random keys being generated, not the expected keypair.
#cloud-config
hostname: ${InstanceHostname}
fqdn: ${InstanceHostname}.${PublicDomainName}
manage_etc_hosts: true
package_update: true
package_upgrade: true
packages:
  - build-essential
  - git
write_files:
  - path: /home/ubuntu/.ssh/config
    permissions: '0600'
    owner: "ubuntu:ubuntu"
    content: |
      Host github.com
        IdentityFile ~/.ssh/git
      Host *.int.${PublicDomainName}
        IdentityFile ~/.ssh/default
        User ubuntu
power_state:
  timeout: 120
  message: Rebooting to ensure hostname has stuck correctly
  mode: reboot
Removing the write_files block works fine; leave it in and I cannot SSH to the host due to an SSH key mismatch.
So is it due to writing a file to ~/.ssh? Maybe ~/.ssh/authorized_keys gets deleted, or the permissions on the directory are changed?
Appending to ~/.ssh/authorized_keys with runcmd works fine, but I'd like to use the proper write_files method for larger files.
For AWS EC2 Linux instances, the SSH public key from your keypair is stored in ~/.ssh/authorized_keys. If you overwrite it with something else, make sure that you understand the implications.
The correct procedure is to append public keys from keypairs to authorized_keys AND set the correct file permissions.
If you are setting up several keypairs in authorized_keys, that is fine too. Make sure that you format the file correctly with public keys and set the file permissions correctly.
The file permissions should be 644 so that the SSH server can read them.
Another possible issue is that when you change authorized_keys you may also need to restart the SSH server, but I see that you are rebooting the server, which removes that problem.
Ubuntu example:
sudo service ssh restart
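A likely explanation for the original symptom is ordering: cloud-init's write_files module runs before users are created, so writing into /home/ubuntu can leave the directory (and its .ssh) owned by root before the keypair is installed into authorized_keys. If your image ships cloud-init 21.3 or newer, write_files supports a defer flag that postpones the write until after user setup; a sketch of the relevant fragment:

```yaml
write_files:
  - path: /home/ubuntu/.ssh/config
    permissions: '0600'
    owner: "ubuntu:ubuntu"
    defer: true  # cloud-init 21.3+: write after users and their keys are set up
    content: |
      Host github.com
        IdentityFile ~/.ssh/git
```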
I have two Gitlab repositories, owned by two different users. Let's call them
personal: git@gitlab.com:user1/personal.git
work: git@gitlab.com:user2/work.git
I have two RSA keys: with key1 I can connect to the personal repo; with key2, to the work repo.
I have to push/pull from different workstations (macOS / Windows); on every machine, I can connect to the first or the second repo with no problems.
From a single Windows 10 workstation I need to connect to both repositories, and here the troubles start.
On that workstation, I use Git Bash.
According to different guides (e.g. https://coderwall.com/p/7smjkq/multiple-ssh-keys-for-different-accounts-on-github-or-gitlab), I've setup a ~/.ssh/config file:
Host gitlab-personal
  HostName gitlab.com
  User git
  LogLevel DEBUG3
  PreferredAuthentications publickey
  IdentityFile ~/.ssh/personal
  #IdentitiesOnly yes

Host gitlab-work
  HostName gitlab.com
  User git
  LogLevel DEBUG3
  PreferredAuthentications publickey
  IdentityFile ~/.ssh/work
  #IdentitiesOnly yes
If I run
ssh -Tv git@gitlab-personal
I get:
...
Welcome to Gitlab, User1!
...
And if I run
ssh -Tv git@gitlab-work
I get:
...
Welcome to Gitlab, User2!
...
This makes me think that the RSA keys are set up correctly (even in PuTTY / Pageant).
But if I try to clone the repositories, for example
git clone git@gitlab-personal:user1/personal.git
or
git clone git@gitlab-work:user2/work.git
In both cases I get this response:
Unable to open connection:
Host does not exist
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
What am I doing wrong?
Thanks to anyone who can help me.
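One hedged observation on the error text itself: "Unable to open connection: Host does not exist" is PuTTY/plink wording rather than OpenSSH's, and plink does not read ~/.ssh/config, so the gitlab-personal/gitlab-work aliases would be invisible to it. A sketch for checking and forcing OpenSSH (GIT_SSH_COMMAND needs Git 2.3+):

```shell
echo "GIT_SSH is: ${GIT_SSH:-unset}"   # a plink.exe path here would explain the aliases failing
export GIT_SSH_COMMAND=ssh             # tell git to invoke OpenSSH, which honors ~/.ssh/config
```

git config --global core.sshCommand ssh (Git 2.10+) makes the same choice persistent.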
I installed Git for Windows, created my SSH key, and uploaded the public key to my server.
I have this working on my Mac, and am now trying to get it working on my Windows machine.
I did:
chmod 700 ~/.ssh/
chmod 600 ~/.ssh/*
Here is an image of me running ssh -v gitserveralias.
I have a config file that has the gitserveralias and port etc.
I tried clearing out the known hosts file also.
My config looks like:
Host serveralias
  User xxx
  Hostname 123.234.452.232
  Port 22222
  IdentityFile ~/.ssh/id_rsa
  TCPKeepAlive true
  IdentitiesOnly yes
  PreferredAuthentications publickey
Again I have my setup working fine on my Mac.
Two things to check:
Do you have "PubkeyAuthentication yes" in sshd_config on your server? Try setting it.
Is there an offending key in .ssh/known_hosts? Try removing this file.
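Rather than deleting known_hosts wholesale, you can remove just the stale entry with ssh-keygen -R. The demo below works on a throwaway file so it is safe to run anywhere; for the real case point -f at ~/.ssh/known_hosts (or omit -f entirely):

```shell
ssh-keygen -t ed25519 -N '' -f demo_key -q                # throwaway key just for the demo
printf 'gitserveralias %s\n' "$(cat demo_key.pub)" > known_hosts_demo
ssh-keygen -R gitserveralias -f known_hosts_demo          # strips the matching line (backup kept in .old)
```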