Starting bootcamp - Mac Terminal and GitHub SSH fingerprint don't match - macOS

I have been following the steps of the course's pre-work, including:
checking for, generating, copying/pasting, and
saving the SSH keys to GitHub.
But when I am instructed to check that the fingerprints match using "ssh -T git@github.com", the prints don't match.
I've even started over from the beginning, but they still don't match.
Thought I'd reach out here before using my one tutoring session.
Hopefully the screenshot showing what I see helps (link).
EDIT: I understand there's some stuff in there that shouldn't be; I was just trying things to get different results. I would just like to know where I went wrong and how to avoid it.

What you see when you ssh is the remote site's SSH host key fingerprint, not your registered SSH key fingerprint.
You see (or should see, if you are contacting the correct github.com) the fingerprints exposed by api.github.com/meta, as explained here.
Using jq, you can add them to your ~/.ssh/known_hosts with:
curl --silent https://api.github.com/meta \
| jq --raw-output '"github.com "+.ssh_keys[]' >> ~/.ssh/known_hosts
From there, you can test your connection with ssh -Tv git@github.com, and check that you see a welcome message:
Hi username!
You've successfully authenticated, but GitHub does not provide shell access
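If you want to compare the fingerprints directly, one way (a sketch, assuming jq is installed as above) is to fingerprint the host keys the server actually presents and set them against the fingerprints GitHub publishes:
# Fingerprint the host keys that github.com actually presents to you
ssh-keyscan github.com 2>/dev/null | ssh-keygen -lf -
# Fingerprints GitHub publishes for its SSH host keys (ssh_key_fingerprints field)
curl --silent https://api.github.com/meta | jq '.ssh_key_fingerprints'
The two sets should match; if they don't, you are not talking to the real github.com.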

Related

Cloning an SSH link from GitHub was denied

Currently I have a problem: when I try to clone an SSH link from GitHub, it tells me permission denied (publickey).
I know that, earlier, I accidentally answered "No" to the question asking whether I was sure I wanted to continue connecting.
How could I unblock it? I mean, change the answer to "Yes", so that it works again.
The "yes/no" question the first SSH connection is normally the one to add the remote host fingerprint to your ~/.ssh/known_hosts.
You can restore it with:
ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts
(note the >>, not >, to *append* to your ~/.ssh/known_hosts file)
Compare the result of that command with the official SSH host keys from GitHub, to make sure you are talking to the "right" github.com.
A safer alternative to add those keys, using jq:
curl --silent https://api.github.com/meta \
| jq --raw-output '"github.com "+.ssh_keys[]' >> ~/.ssh/known_hosts
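Either way, you can quickly confirm that an entry for github.com is now present in your known_hosts (the command prints the matching line(s), or nothing if there is no entry):
ssh-keygen -F github.com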
After that, if you still get a permission denied error, make sure you have added your public key to your GitHub SSH settings.
Test your connection with ssh -Tv git@github.com: you should see a welcome message:
Hi username! You've successfully authenticated, but GitHub does not provide shell access.

How to repair SSH key pairs after recreating global SSH keys with Ansible

In a nutshell: after deleting and then recreating the global SSH keys on a managed host as part of an Ansible play, the SSH trust between the controller and the host breaks. I would like to know a better method to "fix" this issue and regain the original SSH key trust using Ansible itself. Unfortunately, this will require some explanation.
As a starting point: right now, Ansible is not set up when a new image is deployed. To remedy that, I have created a bash script, using expect, which neatly does two things on that new managed host:
Creates an ansible account with appropriate sudo permissions
Creates an SSH key pair shared between the controller and the managed host.
That's it, and that's all; however, it does require manual input at this time as to the IP of the host to run against. We then have a desired state from which Ansible works well via SSH. However, at 328 lines of code to check and carry out this procedure, it seems cumbersome; more on this later.
The issue starts because the host/server is deployed from an image, so the global keys need to be recreated on each one so that they do not all have the same set. The fix for this part of the issue is two simple steps (sketched as commands just after the list):
Find and delete all ssh_host_* files in the directory /etc/ssh/
Run the command: /usr/bin/ssh-keygen -A to generate new global ssh keys.
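On the managed host, those two steps come down to (assuming the standard OpenSSH layout under /etc/ssh):
# Remove the host keys baked into the image...
sudo rm -f /etc/ssh/ssh_host_*
# ...and generate a fresh set of global host keys
sudo /usr/bin/ssh-keygen -A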
We now have a problem, though: once the current SSH connection to the managed host is broken, we can no longer connect to our managed hosts, because the known_hosts file on the controller now holds keys that don't match. If you do nothing else, you get a prompt again to verify the remote key, as it has "changed", and you can't continue until you do (stopping all playbooks from functioning). Or, if you try to clear the IP out of the known_hosts file on the controller and put it back in, you get the lovely message below:
"changed": false,
"msg": "Failed to connect to the host via ssh: ###########################################################\r\n# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! ***SNIP*** You can use following command to remove the offending key:\r\nssh-keygen -R 10.200.5.4 -f /home/ansible/.ssh/known_hosts\r\nECDSA host key for 10.200.5.4 has changed and you have requested strict checking.\r\nHost key verification failed.",
"unreachable": true
So now I have an issue, and there must be a few commands I can use with ssh-keygen and/or ssh-keyscan to fix this mess cleanly. However, for the life of me, I can't figure it out. My only recourse now is to re-run the bash script that initially sets this all up and replace everything SSH-key-wise on the controller/host. This seems like overkill; I can't possibly believe that is necessary.
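For reference, the manual clean-up the warning itself hints at would look something like this on the controller (a sketch reusing the IP and known_hosts path from the error above):
# Drop the stale entry for the rebuilt host from the controller's known_hosts...
ssh-keygen -R 10.200.5.4 -f /home/ansible/.ssh/known_hosts
# ...then learn the freshly generated host key
ssh-keyscan -t ecdsa 10.200.5.4 >> /home/ansible/.ssh/known_hosts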
My only hope now is that someone else has an idea how to solve this cleanly and permanently without manual intervention. Otherwise, the only thing I can do is set the ansible_ssh_common_args: "-o StrictHostKeyChecking=no" fact and run the commands my script does, but in playbook form. I can't believe there aren't any modules which can accomplish this. I tried the known_hosts module, but either I don't know how to use it properly, or it doesn't have this functionality. (It also has the annoying property of changing my known_hosts file to root ownership, which I must then change back.)
If anyone can help that would be fantastic! Thanks in advance!
The below is not strictly needed, as it's extra text clogging up the works, but it does illustrate how the bash script fixes this issue and may give some insight into a better solution:
In short, it generates an SSH public and private key, attaches the hostname to them, creates an ssh config identity file using a heredoc, puts them in the proper spots, and then copies the public key over to the managed host in question.
The code snippets are below to show how this is accomplished. This is not the entire script, just the relevant parts:
#HOMEDIR is /home/ansible
#THISHOST is the IP of the managed host in question. Yes, we ONLY use IPs; there is no DNS.
cd "$HOMEDIR"
rm -f $HOMEDIR/.ssh/id_rsa
ssh-keygen -t rsa -f "$HOMEDIR"/.ssh/id_rsa -q -P ""
sudo mkdir -p "$HOMEDIR"/.ssh/rsa_inventory && sudo chown ansible:users "$HOMEDIR"/.ssh/rsa_inventory
cp -p "$HOMEDIR"/.ssh/id_rsa "$HOMEDIR"/.ssh/rsa_inventory/$THISHOST-id_rsa
cp -p "$HOMEDIR"/.ssh/id_rsa.pub "$HOMEDIR"/.ssh/rsa_inventory/$THISHOST-id_rsa.pub
#Heredocs implementation of the ssh config identity file:
cat <<EOT >> /home/ansible/.ssh/config
Host $THISHOST $THISHOST
HostName $THISHOST
IdentityFile ~/.ssh/rsa_inventory/${THISHOST}-id_rsa
User ansible
EOT
#Define the variable before the expect script is run, so the next snippet makes sense:
ssh_key=$( cat "$HOMEDIR"/.ssh/id_rsa.pub )
#Snippet in the expect script where it echoes the public ssh key over to the managed host from the controller.
send "sudo echo '"$ssh_key"' >> /home/ansible/.ssh/authorized_keys\n"
expect -re {:~> *$}
send "sudo chmod 644 /home/ansible/.ssh/authorized_keys\n"
expect -re {:~> *$}
#etc etc, so on and so forth, properly setting attributes on this file.
Now things work with passwordless ssh as they should. Until they are re-ruined by the global ssh key replacement.

Connect to GitLab project using SSH key

I added an SSH key on GitLab (public_rsa).
My problem is that I am still asked for my GitLab password and passphrase when I try to push a branch to the repository. My understanding was that after I set up this SSH key, I would no longer have to do that.
ssh-keygen -t rsa -b 4096 -C "gregory@gmail.com" -f $HOME/.ssh/id_rsa_specific
If someone can give an explanation, I would appreciate it.
Tell me if I'm not clear.
Thank you.
I see you listed the command that generates the RSA key, but you didn't mention whether you placed that key in GitLab, or where.
I would first double-check that you have copied and pasted the contents of $HOME/.ssh/id_rsa_specific.pub (the public key) into your GitLab account's Settings >> SSH Keys.
https://docs.gitlab.com/ee/ssh/#adding-an-ssh-key-to-your-gitlab-account
Then I would try checking the ssh key by running the following command in a terminal:
ssh -T git@gitlab.com
https://docs.gitlab.com/ee/ssh/#testing-that-everything-is-set-up-correctly
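Also, since the key was generated with a non-default file name (id_rsa_specific), ssh will not try it automatically. A minimal sketch of an ~/.ssh/config entry pointing GitLab at that key (the path is the one from your ssh-keygen command) would be:
cat <<EOT >> ~/.ssh/config
Host gitlab.com
  HostName gitlab.com
  User git
  IdentityFile ~/.ssh/id_rsa_specific
EOT
If the key itself has a passphrase, loading it once into an agent with ssh-add ~/.ssh/id_rsa_specific avoids retyping that passphrase on every push.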

Using cURL to send JSON within a BASH script

Alright, here's what I'm trying to do. I'm attempting to write a quick build script in bash that will check out a private repository from GitHub on a remote server. To do this as "hands off" as possible, I want to generate a local RSA key pair on the remote server and add the public key as a Deploy Key for that particular repository. I know how to do this using GitHub's API, but I'm having trouble building the JSON payload using Bash.
So far, I have this particular process included below:
#!/bin/bash
ssh-keygen -t rsa -N '' -f ~/.ssh/keyname -q
public_key=`cat ~/.ssh/keyname.pub`
curl -u 'username:password' -d '{"title":"Test Deploy Key", "key":"'$public_key'"}' -i https://api.github.com/repos/username/repository/keys
It's just not properly building the payload. I'm not an expert when it comes to string manipulation in Bash, so I could seriously use some assistance. Thanks!
It's not certain, but it may help to quote where you use public_key, i.e.
curl -u 'username:password' \
-d '{"title":"Test Deploy Key", "key":"'"$public_key"'"}' \
-i https://api.github.com/repos/username/repository/keys
Otherwise, it will be much easier to debug if you use the shell's debugging options (set -vx) near the top of your bash script.
You'll see each line of code (or block: for, while, etc.) as it is in your file, and then each line again with the variables expanded to their values.
If you're still stuck, edit your post to show the expanded values of variables for the problem line in your script. What you have looks reasonable at first glance.
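If the hand-rolled quoting keeps misbehaving, another option (a sketch, assuming jq is available) is to let jq build and escape the JSON payload for you:
# jq handles the quoting/escaping of the key contents
payload=$(jq --null-input --arg title "Test Deploy Key" --arg key "$public_key" \
  '{title: $title, key: $key}')
curl -u 'username:password' -d "$payload" \
  -i https://api.github.com/repos/username/repository/keys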

Automatically accept RSA fingerprint using pscp

When you're using pscp to send files to a single machine, it's not a big deal, because you will get the RSA fingerprint prompt once and never again after that. But if you want to connect to 200 machines, you definitely don't want to type "yes" 200 times...
I'm using pscp on a Windows machine, and I really don't care about the fingerprint; I only want to accept it. I'm using Amazon EC2, and the fingerprint changes every time I restart the machines...
If there is a way to avoid it using pscp or a different tool, please let me know!
Thanks!
See Putty won't cache the keys to access a server when run script in hudson
On Windows, you can prefix your command with echo y |, which will blindly accept any host key every time. However, a more secure solution is to run the command interactively the first time, or to generate a .reg file that can be run on any client machine.
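For example (the key, file, and host names here are placeholders):
echo y | pscp -i mykey.ppk somefile.txt user@host:/remote/path/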
I do not completely agree with the last answer. The first time you accept an SSH key, you know nothing about the remote host, so automatically accepting it makes no difference.
What I would do is auto-accept the key the first time you connect to a host. I've read that doing something like yes yes | ssh user@host works, but it doesn't, because SSH does not read from stdin, but from a terminal.
What does work is to pass, that first time you connect, the following ssh option (it works for both scp and ssh):
scp -oStrictHostKeyChecking=no user@host1:file1 user@host2:file2
This command will add the key the first time you run it, but, as Eric says, doing this once you have already accepted the key is dangerous (man-in-the-middle is uncool). If I were you, I'd add it to a script that checks ~/.ssh/known_hosts for an existing line for that host, in which case I wouldn't add that option; if there were no line, I would. ;)
If you are dealing with a hashed version of known_hosts, try
ssh-keygen -F hostname
Here's something I'm actually using (a function receiving the following arguments: user, host, source_file):
deployToServer() {
  echo "Deploying to $1@$2 from $3"
  if [ -z "`cat ~/.ssh/known_hosts | grep $2`" ] && [ -z "`ssh-keygen -F $2`" ]
  then
    echo 'Auto accepting SSH key'
    scp -oStrictHostKeyChecking=no $3* $1@$2:.
  else
    scp $3* $1@$2:.
  fi
}
Hope this helped ;)
The host ssh key fingerprint should not change if you simply reboot or stop/start an instance. If it does, then the instance/AMI is not configured correctly or something else (malicious?) is going on.
Good EC2 AMIs are set up to create a random host ssh key on first boot. Most popular AMIs will output the fingerprint to the console output. For security, you should be requesting the instance console output through the EC2 API (command line tool or console) and comparing that to the fingerprint in the ssh prompt.
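With the current AWS CLI, that check could look something like this (a sketch: the instance ID is a placeholder, and the banner text assumes cloud-init printed the fingerprints to the console):
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text \
  | grep -A 5 'SSH HOST KEY FINGERPRINTS'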
By saying you "don't care about the fingerprint" you are saying that you don't care about encrypting the traffic between yourself and the instance and it's ok for anybody in between you and the instance to see that communication. It may even be possible for a man-in-the-middle to take over the ssh session and gain access to control your instance.
With ssh on Linux you can turn off the ssh fingerprint check with a command line or config file option. I hesitate to publish how to do this as it is not recommended and seriously reduces the safety of your connections.
A better option is to have your instances set up their own host ssh key to a secret value that you know. You can save the public side of the host ssh key in your known hosts file. This way your traffic is encrypted and safe, and you don't have to continually answer the prompt about the fingerprints when connecting to your own machine.
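A pre-seeded entry is just one line in known_hosts, for example (a sketch; the host name and key file are placeholders):
# Append "hostname keytype base64-key" for the host key you installed on the instance
echo "ec2-203-0-113-10.compute-1.amazonaws.com $(cat my_instance_host_key.pub)" >> ~/.ssh/known_hosts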
I created an expect file with the following commands in it:
spawn ssh -i ec2Key.pem ubuntu@ec2IpAddress
expect "Are you sure you want to continue connecting (yes/no)?" { send "yes\n" }
interact
I was able to ssh into the EC2 instance without disabling the RSA fingerprint check, and the EC2 host was added to my known hosts.
I hope it helps.
