bash: execute specific command depending upon the input

I am able to make the code work for me on the command line, but what I am trying to achieve is a bit more, and I am not getting how to achieve that.
For example, I have a string MySQL Server#db1.com:1111, and the hostname contained in it is db1.com.
Instead of copying only the hostname every time, I am using JavaScript to extract the hostname, as below:
function getHostname() {
    const dbiNode = process.argv.slice(2).toString();
    const hostname = dbiNode.split(/[#:]/)[1];
    console.log(hostname);
}
getHostname();
And for now to automatically SSH I am using below command
ssh "$(node gethostname.js "MySQL Server#db1:1111")"
And it works without any issue. What I am trying to achieve here is: if I type "MySQL Server#db1:1111" in the terminal, somehow I need to make bash execute a function via .bashrc, where I can handle extracting the hostname and SSH into it.
But I am not getting how.
Any reference links/suggestions greatly appreciated.

You don't need node here; just add the following to your .bashrc:
MySQL() {
    host="${1##*#}"    # remove everything up to and including the last '#'
    host="${host%%:*}" # remove the first ':' and everything after it
    ssh "$host"
}
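The two parameter expansions can be checked on their own, without ssh. This is a minimal sketch; get_host is a hypothetical helper name (in your .bashrc the function would be named MySQL so that the label itself invokes it):

```shell
#!/bin/sh
# Extract the hostname from a "Label#host:port" string using only
# POSIX parameter expansion -- no node required.
get_host() {
    host="${1##*#}"    # strip everything up to and including the last '#'
    host="${host%%:*}" # strip the first ':' and everything after it
    printf '%s\n' "$host"
}

get_host "Server#db1.com:1111"   # prints db1.com
```

When you type MySQL Server#db1:1111 at the prompt, bash word-splits it, so the function sees Server#db1:1111 as $1, which reduces to db1.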

How to repair sshkey pairs after recreating global ssh keys with Ansible

In a nutshell, after deleting then recreating new global ssh keys on a managed host as part of an ansible play, the shared ssh keys between the controller and the host break. I would like to know a superior method to "fix" this issue and regain the original ssh key trust using ansible itself. Unfortunately this will require some explanation.
Basically as a start, right now, I don't have ansible set up when a new image is deployed. To remedy that, I have created a bash script, utilizing expect which nicely and neatly does 2 things on that new managed host:
Creates an ansible account with appropriate sudo permissions
Creates an ssh key pair between the controller and the managed host.
That's it, and that's all; however, it does require manual input at this time as to the IP of the host to be run on. We now have a desired state from which ansible works well via ssh. However, at 328 lines of code, this procedure seems cumbersome to check and run; more on this later.
The issue starts, due to the fact that the host/server is deployed from an image, there is a need to recreate the global keys on each so that they do not have the same set. The fix for this part of that issue is a simple 2 steps:
Find and delete all ssh_host_* files in the directory /etc/ssh/
Run the command: /usr/bin/ssh-keygen -A to generate new global ssh keys.
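These two steps can be wrapped in a small function. This is only a sketch; the prefix argument (which maps to ssh-keygen -A -f) exists so the logic can be exercised against a scratch directory instead of the live /etc/ssh:

```shell
#!/bin/sh
# Sketch: wipe and regenerate a host's global ssh keys.
# $1: directory holding the host keys (normally /etc/ssh)
# $2: optional prefix passed to 'ssh-keygen -A -f', under which
#     ssh-keygen expects an etc/ssh subdirectory; empty means the
#     real system location. Run this only on the freshly imaged host.
regen_host_keys() {
    rm -f "$1"/ssh_host_*                  # step 1: delete old key files
    if [ -n "${2:-}" ]; then
        /usr/bin/ssh-keygen -A -f "$2"     # step 2: regenerate under prefix
    else
        /usr/bin/ssh-keygen -A             # step 2: regenerate system keys
    fi
}
```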
We now have a problem, however: once the current ssh connection to the managed host is broken, we can no longer connect to our managed hosts, as our known_hosts file on the controller now has keys that don't match. If you do nothing else, you get a prompt again to verify the remote key, as it has "changed", and you can't continue until you do (stopping all playbooks from functioning). OR, if you try to clear the IP out of the known_hosts file on the controller and put it back in, you get the lovely message below:
"changed": false,
"msg": "Failed to connect to the host via ssh: ###########################################################\r\n# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! ***SNIP*** You can use following command to remove the offending key:\r\nssh-keygen -R 10.200.5.4 -f /home/ansible/.ssh/known_hosts\r\nECDSA host key for 10.200.5.4 has changed and you have requested strict checking.\r\nHost key verification failed.",
"unreachable": true
So now I have an issue, and there must be a few commands which I can utilize with ssh-keygen, and/or ssh-keyscan to fix this mess cleanly. However for the life of me I can't figure it out. My only recourse now is to re-run the bash script which initially sets this all up, and replace everything on the controller/host sshkey wise. This seems like overkill, I can't possibly believe that is necessary.
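For the known_hosts half on the controller, those two tools do combine into a clean fix. A hedged sketch, with the IP and path taken from the error message above:

```shell
#!/bin/sh
# Sketch: replace a stale known_hosts entry after a managed host's
# global keys were regenerated. Wrapped in a function so it can be
# pointed at any known_hosts file.
refresh_host_key() {
    ip="$1"; kh="$2"
    # Remove the offending key, exactly as the warning suggests.
    ssh-keygen -R "$ip" -f "$kh"
    # Re-learn the host's current public keys (needs the host to be
    # reachable; -H hashes the entry like ssh itself would):
    ssh-keyscan -H "$ip" >> "$kh" 2>/dev/null
}

# e.g. refresh_host_key 10.200.5.4 /home/ansible/.ssh/known_hosts
```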
My only hope now is that someone else has an idea how to solve this cleanly and permanently without manual intervention. Otherwise, the only thing I can do is set the ansible_ssh_common_args: "-o StrictHostKeyChecking=no" fact and run the commands my script does, but in playbook form. I can't believe there aren't any modules which can accomplish this. I tried the known_hosts module, but either I don't know how to use it properly, or it doesn't have this functionality. (It also has the annoying property of changing my known_hosts file to root ownership, which I must then change back.)
If anyone can help that would be fantastic! Thanks in advance!
The below is not strictly needed, as it's extra text clogging up the works, but it does illustrate how the bash script fixes this issue and may give some insight into a better solution:
In short, it generates an ssh public and private key, attaches the hostname to them, creates an ssh config identity file using a heredoc, puts them in the proper spots, and then copies the public key over to the managed host in question.
The code snippets below show how this is accomplished. This is not the entire script, just the relevant parts:
#HOMEDIR is /home/ansible
#THISHOST is the IP of the managed host in question. Yes, we ONLY use IPs; there is no DNS.
cd "$HOMEDIR"
rm -f "$HOMEDIR"/.ssh/id_rsa
ssh-keygen -t rsa -f "$HOMEDIR"/.ssh/id_rsa -q -P ""
sudo mkdir -p "$HOMEDIR"/.ssh/rsa_inventory && sudo chown ansible:users "$HOMEDIR"/.ssh/rsa_inventory
cp -p "$HOMEDIR"/.ssh/id_rsa "$HOMEDIR"/.ssh/rsa_inventory/"$THISHOST"-id_rsa
cp -p "$HOMEDIR"/.ssh/id_rsa.pub "$HOMEDIR"/.ssh/rsa_inventory/"$THISHOST"-id_rsa.pub
#Heredocs implementation of the ssh config identity file:
cat <<EOT >> /home/ansible/.ssh/config
Host $THISHOST
HostName $THISHOST
IdentityFile ~/.ssh/rsa_inventory/${THISHOST}-id_rsa
User ansible
EOT
#Define the variable earlier, before the expect script is run, so it makes sense in the next snippet:
ssh_key=$( cat "$HOMEDIR"/.ssh/id_rsa.pub )
#Snippet in expect script where it echoes the public ssh key over to the managed host from the controller.
send "sudo echo '"$ssh_key"' >> /home/ansible/.ssh/authorized_keys\n"
expect -re {:~> *$}
send "sudo chmod 644 /home/ansible/.ssh/authorized_keys\n"
expect -re {:~> *$}
#etc etc, so on and so forth, properly setting attributes on this file.
Now things work with passwordless ssh as they should. Until they are re-ruined by the global ssh key replacement.

BASH alias running a script on another server

This is either rather simple or impossible, however I can't seem to get a go with it. I'm trying to run a script located on a remote server and I have the following alias in my .bashrc:
alias fin='sh username@host.co.uk:~/scripts/finder.sh'
I have set up SSH key authentication to that host, however I am getting the following error:
sh: 0: Can't open username@host.co.uk:~/scripts/finder.sh
Can someone please help, thanks :)
You cannot refer to a remote script as if it were a file name. You can use ssh but the syntax is slightly different.
ssh username@host.co.uk scripts/finder.sh
As an aside, functions are often better than aliases.
fin () {
    ssh username@host.co.uk scripts/finder.sh "$@"
}
The "$@" is for passing arguments. If the script takes no parameters, it can be omitted.
alias fin='ssh username@host.co.uk /home/username/scripts/finder.sh'
You need to make sure finder.sh has execute permissions and runs locally on host.co.uk as user username.
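A related idiom, in case the script ever lives on the local machine rather than the remote one: stream it to a shell started over the same ssh connection. This is a sketch using the example's illustrative host name; the bash -s mechanism is standard:

```shell
#!/bin/sh
# Run a *local* script on the remote host by feeding it to a remote
# shell over ssh. Arguments after '--' become the script's $1, $2, ...
run_remote() {
    script="$1"; shift
    ssh username@host.co.uk 'bash -s' -- "$@" < "$script"
}

# The mechanism itself, minus ssh, is just:
#   bash -s -- args < script.sh
```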

bash variable expansion for scp tab completion

I use ssh at work and recently switched to Linux on my local work machine. As a result, I now have to fully-qualify the hostname (i.e. instead of dev1 I need to use dev1.megacorpnetworkdomain.com). This gets rather annoying for scp. I already have auto-completion set up for remote paths. What I tried to do is create a variable alias for the path:
WORK='user#dev1.megacorpnetworkdomain.com:/home/user'
The problem is now scp fails to do remote completion when I do the following:
scp $WORK/<tab>
Whereas the following performs remote completion correctly:
scp user#dev1.megacorpnetworkdomain.com:/home/user/<tab>
I tried to naively work around this by using complete:
_scp_expand_vars () {
    COMPREPLY=()
    local cur=${COMP_WORDS[COMP_CWORD]}
    eval cur=$cur
    COMPREPLY=( $cur )
    return 0
}
complete -F _scp_expand_vars scp
This now correctly expands variables for scp, but breaks the original functionality of remote directory expansion. How could I achieve both? I'm assuming the original functionality is handled by _scp_remote_files function based on a bit of online research?
For one, if you regularly access hosts under megacorpnetworkdomain.com, you should put it in /etc/resolv.conf; there should be a line:
search megacorpnetworkdomain.com
Then you can use all hosts by just referencing the short hostname, i.e. dev1 in any program.
For ssh you can follow the advice of Etan Reiser, an example looks like this:
Host dev1
    HostName dev1.megacorpnetworkdomain.com
    User user
When you use ssh or scp they will offer you the choices defined there with tab completion as usual.
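If you want to script the config change itself, the same heredoc technique from the Ansible question above works. add_ssh_alias is a hypothetical helper, and the names are the example's:

```shell
#!/bin/sh
# Append a Host alias stanza to an ssh client config file so the short
# name resolves to the fully qualified host for ssh/scp completion.
# $1: alias, $2: fully qualified hostname, $3: user, $4: config file
add_ssh_alias() {
    cat <<EOT >> "$4"
Host $1
    HostName $2
    User $3
EOT
}

# e.g. add_ssh_alias dev1 dev1.megacorpnetworkdomain.com user ~/.ssh/config
```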

SSH connection with Ruby without username using `authorized_keys`

I have authenticated a server using an authorized_keys push, so I can run the command ssh 192.168.1.101 from my system and connect to the server.
Now I tried with the Net::SSH library, but it didn't work for me:
Net::SSH.start("192.168.1.209", username) do |ssh|
  @output = ssh.exec!("ls -l")
end
as this requires the username field; I want to connect without a username.
So , I tried this
system('ssh 192.168.1.209 "ls -l"')
It runs the command for me, but I want the output in a variable like @output in the first example. Is there a command, a gem, or any other way to get this?
Any ssh connection requires a username. The default is either your system account name or whatever's specified in .ssh/config for that host you're connecting to.
Your current username should be set as ENV['USER'] if you need to access that.
If you're curious what username is being used for that connection, try finding out with ssh -v which is the verbose mode that explains what's going on.
You can pass parameters into %x[] as follows:
dom = 'www.ruby-rails.in'
@whois = %x[whois #{dom}]
Backquotes work very similarly to the system function, but with an important difference: the shell command enclosed between the backquotes is executed, with its standard output as the result.
So, the following statement should execute ssh 192.168.1.209 "ls -l" and put the directory file listing into the @output variable:
@output = `ssh 192.168.1.209 "ls -l"`

Bash eval inside quotes

I have a bash script which takes one parameter and does something like this:
ssh -t someserver "setenv DISPLAY $1; /usr/bin/someprogram"
How can I force bash to substitute in the $1 instead of passing the literal characters "$1" as the display variable?
Based on your comment on sehe's answer, it sounds like you just want the remote command to use the local X display — so that the program is running on your remote server (someserver) but being displayed on the machine you ran the ssh command on.
This can be done by just passing -X, e.g.
ssh -X someserver /usr/bin/someprogram
For some reason, this doesn't work with a few programs, for example evince. I'm not really sure why. I'm pretty sure that evince is the only app I've had trouble forwarding back over an SSH connection.
If this isn't what you're aiming to do, please explain.
Edit: Are you aware of
ssh -X ...
ssh -Y ...
which already support X forwarding out of the box? Also look at
xhost +
in case you need to increase permissions to 'guests'.
If you want to forward non-standard X display address, you could always use
DISPLAY=localhost:3 ssh -XCt user@remote xterm
Bonus: to make ssh go to the background after authentication, add -f.
Locally? That should already work as shown. Remotely? Escape the $: \$
However, I'm not sure where the command would be taking its arguments ($1) from.
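The difference between the two cases can be seen without a real remote host; here an inner sh -c stands in for the remote shell (a sketch, not the actual ssh call):

```shell
#!/bin/sh
# When is $1 expanded? Inside double quotes the local shell substitutes
# it before the command string is handed over (the behaviour the
# question wants); an escaped \$ survives literally and is expanded by
# the receiving shell instead.
demo() {
    local_cmd="echo DISPLAY=$1"   # $1 expanded here, locally
    remote_cmd="echo \$1"         # \$1 passed through literally
    sh -c "$local_cmd"                 # inner shell just echoes the result
    sh -c "$remote_cmd" sh remote-val  # inner shell expands $1 itself
}

demo "somehost:0"
```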
