pdsh not working with ips in the file - bash

I have a text file, like this:
cat hed.txt
10.21.23.12
10.23.12.12
I can ssh to each IP without being prompted for key verification.
I want to run a command on each of these IPs, so I was using pdsh. I tried multiple options, but I am getting the following error:
pdsh -w ^hed uptime
00f12e86-cfcc-4239-9dfc-006b65a319c3: ssh: Could not resolve hostname 00f12e86-cfcc-4239-9dfc-006b65a319c3: nodename nor servname provided, or not known
pdsh@saurabh: 00f12e86-cfcc-4239-9dfc-006b65a319c3: ssh exited with exit code 255
As mentioned here, I tried the following as well, but this also gave the same error.
PDSH_SSH_ARGS_APPEND="-o StrictHostKeyChecking=no" pdsh -R ssh -w ^hed uptime
I also tried the comment from here, but it did not help.
PDSH_SSH_ARGS_APPEND="-o StrictHostKeyChecking=no" pdsh -R ssh ^hed uptime
pdsh@saurabh: no remote hosts specified
I am able to connect to these via csshX --host hed.txt, which works, but pdsh would suit my work better, and it is not working.

Ahh, this worked:
pdsh -w '^hed.txt' uptime
For my colleagues it works without the quotes as well, on the same version of pdsh, which is weird.
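For reference, a hedged pair of invocations that read the hosts from the file (quoting the caret keeps the shell from touching it; the explicit ssh module is an assumption about a typical setup):
# Read hosts from hed.txt, one IP per line; the quotes protect the ^
PDSH_RCMD_TYPE=ssh pdsh -w '^hed.txt' uptime
# Equivalent without a file: list the IPs directly
pdsh -R ssh -w 10.21.23.12,10.23.12.12 uptime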

Secure copy over two IPs on the same network to the local machine [duplicate]

I wonder if there is a way for me to scp a file from the remote2 host directly from my local machine by going through the remote1 host.
The networks only allow connections to remote2 host from remote1 host. Also, neither remote1 host nor remote2 host can scp to my local machine.
Is there something like:
scp user1@remote1:user2@remote2:file .
First window: ssh remote1, then scp remote2:file .
Second shell: scp remote1:file .
First window: rm file; logout
I could write a script to do all these steps, but if there is a direct way, I would rather use it.
Thanks.
EDIT: I am thinking of something like opening SSH tunnels, but I'm confused about what value to put where.
At the moment, to access remote1, I have the following in $HOME/.ssh/config on my local machine.
Host remote1
User user1
Hostname localhost
Port 45678
Once on remote1, to access remote2, it's the standard local DNS and port 22. What should I put on remote1 and/or change on localhost?
I don't know of any way to copy the file directly in one single command, but if you can concede to running an SSH instance in the background just to keep a port-forwarding tunnel open, then you could copy the file in one command.
Like this:
# First, open the tunnel
ssh -L 1234:remote2:22 -p 45678 user1@remote1
# Then, use the tunnel to copy the file directly from remote2
scp -P 1234 user2@localhost:file .
Note that you connect as user2@localhost in the actual scp command, because it is on port 1234 on localhost that the first ssh instance is listening to forward connections to remote2. Note also that you don't need to run the first command for every subsequent file copy; you can simply leave it running.
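If you would rather not keep an interactive session in the foreground, a hedged variant backgrounds the tunnel (standard -f and -N flags; same ports as above):
# -f: go to background after authentication; -N: no remote command, just the tunnel
ssh -f -N -L 1234:remote2:22 -p 45678 user1@remote1
scp -P 1234 user2@localhost:file .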
Double ssh
Even in your complex case, you can handle file transfer using a single command line, simply with ssh ;-)
And this is useful if remote1 cannot connect to localhost:
ssh user1@remote1 'ssh user2@remote2 "cat file"' > file
tar
But you lose file properties (ownership, permissions...).
However, tar is your friend to keep these file properties:
ssh user1@remote1 'ssh user2@remote2 "cd path2; tar c file"' | tar x
You can also compress to reduce network bandwidth:
ssh user1@remote1 'ssh user2@remote2 "cd path2; tar cj file"' | tar xj
And tar also lets you transfer a directory recursively over basic ssh:
ssh user1@remote1 'ssh user2@remote2 "cd path2; tar cj ."' | tar xj
ionice
If the file is huge and you do not want to disturb other important network applications, you may miss the network throughput limiting provided by the scp and rsync tools (e.g. scp -l 1024 user@remote:file does not use more than 1 Mbit/second).
But a workaround is to use ionice and keep a single command line:
ionice -c2 -n7 ssh u1@remote1 'ionice -c2 -n7 ssh u2@remote2 "cat file"' > file
Note: ionice may not be available on old distributions.
This will do the trick:
scp -o 'Host remote2' -o 'ProxyCommand ssh user@remote1 nc %h %p' \
user@remote2:path/to/file .
To SCP the file from the host remote2 directly, add the two options (Host and ProxyCommand) to your ~/.ssh/config file (see also this answer on superuser). Then you can run:
scp user@remote2:path/to/file .
from your local machine without having to think about remote1.
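For reference, the same two options written as a ~/.ssh/config stanza (a sketch; user and remote1 are placeholders as above):
# ~/.ssh/config on the local machine
Host remote2
ProxyCommand ssh user@remote1 nc %h %p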
With OpenSSH version 7.3 and up it is easy. Use the ProxyJump option in the config file.
# Add to ~/.ssh/config
Host bastion
Hostname bastion.client.com
User userForBastion
IdentityFile ~/.ssh/bastion.pem
Host appMachine
Hostname appMachine.internal.com
User bastion
ProxyJump bastion # ProxyJump is new in OpenSSH 7.3
IdentityFile ~/.ssh/appMachine.pem # no need to copy the pem file to the bastion host
Commands to run to log in or copy:
ssh appMachine # no need to specify any tunnel.
scp helloWorld.txt appMachine:. # copies without an intermediate copy on the jumphost/bastion host
Of course, you can specify the bastion jump host with the "-J" option to the ssh command if it is not configured in the config file.
Note: scp does not seem to support the "-J" flag as of now (I could not find it in the man pages; however, the scp above works with the config-file setting).
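For reference, hedged one-off equivalents that skip the config file (names borrowed from the config above; the target user is left to your default):
# -J is the command-line form of ProxyJump (OpenSSH 7.3+)
ssh -J userForBastion@bastion.client.com appMachine.internal.com
# scp accepts the same jump via -o even where it lacks -J
scp -o 'ProxyJump userForBastion@bastion.client.com' appMachine.internal.com:helloWorld.txt .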
There is a new option in scp, added recently, that is very convenient for exactly this job: -3.
TL;DR For the current host that has authentication already set up in ssh config files, just do:
scp -3 remote1:file remote2:file
Your scp must be a recent version.
All the other techniques mentioned require you to set up authentication from remote1 to remote2 or vice versa, which is not always a good idea.
The -3 argument means you want to move files between two remote hosts using the current host as an intermediary, and this host actually does the authentication to both remote hosts, so they don't have to have access to each other.
You just have to set up authentication in the ssh config files, which is fairly easy and well documented, and then run the command in the TL;DR.
The source for this answer is https://superuser.com/a/686527/713762
This configuration works nicely for me:
Host jump
User username
Hostname jumphost.yourorg.intranet
Host production
User username
Hostname production.yourorg.intranet
ProxyCommand ssh -q -W %h:%p jump
Then the command
scp myfile production:~
copies myfile to the production machine.
A simpler way:
scp -o 'ProxyJump your.jump.host' /local/dir/myfile.txt remote.internal.host:/remote/dir

User-level command shell change for accessing remote machine with paramiko

I use some code that connects to a remote machine using the paramiko library. The connection is established over a tunnelled ssh connection bound to one of the localhost ports. The default shell on the remote machine is tcsh, but my code requires it to run bash. I have tested sshing some simple commands, and it works fine.
$ ssh localhost -p 2222 'echo $0'
tcsh
To change the login shell, I have added the following two lines to my .tcshrc file:
setenv SHELL /bin/bash
exec /bin/bash --login
The following thing works:
$ ssh localhost -p 2222
[user#remote ~]$ echo $0
/bin/bash
But not the following:
$ ssh localhost -p 2222 'echo $0'
which gives no response. The same happens for the connections that the code I want to use establishes with paramiko.
At the moment I am limited to user-level solutions and would rather not play with the paramiko-using code itself. Is there anything else I could try here?
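One user-level tweak worth sketching (an assumption on my part, not from the original setup): guard the exec so it only fires for interactive logins, leaving non-interactive ssh commands to run under tcsh:
# ~/.tcshrc -- $?prompt is set only in interactive shells
setenv SHELL /bin/bash
if ($?prompt) then
    exec /bin/bash --login
endif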

Hadoop : start-dfs.sh Connection refused

I have a vagrant box on debian/stretch64
I am trying to install Hadoop 3 following the documentation:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.htm
When I run start-dfs.sh
I get this message:
vagrant#stretch:/opt/hadoop$ sudo sbin/start-dfs.sh
Starting namenodes on [localhost]
pdsh#stretch: localhost: connect: Connection refused
Starting datanodes
pdsh#stretch: localhost: connect: Connection refused
Starting secondary namenodes [stretch]
pdsh#stretch: stretch: connect: Connection refused
vagrant#stretch:/opt/hadoop$
Of course I tried updating my hadoop-env.sh with:
export HADOOP_SSH_OPTS="-p 22"
ssh localhost works (without a password).
I have no idea what to change to solve this problem.
There is a problem with the way pdsh works by default (see the edit below), but Hadoop can go without it. Hadoop checks whether the system has pdsh at /usr/bin/pdsh and uses it if so. An easy way to get away from using pdsh is to edit $HADOOP_HOME/libexec/hadoop-functions.sh.
Replace the line
if [[ -e '/usr/bin/pdsh' ]]; then
with
if [[ ! -e '/usr/bin/pdsh' ]]; then
Then Hadoop goes without pdsh and everything works.
EDIT:
A better solution would be to use pdsh, but with ssh instead of rsh, as explained here. So replace this line in $HADOOP_HOME/libexec/hadoop-functions.sh:
PDSH_SSH_ARGS_APPEND="${HADOOP_SSH_OPTS}" pdsh \
with
PDSH_RCMD_TYPE=ssh PDSH_SSH_ARGS_APPEND="${HADOOP_SSH_OPTS}" pdsh \
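If you prefer not to edit the file by hand, a hedged one-liner for the same change (GNU sed; assumes PDSH_SSH_ARGS_APPEND occurs only on that line, and keeps a .bak backup):
sed -i.bak 's/PDSH_SSH_ARGS_APPEND=/PDSH_RCMD_TYPE=ssh PDSH_SSH_ARGS_APPEND=/' "$HADOOP_HOME/libexec/hadoop-functions.sh"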
Note: only doing export PDSH_RCMD_TYPE=ssh, as I mention in the comment, doesn't work. I don't know why... (One possible explanation, my assumption: the question runs the scripts via sudo, which resets most exported environment variables.)
I've also opened an issue and submitted a patch for this problem: HADOOP-15219
I fixed this problem for Hadoop 3.1.0 by adding
export PDSH_RCMD_TYPE=ssh
to my .bashrc as well as $HADOOP_HOME/etc/hadoop/hadoop-env.sh (the export matters; without it, child processes never see the variable).
Check whether your /etc/hosts file contains mappings for the hostname stretch and for localhost.
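A sketch of what those mappings typically look like on a Debian box (the exact addresses are assumptions):
127.0.0.1 localhost
127.0.1.1 stretch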
Go to your Hadoop home directory:
~$ cd libexec
~$ nano hadoop-functions.sh
Edit this line:
if [[ -e '/usr/bin/pdsh' ]]; then
with:
if [[ ! -e '/usr/bin/pdsh' ]]; then
Additionally, it is recommended that pdsh also be installed for better ssh resource management. (Hadoop: Setting up a Single Node Cluster)
We can remove pdsh to solve this problem.
apt-get remove pdsh
Check whether a firewall is running on your vagrant box:
chkconfig iptables off
/etc/init.d/iptables stop
If not, have a look at the underlying logs in /var/log/...
I was dealing with my colleague's problem.
He had configured ssh using the hostname from the hosts file but specified the IP in the workers file.
After I rewrote the workers file, everything worked.
~/hosts file
10.0.0.1 slave01
# ssh-copy-id hadoop@slave01
~/hadoop/etc/workers
slave01
I added export PDSH_RCMD_TYPE=ssh to my .bashrc file, logged out and back in, and it worked.
For some reason, simply exporting and running right away did not work for me.

How to copy files from one machine to another machine

I want to copy /home/cmind012/m.sh from one system to another (both systems are Linux) using a shell script.
Command:
scp /home/cmind012/m.sh cmind013:/home/cmind013/tanu
I am getting the message:
ssh: cmind013: Name or service not known
lost connection
It seems that cmind013 is not being resolved. I would first try
nslookup cmind013
and see why it doesn't resolve.
It seems that you are missing the IP address/domain of the remote host. The format should be user@host:[directory]
You could do the following:
scp -r [directory/files] [remote host]:[destination directory]
ex: scp -r /var/www/html/* root@192.168.1.0:/var/www/html/
Try the following command:
scp /home/cmind012/m.sh denil@172.22.192.105:/home/denil/

Vagrant + docker errors

I'm using Vagrant 1.6.3 with phusion/baseimage as the docker provider to get going with Docker. But I have been running into this error:
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
ssh -i
/tmp/key_e8ffa02d35af2bec7aab60fe7e9df4db_0c30703c7b7126cdf4832a41b85627e5
-o Compression=yes -o ConnectTimeout=5 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -p22 root@172.17.0.2 'sudo -E -H bash -l'
Stdout from the command:
boot2docker: 0.8.0
VAGRANT FENCE: 1402443935 41755
Reading package lists...
Building dependency tree...
Reading state information...
Stderr from the command:
Warning: Permanently added '172.17.0.2' (ECDSA) to the list of known hosts.
stdin: is not a tty
VAGRANT FENCE: 1402443935 88439
modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/3.13.3-tinycore64/modules.dep.bin'
E: Unable to locate package linux-image-extra-3.13.3-tinycore64
E: Couldn't find any package by regex 'linux-image-extra-3.13.3-tinycore64'
Can anyone help me out? Thanks.
It seems like the problem is that you're sshing to this server for the first time and ssh asks you to confirm the server's key. But since this is run from a script, the user doesn't answer it, and ssh returns an error code.
Option 1. I haven't used vagrant, so I'm not sure if you can ssh to this host interactively to add the key.
Option 2. Add the key manually. Usually the known_hosts file is hashed, so it's not very easy to work with. You'll have to use ssh-keyscan and ssh-keygen to find the right keys. Here is a small tutorial; you can google for more.
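A sketch of what Option 2 can look like (the IP is taken from the error above; verify the fetched key out of band before trusting it):
# Fetch the host key and append it to known_hosts
ssh-keyscan 172.17.0.2 >> ~/.ssh/known_hosts
# Confirm an entry for the host now exists
ssh-keygen -F 172.17.0.2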
Option 3. Use something like
yes "yes" | ssh ...
to automatically accept the offered key.
Option 4. Do not require the key, like this:
ssh -oStrictHostKeyChecking=no ...
P.S. I haven't tested these, so some may not work, sorry.
P.P.S. Options 3 and 4 have security problems. Options 1 and 2 are better, but still may pose security issues if you don't verify the keys.
