There are 100k keys in my Redis instance on EC2. I want to copy all keys to another EC2 instance without using the BGSAVE and SAVE commands. I want to copy all keys with a Linux command. Is there any Linux command to do so?
Hi, you can move your Redis keys from one instance to another with the commands below.
Option one (moves the keys, removing them from the source):
MIGRATE HOSTNAME PORT "" 0 5000 KEYS key1 key2 key3
Option two (copies the keys, keeping them on the source, using MIGRATE's COPY option):
MIGRATE HOSTNAME PORT "" 0 5000 COPY KEYS key1 key2 key3
Hoping this will help you.
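If you need to move all 100k keys rather than a fixed list, one approach is to drive MIGRATE from the shell with redis-cli. A minimal sketch, assuming redis-cli is installed on the source instance and DEST_HOST is a placeholder for the target EC2 host:
# Iterate over every key on the source (via SCAN) and migrate it to the target.
redis-cli --scan | while IFS= read -r key; do
    redis-cli MIGRATE DEST_HOST 6379 "$key" 0 5000 COPY REPLACE
done
This issues one MIGRATE per key, so it is slow for very large datasets; dropping COPY moves the keys instead of duplicating them.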
How about using the MIGRATE command?
I am using two user accounts on my Windows PC, and both users need Git while having different accounts on GitHub. How can I set this up? Is there a way for both of us to use two GitHub accounts on the same PC?
I tried a solution where the configuration file was changed, but my new user doesn't have that file.
Thanks in advance!
If those are two different accounts on your PC, they each have their own %USERPROFILE%/$HOME.
That means each one can create their own SSH key (ssh-keygen -t rsa -P "") and register the %USERPROFILE%\.ssh\id_rsa.pub to their respective GitHub profile.
No need for a %USERPROFILE%\.ssh\config in that case.
If you have only one logged on user on your PC, but need to push as two different users, then yes:
create two different key pairs (ssh-keygen -t rsa -P "" -f key1 and ssh-keygen -t rsa -P "" -f key2, in a created %USERPROFILE%\.ssh)
create a %USERPROFILE%\.ssh\config as you have seen.
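A minimal sketch of that config, assuming the two key files are named key1 and key2 as above; the host aliases github-work and github-personal are placeholders you can rename:
Host github-work
    HostName github.com
    User git
    IdentityFile ~/.ssh/key1
    IdentitiesOnly yes

Host github-personal
    HostName github.com
    User git
    IdentityFile ~/.ssh/key2
    IdentitiesOnly yes
You would then clone with git clone git@github-work:org/repo.git (or git@github-personal:...), and ssh picks the matching key for each alias.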
I know that an SSH key connection is required for Hadoop to operate.
Suppose there is a cluster of five machines consisting of one namenode and four datanodes.
By setting up the SSH key connection, we can connect from the namenode to the datanodes and vice versa.
Note that a two-way connection is required for Hadoop operation: as far as I know, having only one direction (namenode to datanode, but not datanode to namenode) is not enough to operate Hadoop.
For the above scenario, if we have 50 or 100 nodes, it is very laborious to configure all the SSH keys by connecting to each machine and typing the same ssh-keygen -t ... commands.
For these reasons, I have tried to script this in shell, but failed to do it in an automatic way.
My code is as below.
list.txt
namenode1
datanode1
datanode2
datanode3
datanode4
datanode5
...
cat list.txt | while read server
do
    ssh $server 'ssh-keygen' < /dev/null
    while read otherserver
    do
        ssh $server 'ssh-copy-id $otherserver' < /dev/null
    done
done
However, it didn't work. As you can see, the idea is to iterate over all the nodes, generate a key on each one, and then copy the generated key to every other server using ssh-copy-id. But the code didn't work.
So my question is: how can I script this so that SSH connections (both ways) are set up automatically? It is taking me a lot of time, and I cannot find any document describing how to set up SSH across many nodes without this laborious manual work.
You only need to create a public/private key pair on the master node, then use ssh-copy-id -i ~/.ssh/id_rsa.pub $server in the loop. The master itself should be in the loop. There is no need to do this in reverse from the datanodes back to the master. The key has to belong to, and be installed by, the user that runs the Hadoop cluster. After running the script, you should be able to ssh to all nodes, as the hadoop user, without using a password.
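A minimal sketch of that loop, assuming list.txt holds one hostname per line (master included) and that you run it on the master as the hadoop user; you will still be asked for each node's password once while the key is being installed:
# Generate the key pair once (skip if it already exists), then push the public key to every node.
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
while read -r server
do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "$server"
done < list.txt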
I have to connect to more than 100 machines through SSH. I made a script to make all the connections and perform the changes that I need. The problem is that I can't type the password every time I execute the script for each of the remote machines. Then I found out that I could create a file named config in the /root/.ssh/ directory where I can store lines like this:
IdentityFile /root/.ssh/id_rsa_XXXX
The key pairs are also saved in /root/.ssh/, but the problem is that there is a limit of 100 identity files that I can list in the config file.
Do you know if there's a workaround to make this possible?
Thanks to all, first question here! :)
First of all, if you have 100 servers to connect to and 100 keys, you are doing it wrong. You can reuse the same public key for the other servers as long as you keep the private key safe.
If you are trying to load all the keys into ssh at once, that is also wrong. The ssh config has a Host keyword which lets you specify which key should be used for which server, and I advise you to use it. Otherwise ssh will not know which key to use for which server; restricting keys per host also gets you around the limit.
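A minimal sketch of such a /root/.ssh/config; the hostnames and key file names are placeholders:
Host server1.example.com
    IdentityFile /root/.ssh/id_rsa_server1
    IdentitiesOnly yes

Host server2.example.com
    IdentityFile /root/.ssh/id_rsa_server2
    IdentitiesOnly yes

# Or reuse one key for a whole group of servers
Host *.example.com
    User user.name
    IdentityFile /root/.ssh/id_rsa_shared
IdentitiesOnly yes makes ssh offer only the key named for that host instead of trying every loaded identity.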
Do you have separate ssh keys for each and every server? You could bundle them (one key for each type/function of server). Then you wouldn't need to specify each inside a config file.
Another way around this would be to specify the key on the command line instead of in a config file, like so:
ssh -i /root/.ssh/id_rsa_XXX -l user.name server.example.com
If you do it carefully, you could create /root/.ssh/hostname where hostname is the actual hostname of the server you want to connect to. For example:
/root/.ssh/server.example.com
You could then script (BASH) like so (assuming you call the script dossh.sh):
#!/bin/bash
# The single argument is used both as the key file name and the host to connect to.
key_and_hostname=$1
ssh -i /root/.ssh/${key_and_hostname} -l user.name ${key_and_hostname}
call the script like:
dossh.sh server.example.com
I'm setting up a multinode Hadoop cluster and have a shared key for passwordless SSH between nodes. I named the file ~/.ssh/hadoop_rsa and can connect to other hosts using ssh -i ~/.ssh/hadoop_rsa host.
I need some way to tell hadoop to use this alternate SSH key when connecting to other nodes.
It appears that commands are run on each slave using the script:
$HADOOP_HOME/sbin/slaves.sh
That script includes a reference to the environment variable $HADOOP_SSH_OPTS when calling ssh. I was able to tell Hadoop to use a different key file by setting an environment variable like this:
export HADOOP_SSH_OPTS="-i ~/.ssh/hadoop_rsa"
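To avoid exporting it in every shell, one option is to persist the setting in hadoop-env.sh; a minimal sketch, assuming the standard layout under $HADOOP_HOME:
# Append the option to hadoop-env.sh so the sbin scripts pick it up on every run.
echo 'export HADOOP_SSH_OPTS="-i ~/.ssh/hadoop_rsa"' >> "$HADOOP_HOME/etc/hadoop/hadoop-env.sh"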
Thanks to Varun on the Hadoop mailing list for pointing me in the right direction.
Not sure of the best way to title this question... I have a bash script on Server A.
Work Ubuntu Desktop:
ssh -l USER host
*password*
coolscript var1 var2 var3
Server A (coolscript):
command1 $1
command2 $2
Now at this point I need to trigger coolscript2 on Server B with the third argument passed, something like:
run_remote_command_on_server_b coolscript2 $3
Server B (coolscript2)
command3 $3
However, I need this to happen without having to enter a username/password for the second server.
If I understand your question correctly, you need to setup SSH keys.
Arch Linux Wiki has a great article on using SSH keys.
You can also read shorter HOWTO here.
Basically, when you log in from host A to host B via SSH, you can skip password authentication by generating a private/public key pair. The private key is stored on host A, and you copy the public key to host B.
Please note that there is an option to secure the SSH private key with a passphrase; in your case you wouldn't do that.
So, just generate keys on desktop:
$ ssh-keygen
Then copy them to Server A and Server B:
$ ssh-copy-id -i ~/.ssh/id_rsa.pub HOST_A
$ ssh-copy-id -i ~/.ssh/id_rsa.pub HOST_B
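Note that copying the desktop's key to both servers makes the desktop-to-Server-A and desktop-to-Server-B logins passwordless, but coolscript on Server A still needs a way to reach Server B. A minimal sketch of two options, reusing the HOST_A/HOST_B placeholders above:
# Option 1: give Server A its own key and install it on Server B
ssh HOST_A
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub HOST_B

# Option 2: forward your desktop's ssh agent so Server A can reuse the desktop key
ssh -A -l USER HOST_A
With either option in place, coolscript can run something like ssh HOST_B coolscript2 "$3" without prompting for a password.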