Script to access a remote server, fetch the error log and rename it automatically - shell

Hi, my name is Evan, and I'm a UNIX newbie. :)
I want to ask about scripting on UNIX. Here is the situation:
I have 4 UNIX servers (running FreeBSD); let's call them the "Gorillas".
There is also one gateway server (also running FreeBSD); let's call it "Monkey".
If I want to log in to a Gorilla server, I have to use PuTTY to reach Monkey and then make an SSH connection from Monkey into the Gorilla server.
My boss has asked me to collect the Apache error log, every day, from each of the four Gorilla servers.
Until now I have been doing this manually: PuTTY to Monkey, SSH to the Gorillas, copy the error log to Monkey with scp, and then download it from Monkey with WinSCP.
The problems are:
How do I write a script for this?
How do I rename the error logs automatically? The log has the same name on every server, "01_error.log", so I have to rename each copy manually to stop them from overwriting each other.
I hope somebody can help me with this.
Thank you all for your help and time, and sorry for the bad English. :)

The easiest way to accomplish this would be to set up an automated job on each Gorilla; the steps below use Gorilla4 as the example.
Your first problem is that you'll need to set up password-less SSH access between Gorilla4 and Monkey, so that no person has to type in the password.
While you could do this with the 'root' user, I would STRONGLY recommend against it.
Instead, create a maintenance user on BOTH hosts (the hosts are FreeBSD, where user creation is usually done with pw useradd or the interactive adduser):
$ pw useradd -n maintuser -m
Then switch to the new user and create an SSH key on Gorilla4:
$ ssh-keygen -t rsa -b 2048
Accept the defaults when prompted. Then append the contents of the resulting id_rsa.pub file to ~/.ssh/authorized_keys of the maintuser on Monkey.
Now, when you are the "maintuser" on Gorilla4, you can SSH to Monkey without a password.
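One way to get the key over, sketched under the assumption that maintuser's password still works for this one manual step (the host name "monkey" matches the script below):
$ # run on Gorilla4 as maintuser; appends the public key to maintuser's authorized_keys on Monkey
$ cat ~/.ssh/id_rsa.pub | ssh maintuser@monkey 'mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'
$ # verify that no password prompt appears any more
$ ssh maintuser@monkey hostname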
Then you can create a script called "copy_log.sh":
#!/bin/sh
# copy_log.sh -- copy this server's Apache error log to Monkey
# (/bin/sh is used because bash is not part of the FreeBSD base system)
log_path="/path/to/logdir"
log_name="01_error.log"
target_host="monkey"
echo "copying ${log_name} to ${target_host}..."
# note: $(hostname) below adds this server's name (e.g. "Gorilla4") to the file name,
# so the copies from the four servers cannot overwrite each other
scp "${log_path}/${log_name}" "maintuser@${target_host}:/path/to/dest/$(hostname)_${log_name}" || {
    echo "Failed to scp file"
    exit 2
}
echo "completed successfully"
Make it executable:
$ chmod +x copy_log.sh
Add it to the maintuser's crontab on Gorilla4 to run at whatever time you would normally do this yourself, say 8am every day:
00 08 * * * /path/to/copy_log.sh >> /some/log/dir/copy_log.out 2>&1
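To install the entry, you can edit maintuser's crontab interactively with crontab -e, or append it non-interactively; a minimal sketch using the same placeholder paths as above:
$ # as maintuser on Gorilla4
$ crontab -l    # see what is already scheduled, if anything
$ ( crontab -l 2>/dev/null; echo "00 08 * * * /path/to/copy_log.sh >> /some/log/dir/copy_log.out 2>&1" ) | crontab -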
Hope this helps; if nothing else, it will give you plenty to Google :)

Related

Unable to run script remotely on VIO servers?

I need to run a KSH script on a VIO server remotely. But as the VIO server uses a restricted shell, I tried the following:
ssh -q -T padmin@vioserver "oem_setup_env" < script.ksh
This worked fine last time, but when I tried it again today it threw an error:
rksh: oem_setup_env: not found
Can someone suggest how to run a script remotely on VIO servers?
I assume you are using keys so you can log in without using a password. If the previous sentence makes no sense to you, we can address that as well. Just ask.
VIOS is just AIX, so it has a root user. You can find the path of root's home directory with echo ~root; as I recall, it is usually /. So become root by running oem_setup_env, create ~root/.ssh, and copy your public key into ~root/.ssh/authorized_keys. Check all the permissions: the directory and file should be owned by root and be 0700 or 0600 respectively (not readable or writable by others). Then use ssh root@host ...
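A sketch of those steps, assuming you are already in the root shell that oem_setup_env gives you and that your public key has been copied to /tmp/mykey.pub (that path is just a placeholder):
# on the VIO server, inside the root shell from oem_setup_env
mkdir -p ~root/.ssh
chmod 700 ~root/.ssh
cat /tmp/mykey.pub >> ~root/.ssh/authorized_keys
chmod 600 ~root/.ssh/authorized_keys
After that, something like ssh root@vioserver ksh script.ksh should run the script without the restricted shell getting in the way.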
I stumbled upon this question when looking for a way to run commands as root through ssh. I found this to work:
ssh padmin@vioshost "echo lparstat -i | oem_setup_env"
lparstat -i can be replaced by any command you want to run as root.
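Building on that trick, a hedged sketch for running a whole script as root this way (the remote path /home/padmin/script.ksh is an assumption, and the script has to be copied there first):
scp script.ksh padmin@vioshost:/home/padmin/script.ksh
ssh -q padmin@vioshost "echo 'ksh /home/padmin/script.ksh' | oem_setup_env"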

SSH and agent for Ubuntu file transfer automation

I have a script that creates database dumps and transfers the files from an Ubuntu server to a Linux machine. I use scp for the file transfer, but it prompts for a password every time, and I need to automate it. I have the RSA public key of the Linux machine in the Ubuntu machine's authorized_keys, but when I scp it says Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password). I have checked the permissions and settings such as PasswordAuthentication being off, with no luck.
Can I write the password into my script and use it regardless of security, given that I will set 700 permissions and nobody except me (the root user) can access it?
This is my script:
export DB_DUMP_DIR=/home/database_dump
export DB_NAME=database_name_$(date '+%Y_%m_%d').sql
mysqldump -u root mysql > ${DB_DUMP_DIR}/${DB_NAME}
if [ $? -eq 0 ];then
scp -i /root/.ssh/id_rsa ${DB_DUMP_DIR}/${DB_NAME} root@192.0.0.0:
else
echo "Error generating database dump"
fi
The first things that come to mind are the following (a quick way to check them is sketched after the list):
Is the server set to allow key authentication? (that's PubkeyAuthentication yes in sshd_config)
Is the server allowing RSA keys? (this might look like RSAAuthentication no in your sshd_config)
Is root's ~/.ssh directory set to 700? (or tighter)
Is root's ~/.ssh/authorized_keys set to 600? (or tighter)
Is the remote machine allowing you to log in as root? (the PermitRootLogin no option in sshd_config)
Is it really the right key you're sending here? Did you try with a different key you created just to test this?
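A minimal sketch of those checks, assuming the usual /etc/ssh/sshd_config location on the remote machine:
# on the remote machine, as root
grep -Ei 'pubkeyauthentication|rsaauthentication|permitrootlogin' /etc/ssh/sshd_config
ls -ld /root/.ssh                  # should be drwx------ (700) or tighter
ls -l /root/.ssh/authorized_keys   # should be -rw------- (600) or tighter
# and from the sending side, watch which key is offered and why it is rejected:
ssh -v -i /root/.ssh/id_rsa root@192.0.0.0 true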
Lastly, it is never, ever a good idea to write the password down in a script. Just don't do it. Fix the problem you have with key authentication here instead.

Pause for password in an sftp bash script

I am trying to write a script to automatically upload files to an sftp server. My problem is authentication.
I know it is not possible to store a password in a bash script for sftp.
I can't use keys because the admin of the server won't allow me.
I don't want to use any extras (sshpass/expect) because I can't guarantee they will be on the machine I'm using (the scripts are wanted so that the process is not tied to a particular machine).
Manual entry of the password is not a problem; I just need the script to wait for the user to enter it. At the moment, when I run the script it opens a terminal and prompts for the password, but once it is entered nothing else happens. If I type the same lines manually afterwards, everything uploads correctly.
#!bin/bash/
cd /remote_directory
lcd /local_directory
put some_file.txt
After months of looking for an answer I finally found the solution, in a comment on an answer in some other thread I can't even remember. I hope this can help others out there.
Your bash script should look like this; it will connect to the sftp server, prompt the user for the password, and then execute the remaining commands.
#!/bin/bash
sftp user#server <<!
cd /the/remote/directory
lcd /your/local/directory
put/get some.file
!
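If you need to reuse this for different files, a small variation that takes the file name as an argument (the script name upload.sh and the directories are placeholders):
#!/bin/bash
# usage: ./upload.sh some.file
file_to_send="$1"
sftp user@server <<EOF
cd /the/remote/directory
lcd /your/local/directory
put ${file_to_send}
EOF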

Need shell script to auto login to remote server

I have 10 Linux servers.
To connect to a server I have to execute the ssh command and log in every time.
I need one single shell script to log in to a remote server.
e.g. if the server's host name is testhost.com, the user is user1 and the password is pass,
then when I give the user name user1 in the terminal, it should automatically execute the shell script and log me in to the remote server as user1.
Hi, I know this is an old question, but here is a way to do it. First follow the link above from @nick hartung. Then, since you have 10 servers, you call each server by name; for this example I'll name one of the servers 'server1'. Also remember to change the SSH port from 22 to something else, e.g. 22277. Now create a script, name it server1, and put this in it:
#!/bin/bash
ssh username@hostname -p 22277
Then make it executable and move it to /usr/bin:
$ sudo chmod 755 server1
$ sudo mv server1 /usr/bin/
Now you can log in to the remote host simply with:
$ server1
and you will be automatically logged in.
You can write a script that will take a username as a parameter and ssh to the correct host based on that. A quick example:
if [ "$1" == "username" ]; then
ssh username#hostname
fi
if [ "$1" == "username2" ]; then
...
However, the ssh command doesn't have a built-in way to provide a password, AFAIK. You shouldn't be storing your passwords in a script anyway. The way to get around this is to set up automatic authentication by creating a key pair using ssh-keygen. Here is a link that will show you how to set this up.
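Another standard option, separate from the script above, is to give each server a short alias in ~/.ssh/config, so that key-based login needs no wrapper script at all; the host names and key path below are placeholders:
# ~/.ssh/config on the machine you connect from
Host server1
    HostName testhost.com
    User user1
    IdentityFile ~/.ssh/id_rsa
Host server2
    HostName otherhost.example.com
    User user1
    IdentityFile ~/.ssh/id_rsa
With that in place, and the public key copied to each server's authorized_keys, ssh server1 is all that is needed.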

How to automate password entry?

I want to install a software library (SWIG) on a list of computers (Jenkins nodes). I'm using the following script to automate this somewhat:
NODES="10.8.255.70 10.8.255.85 10.8.255.88 10.8.255.86 10.8.255.65 10.8.255.64 10.8.255.97 10.8.255.69"
for node in $NODES; do
scp InstallSWIG.sh root@$node:/root/InstallSWIG.sh
ssh root@$node sh InstallSWIG.sh
done
This way it's automated, except for the password requests that occur for both the scp and ssh commands.
Is there a way to enter the passwords programmatically?
Security is not an issue. I’m looking for solutions that don’t involve SSH keys.
Here's an expect example that SSHes in to Stripe's Capture The Flag server and enters the password automatically.
expect <<< 'spawn ssh level01@ctf.stri.pe; expect "password:"; send "e9gx26YEb2\r"; interact'
With SSH, the right way to do it is to use keys instead.
# ssh-keygen
and then copy the ~/.ssh/id_rsa.pub file to the remote machine (root@$node) and append it to the remote user's .ssh/authorized_keys file.
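A short sketch of that copy step, assuming ssh-copy-id is available on the machine running the loop (it appends your public key to the remote authorized_keys and prompts for the root password one last time):
# run once per node before the install loop
for node in $NODES; do
    ssh-copy-id root@$node
done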
You can perform the task using empty, a small utility from SourceForge. It's similar to expect but probably more convenient in this case. Once you have installed it, your first scp can be accomplished with the following two commands:
./empty -f scp InstallSWIG.sh root@$node:/root/InstallSWIG.sh
echo YOUR_SECRET_PASSWORD | ./empty -s -c
The first one starts your command in the background, tricking it into thinking it's running in interactive mode on a terminal. The other one sends it data from stdin. Of course, putting your password anywhere on the command line is risky: shell history is preserved, other users can see it in ps output, and so on. Not secure either, but a bit better, would be to store the password in a file and redirect the second command's input from that file instead of using echo and a pipe.
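A sketch of that file-based variant; /root/.scp_pass is a placeholder path, created by hand, owned by root, chmod 600, and containing only the password followed by a newline:
./empty -f scp InstallSWIG.sh root@$node:/root/InstallSWIG.sh
./empty -s -c < /root/.scp_pass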
After copying to the server, you can run the script in a similar manner:
./empty -f ssh root@$node sh InstallSWIG.sh
echo YOUR_SECRET_PASSWORD | ./empty -s -c
You could look into setting up passwordless ssh keys for that. Establishing Batch Mode Connections between OpenSSH and SSH2 is a starting point, you'll find lots of information on this topic on the web.
Wes' answer is the correct one but if you're keen on something dirty and slow, you can use expect to automate this.
