Cronjob not executing the shell script completely

This is Srikanth from Hyderabad.
I am the Linux administrator at a corporate company. We have a Squid server, so I prepared a backup Squid server so that when the live Squid server goes down I can put the backup one into production.
My Squid servers run CentOS 5.5. I have prepared a script to back up all configuration files in /etc/squid/ of the live server to the backup server, i.e. it copies all files from the live server's /etc/squid/ to the backup server's /etc/squid/.
Here is the script, saved as squidbackup.sh in the directory /opt/ with permissions 755 (rwxr-xr-x):
#!/bin/sh
username="<username>"
password="<password>"
host="<Server IP>"

expect -c "
spawn /usr/bin/scp -r ${username}@${host}:/etc/squid /etc/
expect {
    \"*password:*\" {
        send \"${password}\r\"
        interact
    }
    eof {
        exit
    }
}
"
Kindly note that this is executed on the backup server and authenticates as the user mentioned in the script. I have created that user on the live server and put the same credentials in the script.
When I execute the script with the command below:
[root@localhost ~]# sh /opt/squidbackup.sh
everything works fine: the script downloads all the files from /etc/squid/ of the live server to /etc/squid/ of the backup server.
Now the problem arises if I set this in the crontab as below (or with other timings):
50 23 * * * sh /opt/squidbackup.sh
I don't know what is wrong, but it does not download all the files; the cron job downloads only a few files from /etc/squid/ of the live server to /etc/squid/ of the backup server.
Only a few files are downloaded when cron executes the script. If I run the script manually, it downloads all the files perfectly, without any errors or warnings.
If you have any more questions, please go ahead and post them. Any solutions would be much appreciated. Thank you in advance.
Thanks for your interest. I have tried what you suggested, and it shows the output below; previously I used to get the same output mailed to the user on the Squid backup server.
The cron logs show the same thing, but I was not able to work out the exact error from the lines below.
Please note that only a few files are downloaded with cron.
spawn /usr/bin/scp -r <username>@ServerIP:/etc/squid /etc/
<username>@ServerIP's password:
Kindly check if you can suggest anything else.

Try the simple options first. Capture stdout and stderr as shown below; these files should point to the problem.
Looking at the script, you also need to specify the full path to expect, since cron runs with a minimal PATH. That could be an issue.
50 23 * * * sh /opt/squidbackup.sh >/tmp/cronout.log 2>&1
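One more thing worth checking: interact hands control of the spawned scp over to a terminal, and a cron job has no TTY attached, so expect can exit early and kill scp mid-transfer, which would match the symptom of only a few files arriving. Below is a minimal cron-safe sketch of the script (the placeholders are the ones from your script; this is untested against your setup). It uses an absolute path to expect, disables expect's default timeout, and waits for eof instead of interacting:
#!/bin/sh
username="<username>"
password="<password>"
host="<Server IP>"

# Absolute path to expect, because cron runs with a minimal PATH.
/usr/bin/expect -c "
# Disable the 10-second default timeout so a long copy is not killed.
set timeout -1
spawn /usr/bin/scp -r ${username}@${host}:/etc/squid /etc/
expect {
    \"*password:*\" { send \"${password}\r\"; exp_continue }
    eof { }
}
"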

Related

Shell Script Issue Running Command Remotely using SSH

I have a deploy script in which I want to clear the cache of my CDN. When I am on the server and run the script, everything is fine; however, when I SSH in and run only that file (i.e. without actually getting into the server, cd'ing into the directory, and running it), it fails and states that my doctl command cannot be found. This seems to be an issue only with this program over SSH; running systemctl --help works fine.
Please note that I have installed DigitalOcean's doctl using sudo snap install doctl and it is there.
Here is the .sh file (minus comments):
#!/bin/sh
doctl compute cdn flush [MYID] --files [*] # static cache
So I am not sure what the issue is. Anybody have an idea?
Again, if I get into the server and run the file, all works. But here is the SSH command I use that returns the error:
ssh root@123.45.678.999 "/deploy/clear_digital_ocean_cache.sh"
And here is the error.
/deploy/clear_digital_ocean_cache.sh: 10: doctl: not found
Well, one solution was to change the command to an absolute path inside my .sh file, like so:
#!/bin/sh
/snap/bin/doctl compute cdn flush [MYID] --files [*] # static cache
I realized that I could run my user commands over ssh (like systemctl), so it was either change where doctl was located (i.e. put it in a user bin directory) or ensure that the command was called with an absolute path, adding the /snap/bin/ in front of the command.
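An alternative sketch, assuming doctl really is in /snap/bin (the usual snap location): extend PATH once at the top of the script instead of hard-coding each command, since non-interactive SSH sessions do not read the login shell's profile:
#!/bin/sh
# Non-interactive SSH skips the login profile, so /snap/bin
# may be missing from PATH; add it explicitly.
PATH="$PATH:/snap/bin"
export PATH

doctl compute cdn flush [MYID] --files [*] # static cache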

Newb issue with wget sftp - hangs on authentication

I am a web dev trying to do a little bit of Linux admin and could use help. My server needs to retrieve a file daily from a remote location over SFTP, name it with a date/time stamp, and push it to a directory for archiving.
I have adapted a shell script that worked when doing this over FTP, but SFTP is causing me some issues.
I can successfully connect to the server in FileZilla when it is set to the SFTP protocol and the "Logon Type" is "Interactive", where it prompts for a password.
When I use the command line to call my script, it seems to resolve but hangs on the logging-in step and gives the following error before retrying: "Error in server response, closing control connection. Retrying."
Here is the output:
https://i.imgur.com/dEXYRHk.png
These are the contents of my script, where I've replaced any sensitive information with a placeholder in ALL CAPS.
#!/bin/bash
# Script Function:
# This bash script backs up the .csv every day (dependent on the cron job run) with a file name time stamp.
#[Changes Directory]
cd /THEDIRECTORY
wget --no-passive-ftp --output-document=completed`date +%Y-%m-%d`.csv --user=THEUSER --password='THEPASSWORD' ftp://sftp.THEDOMAIN.com:22 completed.csv
Anyone wanna help a newb to get some of them internet points?! :-)
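A note on the likely cause: wget speaks HTTP(S) and FTP(S) but not SFTP, so pointing an ftp:// URL at port 22 (an SSH service) would explain the hang. A hedged sketch of the same job using curl instead, which supports sftp:// when built with libssh2 (THEUSER, THEPASSWORD, and the file name are the placeholders from the question):
#!/bin/bash
# Fetch completed.csv over SFTP and store it with a date stamp.
# The server's host key should already be in ~/.ssh/known_hosts.
cd /THEDIRECTORY
curl --user THEUSER:THEPASSWORD \
  --output "completed$(date +%Y-%m-%d).csv" \
  "sftp://sftp.THEDOMAIN.com/~/completed.csv"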

Mikrotik - Upload file and change it into a script

New to Mikrotik scripting, and missing something really obvious. When I create a new script with
/system script add name=mail
/system script edit mail source
save the script, and run it, everything is just fine.
Now, if I want to push scripts via scp, I hit a roadblock. I upload the .rsc files but don't know how to make, for example, the uploaded script.rsc be used as the source for a new script. And my google-fu fails me. Any help appreciated here!
To push a file and execute commands on RouterOS/Mikrotik, use a Linux server.
Prepare variables:
ROUTEROS_USER=$1
ROUTEROS_HOST=$2
ROUTEROS_SSH_PORT=$3
FILE=somescript.rsc
Push the file using:
scp -P $ROUTEROS_SSH_PORT "$FILE" "$ROUTEROS_USER"@"$ROUTEROS_HOST":"$FILE"
Execute the command that runs the import on RouterOS:
ssh $ROUTEROS_USER@$ROUTEROS_HOST -p $ROUTEROS_SSH_PORT "/import file-name=$FILE"
The /import file-name=$FILE command may differ depending on your RouterOS version.
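Putting the pieces together, a minimal wrapper script might look like the sketch below (the argument handling and the somescript.rsc name are assumptions for illustration):
#!/bin/sh
# push_script.sh - upload an .rsc file to a Mikrotik router and import it
# Usage: ./push_script.sh <user> <host> <ssh-port>
ROUTEROS_USER=$1
ROUTEROS_HOST=$2
ROUTEROS_SSH_PORT=$3
FILE=somescript.rsc

# Copy the script file onto the router
scp -P "$ROUTEROS_SSH_PORT" "$FILE" "$ROUTEROS_USER"@"$ROUTEROS_HOST":"$FILE"

# Run the uploaded file's commands on the router
ssh "$ROUTEROS_USER"@"$ROUTEROS_HOST" -p "$ROUTEROS_SSH_PORT" "/import file-name=$FILE"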

Can't locate desired local directory while using SCP on Mac

I'm trying to copy a local directory to the root directory of my 1and1 server. I'm on a Mac and I've SSH'ed into the server just fine. I looked online and saw numerous examples, all along the same lines. I tried:
scp -r ~/Desktop/Projects/resume u67673257@aimhighprogram.com:/
The result in my terminal was an error. I'm not sure where Kuden/homepages/29/d401405832/htdocs came from; I thought the ~ would take me to the MacBook user directory.
Any help would be appreciated; I'm not sure if I'm just missing something simple.
Thanks in advance.
To scp, issue the command on your Mac; don't SSH into 1and1.
The error message is telling you that ~/Desktop/Projects/resume is not on the 1and1 server, which you know, because you're working to put it there.
More ...
scp myfile myuser@myserver:~/mypath/myuploadedfile
You would read this as:
copy myfile to myserver, logging in as myuser, and place it under the mypath directory of the myuser account, with the name myuploadedfile
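Applied to the question, that means exiting the SSH session and running the copy from the Mac's own shell, along these lines (the ~/ destination, the remote home directory, is an assumption; adjust to where the files should land):
scp -r ~/Desktop/Projects/resume u67673257@aimhighprogram.com:~/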

Script for accessing a remote server, getting the error log, and renaming it automatically

Hi, my name is Evan, a newbie on UNIX :)
I want to ask about scripting on UNIX. Here is the case:
I have 4 UNIX servers (with FreeBSD), let's call them the "Gorillas",
and one gateway server (also FreeBSD), let's call that one "Monkey".
If I want to access and log in to a Gorilla server, I have to use PuTTY to access Monkey and then, from Monkey, make an SSH connection into the Gorilla server.
The case is, my boss is asking me to get the Apache error log, every day, from the fourth of the Gorilla servers.
All this time I have been doing it manually: PuTTY to Monkey, SSH to the Gorilla, copy the error log onto the Monkey server using the scp command, and then fetch the error log from Monkey with WinSCP.
The problems are:
How do I make a script for this case?
How do I rename the error log automatically? The error log on every server has the same name, "01_error.log", so I had to rename them manually to keep them from replacing each other.
I hope somebody can help me with this.
Thank you all for your help and time, and sorry for the bad English. :)
The easiest way to accomplish this would be to set up an automated job on Gorilla4.
Your first problem is that you'll need to set up password-less SSH access between Gorilla4 and Monkey, so no person has to physically type in the password.
While you can do this with the 'root' user, I would STRONGLY recommend against it.
Instead, create a maintenance user on BOTH hosts:
$ useradd -m maintuser
Then switch to the new user and create an SSH key on Gorilla4:
$ ssh-keygen -t rsa -b 2048
Accept the defaults when prompted. Then append the contents of id_rsa.pub to ~/.ssh/authorized_keys for the maintuser on Monkey.
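If ssh-copy-id is available on Gorilla4, it automates that append step (this assumes password authentication is still enabled for the first copy):
$ ssh-copy-id maintuser@monkey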
Now, when you are the "maintuser" on Gorilla4, you can SSH to Monkey without a password.
Then you can create a script called "copy_log.sh":
#!/bin/sh
# copy_log.sh
# (plain sh, since FreeBSD does not ship bash at /bin/bash by default)
log_path="/path/to/logdir"
log_name="01_error.log"
target_host="monkey"

echo "copying ${log_name} to ${target_host}..."
# note: $(hostname) below prefixes the hostname, e.g. "Gorilla4", to the
# file name, so logs from different servers cannot overwrite each other
scp ${log_path}/${log_name} maintuser@${target_host}:/path/to/dest/$(hostname)_${log_name} || {
    echo "Failed to scp file"
    exit 2
}
echo "completed successfully"
Make it executable:
$ chmod +x copy_log.sh
Add it to the maintuser's crontab on Gorilla4 to run at whatever time you would normally do it yourself, say 8 am every day:
00 08 * * * /path/to/copy_log.sh >> /some/log/dir/copy_log.out 2>&1
Hope this helps; if nothing else, it will give you plenty to Google :)
