Remote bash script and executing the make command

I have a device installed remotely that has Internet access. As I cannot SSH directly to it, the device downloads updates from a .txt file located on a server. The device interprets this .txt file as a sequence of bash instructions.
Now I'm planning an update that requires re-compiling a C program on the device after downloading and overwriting some files. The content of the .txt file for this update looks like:
#!/bin/bash
curl -o /path-to-file/newfile.c http://myserver/newfile.c
sleep 10 #wait so it can download
cd /path-to-file
make
sleep 10 #wait to make
sudo /path-to-file/my-program #test if it worked
I previously tested this method and it worked as expected, but I never tested make. A couple of questions:
Should it work?
Is the sleep after make necessary?

Here is an example of how to retrieve a source code file into another directory, change to that directory, compile the source code with make and then execute the resulting binary:
mkdir -p path-to-file/
curl -o path-to-file/newfile.c http://www.csit.parkland.edu/~cgraff1/src/hello_world.c
cd path-to-file/
make newfile
./newfile
The cd is really not an integral part of the process, but it seems as if the question specifically pertains to performing the work in a directory other than the present working directory.

Related

Run script on remote server from local machine

I have a remote script on a machine (B) which works perfectly when I run it from machine (B). I wanted to run the script via ssh from machine (A) using:
ssh usersm@${RHOST} './product/2018/requests/inbound/delDup.sh'
However, machine (A) complains about the contents of the remote script (2018req*.txt is a variable defined at the beginning of the script):
ls: cannot access 2018req*.txt: No such file or directory
From the information provided, it's hard to do more than guess. So here's a guess: when you run the script directly on machine B, do you run it from your home directory with ./product/2018/requests/inbound/delDup.sh, or do you cd into the product/2018/requests/inbound directory and run it with ./delDup.sh? If it's the latter, that matters: a relative glob like 2018req*.txt is resolved against the directory you were in when you ran the script. If you cd into the inbound directory locally, it looks there; but running the script remotely doesn't change to that directory, so 2018req*.txt looks for files in your home directory.
If that's the problem, I'd rewrite the script to cd to the appropriate directory, either by hard-coding the absolute path directly in the script, or by detecting what directory the script is in (see "https://stackoverflow.com/questions/59895/getting-the-source-directory-of-a-bash-script-from-within" and BashFAQ #28: "How do I determine the location of my script? I want to read some config files from the same place").
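One way to make the script immune to the caller's working directory is to cd to its own location before using any relative globs. A runnable sketch (`dirname "$0"` covers the common cases; see BashFAQ #28 for the corner cases; the touch stands in for the real data files and the file name is an assumption):

```shell
#!/bin/sh
# Pin the working directory to the script's own location so that relative
# globs resolve the same way no matter where the script is invoked from.
cd "$(dirname "$0")" || exit 1
touch 2018req_sample.txt        # stand-in for the real data files
ls 2018req*.txt                 # now matches regardless of the caller's cwd
```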
BTW, any time you use cd in a script, you should test the exit status of the cd command to make sure it succeeded; if it didn't, the rest of the script will execute in the wrong place and may do unexpected and unpleasant things. You can use || to run an error handler if it fails, like this:
cd somedir || {
    echo "Cannot cd to somedir" >&2
    exit 1
}
If that's not the problem, please supply more info about the script and the situation it's running in (e.g. the locations of the files). The best thing to do would be to create a Minimal, Complete, and Verifiable example that shows the problem. Basically, make a copy of the script, remove everything that isn't relevant to the problem, make sure it still exhibits the problem (otherwise you removed something that was relevant), and add that (and the file locations) to the question.
First of all, when you use SSH, instead of sending the output (stdout and stderr) directly to the monitor, the remote machine/SSH server sends the data back to the machine from which you started the SSH connection. The ssh client running on your local machine just displays it (unless you redirect it, of course).
Now, from the information you have provided, it looks like the files are not present on server (B), or are not accessible (last but not least, are you sure your ls targets the proper directory?). You could display the current directory in your script before running the ls command, for debugging purposes.
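That diagnosis is easy to reproduce locally. A sketch using the question's directory layout (the file name is an assumption): the same glob succeeds or fails depending purely on the working directory it is expanded in.

```shell
#!/bin/sh
mkdir -p product/2018/requests/inbound
touch product/2018/requests/inbound/2018req_demo.txt
# The glob matches only when the working directory contains the files:
( cd product/2018/requests/inbound && ls 2018req*.txt )    # prints the file
ls 2018req*.txt 2>/dev/null || echo "no match from $PWD"   # likely fails here
```

Printing `$PWD` (or adding `pwd` at the top of the real script) shows immediately which directory the remote invocation is actually running in.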

wget hangs after large file download

I'm trying to download a large file (5 GB) over FTP. Here is my script.
read ZipName
wget -c -N -q --show-progress "ftp://Password@ftp.server.com/$ZipName"
unzip $ZipName
The file downloads to 100% but the script never reaches the unzip command. There is no error message and no output in the terminal, just a blank new line. I have to press CTRL+C and re-run the script to unzip, which works because wget then detects that the file is already fully downloaded.
Why does it hang like this? Is it because of the large file, or because of passing an argument to the command?
By the way, I can't use ftp because it's not on the VM I'm working on, and it's a temporary VM with no root privileges to install anything.
I've made some tests, and I think the size of the disk was the reason.
I tried with curl -O and it worked with the same disk space.
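If disk space is the suspect, a pre-flight check makes the script fail loudly up front instead of hanging before unzip. A sketch (the 1 MB threshold is only there to keep the sketch runnable anywhere; a 5 GB zip needs roughly twice that much free space once its contents are unpacked alongside it):

```shell
#!/bin/sh
# Fail before downloading when the disk cannot hold the archive plus its
# unpacked contents.
check_space() {                  # usage: check_space <dir> <needed_kb>
    free_kb=$(df -Pk "$1" | awk 'NR==2 {print $4}')
    [ "$free_kb" -ge "$2" ]
}
# For a real 5 GB zip, something like: check_space . $((11 * 1024 * 1024))
check_space . 1024 || { echo "not enough free disk space" >&2; exit 1; }
```

`df -P` forces the portable single-line-per-filesystem format, so the awk field index is stable across platforms.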

How to write a Makefile to copy scripts to server

After I finish writing scripts on my local machine, I need to copy them to the cluster to execute the code. For example, I want to copy all the Matlab files in my current directory to a directory on the server id@server.
Can anyone help write a very basic Makefile to fulfill this purpose?
Thanks a lot!
John
Here is an adaptation of Jens's answer, together with my answer here, that takes advantage of the capabilities of Make to only copy across those files that have been modified since the last time you copied the files to the server. That way, if you have hundreds of .m files and you modify one of them, you won't copy all of them across to the server.
It makes use of an empty hidden file, .last_push, that serves only to record (through its own timestamp) the time at which we last copied files to the server.
FILES = $(shell find . -name '*.m')
DEST = id@server:path/relative/to/your/serverhomedir
LAST_PUSH = .last_push
.PHONY : push
push : $(LAST_PUSH)
$(LAST_PUSH) : $(FILES)
	scp $? $(DEST)
	touch $(LAST_PUSH)
Run this with make or make push. The key is the automatic variable $?, which is populated with the list of all prerequisites that are newer than the target - in this case, the list of .m files that have been modified more recently than the last push.
How do you copy files to the server? Assuming you have ssh/scp available:
FILES = file1 file2 *.matlab
copy:
	scp $(FILES) id@server:path/relative/to/your/serverhomedir
Run with
$ make copy
As a shell script, it could look like this:
#!/bin/sh
set -- file1 file2 *.matlab
scp "$@" id@server:path/relative/to/your/serverhomedir
Don't forget to chmod u+x yourscript.

How to make open sourced scripts 'installable'?

I've finished a little useful script written in Bash, hosted on github. It's tested and documented. Now, I struggle with how to make it installable, i.e. where should I put it and how.
It seems other such projects use make and configure but I couldn't really find any information on how to do this for bash scripts.
Also I'm unsure into which directory to put my script.
I know how to make it usable by myself but if a user downloads it, I want to provide the means for him to install it easily.
There is no standard for this because most of the time a project isn't a single script file. Single-file scripts also don't need a build step (the script is already in an executable form), and configuration usually comes from an external config file (so no need for a configure script, either).
But I suggest adding a comment near the top of the file which explains what it does and how to install it (i.e. chmod +x and copy it to a suitable folder).
Alternatively, you could create an installer script which contains your original script plus a header which asks the user where she wants to install the real script and which does everything (mkdir, set permissions with sudo, etc.), but it really feels like overkill in your case.
If you want to make it installable so the package manager can easily install and remove (!) it, you need to look at the documentation for rpm or Debian packaging. These are the two most used package managers, but they can't install a script per-user (so it would probably end up in /usr/bin).
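For a single-file script, an installer really can be tiny. A sketch (the script name "myscript" and the ~/.local prefix are assumptions; ~/.local/bin avoids needing sudo, while PREFIX=/usr/local would install system-wide):

```shell
#!/bin/sh
set -e
# Stand-in for the real script so the sketch is self-contained:
printf '#!/bin/bash\necho hello from myscript\n' > myscript
PREFIX="${PREFIX:-$HOME/.local}"        # override with PREFIX=/usr/local
mkdir -p "$PREFIX/bin"
install -m 0755 myscript "$PREFIX/bin/myscript"   # copy + set exec bits
"$PREFIX/bin/myscript"                  # verify the installed copy runs
```

`install` (from coreutils) combines the copy and the chmod in one step, which is why packaging tools prefer it over cp.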
Instruct them to create a file named after the script in their home directory, chmod ug+x the file so it has executable permissions, then put the script inside the file; don't forget the #!/bin/bash at the top. This example is a script that copies a directory, archives the copy, then removes the copy, leaving only the original and the archive.
#!/bin/bash
#### Copy the desired directory
cp -r /home/wes/Documents/Hum430 /home/wes/docs
#### Archive the copy
tar -zcvf Hum430.tar.gz /home/wes/docs
#### Remove the un-archived copy, leaving only the original and the archive
rm -r /home/wes/docs
### Run the file with ./filename (whatever the file is named)

Why won't my Bash script run when it is deployed in a web app?

I created this simple script that does a backup. I wrote and tested it on Linux, then copied it into my WebApp's WEB-INF/scripts directory so that it could be run via Java Runtime.exec().
#!/bin/bash
JACCISE_FOLDER="/var/jaccise"
rm $JACCISE_FOLDER/jaccisebackup.zip
zip -r jaccisefolder.zip $JACCISE_FOLDER
mysqldump -ujacc -pxxx jacciseweb > jaccisewebdump.sql
zip jaccisebackup.zip jaccisewebdump.sql
zip jaccisebackup.zip jaccisefolder.zip
rm jaccisewebdump.sql
rm jaccisefolder.zip
cp jaccisebackup.zip $JACCISE_FOLDER
But it doesn't run. So I copied it from WEB-INF/scripts to my user directory to troubleshoot it. The result is that it fails with ": File o directory non esistente" (Italian for "No such file or directory"; notice the colon at the beginning). I created another file from scratch, copied and pasted the whole script, and it works. I suspect this is related to:
Text encoding
\r\n differences between Windows (I use Eclipse on Windows to edit everything) and Linux
How do I solve this deployment problem?
You should check whether the file is executable (chmod +x). Then you should check whether your web server allows the execution of external programs; this might be a security problem, and it is likely that the web server prevents the execution. Check the logs of the web server. The encoding of the file can be changed with the dos2unix command. In order to debug your script you can add "set -x" at the beginning, but I think the script does not start at all.
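The CRLF hypothesis is easy to confirm and fix even without dos2unix. A self-contained sketch (the file name is an assumption; the printf line fabricates a Windows-saved file so the sketch runs anywhere): with CRLF endings the kernel looks for an interpreter literally named "/bin/bash\r", which produces exactly this kind of cryptic error.

```shell
#!/bin/sh
printf '#!/bin/bash\r\necho ok\r\n' > backup_crlf.sh   # as saved on Windows
cat -v backup_crlf.sh | head -n 1      # a CRLF file shows "#!/bin/bash^M"
tr -d '\r' < backup_crlf.sh > backup.sh   # same effect as dos2unix
chmod +x backup.sh
./backup.sh                            # now runs normally
```

`cat -v` makes the invisible carriage returns visible as ^M, so you can tell at a glance whether a file was saved with Windows line endings.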
