Problems running a script.sh in the Ubuntu app on Windows 10 - bash

I'm trying to learn how to code. I should mention that I'm using the Ubuntu app on Windows, so I don't know if my problems are related to that setup.
I set these variables in the terminal:
FOLDER="/mnt/c/Users/franc/Desktop/nuova"
species=mm10
fragmentsize=200
window=200
gap=200
output="/mnt/c/Users/franc/Desktop/nuova/sicer"
and then I wrote this loop
#!/bin/bash
for fq in $FOLDER/*.bam
do
    bedtools bamtobed -i "$fq" > "${fq%.bam}.bed"
    sicer -t ${fq%.bam}.bed \
        -s $species \
        -f $fragmentsize \
        -w $window \
        -g $gap \
        -o $output
    echo "DONE"
done
Basically I want the files in FOLDER to be converted to "${fq%.bam}.bed" files, and then I want to run the sicer tool on these new files.
If I copy and paste these commands into the terminal, everything works fine, but if I save the loop as script.sh and try to run the script, I get various errors.
Of course, I made the script executable with chmod +x, and since I edited the script in Windows I also stripped the Windows line endings with awk '{ sub("\r$", ""); print }' myscript.sh > myscript1.sh (otherwise Ubuntu fails to run it).
But when I launch the script containing the loop, it says that it cannot open the files in FOLDER (Failed to open BAM file /*.bam, or permission denied). I tried running it both as ./myscript1.sh and as sudo ./myscript1.sh.
What am I missing? Do I have to somehow link the variables I set in the terminal to the variables used in the saved script?
thanks
Francesca

You need to export the variables so that they'll be inherited by the shell process running the script.
export FOLDER="/mnt/c/Users/franc/Desktop/nuova"
export species=mm10
export fragmentsize=200
export window=200
export gap=200
export output="/mnt/c/Users/franc/Desktop/nuova/sicer"
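Alternatively (just a sketch of another option), you can define the variables inside the script itself, so it no longer depends on what you typed in the terminal:
#!/bin/bash
# Everything the loop needs is set here, so the script works in a fresh shell too.
FOLDER="/mnt/c/Users/franc/Desktop/nuova"
species=mm10
fragmentsize=200
window=200
gap=200
output="/mnt/c/Users/franc/Desktop/nuova/sicer"

for fq in "$FOLDER"/*.bam
do
    bedtools bamtobed -i "$fq" > "${fq%.bam}.bed"
    sicer -t "${fq%.bam}.bed" -s "$species" -f "$fragmentsize" -w "$window" -g "$gap" -o "$output"
    echo "DONE"
done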

Related

Old version of script is run unless invoked with "sh scriptname"

I'm making a small edit to a shell script I use to mask password inputs like so:
#!/bin/bash
printf "Enter login and press [ENTER]\n"
read user
printf "Enter password and press [ENTER]\n"
read -s -p pass
The read -s -p pass line is the updated part. For some reason I don't see the changes when I run it normally by entering script.sh on the command line, but I do see them when I run sh script.sh. I've tried opening new terminal windows, and have run it in both iTerm and the default Mac terminal. I'm far from a scripting master; does anyone know why I'm not seeing the changes without the sh prefix?
Use a full or relative path to the script to make sure you're running what you think you're running.
If you run it as simply script.sh, the shell uses a PATH environment variable lookup to locate it. To see which script.sh bash would use in that case, run type script.sh.
Relative Path
./script.sh
Full Path
/path/to/my/script.sh
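For example (the paths here are made up), a quick way to spot the mismatch:
$ type script.sh
script.sh is /usr/local/bin/script.sh    # the older copy that the PATH lookup finds
$ ./script.sh                            # explicitly run the copy in the current directory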

Exporting ROS Master URI from shell script

I am trying to export ROS_MASTER_URI from a shell script and then launch roscore. In my .sh file I have:
roxterm --tab -e $SHELL -c "cd $CATKIN_WS; $srcdevel; export ROS_MASTER_URI='http://locahost:1234'; roscore -p 1234"
When I do this, however, I get the following error in the roscore tab:
WARNING: ROS_MASTER_URI [http://locahost:1234] host is not set to this machine.
When I echo the ROS_MASTER_URI in this tab, it says that it is localhost:1234, which is correct. When I manually execute these commands, it works correctly and roscore launches without any issues. I am not sure why it does not work when launched from a bash file.
It was just a typo: I missed the l in localhost. All working now.
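For reference, the corrected line differs only in the spelling of localhost:
roxterm --tab -e $SHELL -c "cd $CATKIN_WS; $srcdevel; export ROS_MASTER_URI='http://localhost:1234'; roscore -p 1234"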

Stdout & stderr not redirecting on autostart

I am using Raspbian (Debian with LXDE on a Raspberry Pi).
I have created the following two files. The first one is a .desktop file so that LXDE can autostart my script, and the second one is the script in question.
The problem is that when I manually start the script it works perfectly, creating the directories and redirecting the streams. However, when I reboot the Pi and the script autostarts, I get no output at all. The script is surely running, since my final app does start. Only the streams are not there.
I have no idea what to search for, or what causes this...
.desktop
[Desktop Entry]
Type=Application
Exec=system_start.sh
system_start.sh
#!/bin/bash
cd ~/application.linux64/
mkdir system_log
DIR=system_log/$(date +%Y%m%d)
mkdir $DIR/
./start.sh 1> $DIR/$(date +%T)operation_log.txt 2> $DIR/$(date +%T)errors_log.txt
I had this same problem with Linux Mint. A working command with a redirect to a file did not work when started at boot using an autostart .desktop file.
Enclosing the command in bash -c "..." helped:
bash -c "/home/huehuehue/myguiapp >> /home/huehuehue/myguiapp.log 2>&1"
You should probably use full paths instead of relative paths, so your script works under any circumstances, and avoid ~:
#!/bin/bash
DIR=/home/username/application.linux64
mkdir -p "$DIR/system_log"
SUBDIR=$DIR/system_log/$(date +%Y%m%d)
mkdir -p "$SUBDIR"
"$DIR/start.sh" 1> "$SUBDIR/$(date +%T)operation_log.txt" 2> "$SUBDIR/$(date +%T)errors_log.txt"

Applying sudo to some commands in script

I have a bash script that mostly needs to run with default user rights, but some parts require sudo (like copying files into system folders). I could just run the whole script with sudo ./script.sh, but then any files the script creates or modifies end up with the wrong ownership and permissions.
So, how can I run only some commands in the script with sudo? Is it possible to ask for the sudo password at the beginning (when the script starts) but still run some lines of the script as the current user?
You could add this to the top of your script:
while ! echo "$PW" | sudo -S -v > /dev/null 2>&1; do
    read -s -p "password: " PW
    echo
done
That ensures the sudo credentials are cached for a few minutes (the exact window is the sudoers timestamp_timeout, commonly 5 or 15 minutes). Then you can run the commands that need sudo, and just those, with sudo in front.
Edit: Incorporating mklement0's suggestion from the comments, you can shorten this to:
sudo -v || exit
The original version, which I adapted from a Python snippet I have, might be useful if you want more control over the prompt or the retry logic, but the shorter one probably works well for most cases.
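Putting that together, a minimal sketch (the file names are made up) could look like:
#!/bin/bash
# Ask for the sudo password up front; abort if authentication fails.
sudo -v || exit 1

# Runs as the current user, so the new file keeps your ownership.
cp config.example config.local

# Only the copy into a system directory runs as root.
sudo cp config.local /etc/myapp/config.conf
echo "done"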
Each line of your script is a command line, so for the lines that need elevated rights you can simply put sudo in front. For example:
#!/bin/sh
ls *.h
sudo cp *.h /usr/include/
echo "done" >>log
Obviously I'm just making this up, but it shows that you can use sudo selectively as part of your script.
Just like using sudo interactively, you will be prompted for your password if you haven't entered it recently.

Bash script: Turn on errors?

After writing a simple shell/bash backup script on my Ubuntu machine and getting it to work, I uploaded it to my Debian server, which outputs a number of errors while executing it.
What can I do to turn on "error handling" on my Ubuntu machine to make it easier to debug?
ssh into the server
run the script by hand with either -v or -x or both
try to duplicate the user, group, and environment of the failing run in your terminal window. If necessary, run the program with something like su -c 'sh -v script' otheruser.
You might also want to pipe the result of the bad command, particularly if run by cron(8), into /bin/logger, perhaps something like:
sh -v -x badscript 2>&1 | /bin/logger -t badscript
and then go look at /var/log/messages.
Bash lets you turn on debugging selectively, or completely, with the set command.
The command set -x will turn on debugging anywhere in your script. Likewise, set +x will turn it off again. This is useful if you only want to see debug output from parts of your script.
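A minimal sketch of the selective form (the traced command is just an example):
#!/bin/bash
echo "this part runs quietly"

set -x                   # from here on, each command is printed before it runs
cp backup.tar.gz /tmp/   # example command; its expanded form shows up in the trace
set +x                   # tracing off again

echo "quiet again"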
Change your shebang line to include the trace option:
#!/bin/bash -x
You can also have Bash scan the file for errors without running it:
$ bash -n scriptname
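For example, you can chain the syntax check with the actual run, so the script only executes if it parses cleanly (backup.sh is a hypothetical name):
$ bash -n backup.sh && ./backup.sh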
