Cannot append to a file from bash script

If I run only this, it works as expected:
user_conf=( "USER_CONFIG_FILE=user_conf.sh"
"USER_CONFIG_FILE=user_conf.sh" "USER_CONFIG_FILE=user_conf.sh" )
USER_CONFIG_FILE="user_conf.sh"
echo "${user_conf[#]}"
for i in "${user_conf[#]}"; do
echo "$i" >> "$USER_CONFIG_FILE"
done
echo "User config file initiated."
However, when I run it in my install.sh, nothing is appended to the desired config file. The same code as above is at the bottom of install.sh. You can also see there that I first tried to do it with echo-appends in multiple places (the commented-out lines), but only the two echo statements above the separator line (===) worked, not the ones below that line. I really have no idea what causes it. What have I missed?
EDIT:
Don't worry about running install.sh (without args).
It only creates one entry in your .bash_aliases and creates a directory 'wenv' in your HOME. You can delete both afterwards.

You just have to figure out what directory you're in when that code executes. make_base_dir calls make_log_file, which does a cd but never changes back to the previous directory, so everything after that point runs in a different working directory and your relative paths resolve somewhere else.
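A minimal sketch of the effect (the function names come from the observation above; their bodies are assumptions for illustration):

#!/bin/bash
# Hypothetical reconstruction of the pattern in install.sh.
make_log_file() {
    cd "$HOME/wenv"       # changes the shell's working directory...
    touch install.log     # ...and never changes back
}
make_base_dir() {
    mkdir -p "$HOME/wenv"
    make_log_file
}
make_base_dir
# We are now inside $HOME/wenv, so this relative path no longer points
# at the directory install.sh was started from:
echo "USER_CONFIG_FILE=user_conf.sh" >> "user_conf.sh"

One fix is to confine the cd to a subshell (or use pushd/popd), so the caller's directory is untouched:

make_log_file() {
    ( cd "$HOME/wenv" && touch install.log )
}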

Related

bash commands to remote hosts - errors with writing local output files

I'm trying to run several sets of commands in parallel on a few remote hosts.
I've created a script that constructs these commands, and then writes the output in a local file, something along the lines of:
ssh <me>#<ip1> "command" 2> ./path/to/file/newFile1.txt & ssh <me>#<ip2>
"command" 2> ./path/to/file/newFile2.txt & ssh <me>#<ip2> "command" 2>
./path/to/file/newFile3.txt; ...(same repeats itself, with new commands and new
file names)...
My issue is that, when my script runs these commands, I am getting the following errors:
bash: ./path/to/file/newFile1.txt: No such file or directory
bash: ./path/to/file/newFile2.txt: No such file or directory
bash: ./path/to/file/newFile3.txt: No such file or directory
...
These files do NOT exist yet; they are supposed to be created by the redirections. That being said, the directory paths are valid.
The strange thing is that, if I copy and paste the whole big command, then it works without any issue. I'd rather have it automated tho ;).
Any ideas?
Edit - more information:
My filesystem is the following:
- home
  - User
    - Desktop
      - Servers
        - Outputs
        - ...
I am running the bash script from home/User/Desktop/Servers.
The script creates the commands that need to be run on the remote servers. First things first, the script creates the directories where the output files will be stored.
outputFolder="./Outputs"
...
mkdir -p ${outputFolder}/f${fileNumb}
...
The script then goes on to create the commands that will be run on the remote hosts; their respective outputs will be placed in the created directories.
The directories are there. Running the commands gives me the errors; however, printing the commands and then copying them into a shell at the same location works for some reason. I have also tried giving the full path to the directory - still the same issue.
Hope I've been a bit clearer.
If this is the exact error message you get:
bash:  ./path/to/file/newFile1.txt: No such file or directory
Then you'll note that there's an extra space between the colon and the dot, so it's actually trying to open a file called " ./path/to/file/newFile1.txt" (without the quotes).
However, to accomplish that, you'd need to use quotes around the filename in the redirection, as in
something ... 2> " ./path/to/file/newFile1.txt"
Or the first character would have to be something other than a regular space: a non-breaking space perhaps, possibly something that an editor might insert if you hit Alt-Space or such.
I don't believe you've shown enough to correctly answer the question.
This doesn't look like a problem with ssh, but the way you are calling the (ssh) commands.
You say that you are writing the commands into a file... presumably you are then running that file as a script. Could you show the code you use to do that? I believe that's where your problem is.
I suspect you have made a false assumption about the way the working directory changes when you run a script. It doesn't change. You are using relative paths, so it's important to know what they are relative to. That is also the most likely reason it works when you copy and paste: you are executing from a different working directory.
I am new to bash scripting and was building my script based on another one I had seen. I was "running" the command by simply calling the variable where the command was stored:
$cmd
Solved by using:
eval $cmd
instead. My bad, should have given the full script from the start.
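For reference, a minimal sketch of why the bare variable fails in this particular case (host and paths are placeholders):

# Redirections stored in a string are not re-parsed when the variable
# expands; "2>" and the path are passed through to the remote command line.
cmd='ssh user@host uptime 2> ./Outputs/f1/newFile1.txt'

$cmd        # the REMOTE shell tries to open ./Outputs/... in its own cwd,
            # which produces exactly the "No such file or directory" error
eval "$cmd" # the whole line is re-parsed locally, so 2> redirects as intended

Note that eval re-executes arbitrary text; building the command in a bash array or a function is generally the safer pattern.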

bash while loop through a file doesn't end when the file is deleted

I have a while loop in a bash script:
Example:
while read LINE
do
    echo $LINE >> $log_file
done < ./sample_file
My question is: why doesn't the loop end when I delete sample_file while the script is running? I can see that log_file keeps updating. How can the loop continue when its input no longer exists?
In unix, a file isn't truly deleted until the last directory entry for it is removed (e.g. with rm) and the last open file handle for it is closed. See this question (especially MarkR's answer) for more info. In the case of your script, the file is opened as stdin for the while read loop, and until that loop exits (or closes its stdin), rming the file will not actually delete it off disk.
You can see this effect pretty easily if you want. Open three terminal windows. In the first, run the command cat >/tmp/deleteme. In the second, run tail -f /tmp/deleteme. In the third, after running the other two commands, run rm /tmp/deleteme. At this point, the file has been unlinked, but both the cat and tail processes have open file handles for it, so it hasn't actually been deleted. You can prove this by typing into the first terminal window (running cat); every time you hit return, tail will see the new line added to the file and display it in the second window.
The file will not actually be deleted until you end those two commands (Control-D will end cat, but you need Control-C to kill tail).
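The same demonstration, condensed (one command per terminal; the /proc check assumes Linux):

cat > /tmp/deleteme                   # terminal 1: hold a write handle
tail -f /tmp/deleteme                 # terminal 2: hold a read handle
rm /tmp/deleteme                      # terminal 3: unlink the name
ls -l /proc/"$(pgrep -xn tail)"/fd    # the fd's target now reads "/tmp/deleteme (deleted)"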
See "Why file is accessible after deleting in unix?" for an excellent explanation of what you are observing here.
In short...
Underlying rm and any other command that may appear to delete a file there is the system call unlink. And it's called unlink, not remove or deletefile or anything similar, because it doesn't remove a file. It removes a link (a.k.a. directory entry) which is an association between a file and a name in a directory.
You can use the command truncate to destroy the actual contents (or shred if you need to be more secure), which would immediately halt the execution of your example loop.
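For instance, with the loop from the question still running (the two lines below are alternatives, not a sequence):

rm ./sample_file              # loop keeps going: only the name is gone
truncate -s 0 ./sample_file   # loop stops: the next read hits end-of-file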
The moment the shell starts the while loop it has already opened sample_file, so it holds a file descriptor for it, and it does not matter whether the file's directory entry still exists after that point.
Test script:
$ cat test.sh
#!/bin/bash
while read line
do
    echo $line
    sleep 1
done < data_file
Test file:
$ seq 1 10 > data_file
Now run the script in one terminal; in another terminal, delete data_file. You will still see the numbers 1 to 10 printed by the script.

Copying .jpg file using shell script gives 'Failed to open input stream for file' error

Very simple script to copy a file
#!/bin/bash
#copy file
mtp-getfile "6" test2.jpg
I set it as executable and run it using
sudo sh ./test.sh
It gives me a file called test2.jpg that has no icon and that I cannot open; I get a 'Failed to open input stream for file' error.
However, if I simply issue the following from the command line
mtp-getfile "6" test2.jpg
It works as expected. What is wrong with my script? I checked, and the resulting .jpg file has the same number of bytes in each case. Very strange.
As commented by chepner, your script might carry an invisible DOS (Windows) line-ending character on the file name, which would cause an error. To get rid of the unwanted character(s), just create a new blank script on your *nix system and type its name by hand (not by copying and pasting, to avoid problems); let's say you name it test2.sh.
Then copy all the contents of test.sh into test2.sh (copy and paste) and run test2.sh to see if it works. If it doesn't, try running the following command on the new script, to make sure that there are no unwanted characters in the code itself:
tr -d "\r" < /folder/test2.sh && echo >> /folder/test2.sh
And then try to run script2.sh again to see if it works. Note: the echo >> /folder/test2.sh part of the code above is just to make sure that your new script ends with a newline, which is a Posix standard (and without which some programs may misbehave because they expect the file to end with a newline).
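Before rewriting anything, it may be worth confirming the diagnosis; on most Linux systems file and cat -A make CRLF line endings visible (the output below is what you would typically see, not taken from the question):

$ file test.sh
test.sh: Bourne-Again shell script, ASCII text executable, with CRLF line terminators
$ cat -A test.sh        # a carriage return shows up as ^M before the $ at line end
#!/bin/bash^M$
mtp-getfile "6" test2.jpg^M$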
Apparently it was a permissions issue.
I only had to do a sudo chown <my-user> test2.jpg.
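That is consistent with running the script under sudo: mtp-getfile then creates test2.jpg owned by root, and the desktop session can fail to read it. A quick check (the ls output is illustrative):

$ sudo sh ./test.sh
$ ls -l test2.jpg
-rw------- 1 root root ... test2.jpg   # owned by root, not by you
$ sudo chown "$USER" test2.jpg         # hand the file back to your user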

First line in file is not always printed in bash script

I have a bash script that prints a line of text into a file, and then calls a second script that prints some more data into the same file. Let's call them script1.sh and script2.sh. The reason it's split into two scripts is that I have different versions of script2.sh.
script1.sh:
rm -f output.txt
echo "some text here" > output.txt
source script2.sh
script2.sh:
./read_time >> output.txt
./run_program
./read_time >> output.txt
Variations on the three lines in script2.sh are repeated.
This seems to work most of the time, but every once in a while the file output.txt does not contain the line "some text here". At first I thought it was because I was calling script2.sh like this: ./script2.sh. But even using source the problem still occurs.
The problem is not reproducible, so even when I try to change something I don't know if it's actually fixed.
What could be causing this?
Edit:
The scripts are very simple. script1 is exactly as you see here, but with different file names. script 2 is what I posted, but then the same 3 lines repeated, and ./run_program can have different arguments. I did a grep for the output file, and for > but it doesn't show up anywhere unexpected.
The way these scripts are used is that script1 is created by a program (the only difference between the versions is the source script2.sh line). This script1.sh is then run on a different computer (Linux on an FPGA, actually) using ssh. Before that is done, the output file is also deleted using ssh. I don't know why, but I didn't write all of this. Also, I've checked the code running on the host. The only mention of the output file is when it is deleted using ssh, and when it is copied back to the host after script1 is done.
Edit 2:
I finally managed to make the problem reproducible at a reasonable rate by stripping script2.sh of everything but a single line printing into the file. This also let me test a bit faster. Once I had this, I got the problem between 1 and 4 times for every 10 runs. Removing the command that deleted the file over ssh before the script was run seems to have solved the problem. I will test it some more to be sure, but I think it's solved. Although I'm still not sure why it was a problem; I thought that the ssh command would not exit before all the remove commands were executed.
It is hard to tell without seeing the real code. The most likely explanation is that you have a typo, > instead of >>, somewhere in one of the script2.sh files.
To verify this, set the noclobber option with set -o noclobber. The shell will then refuse, with an error, to overwrite an existing file with >.
Another possibility is that the file is removed under certain rare conditions. Or it is damaged by some command that has random access to it - look for commands using this file without >>. Or it is used by some command both as input and output, which step on each other - look for the file used with <.
Lastly, you could have a race condition with a command writing to the file in the background, started before that echo.
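A quick demonstration of the noclobber check (using the file name from the question):

set -o noclobber
echo "some text here" > output.txt   # first write succeeds
echo "oops" > output.txt             # fails: "cannot overwrite existing file"
echo "on purpose" >| output.txt      # >| bypasses noclobber when you mean it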
Can you grep all your scripts for 'output.txt'? What about scripts called inside read_time and run_program?
It looks like something in one of the script2.sh scripts must be either overwriting, truncating or doing a substitution on output.txt.
For example, there could be a '> output.txt' buried inside a conditional, for a condition that rarely obtains. Just a guess, but it would explain why you don't always see it.
This is an interesting problem. Please post the solution when you find it!

Bash command route malfunction

Given this (among more...):
compile_coffee() {
    echo "Compile COFFEESCRIPT files..."
    i=0
    for folder in ${COFFEE_FOLDER[*]}
    do
        for file in $folder/*.coffee
        do
            file_name=$(echo "$file" | awk -F "/" '{print $NF}' | awk -F "." '{print $1}')
            file_destination_path=${COFFEE_DESTINATION_FOLDER[${i}]}
            file_destination="$file_destination_path/$file_name.js"
            if [ -f $file_path ]; then
                echo "+ $file -> $file_destination"
                $COFFEE_CMD $COFFEE_PARAMS $file > $file_destination #FAIL
                #$COFFEE_CMD $COFFEE_PARAMS $file > testfile
            fi
        done
        i=$i+1
    done
    echo "done!"
    compress_javascript
}
And just to clarify, everything except the #FAIL line works flawlessly; if I'm doing something wrong, just tell me. The problem I have is:
- The line executes and does what it has to do, but doesn't write the file named in "file_destination".
- If I delete a folder on that route (it's relative to this script, see below), bash throws an error saying the folder does not exist.
- If I make the folder again, no errors, but no file either.
- If I change $file_destination to "testfile", it creates the file with correct contents.
- The $file_destination path is OK - as you can see, my script echoes it.
- If I echo the entire line, then copy the exact command with params and execute it in a shell in the same directory the script is in, it works.
I don't know what is wrong with this; I've been wondering for two hours...
Script output (real paths):
(alpha)[pyron#vps herobrine]$ ./deploy.sh compile && ls -l database/static/js/
===============================
=== Compile ===
Compile COFFEESCRIPT files...
+ ./database/static/coffee/test.coffee -> ./database/static/js/test.js
done!
Linking static files to django staticfiles folder... done!
total 0
Complete command:
coffee --compile --print ./database/static/coffee/test.coffee > ./database/static/js/test.js
What am I missing?
EDIT: I've made some progress on this.
In the shell, if I deactivate the Python virtualenv the script works, but if I call deactivate from the script it says command not found.
Assuming destination files have no characters such as spaces in their names, directories exist, etc., I'd try adding 2>&1, e.g.
$COFFEE_CMD $COFFEE_PARAMS $file > testfile 2>&1
Compilers may put desired output and/or compilation messages on stderr instead of stdout. You may also want to use the full path to coffee, e.g. /usr/bin/coffee, instead of just the compiler name.
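A quick way to check which stream the compiler is actually writing to (using the command from the question):

coffee --compile --print ./database/static/coffee/test.coffee > /dev/null   # JS still visible? it went to stderr
coffee --compile --print ./database/static/coffee/test.coffee 2> /dev/null  # JS still visible? it went to stdout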
Found that the problem wasn't the bash script itself. A few lines later, the deploy script runs Django's collectstatic. Noticing that the files were still there until that line, I started reading and learned that collectstatic has a cache system. A very weird one IMO, since I had to delete all the static files and start from scratch to get the script working.
So... the problem wasn't the bash script but the Django cache system. I'm not giving the reputation to myself anyway.
The full deploy script is here: https://github.com/pyronhell/deploy-script-boilerplate and everyone is welcome to improve it.
Cheers.
