Bash commands to remote hosts - errors when writing local output files - bash

I'm trying to run several sets of commands in parallel on a few remote hosts.
I've created a script that constructs these commands, and then writes the output in a local file, something along the lines of:
ssh <me>@<ip1> "command" 2> ./path/to/file/newFile1.txt &
ssh <me>@<ip2> "command" 2> ./path/to/file/newFile2.txt &
ssh <me>@<ip3> "command" 2> ./path/to/file/newFile3.txt;
...(the same repeats itself, with new commands and new file names)...
My issue is that, when my script runs these commands, I am getting the following errors:
bash: ./path/to/file/newFile1.txt: No such file or directory
bash: ./path/to/file/newFile2.txt: No such file or directory
bash: ./path/to/file/newFile3.txt: No such file or directory
...
These files do NOT exist yet; they are supposed to be created by the redirections. The directory paths themselves are valid.
The strange thing is that if I copy and paste the whole big command into a terminal, it works without any issue. I'd rather have it automated, though ;).
Any ideas?
Edit - more information:
My filesystem is the following:
- home
  - User
    - Desktop
      - Servers
        - Outputs
          - ...
I am running the bash script from home/User/Desktop/Servers.
The script creates the commands that need to be run on the remote servers. First, it creates the directories where the output files will be stored:
outputFolder="./Outputs"
...
mkdir -p "${outputFolder}/f${fileNumb}"
...
The script then goes on to build the commands that will be called on the remote hosts, with their respective outputs placed in the created directories.
The directories are there. Running the commands from the script gives me the errors, yet printing the commands and copying them into a terminal at the same location works for some reason. I have also tried giving the full path to the directory; still the same issue.
Hope I've been a bit clearer.

If this is the exact error message you get:
bash: ./path/to/file/newFile1.txt: No such file or directory
Then you'll note that there's an extra space between the colon and the dot, so it's actually trying to open a file called " ./path/to/file/newFile1.txt" (without the quotes).
However, to accomplish that, you'd need to use quotes around the filename in the redirection, as in
something ... 2> " ./path/to/file/newFile1.txt"
Or the first character would have to be something other than a regular space; a non-breaking space perhaps, possibly something an editor might insert if you hit Alt-Space or similar.
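If you suspect an invisible character has crept into the generated command string, one way to check is to dump the string byte by byte (a minimal sketch; cmd stands for whatever variable holds the generated command):

# print every byte of the command string; a UTF-8 non-breaking
# space shows up as the octal pair 302 240
printf '%s' "$cmd" | od -c | head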

I don't believe you've shown enough to correctly answer the question.
This doesn't look like a problem with ssh, but rather with the way you are calling the (ssh) commands.
You say that you are writing the commands into a file... presumably you are then running that file as a script. Could you show the code you use to do that? I believe that's where your problem is.
I suspect you have made a false assumption about how the working directory changes when you run a script: it doesn't. You are using relative paths, so it's important to know what they are relative to. That is also the most likely reason it works when you copy and paste: you are executing from a different working directory.
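If the working directory does turn out to be the culprit, one common fix is to anchor all relative paths to the script's own location (a sketch, not the poster's actual code; BASH_SOURCE is standard bash):

# resolve the directory this script lives in, regardless of
# where it is invoked from
script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
outputFolder="${script_dir}/Outputs"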

I am new to bash scripting and built my script based on another one I had seen. I was "running" each command by simply expanding the variable where it was stored:
$cmd
Solved by using:
eval $cmd
instead. My bad, I should have given the full script from the start.
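For context, my reading of why the two behave differently (using a hypothetical single command; the real string chains several with &): expanding $cmd unquoted makes bash word-split the string, but it does not re-parse quotes or redirections. The 2> and the filename are therefore passed to ssh as literal arguments, ssh forwards them as part of the remote command line, and the redirection ends up being performed by the remote shell, where ./path/to/file does not exist; hence the "No such file or directory" coming back from the remote bash. eval re-parses the string locally, so the redirection happens on the local machine as intended.

cmd='ssh me@host "command" 2> ./Outputs/f1/newFile1.txt'

$cmd          # '2>' and the path become arguments to ssh; the remote
              # shell performs the redirection and fails, since the
              # directory only exists locally

eval "$cmd"   # the string is re-parsed locally, so stderr really is
              # redirected into the local file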

Related

bash script fails to delete s3 object inside a while loop

I have a text file with a list of s3 objects, in the form of:
prefix_x/prefix_y/file_name_1
prefix_w/prefix_z/file_name_88
etc...
I wrote a bash script to delete all of these objects, as follows:
#!/bin/bash
LIST_OF_PATHS=$1

while read FILE_PATH; do
    aws s3 rm s3://bucket-name/$FILE_PATH
done < $LIST_OF_PATHS
The script doesn't seem to delete the objects (they still appear in the UI, and in the terminal when listing them with the CLI).
Further details and things I've already tried:
- Deleting the objects manually with a similar command in the CLI works.
- Adding an ls command to the loop produces no output, whereas typing the same command manually on the very same files does give output.
- Adding sleep 0.1 to each iteration of the loop didn't help either.
- The script certainly runs; I see the output delete: s3://bucket-name/prefix_x/prefix_y/file_name_1, but the file doesn't actually get deleted.
- Running a simpler bash script with the same command and a specific file name (not inside a loop) does delete successfully.
What might be the problem?
Solved!
The issue was that bash expects lines to end with '\n', but my input file had a '\r' at the end of each line (Windows-style line endings), which became part of each FILE_PATH. More on this can be found here:
https://superuser.com/questions/489180/remove-r-from-echoing-out-in-bash-script/489191
Many thanks to @Barmar, whose comment helped me see this and debug the issue.
I simply fixed the input file itself, and the script ran perfectly as it was.
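For anyone hitting the same thing, here is a sketch of stripping the carriage returns inside the loop itself, using the names from the question (converting the file once with dos2unix, or tr -d '\r', works just as well):

#!/bin/bash
LIST_OF_PATHS=$1

while read -r FILE_PATH; do
    FILE_PATH=${FILE_PATH%$'\r'}    # drop a trailing carriage return, if any
    aws s3 rm "s3://bucket-name/$FILE_PATH"
done < "$LIST_OF_PATHS"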
- Check your AWS permissions for the S3 service.
- Check your AWS command line tool configuration.
- Check your S3 bucket configuration.
- Run your aws s3 command without the loop and see whether it succeeds on its own before putting it in a loop.

Script piped into bash fails to expand globs during rm command

I am writing a script with the intention of being able to download and run it from anywhere, like:
bash <(curl -s https://raw.githubusercontent.com/path/to/script.sh)
The command above allows me to download the script, run interactive commands (e.g. read), and - for the most part - Just Works. I have run into an issue during the cleanup portion of my script, however, and haven't been able to discern a fix.
During cleanup I need to remove several .bkp files created by the script's execution. To do so I run rm -f **/*.bkp inside the script. When a local copy of the script is run, this works great! When run via bash/curl, however, it removes nothing. I believe this has something to do with a failure to expand the glob, as a result of the way I've connected the I/O of bash and curl, but I have been unable to find a way to get everything to play nicely.
How can I meet all of the following requirements?
- Download and run a script from a remote resource.
- Ensure that the user's keyboard input is connected for use in e.g. read calls within the script.
- Correctly expand the glob passed to rm.
- Bonus points: colorize output with e.g. echo -e "\x1b[31mSome error text here\x1b[0m" (also not working, suspected to be related to the same bash/curl I/O issues).
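One detail worth checking, as an assumption on my part since no answer is included above: in bash, ** only matches recursively when the globstar option is enabled, and a freshly started bash does not inherit shopt settings from your interactive shell; with rm -f, an unmatched pattern then fails silently. A sketch:

#!/bin/bash
shopt -s globstar    # without this, ** behaves like a single * and
                     # **/*.bkp only matches one directory level down
rm -f ./**/*.bkp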

How to run shell script within shell script with fixed arguments

I have a simple script that wraps a loop around another script, passing the parameters and arguments directly to that script; the loop comes into play since the inner script is supposed to run over several files. The way I wrote it, it's currently not working, so how should I pass these parameters? I'm fairly new to bash, so any help will be appreciated a lot!
#!/bin/bash
SCRIPT_PATH="xx.sh"
for x in {001..031}; do
    "$SCRIPT_PATH" /data/raw/"$x"_AE data/processed/"$x"_AE 5 --info
done
- There may be a path issue in your script: the first path starts with '/' (/data/raw/...), so it is absolute, but that is NOT the case for the second one (data/processed/...); is this intentional?
- Ensure there is NO directory/path issue (where is xx.sh located?).
- Ensure the user who launches the script has access permissions on the /data directories and subdirectories.
Let me know if this fixes your issue.
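A sketch of how the outer script could be made robust to the working directory, assuming xx.sh sits next to it and that both data paths were meant to be absolute (which the answer above questions):

#!/bin/bash
# resolve xx.sh relative to this script rather than the caller's cwd
SCRIPT_PATH="$(dirname "$0")/xx.sh"

for x in {001..031}; do
    bash "$SCRIPT_PATH" "/data/raw/${x}_AE" "/data/processed/${x}_AE" 5 --info
done

Calling the inner script through bash also avoids depending on the executable bit being set on xx.sh.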

First line in file is not always printed in bash script

I have a bash script that prints a line of text into a file, and then calls a second script that prints some more data into the same file. Let's call them script1.sh and script2.sh. The reason it's split into two scripts is that I have different versions of script2.sh.
script1.sh:
rm -f output.txt
echo "some text here" > output.txt
source script2.sh
script2.sh:
./read_time >> output.txt
./run_program
./read_time >> output.txt
Variations on the three lines in script2.sh are repeated.
This seems to work most of the time, but every once in a while output.txt does not contain the line "some text here". At first I thought it was because I was calling script2.sh like this: ./script2.sh. But even using source, the problem still occurs.
The problem is not reproducible, so even when I try to change something I don't know if it's actually fixed.
What could be causing this?
Edit:
The scripts are very simple. script1 is exactly as you see here, but with different file names. script2 is what I posted, with the same 3 lines repeated and ./run_program taking different arguments. I did a grep for the output file, and for >, but it doesn't show up anywhere unexpected.
The way these scripts are used is that script1 is created by a program (the only difference between the versions is the source script2.sh line). This script1.sh is then run on a different computer (Linux on an FPGA, actually) using ssh. Before that is done, the output file is also deleted using ssh. I don't know why; I didn't write all of this. Also, I've checked the code running on the host. The only mention of the output file is when it is deleted using ssh, and when it is copied back to the host after script1 is done.
Edit 2:
I finally managed to make the problem reproducible at a reasonable rate by stripping script2.sh of everything but a single line printing into the file. This also let me do the testing a bit faster. Once I had this, I got the problem between 1 and 4 times for every 10 runs. Removing the command that deleted the file over ssh before the script was run seems to have solved the problem. I will test it some more to be sure, but I think it's solved, although I'm still not sure why it was a problem. I thought the ssh command would not exit before all the remove commands had executed.
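If the separate delete really was racing with the script run, one way to rule the ordering out is to do the delete and the run in a single ssh invocation, so both execute sequentially in one remote shell (a sketch; user@fpga and the paths are placeholders, since the host-side code isn't shown):

ssh user@fpga 'rm -f output.txt && ./script1.sh'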
It is hard to tell without seeing the real code. The most likely explanation is that you have a typo, > instead of >>, somewhere in one of the script2.sh files.
To check for this, set the noclobber option with set -o noclobber. The shell will then refuse to overwrite an existing file with > and report an error instead.
Another possibility is that the file is removed under certain rare conditions, or damaged by some command that has random access to it; look for commands using this file without >>. Or it is used by some command both as input and output, and the two step on each other; look for the file used with <.
Lastly, you could have a race condition with a command outputting to the file in the background, started before that echo.
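A quick illustration of the noclobber suggestion (the file name is a placeholder):

set -o noclobber
echo "first" > output.txt     # fine: the file doesn't exist yet
echo "again" > output.txt     # fails: "cannot overwrite existing file"
echo "force" >| output.txt    # >| deliberately bypasses noclobber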
Can you grep all your scripts for output.txt? What about scripts called inside read_time and run_program?
It looks like something in one of the script2.sh scripts must be overwriting, truncating or doing a substitution on output.txt.
For example, there could be a > output.txt buried inside a conditional for a condition that rarely holds. Just a guess, but it would explain why you don't always see it.
This is an interesting problem. Please post the solution when you find it!

Bash Script File Descriptor echo

echo: write error: Bad file descriptor
Throughout my code (across several bash scripts) I keep encountering this error. It happens when I'm trying to write or append to one particular file.
LOGRUN_SOM_MUT_ANA=/Volumes/.../logRUN_SOMATIC_MUT_ANA
I use the absolute path for this variable, and the same file is used by every script that is called. Each script has a bunch of lines just like the one below; I pull the variable in with the '.' (source) builtin in each script.
echo "debug level set for $DEBUG_LEVEL" >> ${LOGRUN_SOM_MUT_ANA}
Worth noting:
- It typically happens AFTER the FIRST time I write to the file.
- I've read about files 'closing' themselves and yielding this error.
- I am using the above line in one script, and then calling another script.
I'd be happy to clarify anything.
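As an aside, a pattern that avoids reopening the log file for every single echo, using the variable from the question (this is a sketch of the logging setup, not a diagnosis of the error):

# open the log once on a dedicated file descriptor...
exec 3>> "${LOGRUN_SOM_MUT_ANA}"
# ...and reuse that descriptor for every log line
echo "debug level set for $DEBUG_LEVEL" >&3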
For others encountering the same stupid error under Cygwin, in a script that works under a real Linux: no idea why, but it can happen:
1) after a syntax error in the script
2) because Cygwin bash wants you to replace ./myScript.sh with . ./myScript.sh (where the dot is the bash-style include directive, a.k.a. source)
I figured it out: the thumb drive I'm using is encrypted. It outputs to /tmp/, so it's a permissions thing. That was the problem!
