I have a pretty simple shell script. It reads something like this.
#!/bin/bash
echo upload trigram
bteq < load_temp_trigram
echo load bigram
bteq < load_temp_bigram
echo load word
bteq < load_temp_word
echo load phrase
bteq < load_temp_phrase
When I run it, I get the following errors mixed in with the script's output:
: command not found
upload trigram
: No such file or directoryigram
: command not found
load bigram
: No such file or directorygram
: command not found
load word
: No such file or directoryord
: command not found
load phrase
I'm calling the script with bash script.sh or sh script.sh.
So it looks like it isn't recognizing the echo command, even though the echo itself seems to work. It is also cutting off pieces of the strings/file names, which is probably why it can't find them. I'm at a loss as to what's going on here. Any help would be appreciated.
1) Add -x to the shebang, like this:
#!/bin/bash -x
Tracing every command as it runs will answer a lot of questions by itself.
2) If you want literal text output, use quotes, like this:
echo "upload trigram"
3) Use absolute paths to the executable, like this:
/usr/bin/bteq
The path to your bteq executable can be found by running
which bteq
(if you are lucky and have which installed).
The same goes for the SQL batch files:
/path/to/td_bins/bteq < /path/to/batch/load_temp_word
If you are in the folder containing the batch files, the relative path to them is ./
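Incidentally, the truncated messages in the question (": command not found", "No such file or directoryigram") usually mean the script was saved with DOS/Windows line endings: the trailing carriage return becomes part of each command name, and when the error is printed, the \r sends the cursor back to column one, overwriting the start of the line. A small sketch of checking for and fixing that with tr (the file names here are stand-ins; dos2unix, if installed, does the same in place):

```shell
# A file saved with Windows line endings: each line ends in \r\n
printf 'bteq < load_temp_trigram\r\n' > demo.sh

# Strip the carriage returns with tr (portable; dos2unix may not be installed)
tr -d '\r' < demo.sh > demo_unix.sh

# Verify that no CR bytes remain
grep -q $'\r' demo_unix.sh || echo "clean"   # prints: clean
```

Running the same tr command over the script and all four load_temp_* files should make both symptoms disappear.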
First off, I am in the early stages of learning bash shell scripting, so I apologize if I say or do anything that doesn't make sense.
Currently, I'm trying to have an SBC (a Khadas VIM3, specifically) run a Python script to find and label faces in any given video from a local server. First, I need to reduce the frame rate and resolution of the video, which is where the bash script comes into play. I want to automate this process and thought I'd do it using a bash script and crontab.
The file paths are found and written to a file by a separate script, and are read by the bash script. The problem comes when I try to call ffmpeg on those file paths.
The code:
pathFile="/home/khadas/Documents/paths"
while IFS= read -r line
do
ffmpeg -i "$line" -vf scale=960:540 -y "$line"
cp "$line" ./
done < $pathFile
The resulting error:
: No such file or directoryalRecognition/10/14-53.h264+/2019-09-26-10-14-53.mp4
cp: cannot stat '/home/khadas/Downloads/FacialRecognition/10/14-53.h264+/2019-09-26-10-14-53.mp4'$'\r': No such file or directory
Example of the paths file (there will be hundreds of entries):
/home/khadas/Downloads/FacialRecognition/10/14-42.h264+/2019-09-26-10-14-42.mp4
/home/khadas/Downloads/FacialRecognition/10/59-06.h264+/2019-09-26-10-59-06.mp4
/home/khadas/Downloads/FacialRecognition/10/36-28.h264+/2019-09-26-10-36-28.mp4
/home/khadas/Downloads/FacialRecognition/10/14-53.h264+/2019-09-26-10-14-53.mp4
When using a trimmed-down version of the paths file, the script works as expected. Could it be an issue with the length of the lines? Any help is much appreciated.
According to the error message, cp is attempting to access a file whose name ends with .mp4'$'\r'. That appears to mean that somewhere there are DOS/Windows-style line endings. If you have the dos2unix utility, run it against your files, including /home/khadas/Documents/paths.
-- John1024
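If dos2unix is not available on the VIM3, the carriage return can also be stripped inside the read loop itself, with line=${line%$'\r'}. A self-contained sketch (the paths file is a one-line demo, and the ffmpeg command is only echoed so the snippet runs anywhere):

```shell
# Demo paths file with a Windows-style CRLF line ending
printf '/tmp/example.mp4\r\n' > paths_demo

while IFS= read -r line
do
    line=${line%$'\r'}   # strip a trailing carriage return, if present
    echo "would run: ffmpeg -i \"$line\" -vf scale=960:540 -y \"$line\""
done < paths_demo
```

In the real script, the echo line would be replaced by the original ffmpeg and cp commands, unchanged except that $line no longer carries the stray \r.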
I'm working in a bash Unix shell. My script executes five 4gl files in order. If an error occurs in any of the files, the script stops. But the problem is that when we fix the error in the file and execute the script again, it should resume from where it stopped (i.e. from the step whose error was fixed), not from the first file.
You'll have to do this yourself:
if ! some-check-for-first-command; then
do-something-with first.4gl
fi
if ! some-check-for-second-command; then
do-something-with second.4gl
fi
# and so on
where "some-check-for-..." is something you write that checks whether that 4gl file has already been processed. It might look for some text in an output file, check the timestamp of an output file, or whatever else works for your setup.
When I use the following command at the Unix command prompt, everything works fine and the log file is created fine.
ls -l|echo "[`date +%F-%H-%M-%S`] SUCCESS - FILES"|tee -a logger2.log
But using the same thing inside the shell script, it shows the error:
No such file or directory.
I don't understand what the problem is here!
If I read between the lines: you want a list of files, followed by a date and a message?
try:
{ ls -l ; echo "[$(date "+%F-%H-%M-%S")] SUCCESS - FILES" ; } |tee -a logger2.log
That should leave the following lines in logger2.log in the current directory:
.................... file
.................... file2
(i.e. the list of all files and directories in the current directory, EXCEPT those starting with a ".")
[2013-12-26-..-..-..] SUCCESS - FILES
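For what it's worth, the original pipeline could never log the listing at all: echo does not read its standard input, so the output of ls -l is simply discarded. The braces form a command group, so both outputs flow through the one pipe to tee. A quick sketch of the difference:

```shell
# echo ignores stdin, so the ls output vanishes:
ls -l | echo "done"                 # prints only: done

# A command group sends both outputs through the pipe:
{ ls -l ; echo "done" ; } | wc -l   # the listing's line count, plus one
```

A subshell, ( ls -l ; echo "done" ), would work the same way here; the brace group just avoids forking an extra shell.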
Please note that if you put nothing at the top of the script (no shebang line), it could be started by a different shell than the one you use when testing. It depends on what invokes the script: if it is a crontab, it will probably be interpreted by "sh" (or by "bash" in "sh-compatibility" mode)...
Please tell us what the above gives you as output, and how you start the script (by a user, at the prompt? or via a crontab, and if so which one: /etc/crontab, or a user's crontab? which user?), and what error messages you see.
Let's say I have this MATLAB function
function a = testfun(filename)
% do something with file contents and create some variable a
disp(a);
I would like to have a shell script that runs on Cygwin as follows:
./testscript.sh < inputforfunction.txt > outputoffunction.txt
The input file would contain the data I need.
After running this command, the output file will contain the result of running testfun(filename).
So far, I can write the output to the file outputoffunction.txt.
The problem is that I want to read the file name "inputforfunction.txt" itself.
I am able to read the file contents, but not the file name. Any hints, please?
Thanks!
Why not pass the file as an argument to the bash script?
./testscript.sh inputforfunction.txt > outputoffunction.txt
In the script you can access $1 - it will evaluate to 'inputforfunction.txt'.
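For example, testscript.sh might look like this sketch. The MATLAB invocation is only echoed here, since the exact flags (-nodisplay, -r, and so on) vary by installation and are an assumption on my part:

```shell
#!/bin/bash
# $1 is the first command-line argument: the file name itself.
filename=$1

# Hypothetical MATLAB invocation, echoed rather than executed:
echo "matlab -nodisplay -r \"testfun('$filename'); exit\""
```

Invoked as ./testscript.sh inputforfunction.txt > outputoffunction.txt, the script sees the name in $filename, and can still read the file's contents with, say, cat "$filename".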
Or, reading the name interactively first:
read file_name
./testscript.sh < "$file_name" > outputoffunction.txt
I have the following code, which is intended to run a Java program on some input and check the output against a results file for verification.
#!/bin/bash
java Program ../tests/test"$@".tst > test"$@".asm
spim -f test"$@".asm > temp
diff temp ../results/test"$@".out
The gist of the above code is to:
Run Program on a test file in another directory, and pipe the output into an assembly file.
Run a MIPS processor on that program's output, piping that into a file called temp.
Run diff on the output I generated and some expected output.
I made this shell script to help me automate checking of my homework assignment for class. I didn't feel like manually checking things anymore.
I must be doing something wrong: although this script works with one argument, it fails with more than one. The output I get when I use $@ is:
./test.sh: line 2: test"$@".asm: ambiguous redirect
Cannot open file: `test0'
EDIT:
Ah, I figured it out. This code fixed the problem:
#!/bin/bash
for arg in "$@"
do
java Parser ../tests/test"$arg".tst > test"$arg".asm
spim -f test"$arg".asm > temp
diff temp ../results/test"$arg".out
done
It turns out that bash expands $@ to the full argument list in each place I used it, so I needed to loop over the arguments one at a time.
If you provide multiple command-line arguments, then clearly $@ will expand to a list of multiple arguments, which means that all your commands will be nonsense.
What do you expect to happen for multiple arguments?
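As a closing note, $# and $@ are easy to mix up: $# is the number of arguments, while $@ is the list of them, which is why for arg in "$@" is the right way to handle each test in turn. A quick sketch:

```shell
set -- 3 7 9            # simulate running the script with three arguments
echo "count: $#"        # prints: count: 3
for arg in "$@"; do     # one iteration per argument
    echo "arg: $arg"
done
```

Quoting "$@" matters: it keeps each argument as one word even if it contains spaces, whereas an unquoted $@ (or $*) would re-split them.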