Due to processes outside my control, I need to run multiple SH files which contain lengthy curl commands. The problem is that whichever process created these commands seems to have included one line of whitespace at the very end. If I call a file as is, it fails. If I physically open the file, hit backspace on the first fully empty line, and save the file, it works perfectly.
Is there any way to put some kind of command into the SH file so that it removes the unnecessary trailing line?
More info would be helpful, but the following might work:
If you need to put something into each of the files that contain the curl commands, you could try putting exit as the last line of the curl script (this also depends on how you're calling the 'curl files'):
exit
If you can run a separate script against the files that have a blank line, perhaps sed the blank lines away?
sed -i '/^[[:space:]]*$/d' "$fileWithLineOfSpaces"
Edit:
Or (after thinking about it), perhaps simply delete the last line of the file....
sed -i '$d' "$file"
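If you only want to drop the last line when it is actually blank (a bit safer than deleting it unconditionally), a rough GNU sed sketch:
sed -i '${/^[[:space:]]*$/d}' "$file"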
I googled this command but found nothing.
grep -m 1 "\[{" xxx.txt > xxx.txt
However, when I typed this command, no error occurred.
There was also no output from this command.
Can anyone explain how this command works?
This command reads from and writes to the same file, but not in a left-to-right fashion. In fact > xxx.txt runs first, emptying the file before the grep command starts reading it. Therefore there is no output. You can fix this by storing the result in a temporary file and then renaming that file to the original name.
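For example, a minimal sketch of the temp-file approach (the .tmp name is just an illustration):
grep -m 1 "\[{" xxx.txt > xxx.txt.tmp && mv xxx.txt.tmp xxx.txt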
PS: Some commands, like sed, have an output file option which works around this issue by not relying on shell redirects.
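As a sketch of that idea, a rough GNU sed equivalent of the grep above, editing in place (print the first matching line, then quit):
sed -i -n '/\[{/{p;q}' xxx.txt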
I think this question falls under pipes, which I am bad at.
One of my shell scripts generates a file with millions of rows.
Before I can use it with another command, I need to edit this file: I need to add a text, e.g. 'txt', in front of every line.
What I am currently doing now is:
-exit the shell script after file is generated
-open it in vim
-use command :g/^/s//txt/g to add txt at start of each line
-save file
-use it in remaining shell script
I am sure there is a more efficient way which I am not aware of. Thanks for the help.
As some people said in the comments, you can use GNU sed to do that:
sed -i 's/^/txt/' yourfile.txt
The -i stands for --in-place and edits your file instead of printing to stdout.
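If you control the generating script, you could also skip the separate edit step entirely by piping; generate_rows here is just a placeholder for whatever produces the file:
generate_rows | sed 's/^/txt/' > yourfile.txt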
I have a bash script that prints a line of text into a file and then calls a second script that prints some more data into the same file. Let's call them script1.sh and script2.sh. The reason it's split into two scripts is that I have different versions of script2.sh.
script1.sh:
rm -f output.txt
echo "some text here" > output.txt
source script2.sh
script2.sh:
./read_time >> output.txt
./run_program
./read_time >> output.txt
Variations on the three lines in script2.sh are repeated.
This seems to work most of the time, but every once in a while the file output.txt does not contain the line "some text here". At first I thought it was because I was calling script2.sh like this: ./script2.sh. But even using source the problem still occurs.
The problem is not reproducible, so even when I try to change something I don't know if it's actually fixed.
What could be causing this?
Edit:
The scripts are very simple. script1.sh is exactly as you see here, but with different file names. script2.sh is what I posted, but with the same 3 lines repeated, and ./run_program can have different arguments. I did a grep for the output file and for >, but neither shows up anywhere unexpected.
The way these scripts are used is that script1.sh is created by a program (the only difference between the versions is the source script2.sh line). This script1.sh is then run on a different computer (Linux on an FPGA, actually) using ssh. Before that is done, the output file is also deleted using ssh. I don't know why; I didn't write all of this. Also, I've checked the code running on the host. The only mention of the output file is when it is deleted using ssh and when it is copied back to the host after script1.sh is done.
Edit 2:
I finally managed to make the problem reproducible at a reasonable rate by stripping script2.sh of everything but a single line printing into the file. This also let me do the testing a bit faster. Once I had this, I got the problem between 1 and 4 times for every 10 runs. Removing the command that was deleting the file over ssh before the script was run seems to have solved the problem. I will test it some more to be sure, but I think it's solved, although I'm still not sure why it was a problem. I thought that the ssh command would not exit before all the remove commands were executed.
It is hard to tell without seeing the real code. The most likely explanation is that you have a typo, > instead of >>, somewhere in one of the script2.sh files.
To verify this, set the noclobber option with set -o noclobber. The shell will then refuse to overwrite an existing file with > and report an error instead.
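A minimal sketch of the behavior:
set -o noclobber
echo "first" > output.txt    # succeeds if output.txt does not exist
echo "second" > output.txt   # fails: bash: output.txt: cannot overwrite existing file
echo "third" >> output.txt   # appending is still allowed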
Another possibility is that the file is removed under certain rare conditions. Or it is damaged by some command that has random access to it; look for commands using this file without >>. Or it is used by some command both as input and output, and the two step on each other; look for the file used with <.
Lastly, you could have a race condition with a command outputting to the file in the background, started before that echo.
Can you grep all your scripts for 'output.txt'? What about scripts called inside read_time and run_program?
It looks like something in one of the script2.sh scripts must be either overwriting, truncating or doing a substitution on output.txt.
For example, there could be a '> output.txt' buried inside a conditional for a condition that rarely obtains. Just a guess, but it would explain why you don't always see it.
This is an interesting problem. Please post the solution when you find it!
I'm trying to write a quick batch file. It will take the result of a command, put some extra text and quotes around it, and put that into a new file. The problem is that the result of the command I'm running includes a new line. Here's the command:
p4 changelists -m 1 -t //depot/... > %FILENAME%
The output of that p4 command has a newline at the end of it. The file I'm putting it into needs to have quotes surrounding the output of that command, but the fact that the command contains a newline in it means that the "closing quote" appears on a new line in the file, which doesn't work for what I'm doing.
I've tried writing the output of that command into a file and reading it back in, and also trying to run FINDSTR on a file containing the output, but I always seem to get back the stupid trailing whitespace. I've even tried inserting backspaces into the file, but that just put a backspace character into the file instead of actually executing a backspace...
Is there anything to be done about this?
I'm no perl wizard, but the following seems to work:
p4 changelists -m 1 -t //depot/... | perl -p -e "s/^/\042/;s/$/\042/"
Check out Strawberry Perl, which provides a Windows version of Perl.
I'm always looking at my Unix tools when solving problems like this, even under Windows. sed and gawk will also get you there, check out msysgit for a nice bundle of Unix tools that will run on Windows.
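For instance, a rough sed equivalent (a sketch: \x22 is a GNU sed escape for the double-quote character, which sidesteps cmd.exe quoting):
p4 changelists -m 1 -t //depot/... | sed "s/^/\x22/;s/$/\x22/"
Since sed strips the trailing newline from each line before applying the substitutions and re-adds it afterwards, the closing quote lands before the newline, just as with the perl version.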
I've got an irritating closed-source tool which writes specific information into its configuration file. If you then try to use the configuration on a different file, it loads the old file. Grrr...
Luckily, the configuration files are text, so I can version control them, and it turns out that if one just removes the offending line from the file, no harm is done.
But the tool keeps putting the lines back in. So every time I want to check in new versions of the config files, I have to remove all lines containing the symbol openDirFile.
I'm about to construct some sort of bash command to run grep -v on each file, store the result in a temporary file, and then delete the original and rename the temporary, but I wondered if anyone knew of a nice clean solution, or had already concocted and debugged a similar invocation.
For extra credit, how can this be done without destroying a symbolic link in the same directory (favourite.rc->signals.rc)?
sed -i '/openDirFile/d' *.conf
This does the removal on all .conf files.
You can also combine the line with the find command if your conf files are located in different paths; see the sketch below.
Note that -i does the removal "in place".
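A hedged sketch of the find combination (the two paths are just examples):
find /path/one /path/two -name '*.conf' -exec sed -i '/openDirFile/d' {} +
On the symlink question: as far as I know, GNU sed's -i replaces the file it edits, which would turn favourite.rc into a regular file; GNU sed has a --follow-symlinks option that edits the link's target instead.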
This was the bash-spell that I came up with:
for i in *.rc ; do
TMP=$(mktemp)
grep -v openDirFile "$i" >"$TMP" && mv "$TMP" "$i"
done
(You can obviously turn this into a one-liner by replacing the newlines with semicolons, except after do.)
Kent's answer is clearly superior.