What I am trying to do: create a bash script that downloads the Ubuntu ISO and a file from my own FTP server, then runs a traceroute to my server and saves the route, the date and the average speed of the two downloads to log.txt.
Where I am stuck:
This seems to work okay:
curl -o test.avi http://hostve.com/neobuntu/pics/Ubu1.avi 2> test.log
Sadly it removes the previous content of test.log.
With >, you are removing the previous data. If you want to append data, use >>:
curl -o test.avi http://hostve.com/neobuntu/pics/Ubu1.avi 2>> test.log
From Bash Reference Manual #3.6 Redirections:
3.6.2 Redirecting Output
Redirection of output causes the file whose name results from the
expansion of word to be opened for writing on file descriptor n, or
the standard output (file descriptor 1) if n is not specified. If
the file does not exist it is created; if it does exist it is
truncated to zero size.
The general format for redirecting output is:
[n]>[|]word
3.6.3 Appending Redirected Output
Redirection of output in this fashion causes the file whose name
results from the expansion of word to be opened for appending on file
descriptor n, or the standard output (file descriptor 1) if n is not
specified. If the file does not exist it is created.
The general format for appending output is:
[n]>>word
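Putting the pieces together for the original task, a minimal sketch of the whole script could look like the one below. The URLs, the FTP host name and the file names are placeholders you would substitute with your own; curl's -w '%{speed_download}' write-out variable is used to capture the average transfer speed in bytes per second, and every command appends to log.txt with >>.
#!/bin/bash
# Placeholder URLs/host - substitute your own Ubuntu mirror and FTP server.
UBUNTU_URL="http://example.com/ubuntu.iso"
FTP_URL="ftp://user:password@ftp.example.com/somefile.bin"
FTP_HOST="ftp.example.com"
LOG="log.txt"

# Log the date and the route to the server, appending rather than truncating.
date >> "$LOG"
traceroute "$FTP_HOST" >> "$LOG" 2>&1

# Download both files; -w '%{speed_download}' prints the average speed
# (bytes per second) of the finished transfer, which is appended to the log.
for url in "$UBUNTU_URL" "$FTP_URL"; do
    speed=$(curl -s -o "$(basename "$url")" -w '%{speed_download}' "$url")
    echo "$url average speed: $speed bytes/s" >> "$LOG"
done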
I am using PDFtk to remove the last 2 pages from a bunch of PDFs in a specific folder.
For removing them individually from a single file, this code works perfectly fine: it creates a new reduced.pdf copy of original.pdf without the last two pages.
@echo off
cd "C:\Program Files (x86)\PDFtk\bin"
start pdftk.exe C:\Desktop\long\original.pdf cat 1-r3 output C:\Desktop\short\reduced.pdf
pause
FYI, the PDF files all have various alphanumeric filenames with - as a separator between words, e.g. the-march-event-2022.pdf.
What I need now is to automate this so the script goes through each PDF file in the long folder and, via the same command, creates a new copy with an identical filename in the short folder.
The task can be done with a batch file containing only the following single command line:
@for %%I in ("C:\Desktop\long\*.pdf") do @"C:\Program Files (x86)\PDFtk\bin\pdftk.exe" "%%I" cat 1-r3 output "C:\Desktop\short\%%~nxI" && echo Successfully processed "%%~nxI" || echo ERROR: Failed to process "%%~nxI"
This command line uses the Windows command FOR to process all PDF files in the specified folder. For each PDF file, pdftk is executed with the fully qualified name of the current PDF file as the input file name and the file name + extension with a different directory path as the output file name. Run for /? in a command prompt window for help on this command.
The success message is output if pdftk.exe exits with value 0; otherwise the error message is output if pdftk.exe exits with a value not equal to 0.
The two @ are for suppressing the output of the FOR command line and of each executed pdftk command line while processing the PDF files in the specified folder.
Please see single line with multiple commands using Windows batch file for an explanation of the conditional operators && and ||.
I want to copy the content of a file into multiple files that have the same extension. How do I do that using a Linux command?
I tried running the command:
cat t1.txt > /etc/apache2/site-available/*le-ssl.conf
and
echo "hello" > /etc/apache2/site-available/*le-ssl.conf
but it gives me the error "Ambiguous redirect".
Any ideas?
A redirect will not duplicate a data stream. If you want multiple copies, use tee. For example:
< t1.txt tee /etc/apa.../*le-ssl.conf
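Applied to the question's own second command, the same idea looks like this (tee also copies its input to standard output, which can be discarded with > /dev/null, and the glob only expands if matching files already exist):
echo "hello" | tee /etc/apache2/site-available/*le-ssl.conf > /dev/null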
Original problem - I want to check a file format starting at every single offset of a given file.
To do that, the idea was to call the command file and find a way to call it starting at a chosen offset. But this command doesn't work:
file <(tail -c +10 nknukkodes.dat)
It fails with this error message:
/dev/fd/63: broken symbolic link to pipe:[26963]
I use WSL and I don't know if it's a WSL problem; I have done this before, but I don't remember whether I used a different approach on Linux (with Ubuntu).
I could copy the file once for each offset, but even though the file is relatively small (200 kB), copying at every offset is quadratic in the file size: about 40 GB of copying. How could I achieve this, either by calling file through a named pipe or with another approach?
I suggest:
tail -c +10 nk_nuclear_codes.dat | file -
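To come back to the original problem of testing every offset, the same pipe can be wrapped in a loop. A rough sketch, assuming a GNU userland (stat -c %s) and the file name used in the answer above:
size=$(stat -c %s nk_nuclear_codes.dat)
for ((off = 0; off < size; off++)); do
    printf 'offset %d: ' "$off"
    # tail -c +N starts output at byte N (1-based), so offset 0 maps to +1.
    tail -c +"$((off + 1))" nk_nuclear_codes.dat | file -
done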
I have a zip archive (let's call it archive) and let's say I want to go through some directories and finally extract ONLY the files that start with the word 'word'. Something similar to:
archive.zip/dir1/dir2/word***.csv
What is the command that could do this without having to extract the whole archive (it is a very big file)?
I tried this command line:
unzip -p archive.zip dir1/dir2/word***1.csv >destination
But this only extracts one file, not all files that start with 'word'.
You should do
unzip -p archive.zip dir1/dir2/word*1.csv >>destination.csv
The > truncates the file destination.csv to zero length, giving you the impression that only one file was unzipped, while >> creates the file if not present and otherwise appends to it, which is the required behaviour here.
Reference: see I/O redirection.
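If the goal is instead to keep each matching CSV as a separate file rather than concatenating them, unzip can extract the matches directly into a directory. A possible variant (the destination folder extracted/ is just an example; -j drops the internal dir1/dir2/ path, and quoting the pattern keeps the shell from expanding it):
unzip -j archive.zip "dir1/dir2/word*.csv" -d extracted/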
I have an executable file, call it exec1.exe
I have a bunch of files with a .txt extension and I want to run exec1.exe on each of them, redirecting the output to a text file that has the original file name somewhere in its name. I'm running the command:
for %i in (mydir\*.txt) do exec1 %i > "%i2.txt"
But for the first text file, text1.txt, this runs
exec1 mydir\text1.txt > "mydir\text1.txt2.txt"
But I want
exec1 text1.txt > text1.txt2.txt
Any idea what's going wrong?
You could probably get away with this (double up the % signs if running from a batch file):
for %i in ("mydir\*.txt") do #start "" exec1.exe -y 754 "%i">"%~ni2.txt"
Note
I have used %~ni2 instead of %i2 to write to the current directory. This is because your command would otherwise be writing .txt files into the same directory it is reading *.txt files from. An alternative would be to use a different known path, e.g. >"known\%i2.txt".