Issue with bash one-liner [duplicate]

This question already has answers here:
Bash function to find newest file matching pattern
(9 answers)
Closed 2 years ago.
I'm trying to do a bash one-liner to get the latest logfile to cat and/or tail:
for i in /mnt/usbdrive/backup/filelog_*.log; do ls -t $i | head -n1 ; done
But I get all of the matching files:
/mnt/usbdrive/backup/filelog_2020-06-03-09:00:01:345123169.log
/mnt/usbdrive/backup/filelog_2020-06-04-09:00:01:370667894.log
/mnt/usbdrive/backup/filelog_2020-06-04-19:15:27:274135912.log
/mnt/usbdrive/backup/filelog_2020-06-05-09:00:02:020131150.log
/mnt/usbdrive/backup/filelog_2020-06-06-09:00:02:238963148.log
Where am I going wrong?
Also, if I wanted to tail (or cat) that, would I have to declare another variable and tail -f that $variable?

I'm trying to do a bash one-liner to get the latest logfile
You could use
latestfile=$(/bin/ls -t /mnt/usbdrive/backup/filelog_*.log | /bin/head -n 1)
assuming you don't have spaces, newlines, or other special characters in your file names. ls -t sorts newest first, so head -n 1 picks the most recent file.
See ls(1), head(1) and carefully read the documentation of GNU bash.
You'd be better off writing your script in some other language (e.g. GNU Guile, Python, Lua). See the shebang handling of execve(2).
You might also use stat(1) and/or gawk(1) and/or find(1). See glob(7) and path_resolution(7).
You could be interested by logrotate(8) and crontab(5) and inotify(7).
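To then follow that file, as asked above, the variable can be reused directly; a minimal sketch, assuming the glob matches at least one file:
latestfile=$(/bin/ls -t /mnt/usbdrive/backup/filelog_*.log | /bin/head -n 1)
tail -f "$latestfile"
or, without a separate variable:
tail -f "$(/bin/ls -t /mnt/usbdrive/backup/filelog_*.log | /bin/head -n 1)"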

(Argument list too long) While opening a large list of files using cat [duplicate]

This question already has answers here:
Argument list too long when concatenating lots of files in a folder
(3 answers)
Closed 2 years ago.
I'm trying to do something like
cat */httprobe-subdomains.out | xargs -n1 -I{} -t sh -c 'curl -k -i --write-out "\n++++++++++\nResponse Code: %{response_code}\nRedirection URL: %{redirect_url}\nContent Size: %{size_download}" "http://{}" -L >> response/http-{}.out '
The response is
-bash: /usr/bin/cat: Argument list too long
If I try to cat */httprobe-subdomains.out directly, the stderr is the same: -bash: /usr/bin/cat: Argument list too long
I wish I had a way to escape this situation and be able to cat all the httprobe-subdomains.out files in the * folders and give them to xargs to deal with.
The Argument list too long error is documented in errno(3) (as E2BIG) and related to some execve(2) system call done by your GNU bash shell. Use sysconf(3) with _SC_ARG_MAX (or getconf ARG_MAX from the shell) to query that limit.
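For a quick check of that limit on your machine (the value is in bytes and is system-dependent):
getconf ARG_MAX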
You have several approaches:
recompile your Linux kernel to raise that limit.
write some small C program using the appropriate syscalls(2), or write some Python script, or some GNU Guile script, ... doing the same (see the sketch below)
increase some limits using setrlimit(2) appropriately (perhaps via the shell's ulimit builtin).
See also the documentation and the source code of GNU bash
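A minimal sketch of the "small script doing the same" idea, staying in bash: the glob is expanded by the shell itself, so the execve(2) argument limit never applies, and each file is handed to cat on its own.
for f in */httprobe-subdomains.out; do
  cat -- "$f"
done
The loop's combined output can then be piped into the xargs command from the question.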

bash: finding file without keyword [duplicate]

This question already has answers here:
Grep : get all file that doesn't have a line that matches [closed]
(3 answers)
Closed 4 years ago.
I am looking for a command in bash that lists the files in which a keyword is not present. For listing files with the keyword I do
fgrep KEYWORD .
I was thinking I could feed vimdiff with two files with the lists, something like this
diff `fgrep KEYWORD .` `ls .` (THIS IS NOT CORRECT)
but I would not like to create two new files ad hoc.
How about using a simple grep option?
grep -L "foo" *
You could use the long option --files-without-match with it, too; it is equivalent.
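Since the question searches the current directory, a recursive variant (assuming GNU grep) would be:
grep -rL "KEYWORD" .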

Bash for loop and glob expansion [duplicate]

This question already has answers here:
Looping on empty directory content in Bash [duplicate]
(2 answers)
Closed 7 years ago.
Consider the following bash code:
for f in /tmp/*.dat; do echo ${f}; done
When I run this and there is no *.dat file in /tmp, the output is:
/tmp/*.dat
which is clearly not what I want. However, when there is such a file, it will print out the correct one
/tmp/foo.dat
How can I force the for loop to return 'nothing' when there is no such file in the directory? The find command is not an option, sorry for that :/ I would also like a solution that doesn't test whether *.dat is a file or not. Any solutions so far?
This should work:
shopt -s nullglob
...
From the Bash Manual:
nullglob
If set, Bash allows filename patterns which match no files to expand
to a null string, rather than themselves.
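With the loop from the question, that looks like this (a minimal sketch):
shopt -s nullglob
for f in /tmp/*.dat; do echo "${f}"; done
It prints nothing when no *.dat file exists, and /tmp/foo.dat otherwise.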

Input and output redirection to the same file [duplicate]

This question already has answers here:
How can I use a file in a command and redirect output to the same file without truncating it?
(14 answers)
Closed 1 year ago.
How can I redirect input and output to the same file in general? I mean, specifically, there is -o for the sort command and there might be other such options for various commands. But how can I generally redirect input and output to the same file without clobbering it?
For example: sort a.txt > a.txt destroys the a.txt file contents, but I want to store the answer in the same file. I know I can use mv and rm after using a temporary file, but is it possible to do it directly?
As mentioned on BashPitfalls entry #13, you can use sponge from moreutils to "soak up" the data before opening the file to write to it.
Example Usage:
sort a.txt | sponge a.txt
While the BashPitfalls page mentions that there could be data loss, the man page for sponge says
It also creates the output file atomically by renaming a temp file into place [...]
This would make it no more dangerous than writing to a temp file and doing a mv.
Credit to Charles Duffy for pointing out the BashPitfalls entry in the comments.
If you're familiar with the POSIX APIs, you'll recognize that opening a file has a few possible modes, but the most common ones are read, write and append. You'll recall that if you open a file for writing, you truncate it immediately.
The redirects are directly analogous to those common modes.
> x # open x for writing
< x # open x for reading
>> x # open x for appending
Bash does have a read-write redirect, <> x (roughly O_RDWR), but it doesn't help here: the command would still be reading from and writing over the same file, so it can clobber data it hasn't read yet.
You can guard against accidental overwrites with the noclobber option, but there is no redirect that lets a filter safely read and rewrite the same file. You must use a temporary file.
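For instance, with noclobber set (a quick illustration):
set -o noclobber
sort a.txt > a.txt     # bash: a.txt: cannot overwrite existing file
sort a.txt >| a.txt    # >| overrides noclobber, and still truncates a.txt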
Not if the command doesn't support doing the mv itself after it is finished.
The shell truncates the output file before it even runs the command you told it to run. (Try it with a command that doesn't exist and you'll see it still gets truncated.)
This is why some commands have options to do this for you (to save you from having to use command input > output && mv output input or similar yourself).
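Spelled out, the pattern from that last parenthetical (the temporary filename is just illustrative):
sort a.txt > a.txt.tmp && mv a.txt.tmp a.txt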
I realize this is from a billion years ago, but I came here looking for the answer before I remembered tee. Maybe I'll forget and stumble upon this in another 4 years.
$ for task in `cat RPDBFFLDQFZ.tasks | shuf -n 5`; do
echo $task;
grep -v $task RPDBFFLDQFZ.tasks | tee RPDBFFLDQFZ.tasks > /dev/null
done
6551a1fac870
26ab104327af
d6a90cf1720f
9eaa4faea92f
45ebf210a1b6

find a substring inside a bash variable [duplicate]

This question already has answers here:
Extract substring in Bash
(26 answers)
Closed 9 years ago.
We were trying to find the username of a Mercurial URL:
default = ssh://someone#acme.com//srv/hg/repo
Supposing that there's always a username, I came up with:
tmp=${a#*//}
user=${tmp%%#*}
Is there a way to do this in one line?
Assuming your string is in a variable like this:
url='default = ssh://someone#acme.com//srv/hg/repo'
You can do:
[[ $url =~ //([^#]*)# ]]
Then your username is here:
echo ${BASH_REMATCH[1]}
This works in Bash versions 3.2 and higher.
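So, as a single line (reusing the url variable from above; the comment shows the result for the sample URL):
[[ $url =~ //([^#]*)# ]] && echo "${BASH_REMATCH[1]}"   # prints: someone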
You pretty much need more than one statement, or to call out to external tools. I think sed is best for this.
sed -r -e 's|.*://(.*)#.*|\1|' <<< "$default"
Not within bash itself. You'd have to delegate to an external tool such as sed.
Not familiar with Mercurial, but using your URL, you can do
echo 'ssh://someone#acme.com/srv/hg/repo' |grep -E --only-matching '\w+#' |cut --delimiter=\# -f 1
Probably not the most efficient way with the two pipes, but it works.
