Bash filename expansion identifies items in the file tree that the called command does not - bash

..poky/build$ for SUBPATH in $(bitbake -e alsa-lib | grep -P -e '(?<=^)FILES_alsa-lib(?==)' | cut -d= -f2 | tr -d \") ; do ls ./tmp-glibc/work/armv7a-vfp-neon-oe-linux-gnueabi/alsa-lib/1.0.29-r0/package$SUBPATH 2>&1 ; done | grep -e "No such file or directory" | wc -l
2855
..poky/build$ for SUBPATH in $(bitbake -e alsa-lib | grep -P -e '(?<=^)FILES_alsa-lib(?==)' | cut -d= -f2 | tr -d \") ; do ls ./tmp-glibc/work/armv7a-vfp-neon-oe-linux-gnueabi/alsa-lib/1.0.29-r0/package$SUBPATH 2>&1 ; done | grep -v -e "No such file or directory" | wc -l
15
Here is one of all those "No such file or directory" errors:
ls: cannot access ./tmp-glibc/work/armv7a-vfp-neon-oe-linux-gnueabi/alsa-lib/1.0.29-r0/package/usr/lib/libicalss.so.1.0.0: No such file or directory
where
..poky/build$ bitbake -e alsa-lib | grep -P -e '(?<=^)FILES_alsa-lib(?==)' | cut -d= -f2 | tr -d \"
/usr/bin/* /usr/sbin/* /usr/lib/alsa-lib/* /usr/lib/lib*.so.* /etc /com /var /bin/* /sbin/* /lib/*.so.* /lib/udev/rules.d /usr/lib/udev/rules.d /usr/share/alsa-lib /usr/lib/alsa-lib/* /usr/share/pixmaps /usr/share/applications /usr/share/idl /usr/share/omf /usr/share/sounds /usr/lib/bonobo/servers /usr/lib/alsa-lib/smixer/*.so
and
poky/build$ echo $SHELL
/bin/bash
Apparently Bash filename expansion finds 2855 items in the identified sub-paths that the called ls command can't access.
Actually, in every iteration, instead of ls ... I need to run find with the search root set to ./tmp-glibc/work/armv7a-vfp-neon-oe-linux-gnueabi/alsa-lib/1.0.29-r0/package$SUBPATH and the -name argument given a few times (combined as a logical OR) with some patterns, as sketched below.
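For illustration only, a hypothetical find of that shape (the -name patterns here are invented, and the package directory is used as the root since $SUBPATH itself may contain globs):
# patterns are examples only, not the real FILES entries
find ./tmp-glibc/work/armv7a-vfp-neon-oe-linux-gnueabi/alsa-lib/1.0.29-r0/package \
    \( -name 'lib*.so.*' -o -name '*.conf' \) -print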
Where is my mistake?
Is it that filename expansion takes place in the for-loop instead of on invocation of the ls command (as the programmer intended)?
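Yes. The unquoted $(bitbake ...) substitution is word-split and then glob-expanded while the for list is built, in the current directory, so absolute patterns like /usr/lib/lib*.so.* match against the host's root filesystem rather than the package tree. A minimal sketch of one way to defer the expansion until inside the package directory (assuming FILES_alsa-lib is emitted exactly as shown above):
#!/bin/bash
pkgdir=./tmp-glibc/work/armv7a-vfp-neon-oe-linux-gnueabi/alsa-lib/1.0.29-r0/package
files=$(bitbake -e alsa-lib | grep -P -e '(?<=^)FILES_alsa-lib(?==)' | cut -d= -f2 | tr -d \")
set -f                 # suppress globbing while $files is word-split below
cd "$pkgdir" || exit 1
for subpath in $files; do
    set +f             # re-enable globbing for this one pattern
    ls -d .$subpath    # the leading "." roots /usr/bin/* at $pkgdir
    set -f
done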
Given my limited expertise level and time resources, I found the following alternative:
cd ./tmp-glibc/work/armv7a-vfp-neon-oe-linux-gnueabi/alsa-lib/1.0.29-r0/package && echo "/usr/bin/* /usr/sbin/* /usr/lib/alsa-lib/* /usr/lib/lib*.so.* /etc /com /var /bin/* /sbin/* /lib/*.so.* /lib/udev/rules.d /usr/lib/udev/rules.d /usr/share/alsa-lib /usr/lib/alsa-lib/* /usr/share/pixmaps /usr/share/applications /usr/share/idl /usr/share/omf /usr/share/sounds /usr/lib/bonobo/servers" | sed -r 's/(^\/)/.\//g' | sed -r 's/( \/)/ .\//g' | ls -Ralh $(awk '{print $0}') ; cd -
Glue for pasting the long string of sub-paths into this command pipe from the building block presented earlier is out of scope for this Q.
If possible, please review. Thanks.
Does this qualify as this Q's answer?


How to remove any command that begins with "echo" from history

I have tried the below:
history -d $(history | grep "echo.*" | awk '{print $1}')
But it is not deleting all the commands with echo from the history.
I want to delete any command that starts with echo,
like
echo "mamam"
echoaaa
echo "hello"
echooooo
You can use this to remove echo entries:
for d in $(history | grep -Po '^\s*\K(\d+)(?= +echo)' | sort -nr); do history -d $d; done
I would do a
history -d $(history | grep -E "^ *[0-9]+ *echo" | awk '{print $1}')
The history command produces a column of event numbers, followed by the commands. We need to match an echo that follows such an event number. The awk then prints just the event number.
An alternative without resorting to awk would be:
history -d $(history | grep -E "^ *[0-9]+ *echo" | grep -Eow '[0-9]+')
Another approach is to edit the history file directly: write the session history out, delete the echo lines, then clear and reload it:
history -w
sed -i '/^echo.*/d' ~/.bash_history
history -c
history -r
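Note that bash's history builtin deletes one offset per -d (newer versions also accept a range), so handing it several numbers at once via command substitution does not do what you want; deleting an entry also renumbers the ones after it. A sketch that addresses both points by deleting from the highest offset downwards and then persisting the result:
while read -r offset; do
    history -d "$offset"
done < <(history | awk '$2 ~ /^echo/ {print $1}' | sort -rn)
history -w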

Get first match from a CURL grep call

Objective:
I'm trying to write a script that will fetch two URLs from a GitHub release page and do something different with each one.
So far:
Here's what I've got so far.
λ curl -s https://api.github.com/repos/mozilla-iot/gateway/releases/latest | grep "browser_download_url.*tar.gz" | cut -d : -f 2,3 | tr -d \"
This will return the following:
"https://github.com/mozilla-iot/gateway/releases/download/0.8.1/gateway-8c29257704ddb021344bdaaa790909a0eacf3293bab94e02859828a6fd9b900a.tar.gz"
"https://github.com/mozilla-iot/gateway/releases/download/0.8.1/node_modules-921bd0d58022aac43f442647324b8b58ec5fdb4df57a760e1fc81a71627f526e.tar.gz"
I want to be able to create some directories, pull in the first one, navigate in the directories from the newly pulled zip after extracting it, and then pull in the second.
Fetching the first line is easy by piping the output to head -n1. For solving your problem, though, you need more than just the first URL of the cURL output. Give this a try:
#!/bin/bash
# fetch your URLs
answer=$(curl -s https://api.github.com/repos/mozilla-iot/gateway/releases/latest | grep "browser_download_url.*tar.gz" | cut -d : -f 2,3 | tr -d \")
# get URLs and file names
first_file=$(echo "$answer" | grep -Eo '.+?\.tar\.gz' | head -n1 | tr -d " ")
second_file=$(echo "$answer" | grep -Eo '.+?\.tar\.gz' | head -n2 | tail -1 | tr -d " ")
first_file_name=$(echo "$answer" | grep -Eo '[^/]+?\.tar\.gz' | head -n1)
second_file_name=$(echo "$answer" | grep -Eo '[^/]+?\.tar\.gz' | head -n2 | tail -1)
#echo $first_file
#echo $first_file_name
#echo $second_file_name
#echo $second_file
# download first file
wget "$first_file"
# extract the first archive; it must be in the current directory.
# otherwise, change directory first or put the path before $first_file_name!
tar -xzf "$first_file_name"
# do your stuff with the second file
You can simply pipe the URLs to xargs curl:
curl -s https://api.github.com/repos/mozilla-iot/gateway/releases/latest |
grep "browser_download_url.*tar.gz" |
cut -d : -f 2,3 | tr -d \" |
xargs curl -O
Or if you want to do some more manipulation on each URL, perhaps loop over the results:
curl ... | grep ... | cut ... | tr ... |
while IFS= read -r url; do
curl -O "$url"
: maybe do things with "$url" here
done
The latter could easily be extended to something like
... | while IFS= read -r url; do
d=${url##*/}
mkdir -p "$d"
( cd "$d"
curl -O "$url"
tar zxf *.tar.gz
# end of subshell means effects of "cd" end
)
done
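If you would rather capture each URL in its own variable, a sketch using mapfile (bash 4+; the variable names here are invented):
mapfile -t urls < <(
    curl -s https://api.github.com/repos/mozilla-iot/gateway/releases/latest |
    grep "browser_download_url.*tar.gz" |
    cut -d : -f 2,3 | tr -d '" '
)
gateway_url=${urls[0]}        # first .tar.gz asset
node_modules_url=${urls[1]}   # second .tar.gz asset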

grep search with filename as parameter

I'm working on a shell script.
OUT=$1
here, the OUT variable is my filename.
I'm using grep search as follows:
l=`grep "$pattern " -A 15 $OUT | grep -w $i | awk '{print $8}'|tail -1 | tr '\n' ','`
The issue is that the filename parameter I must pass is test.log. However, I have the folder structure:
test.log
test.log.001
test.log.002
I would ideally like to pass the filename as test.log and have it search all the log files. I know the usual way to do this is by using test.log.* on the command line, but I'm facing difficulty replicating the same in a shell script.
My efforts:
var=$'.*'
l=`grep "$pattern " -A 15 $OUT$var | grep -w $i | awk '{print $8}'|tail -1 | tr '\n' ','`
However, I did not get the desired result.
Hopefully this will get you closer:
#!/bin/bash
for f in "$1"*; do
    grep "$pattern" -A15 "$f"
done | grep -w "$i" | awk 'END{print $8}'
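Alternatively, one sketch staying closer to the original one-liner: leave the glob outside the quotes so the shell expands it, and add -h so grep does not prefix each match with its filename (with several files that prefix would shift awk's field numbering):
l=$(grep -h "$pattern " -A 15 "$OUT"* | grep -w "$i" | awk '{print $8}' | tail -1 | tr '\n' ',')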

Unable to substitute redirection for redundant cat

cat joined.txt | xargs -t -a <(cut --fields=1 | sort -u | grep -E '\S') -I{} --max-args=1 --max-procs=4 echo "mkdir -p imdb/movies/{}; grep '^{}' joined.txt > imdb/movies/{}/movies.txt" | bash
The code above works, but substituting a redirection (below) for the redundant cat at the start doesn't work; it leads to an input/output error from cut.
< joined.txt xargs -t -a <(cut --fields=1 | sort -u | grep -E '\S') -I{} --max-args=1 --max-procs=4 echo "mkdir -p imdb/movies/{}; grep '^{}' joined.txt > imdb/movies/{}/movies.txt" | bash
In either case, it is the cut command inside the process substitution (and not xargs) that should be reading from joined.txt, so to be completely safe, you should put either the pipe or the input redirection inside the process substitution. Actually, neither is necessary; cut can just take joined.txt as an argument.
xargs -t -a <( cat joined.txt | cut ... ) ... | bash
or
xargs -t -a <( cut -f1 joined.txt | ... ) ... | bash
However, it would be clearest to skip the process substitution altogether, and pipe the output of that pipeline to xargs:
cut -f1 joined.txt | sort -u | grep -E '\S' | xargs -t ...
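Putting it together, a sketch of the whole job with neither cat nor process substitution, using sh -c in place of the generated-command-string piped to bash (this assumes the keys in field 1 contain no quotes or slashes):
cut -f1 joined.txt | sort -u | grep -E '\S' |
xargs -t -I{} --max-procs=4 \
    sh -c 'mkdir -p "imdb/movies/{}"; grep "^{}" joined.txt > "imdb/movies/{}/movies.txt"'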

Move a file which is constantly in use

I need to move the whole content of a file (test.log) which is constantly in use. Moving the file itself could result in an error for the application that is writing to it.
My approach is to redirect the output to test.log_bck, copy the original file (test.log) to test.log_cp, clear it, and then check whether any data was added while test.log was being cleared. If any data is missing from the _cp file, merge it in from the _bck file without duplicating data. It is a lot of effort for such a simple task, and my question is: is there another, easier / more efficient way to do it?
#!/usr/bin/bash
#redirect the output to /tmp/logging/test.log_bck
bck(){
    tail -f /tmp/logging/test.log &> /tmp/logging/test.log_bck
}
#run it in background
bck &
#copy the original file to test.log_cp
cp /tmp/logging/test.log /tmp/logging/test.log_cp && echo "Copied"
#clear the original file
echo "" > /tmp/logging/test.log | echo "Cleared"
#get the PID of the redirection process and check if there are other running and kill them
bck_pid=$(ps -ef | grep "tail -f /tmp/logging/test.log" | grep -v grep | awk '{print $2}' | head -1)
while [ "$bck_pid" != "" ]
do
echo $bck_pid
kill "$bck_pid" && echo "Killed"
bck_pid=$(ps -ef | grep "tail -f /tmp/logging/test.log" | grep -v grep | awk '{print $2}' | head -1)
done
date=$(date '+%Y_%m_%d_%H_%M')
cat /tmp/logging/test.log_cp /tmp/logging/test.log_bck > /tmp/logging/test.log_$date
# number the lines, deduplicate on the content (fields 2+), restore the original order, then drop the numbers
cat -n /tmp/logging/test.log_$date | sort -uk2 | sort -nk1 | cut -f2-
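For comparison, the usual answer to this problem is the "copytruncate" strategy used by logrotate: copy the live file, then truncate it in place, so the writing application keeps a valid file descriptor and inode. A minimal sketch (a small window remains in which lines written between the copy and the truncation are lost):
date=$(date '+%Y_%m_%d_%H_%M')
cp /tmp/logging/test.log "/tmp/logging/test.log_$date"
: > /tmp/logging/test.log    # truncate in place; do NOT mv the file away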
