compgen not displaying all expected suggestions - bash

I need to add some bash shell completion with words read from a json file:
...
{
'bundle': 'R20_B1002_ORDERSB1_FROMB1',
'version':'0.1',
'envs': ['DEV','QUAL','PREPROD2'],
},
{
'bundle': 'R201_QA069_ETIQETTENS_FROMSAP',
'version': '0.1',
'envs': ['DEV','QUAL','QUAL2','PREPROD'],
}
...
To get the word list I can run this command line, and it returns all the expected words from my file:
grep 'bundle' liste_routes.py | sed "s/'bundle': '//" | sed "s/',//" | grep -v '#'
For instance, with an additional "grep R20" it returns:
R20_B1002_ORDERSB1_FROMB1
R201_QA069_ETIQETTENS_FROMSAP
R202_LOG287_LIVRAISONSORTANTE_FROMLSP
R203_PP052_FULLSTOCKSAP_FROMSAP
R204_CO062_PRIXTRANSF_FROMOLGA
R206_LOG419_NOMENCLBOMPROD_FROMTDX
R207_CERTIFNFGAZ
R208_SAL363_ARTICLEPRICING_FROMSAP
R209_LOG451_WHSCON_FROMTDX
Now I put this in a completion file and source it in my bash session.
_find_routenames()
{
search="$cur"
grep 'bundle' liste_routes.py | sed "s/'bundle': '//" | sed "s/',//" | sed "s/\r//g" | grep -v '#' | awk '{$1=$1;print}'
}
_esbdeploy_completions()
{
#local IFS=$'\n'
COMPREPLY=()
cur="${COMP_WORDS[COMP_CWORD]}"
cur="${COMP_WORDS[1]}"
COMPREPLY=( $( compgen -W "$(_find_routenames)" -- "$cur" ) )
##### COMPREPLY=($(compgen -W "$(grep 'bundle' liste_routes.py | sed \"s/'bundle': '//\" | sed \"s/',//\" | grep -v '#')" -- "${COMP_WORDS}"))
}
complete -F _esbdeploy_completions d.py
complete -F _esbdeploy_completions deploy_karaf_v4.py
complete -F _esbdeploy_completions show.py
The problem is when I type
./d.py R20<TAB>
I get those suggestions:
R201_QA069_ETIQETTENS_FROMSAP R203_PP052_FULLSTOCKSAP_FROMSAP R206_LOG419_NOMENCLBOMPROD_FROMTDX R208_SAL363_ARTICLEPRICING_FROMSAP
R202_LOG287_LIVRAISONSORTANTE_FROMLSP R204_CO062_PRIXTRANSF_FROMOLGA R207_CERTIFNFGAZ R209_LOG451_WHSCON_FROMTDX
It misses R20_B1002_ORDERSB1_FROMB1 from my first grep test.
I don't think it's an issue with the underscore, as other tests with "./d.py R10" do suggest "R10_xxxx".
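One way to narrow this down is to test compgen by itself, outside the completion machinery. A minimal sketch, using two of the bundle names from above as a hard-coded word list:

```shell
#!/usr/bin/env bash
# Hypothetical word list mirroring two of the bundle names in the question
words="R20_B1002_ORDERSB1_FROMB1 R201_QA069_ETIQETTENS_FROMSAP"

# compgen filters purely by prefix, so both words survive "R20"
compgen -W "$words" -- R20
```

If both names come back, the prefix filtering itself is fine, and the missing suggestion is more likely introduced earlier in the pipeline (stray carriage returns in the source file are a classic culprit) or by the cur handling in the function.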


Can't create a function in bash without getting a command not found error

I'm writing a bash script to edit a youtube video name and I wrote a sed function for it:
function longSed(){
ins0=$1
ins1=$( $ins0 | sed 's/and//g;s/ or//g;s/the//g;s/And//g;s/yet//g;s/the//g;s/so//g;s/ a//g;s/ A//g' )
return $ins1
}
v2=longSed v1
echo "$v1 --> $v2"
But I keep receiving a command not found error on the second-to-last line no matter what I do. What am I missing here?
EDIT: This is the entire script:
#!/bin/bash
v0=$(youtube-dl --skip-download --get-title --no-warnings $1 | sed 2d )
v1=$(youtube-dl --skip-download --get-title --no-warnings $1 | sed 2d | tr -dc '[:alnum:]\n\r ' | head -c 64 )
function longSed(){
ins0=$1
ins1=$( $ins0 | sed 's/and//g;s/ or//g;s/the//g;s/And//g;s/yet//g;s/the//g;s/so//g;s/ a//g;s/ A//g' )
return $ins1
}
v2=$(longSed v1)
echo "$v0 --> $v2"
I probably shouldn't be using sed for an entire wordlist like that, but I don't want a separate wordlist file.
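For reference, the usual shape of the fix is to pass the text itself as an argument and print the result instead of using return, which only carries a numeric exit status. A minimal sketch with a shortened word list:

```shell
#!/usr/bin/env bash
# Bash functions hand strings back via stdout, captured with $(...);
# `return` only sets a numeric exit code, it cannot return text.
longSed() {
    # $1 is the text itself (not a variable name); word list shortened here
    printf '%s\n' "$1" | sed 's/and//g;s/the//g'
}

v1="the cat and the dog"
v2=$(longSed "$v1")   # note the $(...) and the quotes around $v1
echo "$v1 --> $v2"
```

As for the original error: v2=longSed v1 is parsed as the assignment v2=longSed applied to the command v1, so the shell tries to execute v1 itself, hence the command not found.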

I can't figure out how to extract a string in bash

I am trying to make a bash script that downloads a youtube page, finds the latest video, and extracts its URL. I have the part that downloads the page, but I cannot figure out how to isolate the text containing the URL.
I have this to download the page
curl -s https://www.youtube.com/user/h3h3Productions/videos > YoutubePage.txt
which will save it to a file.
But I cannot figure out how to isolate the single part of a div.
The div is
<a class="yt-uix-sessionlink yt-uix-tile-link spf-link yt-ui-ellipsis yt-ui-ellipsis-2" dir="ltr" title="Why I'm Unlisting the Leafyishere Rant" aria-describedby="description-id-877692" data-sessionlink="ei=a2lSV9zEI9PJ-wODjKuICg&feature=c4-videos-u&ved=CD4QvxsiEwicpteI1I3NAhXT5H4KHQPGCqEomxw" href="/watch?v=q6TNODqcHWA">Why I'm Unlisting the Leafyishere Rant</a>
And I need to isolate the href at the end, but I cannot figure out how to do this with grep or sed.
With sed:
sed -n 's/<a [^>]*>/\n&/g;s/.*<a.*href="\([^"]*\)".*/\1/p' YoutubePage.txt
To extract just the video href:
$ sed -n 's/<a [^>]*>/\n&/g;s/.*<a.*href="\(\/watch\?[^"]*\)".*/\1/p' YoutubePage.txt
/watch?v=q6TNODqcHWA
/watch?v=q6TNODqcHWA
/watch?v=ix4mTekl3MM
/watch?v=ix4mTekl3MM
/watch?v=fEGVOysbC8w
/watch?v=fEGVOysbC8w
...
To omit repeated lines:
$ sed -n 's/<a [^>]*>/\n&/g;s/.*<a.*href="\(\/watch\?[^"]*\)".*/\1/p' YoutubePage.txt | sort | uniq
/watch?v=2QOx7vmjV2E
/watch?v=4UNLhoePqqQ
/watch?v=5IoTGVeqwjw
/watch?v=8qwxYaZhUGA
/watch?v=AemSBOsfhc0
/watch?v=CrKkjXMYFzs
...
You can also feed your curl command's output straight into it:
curl -s https://www.youtube.com/user/h3h3Productions/videos | sed -n 's/<a [^>]*>/\n&/g;s/.*<a.*href="\(\/watch\?[^"]*\)".*/\1/p' | sort | uniq
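If your grep has the -o option (GNU and BSD grep both do), you can print only the matching part of each line instead of rewriting the whole line with sed; a sketch assuming the href values follow the /watch?v=... pattern shown above:

```shell
# -o prints only the part of the line that matches, one match per line
grep -o 'href="/watch?v=[A-Za-z0-9_-]*"' YoutubePage.txt \
    | sed 's/^href="//;s/"$//' \
    | sort -u
```

Note that ? is a literal character in grep's default (basic) regular expressions, so it needs no escaping here.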
You can also use lynx, a terminal browser which has a -dump mode that outputs the parsed HTML as text, with the URLs extracted. This makes it easier to grep for the URL:
lynx -dump 'https://www.youtube.com/user/h3h3Productions/videos' \
| sed -n '/\/watch?/s/^ *[0-9]*\. *//p'
This will output something like:
https://www.youtube.com/watch?v=EBbLPnQ-CEw
https://www.youtube.com/watch?v=2QOx7vmjV2E
...
Breakdown:
-n ' # Disable auto printing
/\/watch?/ # Match lines with /watch?
s/^ *[0-9]*\. *// # Remove leading index: " 123. https://..." ->
# "https://..."
p # Print line if all the above have not failed.
'

Unix. Call a variable inside another variable

Currently I have a script like this. Its intended purpose is to use the function Getlastreport to retrieve the name of the latest report in a folder. The folder names are typically randomly generated numbers, created every night. I want to call Getlastreport and use its result inside MAXcashfunc.
Example:
Getlastreport returns 3473843.
Then MAXcashfunc should run grep -r "Max*" /David/reports/$Getlastreport[the number 3473843 should be here]/"Moneyfromyesterday.csv" > Report
Script:
#!bin/bash
Getlastreport()
{
cd /David/reports/ | ls -l -rt | tail -1 | cut -d' ' -f10-
}
MAXcashfunc()
{
grep -r "Max*" /David/reports/$Getlastreport/"Moneyfromyesterday.csv" > Report
}
##call maxcash func
MAXcashfunc
You can use:
MAXcashfunc() {
grep -r "Max" /David/reports/`Getlastreport`/"Moneyfromyesterday.csv" > Report
}
`Getlastreport` - the backquotes run Getlastreport and substitute its output.
If I follow your question, you could use
function Getlastreport() {
cd /David/reports/ | ls -l -rt | tail -1 | cut -d' ' -f10-
}
function MAXcashfunc() {
grep -r "Max" /David/reports/$(Getlastreport)/"Moneyfromyesterday.csv" > Report
}
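The distinction between the two forms can be seen in isolation; in this sketch Getlastreport is stubbed out to print a fixed number:

```shell
#!/usr/bin/env bash
Getlastreport() { echo 3473843; }   # stub standing in for the real function

# $Getlastreport is an (unset) *variable*, so it expands to nothing
echo "as variable: [$Getlastreport]"      # prints: as variable: []

# $(Getlastreport) *runs* the function and substitutes its stdout
echo "as command:  [$(Getlastreport)]"    # prints: as command:  [3473843]
```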

grep to ignore the first character

If I have a bunch of lines that have :
//mainHtml = "https://
mainHtml = "https:
//https:www.google.com
public String ydmlHtml = "https://remando
aasdfa dsfadsf a asdfasd fasd fsdafsdaf
Now I want to grep only those lines which have "https:" in them, but they should NOT start with "//"
So far I have :
cat $javaFile | grep -e '\^\/ *https:*\'
where $javaFile is the file I want to look for the words.
My output is a blank.
Please help :)
You can use a negated character class to exclude lines that start with //. The -E option enables ERE (Extended Regular Expressions).
grep -E '^[^/]{2}.*https' file
With your sample data:
$ cat file
//mainHtml = "https://
mainHtml = "https:
//https:www.google.com
public String ydmlHtml = "https://remando
aasdfa dsfadsf a asdfasd fasd fsdafsdaf
$ grep -E '^[^/]{2}.*https' file
mainHtml = "https:
public String ydmlHtml = "https://remando
You may also choose to write it without the -E option by saying:
grep '^[^/][^/].*https' file
In two steps:
grep -v '^//' file | grep 'https:'
grep -v '^//' removes the lines starting with //
grep 'https:' keeps the lines containing https:
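Both steps can be checked quickly against a few of the sample lines, inlined here with printf:

```shell
printf '%s\n' '//mainHtml = "https://' \
              'mainHtml = "https:' \
              '//https:www.google.com' \
    | grep -v '^//' | grep 'https:'
# prints only: mainHtml = "https:
```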

modify the contents of a file without a temp file

I have the following log file which contains lines like this
1345447800561|FINE|blah#13|txReq
1345447800561|FINE|blah#13|Req
1345447800561|FINE|blah#13|rxReq
1345447800561|FINE|blah#14|txReq
1345447800561|FINE|blah#15|Req
I am trying to extract the first field from each line and, depending on whether it belongs to blah#13, blah#14, or blah#15, create the corresponding files using the following script, which seems quite inefficient in terms of the number of temp files it creates. Any suggestions on how I can optimize it?
cat newLog | grep -i "org.arl.unet.maca.blah#13" >> maca13
cat newLog | grep -i "org.arl.unet.maca.blah#14" >> maca14
cat newLog | grep -i "org.arl.unet.maca.blah#15" >> maca15
cat maca10 | grep -i "txReq" >> maca10TxFrameNtf_temp
exec<blah10TxFrameNtf_temp
while read line
do
echo $line | cut -d '|' -f 1 >>maca10TxFrameNtf
done
cat maca10 | grep -i "Req" >> maca10RxFrameNtf_temp
while read line
do
echo $line | cut -d '|' -f 1 >>maca10TxFrameNtf
done
rm -rf *_temp
Something like this?
for m in org.arl.unet.maca.blah#13 org.arl.unet.maca.blah#14 org.arl.unet.maca.blah#15
do
grep -i "$m" newLog | grep "txReq" | cut -d'|' -f1 > log.$m
done
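If reading newLog several times is the concern, a single pass with awk can route lines to per-pattern output files as it goes. A sketch, with output file names made up to follow the question's naming:

```shell
# One pass over the log: -F'|' splits fields on the pipe character,
# $1 is the timestamp, $3 the node id, $4 the message type
awk -F'|' '$4 ~ /txReq/ {
    if ($3 ~ /blah#13/) print $1 > "maca13TxFrameNtf"
    if ($3 ~ /blah#14/) print $1 > "maca14TxFrameNtf"
}' newLog
```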
I've found it useful at times to use ex instead of grep/sed to modify text files in place without using temp files ... it saves the trouble of worrying about uniqueness and writability of the temp file and its directory, and it just seems cleaner.
In ksh I would use a code block with the edit commands and pipe that into ex:
{
# Any edit command that would work at the colon prompt of a vi editor will work
# This one was just a text substitution that would replace all contents of the line
# at line number ${NUMBER} with the word DATABASE ... which strangely enough was
# necessary at one time lol
# The wq is the "write/quit" command as you would enter it at the vi colon prompt
# which are essentially ex commands.
print "${NUMBER}s/.*/DATABASE/"
print "wq"
} | ex filename > /dev/null 2>&1
