What is the output of the following command in a bash shell script?

I have the following chunk of code and would like to know what ends up in $DATA_STORE:
x=`dd if=$FILEPATH bs=2 count=1`
log "---> x = $x"
DATA_STORE=`echo $x | od -c | awk '{print ($2 $3)}'`
log "---> Data_which_stored_is= $DATA_STORE"
I have an XML file whose first bytes I need to check, starting with:
.
I just want to know what exactly this command does.
Thanks for helping :).

This ugly piece of code using dd, od and awk tries to represent the first two bytes of a file in a human-readable way.
You can achieve the same result by using od alone:
DATA_STORE=$(od -c -N2 -An "$FILEPATH")
If necessary, you can remove the blanks from the output with an od option (I can't remember which) or with | tr -d ' '.
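For illustration, here's a minimal, self-contained sketch of the od-only version; the sample file and its two-byte contents are made up:

```shell
# Create a two-byte sample file, then read exactly its first two bytes
# with od alone: -N2 limits to two bytes, -c prints them as characters,
# -An suppresses the offset column; tr strips the padding blanks.
printf 'AB' > /tmp/sample.bin
DATA_STORE=$(od -c -N2 -An /tmp/sample.bin | tr -d ' ')
echo "$DATA_STORE"    # prints AB
```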


Trouble Allocating Memory in Bash Script

I tried to automate the process of cleaning up various wordlists I am working with. Here is the code:
#!/bin/bash
# Removes spaces and duplicates in a wordlist
echo "Please be in the same directory as wordlist!"
read -p "Enter Wordlist: " WORDLIST
RESULT=$( awk '{print length, $0}' $WORDLIST | sort -n | cut -d " " -f2- )
awk '!(count[$0]++)' $RESULT > better-$RESULT
This is the error I receive after running the program:
./wordlist-cleaner.sh: fork: Cannot allocate memory
First post, I hope I formatted it correctly.
You didn't describe your intentions or desired output, but I guess this may do what you want:
awk '{print length, $0}' "$WORDLIST" | sort -n | cut -d " " -f2- | uniq > better-RESULT
Notice that it's better-RESULT instead of better-$RESULT as you don't want that as a filename.
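A runnable sketch of that pipeline, with a made-up sample wordlist standing in for the real one:

```shell
#!/bin/bash
# Sort words by length, strip the length field again, then drop
# duplicates while keeping the sorted order. Sample data is fabricated.
WORDLIST=wordlist.txt
printf 'banana\napple\nbanana\nfig\n' > "$WORDLIST"
awk '{print length, $0}' "$WORDLIST" | sort -n | cut -d " " -f2- \
  | awk '!seen[$0]++' > better-RESULT
cat better-RESULT    # fig, apple, banana (shortest first, duplicates gone)
```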
Yeah, okay, I got it to run successfully. I was trying to clean up wordlists I was downloading off the net. I have some knowledge of basic variable usage in Bash, but not enough of text-manipulation commands like sed or awk. Thanks for the support.

Extract specific string from line with standard grep,egrep or awk

I'm trying to extract a specific string from grep output.
uci show minidlna
produces a large list
.
.
.
minidlna.config.enabled='1'
minidlna.config.db_dir='/mnt/sda1/usb/db'
minidlna.config.enable_tivo='1'
minidlna.config.wide_links='1'
.
.
.
so I tried to narrow down what I wanted by running
uci show minidlna | grep -oE '\bdb_dir=\S+'
This narrows the output to
db_dir='/mnt/sda1/usb/db'
What I want is to output only
/mnt/sda1/usb/db
without the quotes and without the leading "db_dir=", so I can run rm /mnt/sda1/usb/db/file.db
I've used the answers found here:
How to extract string following a pattern with grep, regex or perl
and that's as close as I got.
EDIT: after using Ed Morton's awk command I needed to pass the output to the rm command.
I used:
| ( read DB; rm "$DB"/files.db )
read DB reads the output into the variable DB.
( ... ) groups the commands.
rm "$DB"/files.db deletes the file files.db.
Is this what you're trying to do?
$ awk -F"'" '/db_dir/{print $2}' file
/mnt/sda1/usb/db
That will work in any awk in any shell on every UNIX box.
If that's not what you want then edit your question to clarify your requirements and post more truly representative sample input/output.
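For reference, the same command run against a made-up file containing one uci-style line from the question:

```shell
# Split each line on single quotes; on lines matching db_dir, field 2 is
# whatever sat between the quotes. Sample file contents are fabricated.
printf "minidlna.config.db_dir='/mnt/sda1/usb/db'\n" > file
awk -F"'" '/db_dir/{print $2}' file    # prints /mnt/sda1/usb/db
```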
Using sed with some effort to avoid single quotes:
sed -n 's/^minidlna.config.db_dir=\s*\S\(\S*\)\S\s*$/\1/p' input
Well, so you end up having a string like db_dir='/mnt/sda1/usb/db'.
I would first remove the quotes by piping this to
.... | tr -d "'"
Now you end up with a string like db_dir=/mnt/sda1/usb/db.
Say you have this string stored in a variable named confstr, then
${confstr##*=}
gives you just /mnt/sda1/usb/db: the pattern *= matches everything from the start through the equal sign, and ## removes the longest such prefix.
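Put together, a minimal sketch of those two steps (the string is taken from the question):

```shell
# Step 1: drop the single quotes; step 2: strip everything up to and
# including the last '=' with parameter expansion.
confstr="db_dir='/mnt/sda1/usb/db'"
confstr=$(printf '%s' "$confstr" | tr -d "'")
echo "${confstr##*=}"    # prints /mnt/sda1/usb/db
```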
I would do this:
Once you have extracted your line above into file.txt (or pipe it into this command), split the fields on the quote character. Use printf to generate the rm command and pipe this into bash to execute it.
$ awk -F"'" '{printf "rm %s/file.db\n", $2}' file.txt | bash
With your original command:
$ uci show minidlna | grep -oE '\bdb_dir=\S+' | \
awk -F"'" '{printf "rm %s/file.db\n", $2}' | bash

xargs sed and command substitution in bash

I'm trying to pass an xargs string replacement into a sed expression inside a command substitution; here's the non-working code:
CALCINT=$CALCINT$(seq $CALCLINES | xargs -Iz echo $CALCINT' -F "invoiceid'z'="'$(sed -n '/invoiceid'z'/s/.*name="invoiceid'z'"\s\+value="\([^"]\+\).*/\1/p' output.txt))
Everything works up until the sed inside the second command substitution. The 'z' should be a number from 1 to 20 based on the $CALCLINES variable. I know it has something to do with not escaping properly for sed, but I'm having trouble wrapping my head around how sed wants things escaped in this situation.
Here's the surrounding lines of code:
curl -b mycookiefile -c mycookiefile http://localhost/calcint.php > output.txt
CALCLINES=`grep -o 'class="addinterest"' output.txt | wc -l`
CALCINT=$CALCINT$(seq $CALCLINES | xargs -Iz echo $CALCINT' -F "invoiceid'z'="'$(sed -n '/invoiceid17/s/.*name="invoiceid17"\s\+value="\([^"]\+\).*/\1/p' output.txt))
echo $CALCINT
Output: (What I get now)
-F "invoiceid1=" -F "invoiceid2=" -F "invoiceid3=" -F "invoiceid4=" -F "invoiceid5=" -F "invoiceid6=" -F "invoiceid7=" -F "invoiceid8=" -F "invoiceid9=" -F "invoiceid10=" -F "invoiceid11=" -F "invoiceid12=" -F "invoiceid13=" -F "invoiceid14=" -F "invoiceid15=" -F "invoiceid16=" -F "invoiceid17=" -F "invoiceid18=" -F "invoiceid19=" -F "invoiceid20="
What I'm hoping to see as output is something like this
-F "invoiceid1=2342" -F "invoiceid2=456456" -F "invoiceid3=78987" ...etc etc
-------------------------EDIT-----------------------
FWIW...here's the output.txt and other things I've tried.
for i in $(seq -f "%02g" ${CALCLINES});do
sed -n "/interest$i/s/.*name=\"interest$i\"\s\+value=\"\([^\"]\+\).*/\1/p" output.txt > output2.txt
done
output2.txt contains nothing
Thanks to @janos's response for clearing things up, but taking a step back makes it clear to me that the root of the issue is that I'm struggling to get the invoice ids out. It's dynamically generated HTML ("...name="invoiceid7" value="556"...") so there isn't anything consistent in those particular tags that I can grep on, which is why I was counting another tag that IS consistent and then trying to use a variable sed expression to deduce the tag name and extract the value.
And here's output.txt: https://pastebin.com/ewUaddVi
------UPDATE-----
Working solution
Stuff sed into a loop. Note how I had to close and reopen single quotes to interpolate variables into the sed string. That is well documented elsewhere on here. :)
for i in $(seq ${CALCLINES});do
e="interest$i"
CALCINT=$CALCINT' -F "'$e'='
CALCINT=$CALCINT$(sed -n '/'$e'/s/.*name="'$e'"\s\+value="\([^"]\+\).*/\1/p' output.txt)'"'
done
Please read through the comments on the solution below; there is a cleaner way of doing this.
Your current approach cannot work, specifically this part:
... | xargs -Iz echo -F "invoiceid'z'="$(sed ...)
The problem is that $(sed ...) will not be evaluated for each input line during the execution of xargs.
The shell evaluates it once, before it actually runs xargs,
and you need dynamic values from your input there.
You can make this work by taking a different approach:
Extract the invoice ids. For example, write a grep or sed pipeline that produces as output simply the list of invoice ids
Transform the invoice list to the -F "invoiceidNUM=..." form that you need
For the second step, Awk could be practical. The script could be something like this:
curl -b mycookiefile -c mycookiefile http://localhost/calcint.php > output.txt
args=$(sed ... output.txt | awk '{ print "-F \"invoice" NR "=" $0 "\"" }')
echo $args
For example if the sed step produces 2342, 456456, 78987, then the output will be:
-F "invoice1=2342" -F "invoice2=456456" -F "invoice3=78987"
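A minimal sketch of that two-step approach, with made-up output.txt contents standing in for the real HTML (the sed pattern assumes name="invoiceidN" value="..." pairs as shown in the question):

```shell
# Step 1: sed extracts just the value of each invoiceid attribute pair.
# Step 2: awk wraps each value into the -F "invoiceN=..." form.
printf 'name="invoiceid1" value="2342"\nname="invoiceid2" value="456456"\n' > output.txt
args=$(sed -n 's/.*name="invoiceid[0-9]*"[[:space:]]*value="\([^"]*\)".*/\1/p' output.txt \
  | awk '{ printf "-F \"invoice%d=%s\" ", NR, $0 }')
echo $args    # -F "invoice1=2342" -F "invoice2=456456"
```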

Linux commands to output part of input file's name and line count

What Linux commands would you use, successively, on a bunch of files to count the lines in each file and write to an output file a line containing part of the input file's name together with the line count? For example, if we were looking at the file LOG_Yellow and it had 28 lines, the output file would have a line like this (Yellow and 28 are tab-separated):
Yellow 28
wc -l [filenames] | grep -v " total$" | sed s/[prefix]//
The wc -l generates the output in almost the right format; grep -v removes the "total" line that wc generates for you; sed strips the junk you don't want from the filenames.
wc -l * | head --lines=-1 > output.txt
produces output like this:
linecount1 filename1
linecount2 filename2
I think you should be able to work from here to extend to your needs.
edit: since I haven't seen the rules for your name extraction, I still leave the full name. However, unlike other answers I'd prefer to use head rather than grep, which not only should be slightly faster, but also avoids filtering out files named total*.
edit2 (having read the comments): the following does the whole lot:
wc -l * | head --lines=-1 | sed s/LOG_// | awk '{print $2 "\t" $1}' > output.txt
wc -l *| grep -v " total"
sends
28 Yellow
You can reverse the fields if you want (with awk, provided you don't have spaces in file names):
wc -l *| egrep -v " total$" | sed s/[prefix]//
| awk '{print $2 " " $1}'
Short of writing the script for you:
'for' for looping through your files.
'echo -n' for printing the current file name.
'wc -l' for finding out the line count.
And don't forget to redirect ('>' or '>>') your results to your output file.
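The loop those steps describe could be sketched like this; the LOG_Yellow sample file and the LOG_ prefix rule are taken from the question's example:

```shell
# Create a 28-line sample file, then emit "name<TAB>count" per file,
# stripping the LOG_ prefix. %d also normalizes the padding some wc
# implementations put before the count.
for i in $(seq 28); do echo "line $i"; done > LOG_Yellow
for f in LOG_*; do
  printf '%s\t%d\n' "${f#LOG_}" "$(wc -l < "$f")"
done > output.txt
cat output.txt    # Yellow	28
```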

How do you pipe input through grep to another utility?

I am using 'tail -f' to follow a log file as it's updated; next I pipe the output of that to grep to show only the lines containing a search term ("org.springframework" in this case); finally I'd like to pipe the output from grep to a third command, 'cut':
tail -f logfile | grep org.springframework | cut -c 25-
The cut command would remove the first 24 characters of each line for me if it could get the input from grep! (It works as expected if I eliminate 'grep' from the chain.)
I'm using cygwin with bash.
Actual results: when I add the second pipe to connect to the 'cut' command, it hangs, as if waiting for input.
Assuming GNU grep, add --line-buffered to your command line, e.g.
tail -f logfile | grep --line-buffered org.springframework | cut -c 25-
Edit:
I see grep buffering isn't the only problem here, as cut doesn't do line-wise buffering.
you might want to try replacing it with something you can control, such as sed:
tail -f logfile | sed -u -n -e '/org\.springframework/ s/^.\{0,24\}//p'
or awk
tail -f logfile | awk '/org\.springframework/ {print substr($0, 25); fflush()}'
On my system, about 8K was buffered before I got any output. This sequence worked to follow the file immediately:
tail -f logfile | while read line ; do echo "$line"| grep 'org.springframework'|cut -c 25- ; done
What you have should work fine -- that's the whole idea of pipelines. The only problem I see is that, in the version of cut I have (GNU coreutils 6.10), you should use the syntax cut -c 25- (i.e. a minus sign, not a plus sign) to remove the first 24 characters.
You're also searching for different patterns in your two examples, in case that's relevant.
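A sketch with finite input instead of tail -f (GNU grep's --line-buffered assumed; the sample log line is made up). With finite input the pipeline drains on EOF anyway, but under a live tail the option is what keeps lines flowing through the pipe as they arrive:

```shell
# grep keeps only matching lines and flushes each one immediately;
# cut then drops the first 24 characters of every surviving line.
printf 'org.springframework.core: started\nunrelated line\n' \
  | grep --line-buffered org.springframework | cut -c 25-
# prints ": started" (char 25 onward of the matching line)
```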
