extract file content using a bash script - bash

It has been a long time since my last bash script.
I'm just trying to extract the content of a file from the start variable to the stop one.
My source file is night4.info and it contains a list of .jpg files. The structure of this file is similar to:
./2014-11-02/18h/00mn/2014-11-02T18-00-00.048000-depth.jpg
./2014-11-02/18h/00mn/2014-11-02T18-00-00.182000-depth.jpg
./2014-11-02/18h/00mn/2014-11-02T18-00-00.316000-depth.jpg
This is the code so far:
#!/bin/bash
start=$(grep -n "$1" night4.info | cut -d : -f 1)
stop=$(grep -n "$2" night4.info | cut -d : -f 1)
echo "1" >> list.info
sed -n -e "$start,$stop p" night4.info >> list.info
And this is how I'm running my script:
./script1.sh 2014-11-02T18-00-00.048000 2014-11-03T06-59-59.981000
There is no error message, but the code doesn't give the right output.

You could use a Perl one-liner with the range operator:
perl -ne "print if /\Q$1\E/../\Q$2\E/" night4.info >> list.info
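If you would rather stay with standard Unix tools, a rough awk equivalent (a sketch, assuming each timestamp matches exactly one line; index() is used so the dots in the names are not treated as regex metacharacters):
awk -v a="$1" -v b="$2" 'index($0, a) { p = 1 } p { print } index($0, b) { p = 0 }' night4.info >> list.info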

Related

make the bash script faster

I have a fairly large list of websites in "file.txt" and want to check whether the words "Hello World!" appear on each site in the list, using a loop and curl.
i.e. "file.txt" contains:
blabla.com
blabla2.com
blabla3.com
Then my code:
#!/bin/bash
put() {
    printf "list : "
    read list
    run=$(cat $list)
}
put
scan_list() {
    for run in $(cat $list); do
        if [[ $(curl -skL ${run}) =~ "Hello World!" ]]; then
            printf "${run} Hello World! \n"
        else
            printf "${run} No Hello:( \n"
        fi
    done
}
scan_list
This takes a lot of time. Is there a way to make the checking process faster?
Use xargs:
tr '\12' '\0' < file.txt |
    xargs -0 -r -n 1 -t -P 3 sh -c '
        if curl -skL "$1" | grep -q "Hello World!"; then
            echo "$1 Hello World!"
            exit
        fi
        echo "$1 No Hello:("
    ' _
Use tr to convert the newlines in file.txt to nulls (\0).
Pass the result through xargs with the -0 option to split the input on those nulls.
The -r option prevents the command from being run if the input is empty. It is a GNU extension, so on macOS or *BSD you will need to check that file.txt is not empty before running.
The -n 1 option permits only one URL per invocation.
The -t option is for debugging; it prints each command before it is run.
The -P 3 option allows 3 commands to run in parallel.
Using sh -c with a single-quoted multi-line command, the entries from the file are substituted as $1.
The trailing _ fills in the $0 argument, so our entries become $1.
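A similar sketch using the -a option of GNU xargs to read the file directly, skipping the tr step (assumptions: GNU xargs, and URLs with no embedded whitespace):
xargs -a file.txt -n 1 -P 3 sh -c '
    if curl -skL "$1" | grep -q "Hello World!"; then
        echo "$1 Hello World!"
    else
        echo "$1 No Hello:("
    fi
' _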

Concatenate the output of 2 commands in the same line in Unix

I have a command like the one below:
md5sum test1.txt | cut -f 1 -d " " >> test.txt
I want the output of the above command prefixed with File_CheckSum:
Expected output: File_CheckSum: <checksumvalue>
I tried the following:
echo 'File_Checksum:' >> test.txt | md5sum test.txt | cut -f 1 -d " " >> test.txt
but I get the result as:
File_Checksum:
adbch345wjlfjsafhals
I want the entire output on one line:
File_Checksum: adbch345wjlfjsafhals
echo writes a newline after it finishes writing its arguments. Some versions of echo allow a -n option to suppress this, but it's better to use printf instead.
You can use a command group to concatenate the standard output of your two commands:
{ printf 'File_Checksum: '; md5sum test.txt | cut -f 1 -d " "; } >> test.txt
Note that there is a race condition here: you can theoretically write to test.txt before md5sum is done reading from it, causing you to checksum more data than you intended. (Your original command mentions test1.txt and test.txt as separate files, so it's not clear if you are really reading from and writing to the same file.)
You can use command grouping to have a list of commands executed as a unit and redirect the output of the group at once:
{ printf 'File_Checksum: '; md5sum test1.txt | cut -f 1 -d " "; } >> test.txt
printf "%s: %s\n" "File_Checksum:" "$(md5sum < test1.txt | cut ...)" > test.txt
Note that if you are trying to compute the hash of test.txt (the same file you are trying to write to), this changes things significantly.
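For that same-file case, a minimal sketch: capture the checksum in a variable first, so md5sum finishes reading test.txt before anything is appended to it:
# hash first, append second; avoids reading and writing test.txt at the same time
sum=$(md5sum test.txt | cut -d ' ' -f 1)
printf 'File_Checksum: %s\n' "$sum" >> test.txt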
Another option is:
{
printf "File_Checksum: "
md5sum ...
} > test.txt
Or:
exec > test.txt
printf "File_Checksum: "
md5sum ...
but be aware that all subsequent commands will also write their output to test.txt. The typical way to restore stdout is:
exec 3>&1
exec > test.txt # Redirect all subsequent commands to `test.txt`
printf "File_Checksum: "
md5sum ...
exec >&3 # Restore original stdout
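If the spare descriptor is no longer needed after that, it can be closed as well:
exec 3>&-   # close file descriptor 3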
You can also use the && operator, which runs the second command only if the first one succeeds:
e.g. mkdir example && cd example
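Applied to the question, a sketch chaining the two writes with && (using File_Checksum and test1.txt as in the original command):
printf 'File_Checksum: ' >> test.txt && md5sum test1.txt | cut -f 1 -d " " >> test.txt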

Concatenate String and Variable in Shell Script

The content of the file is:
#data.conf
ip=127.0.0.1
port=7890
delay=10
key=1.2.3.4
debug=true
Shell Script:
#!/bin/bash
typeset -A config
config=()
config_file_path="./data.conf"
cmd="java -jar ./myprogram.jar"

# This section reads the file and puts its content in the config variable
while read line
do
    #echo "$line"
    if echo $line | grep -F = &>/dev/null
    then
        key=$(echo "$line" | cut -d '=' -f 1)
        config[$key]=$(echo "$line" | cut -d '=' -f 2)
        echo "$key" "${config["$key"]}"
    fi
done < "$config_file_path"

cmd="$cmd -lh ${config["ip"]} -lp ${config["port"]} -u ${config["debug"]} -hah \"${config["key"]}\" -hap ${config["delay"]}"
echo $cmd
Expected output:
java -jar myprogram.jar -lh 127.0.0.1 -lp 7890 -u true -hah "1.2.3.4" -hap 10 -b
Output:
Every time there is some unexpected output, e.g.:
-lp 7890rogram.jar
It looks like it is overwriting the same line again and again.
With respect to the comments given, and to add automatic data cleansing within the script itself, you could, following How to convert DOS/Windows newline (CRLF) to Unix newline (LF) in a Bash script? and Remove carriage return in Unix, place
# This section will clean the input config file
sed -i 's/\r$//' "${config_file_path}"
within your script. This will prevent the error in future runs.
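Alternatively, if you would rather not rewrite the config file in place, a sketch that strips the carriage return from each line inside the existing read loop instead:
while read line
do
    line=${line%$'\r'}   # drop a trailing carriage return, if present
    # ... existing key/config parsing goes here ...
done < "$config_file_path"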

Grep command returns nothing in shell script

I am trying to extract rows that match strings listed in another file, but the grep command returns nothing.
#!/bin/bash
input="export.txt"
file="filename.csv"
val=`head -n 1 $file`
echo $val>export.csv
cat export.txt | while read line
do
    val=`echo $line | tr -d '\n'`
    echo $val
    valu=`grep $val $file`
    echo $valu
done
You can simply do this:
grep -f list.txt input.txt
which will extract all the lines from input.txt that match any pattern from list.txt.
If for some reason you want to save each match, you can do it in a Bash array as:
IFS=$'\n' read -d '' -a values <<< "$( grep -f list.txt input.txt )"
And then you can print a certain match as:
echo "${values[1]}"

Create files using strings delimited by a specific character in BASH

Suppose we have the following command and its related output:
gsettings list-recursively org.gnome.Terminal.ProfilesList | head -n 1 | grep -oP '(?<=\[).*?(?=\])'
Output:
'b1dcc9dd-5262-4d8d-a863-c897e6d979b9', 'ca4b733c-53f2-4a7e-8a47-dce8de182546', '802e8bb8-1b78-4e1b-b97a-538d7e2f9c63', '892cd84f-9718-46ef-be06-eeda0a0550b1', '6a7d836f-b2e8-4a1e-87c9-e64e9692c8a8', '2b9e8848-0b4a-44c7-98c7-3a7e880e9b45', 'b23a4a62-3e25-40ae-844f-00fb1fc244d9'
I need to use the gsettings command in a script and create filenames based on the output of the gsettings command. For example, one file name should be
b1dcc9dd-5262-4d8d-a863-c897e6d979b9
the next one:
ca4b733c-53f2-4a7e-8a47-dce8de182546
and so on.
How can I do this?
Another solution... just pipe the output of your command to:
your_command | sed "s/[ ']//g" | tr -d '\n' | xargs -d, touch
(the tr -d '\n' removes the trailing newline so it does not become part of the last filename)
You can use process substitution to read your gsettings output and store it in an array:
IFS=', ' read -r -a array < <(gsettings list-recursively org.gnome.Terminal.ProfilesList | head -n 1 | grep -oP '(?<=\[).*?(?=\])')
for f in "${array[@]}"
do
    file=$(echo "$f" | tr -d "'")   # removes the surrounding single quotes
    touch "$file"
done
