I have a text document that contains URLs, all written the same way:
https://google.com
https://youtube.com
This code should read each line and get the HTTP status for each URL in the file, but it can't find the URL, I guess:
exec 0<$1 # (where $1 is the input-file parameter)
while IFS='' read -r line
response=$(curl --write-out %{http_code} --silent --output /dev/null $line)
[[ -n "$line" ]]
do
echo "Text read from file: $line"
Save this code as HtmlStatus.sh.
Create a file, for example test.txt:
https://google.com
https://youtube.com
https://facebook.com
#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
echo -n "Read Url : $line"
curl -I $line | grep HTTP
done < "$1"
This prints the HTTP status code for each URL line in test.txt.
Run it in the terminal:
chmod +x HtmlStatus.sh
./HtmlStatus.sh test.txt
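For reference, the --write-out approach from the question also works once the loop syntax is fixed. A minimal sketch that prints just the numeric status code for each URL in the file given as $1:
#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
    [[ -z "$line" ]] && continue  # skip blank lines
    # --write-out '%{http_code}' prints only the status; the body goes to /dev/null
    status=$(curl --silent --output /dev/null --write-out '%{http_code}' "$line")
    echo "$line : $status"
done < "$1"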
So the file contains data like:
# entertainment
youtube.com
twitch.tv
# research
google.com
wikipedia.com
...
and I would like to pass that file as an argument to a script that would open all lines that don't start with a #. Any clues on how to do this?
So far, what I have:
for Line in $Lines
do
case "# " in $Line start firefox $Line;; esac
done
Some code that could be useful (?):
while read line; do chmod 755 "$line"; done < file.txt
grep -e '^[^#]' inputfile.txt | xargs -d '\n' firefox --new-tab
grep -e '^[^#]': prints all lines that don't start with a hash (comments).
xargs -d '\n' firefox --new-tab: passes each remaining line as an argument to Firefox.
This removes both the lines that start with # and the empty lines.
#!/bin/bash
#
while read -r line
do
if [[ $(echo "$line" | grep -Ev "^#|^$") ]]
then
firefox --new-tab "$line" &
fi
done <file.txt
Skip the empty lines and the lines that start with a #:
#!/usr/bin/env bash
while IFS= read -r url; do
[[ "$url" == \#* || -z "$url" ]] && continue
firefox --new-tab "$url" &
done < file.txt
awk 'NF && $1!="#"{print "firefox --new-tab", $0, "&"}' file.txt | bash
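If you'd rather launch a single firefox process instead of one per URL, here is a small sketch using the same skip logic, assuming the file is passed as $1 as the question asks:
#!/usr/bin/env bash
# collect every non-comment, non-empty line from the file given as $1
urls=()
while IFS= read -r line; do
    [[ "$line" == \#* || -z "$line" ]] && continue
    urls+=("$line")
done < "$1"
# one firefox call, one tab per URL
[[ ${#urls[@]} -gt 0 ]] && firefox --new-tab "${urls[@]}"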
I'm trying to name a file with the content of a variable using this script:
#!/bin/bash
IFS=$'\n'
file=mails.txt
lines=$(cat ${file})
for line in ${lines}; do
echo "'${line}'"
final=echo "'$line'" |sed -e 's/^\.\*\(.*\)\*\./\1/'
curl -XGET "https://google.es" > $final.'.txt'
sleep 2
done
IFS=""
exit ${?}
The content of mails.txt is something like:
.*john*.
.*peter*.
And when I do that I get the error ./testcur.sh: line 7: .john*.: command not found, and the same with peter. How can I name the files john.txt and peter.txt?
You have three mistakes:
"'$line'" -> "$line"
final=echo -> final=$(echo ...)
$final.'.txt' -> $final'.txt'
#!/bin/bash
IFS=$'\n'
file=mails.txt
lines=$(cat ${file})
for line in ${lines}; do
echo "'${line}'"
final=$(echo "$line" | sed -e 's/^\.\*\(.*\)\*\./\1/')
curl -XGET "https://google.es" > ${final}'.txt'
sleep 2
done
IFS=""
exit ${?}
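As an aside, the sed call can be replaced with bash parameter expansion, avoiding a subshell per line. A minimal sketch, assuming every line has the exact .*name*. shape:
# strip the leading ".*" and the trailing "*." with parameter expansion
line='.*john*.'
final=${line#.\*}    # remove ".*" from the front -> john*.
final=${final%\*.}   # remove "*." from the end   -> john
echo "$final"        # prints: john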
I have a set of search strings in a file (File1) and a content file (File2). I am trying to loop through all the search strings in File1, get a count of each search string within File2, and output it. I want to automate this and make it generic so I can search through multiple content files. However, I can't seem to get the exact count when I execute this loop: I get a count of 0 for each of the strings, although those strings are in the file. I'm unable to figure out what I am doing wrong and could use some help!
Below is the script I came up with:
#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
count=$(echo cat "$2" | grep -c "$line")
echo "$count - $line"
done < "$1"
Command I am using to run this script:
./scanscript.sh File1.log File2.log
I know the strings are there, since I ran the command below separately and got the right value. It works by itself, but I want to put it in a loop:
cat File2.log | grep -c "Search String"
Sample Data for File 1 (Search Strings):
/SERVER_NAME/Root/DEV/Database/NJ-CONTENT/Procs/
/SERVER_NAME3/Root/DEV/Database/NJ-CONTENT/Procs/
Sample Data for File 2 (Content File):
./SERVER_NAME/Root/DEV/Database/NJ-CONTENT/Procs/test.test_proc.sql:29:
./SERVER_NAME2/Root/DEV/Database/NJ-CONTENT/Procs/test.test_proc.sql:100:
./SERVER_NAME3/Root/DEV/Database/NJ-CONTENT/Procs/test.test_proc.sql:143:
./SERVER_NAME4/Root/DEV/Database/NJ-CONTENT/Procs/test.test_proc.sql:223:
./SERVER_NAME5/Root/DEV/Database/NJ-CONTENT/Procs/test.test_proc.sql:5589:
The problem is this line:
count=$(echo cat "$2" | grep -c "$line")
That should be changed to:
count=$(grep -Fc "$line" "$2")
Also note that -F is used for a fixed-string search instead of a regex search.
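To see why the original line always printed 0: echo cat "$2" does not run cat at all; it just prints the literal text cat File2.log, so grep ends up counting matches in that one line of text. For example:
echo cat File2.log | grep -c "/SERVER_NAME/Root/DEV/Database/NJ-CONTENT/Procs/"
outputs 0, because the search string never appears in the words "cat File2.log".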
Full code:
while IFS='' read -r line || [[ -n "$line" ]]; do
    count=$(grep -Fc "$line" "$2")
    echo "$count - $line"
done < "$1"
Run it as:
./scanscript.sh File1.log File2.log
Output:
1 - /SERVER_NAME/Root/DEV/Database/NJ-CONTENT/Procs/
1 - /SERVER_NAME3/Root/DEV/Database/NJ-CONTENT/Procs/
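As a side note, if File1 has many search strings, a single awk pass over File2 avoids running grep once per string. A sketch, assuming fixed-string (substring) matching as above:
# first file: remember each search string; second file: count lines containing it
awk 'NR==FNR { pat[$0]; next }
     { for (p in pat) if (index($0, p)) cnt[p]++ }
     END { for (p in pat) print cnt[p]+0, "-", p }' File1.log File2.log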
#!/bin/bash
file="/home/vdabas2/file2"
while IFS='' read -r line || [[ -n "$line" ]];
do
pbreplay -O "$line" >> output
done < "$file"
I am able to read a file line by line, and the output of each processed line is redirected to output using the shell script above. But I need a different output file for each processed line, saved as output1, output2, and so on. So if there are 10 lines in the file being passed as arguments, I need 10 output files.
#!/bin/bash
file="/home/vdabas2/file2"
i=1
while IFS='' read -r line || [[ -n "$line" ]];
do
pbreplay -O "$line" >> output.${i}
((i++))
done < "$file"
Add an increment, for example.
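If you'd rather have names like output01 that sort correctly, printf -v is one option. A small variant of the loop body, assuming fewer than 100 lines:
printf -v name 'output%02d' "$i"    # output01, output02, ...
pbreplay -O "$line" > "$name"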
I have used the code below to fetch some values, but the grep in the code is not showing any results.
#!/bin/bash
file=test.txt
while IFS= read -r cmd;
do
check_address=`grep -c $cmd music.cpp`
if [ $check_address -ge 1 ]; then
echo
else
grep -i -n "$cmd" music.cpp
echo $cmd found
fi
done < "$file"
Note: there are no carriage returns in my text file or .sh file.
I checked using:
bash -x check.sh
It just shows:
+grep -i -n "$cmd" music.cpp
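One thing worth ruling out, as a guess: $cmd is unquoted in the backtick grep, so an empty line in test.txt expands to nothing, making grep treat music.cpp as the pattern and read from the loop's stdin, swallowing the rest of the file. A more defensive sketch of that check:
# skip blank lines: an empty $cmd would make grep read the loop's stdin
[[ -z "$cmd" ]] && continue
# quote the pattern and use $(...) instead of backticks
check_address=$(grep -c -- "$cmd" music.cpp)
Running cat -A test.txt can also reveal hidden characters such as \r (shown as ^M), even if none are expected.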