I have a shell script that downloads URLs one by one and checks for updates on static sites. Here is the code:
#!/bin/bash
input="file.in"
while IFS= read -r line
do
    # skip comment lines
    case "$line" in \#*) continue ;; esac
    # make a filename-safe copy of the URL (slashes become backslashes)
    line2="${line//\//\\}"
    if wget -q "$line" -O index.html; then
        if [ -f "$line2".html ]; then
            cmp --silent "$line2".html index.html || echo "$line"
        else
            echo "$line INIT"
        fi
    else
        echo "$line FAIL"
    fi
    mv index.html "$line2".html
done <"$input"
file.in is the list of URLs. Example:
#List of addresses
http://www.google.com
http://www.spotify.com
http://www.flickr.com
http://www.soundcloud.com
https://www.facebook.com
https://en.wikipedia.org/wiki/Linux
I want to change the script to download all URLs at once, saving them the same way, with wget or curl. Thanks!
A naive approach is to run each download in a background subshell and wait for them all to finish:
while IFS= read -r line
do
    [[ "$line" == \#* ]] && continue
    (
        line2="${line//\//\\}"
        # give each job its own temp file; with everything running in
        # parallel, a shared index.html would be clobbered
        tmp="$line2".tmp
        if wget -q "$line" -O "$tmp"; then
            if [ -f "$line2".html ]; then
                cmp --silent "$line2".html "$tmp" || echo "$line"
            else
                echo "$line INIT"
            fi
            mv "$tmp" "$line2".html
        else
            echo "$line FAIL"
            rm -f "$tmp"
        fi
    ) &
done <"$input"
wait
echo "all done"
My bash script is not working properly, and I don't know why.
test.sh
#!/bin/bash
CLEAN_FILES=".slug-post-clean"
while read file; do
    [[ ! -n "${file}" ]] && continue
    echo $file
    echo "if [[ -d ${file}]]; then exists ";
    if [[ -d "${file}" ]]; then
        echo "file exists"
    fi
done < ${CLEAN_FILES}
.slug-post-clean
src
public
node_modules/.cache
output
src
]]; then exists
public
]]; then exists
node_modules/.cache
]]; then exists dules/.cache
But this code works
if [[ -d "src" ]]; then
echo "if works"
fi
output
if works
My Ubuntu version is 20.04 LTS.
Does anyone know what's happening?
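A plausible cause, inferred from the way ]]; then exists overwrites the start of each echoed line: .slug-post-clean may have Windows (CRLF) line endings, so every $file ends in a carriage return and never matches a directory name. A minimal sketch of the loop with that assumption handled:

#!/bin/bash
CLEAN_FILES=".slug-post-clean"
while read -r file; do
    # strip a trailing carriage return, if any, before testing the name
    file=${file%$'\r'}
    [[ -z "${file}" ]] && continue
    if [[ -d "${file}" ]]; then
        echo "${file} exists"
    fi
done < "${CLEAN_FILES}"

Running the file through dos2unix (or tr -d '\r') once would fix it at the source.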
I searched and couldn't find anything; maybe I don't understand the problem properly.
I have a bash function that reads files in the current dir and its subdirectories. I'm trying to arrange the text and analyze the data, but somehow I'm losing lines when I use a pipeline.
The code:
function recursiveFindReq {
    for file in *.request; do
        if [[ -f "$file" ]]; then
            echo handling "$file"
            echo ---------------with pipe-----------------------
            cat "$file" | while read -a line; do
                if (( ${#line} > 1 )); then
                    echo ${line[*]}
                fi
            done
            echo ----------------without pipe----------------------
            cat "$file"
            echo
            echo num of lines: `cat "$file" | wc -l`
            echo --------------------------------------
        fi
    done
    for dir in ./*; do
        if [[ -d "$dir" ]]; then
            echo cd to $dir
            cd "$dir"
            recursiveFindReq "$1"
            cd ..
        fi
    done
}
The output (a screenshot in the original post, not reproduced here) loses lines even when they meet the requirements; two red arrows in the screenshot mark where the info disappears.
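A hedged guess, since the screenshot isn't reproduced here: while read returns a nonzero status for a final line that lacks a trailing newline, so cat "$file" | while read -a line silently drops such a last line; separately, ${#line} is the length of the first word only, not of the whole line. A sketch of the inner loop with both points addressed (the intended "longer than one" test is assumed from the question):

# keep a final line that has no trailing newline, and skip the pipe so
# the loop runs in the current shell
while read -r -a line || (( ${#line[@]} )); do
    # ${#line} is the length of the first word; ${#line[@]} would be
    # the word count, if that is what was meant
    if (( ${#line} > 1 )); then
        echo "${line[*]}"
    fi
done < "$file"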
I have a bash script which is intended to be idempotent. It creates symlinks, and it should be okay if the links are already there.
Here's an extract:
L="/var/me/foo"
if [[ -e "$L" ]] && ! [[ -L "$L" ]];
then
echo "$L exists but is not a link."
exit 1;
elif [[ -e "$L" ]] && [[ -L "$L" ]];
then
echo "$L exists and is a link."
else
ln -s "/other/place" "$L" ||
{
echo "Could not chown ln -s for $L";
exit 1;
}
fi
The file /var/me/foo is already a symlink pointing to /other/place, according to ls -l.
Nevertheless, when I run this script, the if and elif branches are not entered; instead we go into the else and attempt the ln, which fails because the file already exists.
Why do my tests not work?
Because you only check [[ -L "$L" ]] when [[ -e "$L" ]] is true, and [[ -e "$L" ]] dereferences the link (it returns false when the link's target doesn't exist), you never detect links that point to locations that don't exist; they fall through to the else branch.
The logic below is a bit more comprehensive.
link=/var/me/foo
dest=/other/place

# because [[ ]] is in use, quotes are not mandatory
if [[ -L $link ]]; then
    echo "$link exists as a link, though its target may or may not exist" >&2
elif [[ -e $link ]]; then
    echo "$link exists but is not a link" >&2
    exit 1
else
    ln -s "$dest" "$link" || { echo "yadda yadda" >&2; exit 1; }
fi
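To see the difference concretely, a quick shell session sketch (path names illustrative):

$ ln -s /no/such/target dangling
$ [[ -e dangling ]] && echo "-e: exists" || echo "-e: not found"
-e: not found
$ [[ -L dangling ]] && echo "-L: it is a link"
-L: it is a link

[[ -e ]] follows the link to its (missing) target, so only [[ -L ]] reliably reports the link itself.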
Sorry for asking this question again. I have already received an answer, but it used find, and unfortunately I need to write this without using any such predefined commands.
I am trying to write a script that loops recursively through the subdirectories of the current directory and checks the file count in each directory. If the file count is greater than 10, it should write the names of those files to a file named "BigList"; otherwise it should write them to "ShortList". The output should look like:
---<directory name>
<filename>
<filename>
<filename>
<filename>
....
---<directory name>
<filename>
<filename>
<filename>
<filename>
....
My script only works if the subdirectories don't contain subdirectories in turn.
I am confused because it doesn't work as I expect.
Here is my script:
#!/bin/bash
parent_dir=""
if [ -d "$1" ]; then
    path=$1;
else
    path=$(pwd)
fi
parent_dir=$path

loop_folder_recurse() {
    local files_list=""
    local cnt=0
    for i in "$1"/*; do
        if [ -d "$i" ]; then
            echo "dir: $i"
            parent_dir=$i
            echo before recursion
            loop_folder_recurse "$i"
            echo after recursion
            if [ $cnt -ge 10 ]; then
                echo -e "---"$parent_dir >> BigList
                echo -e $file_list >> BigList
            else
                echo -e "---"$parent_dir >> ShortList
                echo -e $file_list >> ShortList
            fi
        elif [ -f "$i" ]; then
            echo file $i
            if [ $cur_fol != $main_pwd ]; then
                file_list+=$i'\n'
                cnt=$((cnt + 1))
            fi
        fi
    done
}

echo "Base path: $path"
loop_folder_recurse $path
How can I modify my script to produce the desired output?
This bash script produces the output that you want:
#!/bin/bash

bigfile="$PWD/BigList"
shortfile="$PWD/ShortList"
shopt -s nullglob

loop_folder_recurse() {
    (
        [[ -n "$1" ]] && cd "$1"
        # count the regular files here and collect their names
        count=0
        files=''
        for j in *; do
            if [[ -f "$j" ]]; then
                files+="$j"$'\n'
                ((++count))
            fi
        done
        if ((count > 10)); then
            outfile="$bigfile"
        else
            outfile="$shortfile"
        fi
        echo "---$PWD" >> "$outfile"
        echo "$files" >> "$outfile"
        # then recurse into each subdirectory
        for i in */; do
            loop_folder_recurse "$i"
        done
    )
}

loop_folder_recurse
Explanation
shopt -s nullglob is used so that when a directory has no matching entries, the loops simply do not run. The body of the function is within ( ) so that it runs within a subshell; this is for convenience, as it means the function is back in the previous directory when the subshell exits. Each directory writes its own --- header and file list before recursing into its subdirectories.
Hopefully the rest of the script is fairly self-explanatory but if not, please let me know and I will be happy to provide additional explanation.
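For instance, run from the directory you want to scan (the tree, names, and script path here are illustrative):

$ cd ~/tree-to-scan
$ /path/to/listfiles.sh
$ cat ShortList
---/home/user/tree-to-scan
notes.txt

---/home/user/tree-to-scan/docs
readme.md

Directories holding more than 10 files land in BigList instead.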
for var in "$@"
do
    if test -z $var
    then
        echo "missing operand"
    elif [ -d $var ]
    then
        echo "This is a directory"
    elif [ ! -f $var ]
    then
        echo "The file does not exist"
    else
        basename=$(basename $var)
        dirname=$(readlink -f $var)
        inodeno=$(ls -i $var | cut -d" " -f1)
        read -p "remove regular file $#" input
        if [ $input = "n" ]
        then
            exit 1
        fi
        mv $var "$var"_"$inodeno"
        echo "$basename"_"$inodeno":"$dirname" >> $HOME/.restore.info
        mv "$var"_"$inodeno" $HOME/deleted
    fi
done
Hello, the above code is trying to mimic the rm command in Unix. Its purpose is to remove files.
E.g. if I type bash safe_rm file1, it works; however, if I type
bash safe_rm file1 file2, it prompts me to remove file1 twice and gives me a "unary operator expected" for line 27 (if [ $input = "n" ]).
Why does it not work for two files? Ideally I would like it to prompt me to remove file1 and file2 separately.
Thanks
read -p "remove regular file $#" input
should probably be
read -p "remove regular file $var" input
That's the basic fix: $# expands to the number of arguments, so every prompt shows the same text instead of the current file name. The "unary operator expected" error comes from the unquoted [ $input = "n" ]: if you just press Enter, $input is empty and the test collapses to [ = "n" ]. Quoting it, [ "$input" = "n" ], avoids that.
And this is how I'd prefer to do it:
for T in "$@"; do
    if [[ -z $T ]]; then
        echo "Target is null."
    elif [[ ! -e $T ]]; then
        echo "Target does not exist: $T"
    elif [[ -d $T ]]; then
        echo "Target can't be a directory: $T"
    else
        BASE=${T##*/}
        DIRNAME=$(exec dirname "$T")  ## Could be simpler but not sure how you want to use it.
        INODE_NUM=$(exec stat -c '%i' "$T")
        read -p "Remove regular file $T? "
        if [[ $REPLY == [yY] ]]; then
            # Just copied. Not sure about its logic.
            mv "$T" "${T}_${INODE_NUM}"
            echo "${BASE}_${INODE_NUM}:${DIRNAME}" >> "$HOME/.restore.info"
            mv "${T}_${INODE_NUM}" "$HOME/deleted"
        fi
    fi
done
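With this version, a run over two files prompts once per file (session sketch, file names illustrative):

$ bash safe_rm file1 file2
Remove regular file file1? y
Remove regular file file2? n

Only file1 is moved into $HOME/deleted and recorded in $HOME/.restore.info; file2 is left untouched.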