I'm trying to write a bash script that will download all of the YouTube videos from a playlist and save each one to a filename based on the title of the video itself. So far I have two separate pieces of code that do what I want, but I don't know how to combine them to function as a unit.
This piece of code finds the titles of all of the youtube videos on a given page:
curl -s "$1" | grep '<span class="title video-title "' | cut -d\> -f2 | cut -d\< -f1
And this piece of code downloads the files to a filename given by the youtube video id (e.g. the filename given by youtube.com/watch?v=CsBVaJelurE&feature=relmfu would be CsBVaJelurE.flv)
curl -s "$1" | grep "watch?" | cut -d\" -f4| while read video;
do youtube-dl "http://www.youtube.com$video";
done
I want a script that will output the youtube .flv file to a filename given by the title of the video (in this case BASH lesson 2.flv) rather than simply the video id name. Thanks in advance for all the help.
OK so after further research and updating my version of youtube-dl, it turns out that this functionality is now built directly into the program, negating the need for a shell script to solve the playlist download issue on youtube. The full documentation can be found here: (http://rg3.github.com/youtube-dl/documentation.html) but the simple solution to my original question is as follows:
1) youtube-dl will process a playlist link automatically; there is no need to individually feed it the URLs of the videos contained therein (this negates the need to use grep to search for "watch?" to find the unique video id).
2) there is now an option included to format the filename with a variety of options including:
id: The sequence will be replaced by the video identifier.
url: The sequence will be replaced by the video URL.
uploader: The sequence will be replaced by the nickname of the person who uploaded the video.
upload_date: The sequence will be replaced by the upload date in YYYYMMDD format.
title: The sequence will be replaced by the literal video title.
ext: The sequence will be replaced by the appropriate extension (like flv or mp4).
epoch: The sequence will be replaced by the Unix epoch when creating the file.
autonumber: The sequence will be replaced by a five-digit number that will be increased with each download, starting at zero.
the syntax for this output option is as follows (where NAME is any of the options shown above):
youtube-dl -o '%(NAME)s' http://www.youtube.com/your_video_or_playlist_url
As an example, to answer my original question, the syntax is as follows:
youtube-dl -o '%(title)s.%(ext)s' http://www.youtube.com/playlist?list=PL2284887FAE36E6D8&feature=plcp
Thanks again to those who responded to my question, your help is greatly appreciated.
If you want to use the title from the YouTube page as the filename, you could use the -t option of youtube-dl. If you want to use the title from your "video list" page, and you are sure that there is exactly one watch? URL for every <span class="title video-title"> title, then you can use something like this:
#!/bin/bash
TMPFILE=/tmp/downloader-$$
onexit() {
rm -f $TMPFILE
}
trap onexit EXIT
curl -s "$1" -o $TMPFILE
i=0
# Read via process substitution so the arrays survive the loop
# (a plain pipe would fill them inside a subshell and lose them).
while read -r title; do
    titles[$i]=$title
    ((i++))
done < <(grep '<span class="title video-title "' $TMPFILE | cut -d\> -f2 | cut -d\< -f1)
i=0
while read -r url; do
    urls[$i]="http://www.youtube.com$url"
    ((i++))
done < <(grep "watch?" $TMPFILE | cut -d\" -f4)
i=0; while (( i < ${#urls[@]} )); do
    youtube-dl -o "${titles[$i]}.%(ext)s" "${urls[$i]}"
    ((i++))
done
I have not tested it, because I have no "video list" page to try it against.
The following method works; as an example, it will play Titanic from YouTube.
youtube-downloader.sh:
#!/bin/bash
decode() {
to_decode='s:%([0-9A-Fa-f][0-9A-Fa-f]):\\x\1:g'
printf "%b" `echo $1 | sed 's:&:\n:g' | grep "^$2" | cut -f2 -d'=' | sed -r $to_decode`
}
data=`wget http://www.youtube.com/get_video_info?video_id=$1\&hl=pt_BR -q -O-`
url_encoded_fmt_stream_map=`decode $data 'url_encoded_fmt_stream_map' | cut -f1 -d','`
signature=`decode $url_encoded_fmt_stream_map 'sig'`
url=`decode $url_encoded_fmt_stream_map 'url'`
test $2 && name=$2 || name=`decode $data 'title' | sed 's:+: :g;s:/:-:g'`
test "$name" = "-" && name=/dev/stdout || name="$name.vid"
wget "${url}&signature=${signature}" -O "$name"
youtube-video-url.sh:
#!/usr/bin/env bash
function youtube-video-url {
local field=
local data=
local split="s:&:\n:g"
local decode_str='s:%([0-9A-Fa-f][0-9A-Fa-f]):\\x\1:g'
local yt_url="http://www.youtube.com/get_video_info?video_id=$1"
local grabber=`command -v curl`
local args="-sL"
if [ ! "$grabber" ]; then
grabber=`command -v wget`
args="-qO-"
fi
if [ ! "$grabber" ]; then
echo 'No downloader available.' >&2
test x"${BASH_SOURCE[0]}" = x"$0" && exit 1 || return 1
fi
function decode {
data="`echo $1`"
field="$2"
if [ ! "$field" ]; then
field="$1"
data="`cat /dev/stdin`"
fi
data=`echo $data | sed $split | grep "^$field" | cut -f2 -d'=' | sed -r $decode_str`
printf "%b" $data
}
local map=`$grabber $args $yt_url | decode 'url_encoded_fmt_stream_map' | cut -f1 -d','`
echo `decode $map 'url'`\&signature=`decode $map 'sig'`
}
[ $SHLVL != 1 ] && export -f youtube-video-url
youtube-player.sh, invoked as bash youtube-player.sh saalGKY7ifU:
#!/bin/bash
decode() {
to_decode='s:%([0-9A-Fa-f][0-9A-Fa-f]):\\x\1:g'
printf "%b" `echo $1 | sed 's:&:\n:g' | grep "^$2" | cut -f2 -d'=' | sed -r $to_decode`
}
data=`wget http://www.youtube.com/get_video_info?video_id=$1\&hl=pt_BR -q -O-`
url_encoded_fmt_stream_map=` decode $data 'url_encoded_fmt_stream_map' | cut -f1 -d','`
signature=` decode $url_encoded_fmt_stream_map 'sig'`
url=`decode $url_encoded_fmt_stream_map 'url'`
test $2 && name=$2 || name=`decode $data 'title' | sed 's:+: :g;s:/:-:g'`
test "$name" = "-" && name=/dev/stdout || name="$name.mp4"
# // wget "${url}&signature=${signature}" -O "$name"
mplayer -zoom -fs "${url}&signature=${signature}"
It only relies on bash, wget, and mplayer, which you may already have installed.
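For reference, a rough illustration of what the decode helper does, using a made-up query string:
# Hypothetical call: decode 'title=BASH%20lesson%202&foo=bar' 'title'
# 1. the string is split on '&', one field per line
# 2. the line starting with "title" is kept and the part after '=' extracted
# 3. every %XX escape is rewritten as \xXX and expanded by printf "%b",
#    which prints: BASH lesson 2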
I use this bash script to download a given set of songs from a given YouTube playlist:
#!/bin/bash
downloadDirectory=<directory where you want your videos to be saved>
playlistURL=<URL of the playlist>
for i in {<keyword 1>,<keyword 2>,...,<keyword n>}; do
youtube-dl -o "${downloadDirectory}/youtube-dl/%(title)s.%(ext)s" "${playlistURL}" --match-title "$i"
done
Note: "keyword i" is the title (in whole or part; if part, it should be unique to that playlist) of a given video in that playlist.
Edit: You can install youtube-dl by pip install youtube-dl
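For instance, a concrete version of the loop above (the directory, playlist URL, and keywords below are placeholders; substitute your own):
downloadDirectory="$HOME/Videos"
playlistURL="http://www.youtube.com/playlist?list=PL2284887FAE36E6D8"
# "Intro" and "Episode 2" are hypothetical keywords matching video titles
for i in "Intro" "Episode 2"; do
    youtube-dl -o "${downloadDirectory}/youtube-dl/%(title)s.%(ext)s" "${playlistURL}" --match-title "$i"
done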
#!/bin/bash
# Coded by Biki Teron
# String replace command in linux
echo "Enter youtube url:"
read url1
wget -c -O index.html $url1
################################### Linux string replace ##################################################
sed -e 's/%3A%2F%2F/:\/\//g' index.html > youtube.txt
sed -i 's/%2F/\//g' youtube.txt
sed -i 's/%3F/?/g' youtube.txt
sed -i 's/%3D/=/g' youtube.txt
sed -i 's/%26/\&/g' youtube.txt
sed -i 's/%252/%2/g' youtube.txt
sed -i 's/sig/&signature/g' youtube.txt
## command to get filename
nawk '/<title>/,/<\/title>/' youtube.txt > filename.txt ## Print the line between containing <title> and <\/title> .
sed -i 's/.*content="//g' filename.txt
sed -i 's/">.*//g' filename.txt
sed -i 's/.*<title>//g' filename.txt
sed -i 's/<.*//g' filename.txt
######################################## Coding to get all itag list ########################################
nawk '/"fmt_list":/,//' youtube.txt > fmt.html ## Print the line containing "fmt_list": .
sed -i 's/.*"fmt_list"://g' fmt.html
sed -i 's/, "platform":.*//g' fmt.html
sed -i 's/, "title":.*//g' fmt.html
# String replace command in linux to get correct itag format
sed -i 's/\\\/1920x1080\\\/99\\\/0\\\/0//g' fmt.html ## Replace \/1920x1080\/99\/0\/0 by blank .
sed -i 's/\\\/1920x1080\\\/9\\\/0\\\/115//g' fmt.html ## Replace \/1920x1080\/9\/0\/115 by blank.
sed -i 's/\\\/1280x720\\\/99\\\/0\\\/0//g' fmt.html ## Replace \/1280x720\/99\/0\/0 by blank.
sed -i 's/\\\/1280x720\\\/9\\\/0\\\/115//g' fmt.html ## Replace \/1280x720\/9\/0\/115 by blank.
sed -i 's/\\\/854x480\\\/99\\\/0\\\/0//g' fmt.html ## Replace \/854x480\/99\/0\/0 by blank.
sed -i 's/\\\/854x480\\\/9\\\/0\\\/115//g' fmt.html ## Replace \/854x480\/9\/0\/115 by blank.
sed -i 's/\\\/640x360\\\/99\\\/0\\\/0//g' fmt.html ## Replace \/640x360\/99\/0\/0 by blank.
sed -i 's/\\\/640x360\\\/9\\\/0\\\/115//g' fmt.html ## Replace \/640x360\/9\/0\/115 by blank.
sed -i 's/\\\/640x360\\\/9\\\/0\\\/115//g' fmt.html ## Replace \/640x360\/9\/0\/115 by blank.
sed -i 's/\\\/320x240\\\/7\\\/0\\\/0//g' fmt.html ## Replace \/320x240\/7\/0\/0 by blank.
sed -i 's/\\\/320x240\\\/99\\\/0\\\/0//g' fmt.html ## Replace \/320x240\/99\/0\/0 by blank.
sed -i 's/\\\/176x144\\\/99\\\/0\\\/0//g' fmt.html ## Replace \/176x144\/99\/0\/0 by blank.
# Command to cut a part of a file between any two strings
nawk '/"url_encoded_fmt_stream_map":/,//' youtube.txt > url.txt
sed -i 's/.*url_encoded_fmt_stream_map"://g' url.txt
#Display video resolution information
echo ""
echo "Video resolution:"
echo "[46=1080(.webm)]--[37=1080(.mp4)]--[35=480(.flv)]--[36=180(.3gpp)]"
echo "[45=720 (.webm)]--[22=720 (.mp4)]--[34=360(.flv)]--[17=144(.3gpp)]"
echo "[44=480 (.webm)]--[18=360 (.mp4)]--[5=240 (.flv)]"
echo "[43=360 (.webm)]"
echo ""
echo "itag list= "`cat fmt.html`
echo "Enter itag number: "
read fmt
####################################### Coding to get required resolution #################################################
## cut itag=?
sed -e "s/.*,itag=$fmt//g" url.txt > "$fmt"_1.txt
sed -e 's/\u0026quality.*//g' "$fmt"_1.txt > "$fmt".txt
sed -i 's/.*u0026url=//g' "$fmt".txt ## Ignore all lines before \u0026url= but print all lines after \u0026url=.
sed -e 's/\u0026type.*//g' "$fmt".txt > "$fmt"url.txt ## Ignore all lines after \u0026type but print all lines before \u0026type.
sed -i 's/\\/\&/g' "$fmt"url.txt ## replace \ by &
sed -e 's/.*\u0026sig//g' "$fmt".txt > "$fmt"sig.txt ## Ignore all lines before \u0026sig but print all lines after \u0026sig.
sed -i 's/\\/\&ptk=machinima/g' "$fmt"sig.txt ## replace \ by &
echo `cat "$fmt"url.txt``cat "$fmt"sig.txt` > "$fmt"url.txt ## Add string at the end of a line
echo `cat "$fmt"url.txt` > link.txt ## url and signature content to 44url.txt
rm "$fmt"sig.txt
rm "$fmt"_1.txt
rm "$fmt".txt
rm "$fmt"url.txt
rm youtube.txt
########################################### Coding for filename with correct extension #####################################
case $fmt in
46|45|44|43) echo `cat filename.txt`.webm > filename.txt ;;  ## .webm formats
37|22|18) echo `cat filename.txt`.mp4 > filename.txt ;;      ## .mp4 formats
35|34|5) echo `cat filename.txt`.flv > filename.txt ;;       ## .flv formats
*) echo `cat filename.txt`.3gpp > filename.txt ;;            ## 36 and anything else: .3gpp
esac
rm fmt.html
rm url.txt
filename=`cat filename.txt`
linkdownload=`cat link.txt`
wget -c -O "$filename" $linkdownload
echo "Download Finished!"
read
I have been attempting to extract a CSV file full of URLs of images (about 1000).
Each row is a specific product, with the first cell labelled "id".
I have taken the ID of each line in Excel and created directories for them using a loop with mkdir.
My issue now is that I can't seem to figure out how to download each image and then immediately store it into those folders.
What I am attempting here is to use wget by concatenating "fold_name" and "EXT" to get a directory like "/name_of_folder", then getting the links to the images (in cells 5, 6, 7 and 8) and running wget on those cells, saving into the directory.
Can anyone assist me with this?
I think this should be straightforward enough.
Thank you!
#!/usr/bin/bash
EXT='/'
while read line
do
fold_name= cut -d$',' -f1
concat= "%EXT" + "%fold_name"
img1= cut -d$',' -f5
img2= cut -d$',' -f6
img3= cut -d$',' -f7
img4= cut -d$',' -f8
wget -O "%img1" "%concat"
wget -O "%img2" "%concat"
wget -O "%img1" "%concat"
wget -O "%img2" "%concat"
done < file.csv
You might use the -P switch to designate the target directory. Consider the following simple example using some files from the test-images/png repository:
mkdir -p black
mkdir -p gray
mkdir -p white
wget -P black https://raw.githubusercontent.com/test-images/png/main/202105/cs-black-000.png
wget -P gray https://raw.githubusercontent.com/test-images/png/main/202105/cs-gray-7f7f7f.png
wget -P white https://raw.githubusercontent.com/test-images/png/main/202105/cs-white-fff.png
will lead to the following structure:
black
    cs-black-000.png
gray
    cs-gray-7f7f7f.png
white
    cs-white-fff.png
You should use variable names that are less ambiguous.
You need to provide the directory as part of the output filename.
"%" is not a bash variable designator. That is a formatting directive (for bash, awk, C, etc.).
The following will provide what you want.
#!/usr/bin/bash
DBG=1
INPUT="${1}"
INPUT="file.csv"
cat >"${INPUT}" <<"EnDoFiNpUt"
#topic_1,junk01,junk02,junk03,img_101.png,img_102.png,img_103.png,img_104.png
#topic_2,junk04,junk05,junk06,img_201.png,img_202.png,img_203.png,img_204.png
#
topic_1,junk01,junk02,junk03,https://raw.githubusercontent.com/test-images/png/main/202105/cs-black-000.png,https://raw.githubusercontent.com/test-images/png/main/202105/cs-gray-7f7f7f.png,https://raw.githubusercontent.com/test-images/png/main/202105/cs-white-fff.png
EnDoFiNpUt
if [ ${DBG} -eq 1 ]
then
echo -e "\n Input file:"
cat "${INPUT}" | awk '{ printf("\t %s\n", $0 ) ; }'
echo -e "\n Hit return to continue ..." ; read k
fi
REPO_ROOT='/tmp'
grep -v '^#' "${INPUT}" |
while read line
do
topic_name=$(echo "${line}" | cut -f1 -d\, )
test ${DBG} -eq 1 && echo -e "\t topic_name= ${topic_name} ..."
folder="${REPO_ROOT}/${topic_name}"
test ${DBG} -eq 1 && echo -e "\t folder= ${folder} ..."
if [ ! -d "${folder}" ]
then
mkdir "${folder}"
else
rm -f "${folder}/"*
fi
if [ ! -d "${folder}" ]
then
echo -e "\n Unable to create directory '${folder}' for saving downloads.\n Bypassing 'wget' actions ..." >&2
else
test ${DBG} -eq 1 && ls -ld "${folder}" | awk '{ printf("\n\t %s\n", $0 ) ; }'
url1=$(echo "${line}" | cut -d\, -f5 )
url2=$(echo "${line}" | cut -d\, -f6 )
url3=$(echo "${line}" | cut -d\, -f7 )
url4=$(echo "${line}" | cut -d\, -f8 )
test ${DBG} -eq 1 && {
echo -e "\n URLs extracted:"
echo -e "\n\t ${url1}\n\t ${url2}\n\t ${url3}\n\t ${url4}"
}
#imageFile1=$( basename "${url1}" | sed 's+^img_+yourImagePrefix_+' )
#imageFile2=$( basename "${url2}" | sed 's+^img_+yourImagePrefix_+' )
#imageFile3=$( basename "${url3}" | sed 's+^img_+yourImagePrefix_+' )
#imageFile4=$( basename "${url4}" | sed 's+^img_+yourImagePrefix_+' )
imageFile1=$( basename "${url1}" | sed 's+^cs-+yourImagePrefix_+' )
imageFile2=$( basename "${url2}" | sed 's+^cs-+yourImagePrefix_+' )
imageFile3=$( basename "${url3}" | sed 's+^cs-+yourImagePrefix_+' )
test ${DBG} -eq 1 && {
echo -e "\n Image filenames assigned:"
#echo -e "\n\t ${imageFile1}\n\t ${imageFile2}\n\t ${imageFile3}\n\t ${imageFile4}"
echo -e "\n\t ${imageFile1}\n\t ${imageFile2}\n\t ${imageFile3}"
}
test ${DBG} -eq 1 && {
echo -e "\n WGET process log:"
}
### This form of wget does NOT work for me, although man page says it should.
#wget -P "${folder}" -O "${imageFile1}" "${url1}"
### This form of wget DOES work for me
wget -O "${folder}/${imageFile1}" "${url1}"
wget -O "${folder}/${imageFile2}" "${url2}"
wget -O "${folder}/${imageFile3}" "${url3}"
#wget -O "${folder}/${imageFile3}" "${url3}"
test ${DBG} -eq 1 && {
echo -e "\n Listing of downloaded files:"
ls -l /tmp/topic* 2>>/dev/null | awk '{ printf("\t %s\n", $0 ) ; }'
}
fi
done
The script is adapted for what I had to work with. :-)
I'm still new to the shell and need some help.
I have a file stapel_old.
Also I have in the same directory files like english_old_sync, math_old_sync and vocabulary_old_sync.
The content of stapel_old is:
english
math
vocabulary
The content of e.g. english is:
basic_grammar.md
spelling.md
orthography.md
I want to manipulate all files which are given in stapel_old like in this example:
take the first line of stapel_old 'english', (after that math, and so on)
convert in this case english to english_old_sync, (or after that what is given in second line, e.g. math to math_old_sync)
search in english_old_sync line by line for the pattern '.md'
and append ':::#a1' to the end of each such line (after '.md').
The result should be e.g. of english_old_sync:
basic_grammar.md:::#a1
spelling.md:::#a1
orthography.md:::#a1
of math_old_sync:
geometry.md:::#a1
fractions.md:::#a1
and so on. stapel_old should stay unchanged.
How can I realize that?
I tried sed -n and a while loop (while read -r line), and I feel it's somehow the right way, but after four hours of inspecting and reading I still get errors and not the expected result.
Thank you!
EDIT
Here is the working code (The files are stored in folder 'olddata'):
clear
echo -e "$(tput setaf 1)$(tput setab 7)Learning directories:$(tput sgr 0)\n"
# put here directories which should not become flashcards, command: | grep -v 'name_of_directory_which_not_to_learn1' | grep -v 'directory2'
ls ../ | grep -v 00_gliederungsverweise | grep -v 0_weiter | grep -v bibliothek | grep -v notizen | grep -v Obsidian | grep -v z_nicht_uni | tee olddata/stapel_old
# count folders
echo -ne "\nHow many different folders: " && wc -l olddata/stapel_old | cut -d' ' -f1 | tee -a olddata/stapel_old
echo -e "Are these learning directories correct? [j OR y] --> yes; [Other] --> no\n"
read lernvz_korrekt
if [ "$lernvz_korrekt" = j ] || [ "$lernvz_korrekt" = y ];
then
read -n 1 -s -r -p "Learning directories correct. Press any key to continue..."
else
read -n 1 -s -r -p "Learning directories not correct, please change in line 4. Press any key to continue..."
exit
fi
echo -e "\n_____________________________\n$(tput setaf 6)$(tput setab 5)Found cards:$(tput sgr 0)$(tput setaf 6)\n"
#GET && WRITE FOLDER NAMES into olddata/stapel_old
anzahl_zeilen=$(cat olddata/stapel_old |& tail -1)
#GET NAMES of .md files of every stapel and write All to 'stapelname'_old_sync
i=0
name="var_$i"
for (( num=1; num <= $anzahl_zeilen; num++ ))
do
i="$((i + 1))"
name="var_$i"
name=$(cat olddata/stapel_old | sed -n "$num"p)
find ../$name/ -name '*.md' | grep -v trash | grep -v Obsidian | rev | cut -d'/' -f1 | rev | tee olddata/$name"_old_sync"
done
(tput sgr 0)
I tried to add:
input="olddata/stapel_old"
while IFS= read -r line
do
sed -n "$line"p olddata/stapel_old
done < "$input"
The code to change only the english_old_sync is:
lines=$(wc -l olddata/english_old_sync | cut -d' ' -f1)
for ((num=1; num <= $lines; num++))
do
content=$(sed -n "$num"p olddata/english_old_sync)
sed -i "s/"$content"/""$content":::#a1/g"" olddata/english_old_sync
done
So now, this needs to become an inner for-loop inside an outer for-loop which holds the variable for english, right?
stapel_old should stay unchanged.
You could try a while + read loop and embed sed inside the loop.
#!/usr/bin/env bash
while IFS= read -r files; do
echo cp -v "$files" "${files}_old_sync" &&
echo sed '/^.*\.md$/s/$/:::#a1/' "${files}_old_sync"
done < olddata/staple_old
convert in this case english to english_old_sync, (or after that what is given in second line, e.g. math to math_old_sync)
cp copies the file under a new name; if the goal is renaming the original files listed in staple_old, then change cp to mv.
The -n and -i flags for sed were omitted; include them if needed.
The script also assumes that there are no empty/blank lines in staple_old. If there are, add an additional test right after the line with the do:
[[ -n $files ]] || continue
It also assumes that the entries in staple_old are existing files. Just in case, add an additional test:
[[ -e $files ]] || { printf >&2 '%s no such file or directory.\n' "$files"; continue; }
Or an if statement.
if [[ ! -e $files ]]; then
printf >&2 '%s no such file or directory\n' "$files"
continue
fi
See also help test
See also help continue
Combining them all together should be something like:
#!/usr/bin/env bash
while IFS= read -r files; do
[[ -n $files ]] || continue
[[ -e $files ]] || {
printf >&2 '%s no such file or directory.\n' "$files"
continue
}
echo cp -v "$files" "${files}_old_sync" &&
echo sed '/^.*\.md$/s/$/:::#a1/' "${files}_old_sync"
done < olddata/staple_old
Remove the echos if you're satisfied with the output, so the script can actually copy/rename and edit the files.
I use youtube-dl to archive specific blogs. I use a custom bash script (called tvify) to help me organize my content into Plex-ready filenames for later replay via my home Plex server.
Archiving the content works fine, unless a blogger posts more than one video on the same date - if that happens my script creates more than one file for a given month/date and plex sees a duplicate episode. In the plex app, it stuffs them together as distinct 'versions' of the same episode. The result is that the description of the video no longer matches its contents, and only one 'version' appears unless I access an additional sub menu.
The videos get downloaded by youtube-dl kicked off from a cron job, and that downloader script runs the following to format their filenames and stuff them into appropriate folders for 'seasons'.
The season is the year when the video was released, and the episode is the combination of the month and date in MMDD format.
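For example (hypothetical values), a video released on 2021-06-15 by "SomeBlogger", titled "Workshop Tour" and originally downloaded as XYZ.mp4, would end up as:
s2021/SomeBlogger - s2021e0615 - Workshop Tour _XYZ.mp4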
Below is my 'tvify' script, which helps perform the filename manipulation and stuffs the file into the proper folder for the season.
#!/bin/bash
mySuff="$1"
echo mySuff="$mySuff"
if [ -z "$1" ]; then
mySuff="*.mp4"
fi
for i in $mySuff
do
prb=`ffprobe -- "$i" 2>&1`
myDate=`echo "$prb" | grep -E 'date\s+:' | cut -d ':' -f 2`
myartist=`echo "$prb" | grep -E 'artist\s+:' | cut -d ':' -f 2`
myTitle=`echo "$prb" | grep -E 'title\s+:' | cut -d ':' -f 2 | sed 's/\//_/g'`
cwd_stub=`pwd | awk -F'/' '{print $NF}'`
if [ -d "s${myDate:1:4}" ]; then echo "Directory found" > /dev/null; else mkdir "s${myDate:1:4}"; fi
[ -d "s${myDate:1:4}" ] && mv -- "$i" "s${myDate:1:4}/${myartist:1} - s${myDate:1:4}e${myDate:5:8} - ${myTitle:1:40} _$i" || mv -- "$i" "${myartist:1} - s${myDate:1:4}e${myDate:5:8} - ${myTitle:1:40} _$i"
done
How can I modify that script to identify if a conflicting year/MMDD file exists, and if so, append an appropriate suffix to the episode number so that plex will interpret them as distinct episodes?
I ended up implementing an array, counting the number of elements in the array, and using that to append the integer:
#!/bin/bash
mySuff="$1"
echo mySuff="$mySuff"
if [ -z "$1" ]; then
mySuff="*.mp4"
fi
for i in $mySuff
do
prb=`ffprobe -- "$i" 2>&1`
myDate=`echo "$prb" | grep -E 'date\s+:' | cut -d ':' -f 2`
myartist=`echo "$prb" | grep -E 'artist\s+:' | cut -d ':' -f 2`
myTitle=`echo "$prb" | grep -E 'title\s+:' | cut -d ':' -f 2 | sed 's/\//_/g'`
cwd_stub=`pwd | awk -F'/' '{print $NF}'`
readarray -t conflicts < <(find . -maxdepth 2 -iname "*s${myDate:1:4}e${myDate:5:8}*" -type f -printf '%P\n')
[ ${#conflicts[@]} -gt 0 ] && _inc=${#conflicts[@]} || _inc=
if [ -d "s${myDate:1:4}" ]; then echo "Directory found" > /dev/null; else mkdir "s${myDate:1:4}"; fi
[ -d "s${myDate:1:4}" ] \
&& mv -- "$i" "s${myDate:1:4}/${myartist:1} - s${myDate:1:4}e${myDate:5:8}$_inc - ${myTitle:1:40} _$i" \
|| mv -- "$i" "${myartist:1} - s${myDate:1:4}e${myDate:5:8}$_inc - ${myTitle:1:40} _$i"
done
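As a rough illustration with hypothetical files: if two videos by the same uploader are dated 2021-06-15, the first one is saved as
s2021/SomeBlogger - s2021e0615 - First clip _a.mp4
and when the second is processed, find detects one existing match, so _inc becomes 1 and the file is saved as
s2021/SomeBlogger - s2021e06151 - Second clip _b.mp4
which Plex then treats as a separate episode rather than another version of the same one.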
I created a bash script to read information such as username, group etc. from a text file and create users based on it in Linux. The code seems to function properly and creates the users as desired. But the user information in the last line of the text file always gets misinterpreted. Even if I delete that line, the new last line then gets misinterpreted, i.e. its text is read wrongly.
#!/bin/bash
userfile="users.txt"
IFS=$'\n'
if [ ! -f "$userfile" ]
then
echo "File does not exist. Specify a valid file and try again. "
exit
fi
groups=(`cut -f 4 "$userfile" | sed 's/ //'`)
fullnames=(`cut -f 1 "$userfile" | sed 's/,//' | sed 's/"//g'`)
username1=(`cut -f 1 "$userfile" |sed 's/,//' | sed 's/"//' | tr [A-Z] [a-z] | awk '{print substr($2,1,1) substr($3,1,1) substr($1,1,1)}'`)
username2=(`cut -f 4 "$userfile" | tr [A-Z] [a-z] | awk '{print substr($1,1,1)}'`)
i=0
n=${#username1[@]}
for (( q=0; q<n; q++ ))
do
usernames[$q]=${username1[$q]}"${username2[$q]}"
done
declare -a usernames
x=0
created=0
for user in ${usernames[*]}
do
adduser -c ${fullnames[$x]} -p 123456789 -f 15 -m -d /home/${groups[$x]}/$user -K LOGIN_RETRIES=3 -K PASS_MAX_DAYS=30 -K PASS_WARN_AGE=3 -N -s /bin/bash $user 2> /dev/null
usermod -g ${groups[$x]} $user
chage -d 0 $user
let created=$created+1
x=$x+1
echo -e "User $user created "
done
echo "$created Users created"
#!/bin/bash
userfile="./users.txt"; # <-- Config
while read line; do
# FULL NAME
# Capture all between quotes as full name
fullname=$(printf '%s' "${line}" | sed 's/^"\(.*\)".*/\1/')
# Remove spaces and punctuations???:
fullname=$(printf '%s' "${fullname}" | tr -d '[:punct:][:blank:]')
# Right-side names:
partb=$(printf '%s' "${line}" | sed "s/^\".*\"//g")
# CODE 1, capture second row
code1=$(printf '%s' "${partb}" | cut -f 2 )
# CODE 2, capture third row
code2=$(printf '%s' "${partb}" | cut -f 3 )
# GROUP, capture fourth row
group=$(printf '%s' "${partb}" | cut -f 4 )
# Print only for report
echo -e "fullname: ${fullname}\n code 1: ${code1}\n code 2: ${code2}\n group: ${group}\n"
done < "${userfile}"
Maybe these are the fields that you want; now you have them in variables to manipulate: $fullname, $code1, $code2 and $group.
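If you then want to feed these fields into the account creation from your script, a minimal sketch could look like this (the username rule here is a hypothetical simplification of your initials logic; adjust it to your real scheme):
# Hypothetical username: lowercased full name, first 8 characters
user=$(printf '%s' "${fullname}" | tr '[:upper:]' '[:lower:]' | cut -c1-8)
adduser -c "${fullname}" -m -d "/home/${group}/${user}" -N -s /bin/bash "${user}"
usermod -g "${group}" "${user}"
chage -d 0 "${user}"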
Although maybe the failure you observed was due to a misplaced quotation mark in the text file or to the line breaks; in the attached screenshot I can see one missing quote.
I am attempting to grep all of the /home directory for certain words. If these words occur, the matches are written to a log file under /var, and from there I would like the log to be read line by line, echoing each line with information. This is what I have so far.
grep -r -i "word\|word\|word\|word\|word" /home >>/var/testlog
UNAME=`grep -r -i "word\|word\|word\|word\|word" /var/testlog | cut -f 3 -d "/"`
BLINE=`grep -r -i "word\|word\|word\|word\|word" /var/testlog | cut -f 2 -d ":"`
FILEP=`grep -r -i "word\|word\|word\|word\|word" /var/testlog | cut -f 1 -d ":"`
while [ "$UNAME" == "true" ] && [ "$BLINE" == "true" ] && [ "$FILEP" == "true" ];
do
echo "User is: $UNAME, The line flag with the word is: $BLINE, and the file path for the text is: $FILEP."
done
One solution is:
grep -r -i "word\|word\|word\|word\|word" /home >>/var/testlog
while IFS= read -r line
do
[[ $line =~ (/home)/([^/]+)/([^:]*):(.*) ]] || echo "Failed on line=$line"
echo "User is: ${BASH_REMATCH[2]}, The line flag with the word is: ${BASH_REMATCH[4]}, and the file path for the text is: ${BASH_REMATCH[1]}/${BASH_REMATCH[2]}/${BASH_REMATCH[3]}."
done < /var/testlog
If the use of sed is allowed, then the while loop is unnecessary:
grep -r -i "word\|word\|word\|word\|word" /home >>/var/testlog
sed -r 's|(/home/(\w+)/[^:]*):(.*)|User is: \2, The line flag with the word is: \3, and the file path for the text is: \1.|' /var/testlog
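As a rough illustration, a hypothetical line of grep output such as
/home/alice/notes/todo.txt:remember the word
would be printed by either variant as:
User is: alice, The line flag with the word is: remember the word, and the file path for the text is: /home/alice/notes/todo.txt.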