I have multiple zip files inside a folder, and within each of these zip files there is another zip file. I would like to unzip both the outer and the inner zip files, each into its own directory.
Here is the structure:
Workspace
    customer1.zip
        application/app1.zip
    customer2.zip
        application/app2.zip
    customer3.zip
        application/app3.zip
    customer4.zip
        application/app4.zip
As shown above, inside the Workspace we have multiple zip files, and within each of these there is another zip file, application/app.zip. I would like to unzip app1, app2, app3, and app4 into new folders, using the name of the parent zip file for each result. I tried the following answer, but it only unzips the outer zip files.
sh '''
for zipfile in ${WORKSPACE}/*.zip; do
    exdir="${zipfile%.zip}"
    mkdir "$exdir"
    unzip -d "$exdir" "$zipfile"
done
'''
Btw, I am running this command inside my Jenkins pipeline.
No idea about Jenkins, but what you need is a recursive function.
recursiveUnzip.sh
#!/bin/dash

recursiveUnzip () { # $1=directory
    local path="$(realpath "$1")"
    for file in "$path"/*; do
        if [ -d "$file" ]; then
            recursiveUnzip "$file"
        elif [ -f "$file" -a "${file##*.}" = 'zip' ]; then
            # unzip -d "${file%.zip}" "$file" # variation 1
            unzip -d "${file%/*}" "$file" # variation 2
            rm -f "$file" # comment this if you want to keep the zip files.
            recursiveUnzip "${file%.zip}"
        fi
    done
}

recursiveUnzip "$1"
Then call the script like this:
./recursiveUnzip.sh <directory>
In your case, probably like this:
./recursiveUnzip.sh "$WORKSPACE"
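If you'd rather avoid recursion in the Jenkins pipeline, here is a minimal non-recursive sketch for this exact two-level layout, assuming every outer zip contains its inner zip under application/ as in the structure above:
for zipfile in "${WORKSPACE}"/*.zip; do
    exdir="${zipfile%.zip}"                  # e.g. .../customer1
    unzip -d "$exdir" "$zipfile"             # first level: customerN.zip
    for inner in "$exdir"/application/*.zip; do
        unzip -d "${inner%.zip}" "$inner"    # second level: appN.zip into its own dir
    done
done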
OK, I have these URLs:
https://www.ppppppppppp.com/it/yyyy/911-omicidio-al-telefono/stagione-1-appesa-a-un-filo
https://www.ppppppppppp.com/it/yyyy/avamposti-dispacci-dal-confine/stagione-1-cerignola
https://www.ppppppppppp.com/it/yyyy/belle-da-morire/stagione-1-bellezza-stalking
I am trying to create folders with these names,
911-omicidio-al-telefono
avamposti-dispacci-dal-confine
belle-da-morire
by extracting the names from the URLs.
For example, I would like the file from the URL
https://www.ppppppppppp.com/it/yyyy/911-omicidio-al-telefono/stagione-1-appesa-a-un-filo
to download directly into the folder named after the URL,
911-omicidio-al-telefono
but this seems problematic: no folder names are extracted, and each file is downloaded outside the folder named after its URL.
To solve this problem I created a script.sh with this code:
#!/bin/bash

# Extract the folder name from the URL
url=$1
folder_name=$(echo $url | cut -d "/" -f6)
echo "folder_name: $folder_name"

if [[ "$folder_name" == "NA" ]]
then
    echo "Can't extract folder name from $url"
    exit
fi

# Create the folder if it doesn't exist
mkdir -p "$folder_name"
echo "file_path: $file_path"

# Download the video and audio files
ffmpeg -i "$file_path.fdash-video=6157520.mp4" -i "$file_path.fdash-audio_eng=160000.m4a" -c copy "$file_path.mp4"

# Move the file to the correct folder and rename it with .mp4 extension
mv "$file_path.mp4" "$folder_name/$file_path.mp4"
and then from a bash terminal I call it this way:
yt-dlp --referer "https://ppppppppppp.com/" --add-header "Cookie:COOKIE" --batch-file links_da_scaricare.txt -o '%(playlist)s/%(title)s.%(ext)s' --exec "~/script.sh {}"
I use Cygwin and script.sh is in C:\cygwin64\home\Administrator, but I also tested on Ubuntu and the problem is the same: it creates a folder called NA and downloads into it.
All files end up in the same NA folder; in other words, they are not downloaded into the folders named after the URLs they came from.
EDIT
I used ShellCheck and fixed the code of script.sh, and it now reports no issues:
#!/bin/bash

url=$1
file_path=$2

# Extract the folder name from the URL
folder_name=$(echo "$url" | cut -d "/" -f4)
echo "folder_name: $folder_name"

# Create the folder if it doesn't exist
mkdir -p "$folder_name"
echo "The script is running and creating folder: $folder_name" > ~/script.log

# Move the file to the correct folder and rename it with .mp4 extension
mv "$file_path" "$folder_name/$folder_name.mp4"
but when I run this command from the Cygwin terminal
yt-dlp --referer "https://pppppppppp.com" --add-header "Cookie:COOKIE" --batch-file links_da_scaricare.txt -o '%(playlist)s/%(title)s.%(ext)s' --exec "C:\cygwin64\home\Administrator\script.sh {} $file_path"
the NA folder is still created and no other folders are, so files are downloaded only into NA and not into their own folders.
I think you are trying to do something like this:
#!/bin/bash
# QUESTION: https://stackoverflow.com/questions/75088710/files-are-not-downloaded-in-the-folders-having-names-extracted-from-the-url

# Extract the folder name from the URL
#url=$1
url="https://www.ppppppppppp.com/it/yyyy/911-omicidio-al-telefono/stagione-1-appesa-a-un-filo"
folder_name=$(echo "$url" | cut -d "/" -f6)

### Missing
if [ "${folder_name}" = "" ] ; then folder_name="NA" ; fi

echo "folder_name: $folder_name"
if [[ "$folder_name" == "NA" ]]
then
    echo "Can't extract folder name from $url"
    exit
fi

### Missing
file_path="$(pwd)/${folder_name}"

# Create the folder if it doesn't exist
mkdir -p "${file_path}"
echo "file_path: ${file_path}"

download="arbitrary_name"
wget -O "${file_path}/${download}" "${url}"

# Download the video and audio files
ffmpeg -i "${file_path}/${download}.fdash-video=6157520.mp4" -i "${file_path}/${download}.fdash-audio_eng=160000.m4a" -c copy "${file_path}/${download}.mp4"
I can't verify some of that myself, because I don't have an account to access Prime Video at
https://www.primevideo.com/detail/0S0UEN2OCD7CTY5TQF3N6KB1ET/ref=atv_dp_season_select_s1
Also, I don't think you can download the video and audio separately.
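A likely remaining cause of the NA folder: yt-dlp's --exec replaces {} with the path of the downloaded file, not with the page URL, so cutting "/"-separated fields out of it never yields the folder name (and $file_path is empty when the shell expands the command in the question). Here is a minimal sketch of an alternative, assuming a yt-dlp version whose --exec accepts output-template fields such as %(webpage_url)q and %(filepath)q (check your version's documentation):
#!/bin/bash
# script.sh -- hedged sketch: takes the page URL and the downloaded file path
url=$1
file_path=$2

# field 6 holds the folder name for URLs shaped like
# https://www.example.com/it/yyyy/<folder>/<title>
folder_name=$(echo "$url" | cut -d "/" -f6)
if [ -z "$folder_name" ]; then
    echo "Can't extract folder name from $url"
    exit 1
fi

mkdir -p "$folder_name"
mv "$file_path" "$folder_name/"
and invoked like this (the %(playlist)s directory is dropped from -o, since the script itself moves the file):
yt-dlp --referer "https://ppppppppppp.com/" --add-header "Cookie:COOKIE" \
       --batch-file links_da_scaricare.txt -o '%(title)s.%(ext)s' \
       --exec '~/script.sh %(webpage_url)q %(filepath)q'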
I'm now using Ubuntu, and increasingly using the terminal.
I would like to delete files from the Trash via the command line.
So I need to delete files from the ~/.local/share/Trash/files directory.
All right, here's the question:
When I move some file to trash, it also creates a file_name.trashinfo file in ~/.local/share/Trash/info.
How could I automatically delete the corresponding .trashinfo file when I delete something in ../files?
You can use the following script to delete both files at once. Save it in a file in the ~/.local/share/Trash directory, and then call it as bash <script.sh> <path-to-file-to-be-deleted-in-files-dir>.
A sample call to delete the file test, if you named the script del.sh: bash del.sh files/test
#!/bin/bash

file=$1

if [ -e "$file" ]   # check if the file exists
then
    rm -rf "$file"                 # remove the file itself
    base=$(basename "$file")
    rm -rf "info/$base.trashinfo"  # remove its metadata: info/<file>.trashinfo
    echo 'files deleted!'
fi
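As an aside, on Ubuntu the gio utility from GLib (package libglib2.0-bin) manages the trash and its .trashinfo entries together, so a hedged alternative sketch is:
gio trash somefile.txt   # move a file to the trash
gio list trash://        # list what is in the trash
gio trash --empty        # empty the trash, files and .trashinfo entries alike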
I have a zip file that contains a tar.gz file. I would like to access the contents of the tar.gz file without unzipping it to disk.
I can list the files in the zip file, but of course when I try to untar one of them, tar says "Cannot open: No such file or directory", since the file does not exist on disk:
for file in $archiveFiles;
#do echo ${file: -4};
do
    if [[ $file == README.* ]]; then
        echo "skipping readme, not relevant"
    elif [[ $file == *.tar.gz ]]; then
        echo "this is a tar.gz, must extract"
        tarArchiveFiles=`tar -tzf $file`
        for tarArchiveFile in $tarArchiveFiles;
        do echo $tarArchiveFile
        done;
    fi
done;
Is it possible to extract it "on the fly", without storing it temporarily? I have the impression that this is doable in Python.
You can't do it without unzipping (obviously), but I assume what you mean is, without unzipping to the filesystem.
unzip has -c and -p options which both unzip to stdout. -c outputs the filename. -p just dumps the binary unzipped file data to stdout.
So:
unzip -p zipfile.zip path/within/zip.tar.gz | tar zxf -
Or if you want to list the contents of the tarfile:
unzip -p zipfile.zip path/within/zip.tar.gz | tar ztf -
If you don't know the path of the tarfile within the zipfile, you'd need to write something more sophisticated that consumes the output of unzip -c, recognises the filename lines in the output. It may well be better to write something in a "proper" language in this case. Python has a very flexible ZipFile library function, and most mainstream languages have something similar.
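For example, here is a minimal shell sketch of that more sophisticated approach, assuming a single .tar.gz member inside a hypothetical archive.zip (unzip -Z1 is zipinfo mode, printing one member name per line):
# locate the first .tar.gz member, then list its contents without touching disk
tarpath=$(unzip -Z1 archive.zip '*.tar.gz' | head -n 1)
unzip -p archive.zip "$tarpath" | tar -tzf -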
You can pipe an individual member of a zip file to stdout with the -p option
In your code, change
tarArchiveFiles=`tar -tzf $file`
to
tarArchiveFiles=`unzip -p zipfile $file | tar -tzf -`
replace "zipfile" with the name of the zip archive where you sourced $archiveFiles from
I want to unzip files automatically after they are uploaded to the server.
I'm not experienced in bash, but I've tried this:
for file in *.zip
do
    unzip -P pcp9100 "$file" -d ./
done
It's not working as I want.
Okay, assuming you want this to be continuously done in a loop, you can do something like:
while true; do
    for file in *.zip; do
        [ -e "$file" ] || continue   # skip the literal *.zip when no zips exist
        unzip -P pcp9100 "${file}" -d ./
        rm "${file}"
    done
    sleep 3
done
Of course there are several things that can go wrong here:
- the file has an incorrect password
- the file inside is also a zip file and does not have the same password
- permissions are incorrect
First, your permissions should be correct. Secondly, you can create a directory called "ExtractedFiles" and one called "IncorrectPasswords", and do something like:
while true; do
    for file in *.zip; do
        [ -e "$file" ] || continue
        if unzip -P pcp9100 "${file}" -d ./ExtractedFiles; then
            rm "${file}"                       # extracted fine, drop the zip
        else
            mv "${file}" ./IncorrectPasswords  # keep it for inspection instead
        fi
    done
    sleep 3
done
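If polling every three seconds feels wasteful, here is an event-driven sketch using inotifywait from the inotify-tools package, assuming uploads land in the current directory and are complete when the writer closes them:
#!/bin/bash
# react to finished uploads instead of polling
inotifywait -m -e close_write --format '%f' . | while read -r file; do
    case $file in
        *.zip) unzip -P pcp9100 "$file" -d ./ExtractedFiles && rm "$file" ;;
    esac
done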
I am having trouble with a simple grading script I am writing. I have a directory called HW5 containing a folder for each student in the class. From my current directory, which contains the HW5 folder, I would like to copy all files starting with the word mondial to each of the students' folders. My script runs but does not copy any of the files over. Any suggestions?
#!/bin/bash
for file in ./HW5; do
    if [ -d $file ]; then
        cp ./mondial.* ./$file;
    fi
done
Thanks,
The first loop was executing only once, with file equal to ./HW5. Add the star to actually select the files or directories inside it:
#!/bin/bash
for file in ./HW5/*; do
    if [ -d "$file" ]; then
        cp ./mondial.* ./"$file"
    fi
done
As suggested by Mark Reed, this can be simplified:
for file in ./HW5/*/; do
    cp ./mondial.* ./"$file"
done
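Equivalently, a sketch using find, assuming the script runs from the directory that contains both HW5 and the mondial.* files:
# copy the graded files into every immediate subdirectory of HW5
find ./HW5 -mindepth 1 -maxdepth 1 -type d -exec cp ./mondial.* {} \;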