Bash script to unzip recently uploaded files on a server

I want to unzip files automatically after they are uploaded to the server.
I'm not experienced in bash, but I've tried this:
for file in *.zip
do
    unzip -P pcp9100 "$file" -d ./
done
It's not working as I want.

Okay, assuming you want this done continuously in a loop, you can do something like:
while true; do
    for file in *.zip; do
        unzip -P pcp9100 "${file}" -d ./
        rm "${file}"
    done
    sleep 3
done
Of course, there are several things that can go wrong here:
- the file has an incorrect password
- the file inside is also a zip file and does not have the same password
- permissions are incorrect
First, make sure your permissions are correct. Second, you can create a directory called "ExtractedFiles" and another called "IncorrectPasswords", then do something like:
while true; do
    for file in *.zip; do
        if unzip -P pcp9100 "${file}" -d ./ExtractedFiles; then
            rm "${file}"
        else
            mv "${file}" ./IncorrectPasswords
        fi
    done
    sleep 3
done
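If polling every three seconds feels wasteful, an event-driven variant is possible. Here is a minimal sketch, assuming the inotify-tools package is installed and that uploads land in the current directory:

inotifywait -m -e close_write --format '%f' . | while read -r file; do
    case "$file" in
        *.zip)
            # extract as soon as the upload finishes writing, then remove the archive
            unzip -P pcp9100 "$file" -d ./ExtractedFiles && rm "$file"
            ;;
    esac
done

inotifywait emits a filename each time a file finishes being written, so archives are extracted the moment the upload completes rather than on a timer.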

Related

how to unzip a zip file inside another zip file?

I have multiple zip files inside a folder, and another zip file exists within each of these zip files. I would like to unzip both the outer and the inner zip files, each into its own directory.
Here is the structure:
Workspace
    customer1.zip
        application/app1.zip
    customer2.zip
        application/app2.zip
    customer3.zip
        application/app3.zip
    customer4.zip
        application/app4.zip
As shown above, inside the Workspace we have multiple zip files, and within each of these zip files there is another zip file, application/app.zip. I would like to unzip app1, app2, app3 and app4 into new folders, using the same name as the parent zip file for each result. I tried the following answer, but it unzips only the first level.
sh '''
for zipfile in ${WORKSPACE}/*.zip; do
    exdir="${zipfile%.zip}"
    mkdir "$exdir"
    unzip -d "$exdir" "$zipfile"
done
'''
Btw, I am running this command inside my Jenkins pipeline.
No idea about Jenkins, but what you need is a recursive function.
recursiveUnzip.sh
#!/bin/dash
recursiveUnzip () { # $1=directory
    local path="$(realpath "$1")"
    for file in "$path"/*; do
        if [ -d "$file" ]; then
            recursiveUnzip "$file"
        elif [ -f "$file" ] && [ "${file##*.}" = 'zip' ]; then
            unzip -d "${file%.zip}" "$file"   # extract into a directory named after the archive
            # unzip -d "${file%/*}" "$file"   # alternative: extract next to the archive (the recursion below would then miss the contents)
            rm -f "$file"                     # comment this if you want to keep the zip files
            recursiveUnzip "${file%.zip}"     # descend into the newly extracted directory
        fi
    done
}
recursiveUnzip "$1"
Then call the script like this:
./recursiveUnzip.sh <directory>
In your case, probably like this:
./recursiveUnzip.sh "$WORKSPACE"
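If recursion feels heavy, here is an alternative sketch without a function: keep re-scanning the tree until no archives remain. It assumes filenames contain no newlines, and that every archive actually extracts; a corrupt or password-protected zip would make the loop spin forever.

while find "$WORKSPACE" -name '*.zip' | grep -q .; do
    find "$WORKSPACE" -name '*.zip' | while read -r f; do
        # extract into a directory named after the archive, then delete it
        unzip -d "${f%.zip}" "$f" && rm -f "$f"
    done
done

Each pass extracts whatever archives the previous pass uncovered, so nested zips are handled without explicit recursion.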

Is there a way I can take user input and make it into a file?

I am not able to find a way to make bash create a file with the same name as the file the user dragged into the terminal.
read -p 'file: ' file
if [ "$file" -eq "" ]; then
    cd desktop
    mkdir
fi
I am trying to make this part of the script take the name of the file they dragged in (for example /Users/admin/Desktop/test.app), cd into it, copy the "contents" file, make another folder with the same name (test.app in this example), paste the contents file into that folder, and delete the old file.
From your .app example, I assume you are using macOS. Therefore you will need to test this script yourself, since I don't have macOS, but I think it should do what you want. Execute it as bash script.sh and it will give you your desired directory test.app/contents in the current working directory.
#!/bin/bash
read -rp 'file: ' file
if [ -e "$file" ]; then
    if [ -e "$file"/contents ]; then
        base=$(basename "$file")    # e.g. test.app
        mkdir "$base"
        cp -R "$file"/contents "$base"
        rm -rf "$file"
    else
        echo "The specified file $file has no directory 'contents'."
    fi
else
    echo "The specified file $file does not exist."
fi
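A hypothetical session (the path is illustrative) would look like this:

$ bash script.sh
file: /Users/admin/Desktop/test.app
$ ls test.app
contents

Dragging a file onto the Terminal window inserts its absolute path at the prompt, which is exactly what read captures here.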

Download files with wget only when the file name is in my list

I want to download a lot of files from a server with wget, but a file should only be stored if its name is in a given list. Otherwise wget should skip that file and move on to the next one.
I tried the following:
#!/bin/bash
etsienURL="http://www.etsi.org/deliver/etsi_en"
etsitsURL="http://www.etsi.org/deliver/etsi_ts"
listOfStandards=("en_302571" "en_3023630401" "en_3023630501" "en_3023630601" "en_30263702" "en_30263703" "en_302663" "en_302931" "ts_10153901" "ts_10153903" "ts_1026360501" "ts_1027331" "ts_10286801" "ts_10287103" "ts_10289401" "ts_10289402" "ts_102940" "ts_102941" "ts_102942" "ts_102943" "ts_103097" "ts_10324601" "ts_10324603")
wget -r -nd -nc -e robots=off -A.pdf "$etsienURL"
wget -r -nd -nc -e robots=off -A.pdf "$etsitsURL"
for file in *.pdf
do
    relevant=false
    for t in "${listOfStandards[@]}"
    do
        if [[ $(basename "$file" .pdf) == *"$t"* ]]
        then
            relevant=true
            break
        fi
    done
    if [[ $relevant == false ]]
    then
        rm "$file"
    fi
done
With this code, all files are downloaded. After the download, the script checks whether the filename (or part of it) is in the list, and deletes the file if not. But this costs a lot of disk space. I want to download a file only if its name contains one of the list items.
Perhaps somebody can help me find a solution.
Found the solution. I had forgotten the --no-parent flag for wget.
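For the original goal of never storing unwanted files at all, a sketch along these lines might work (untested against the real server; --accept-regex requires a reasonably recent wget):

# join the array items into an alternation pattern: en_302571|en_3023630401|...
pattern=$(IFS='|'; echo "${listOfStandards[*]}")
wget -r -nd -nc -e robots=off --no-parent \
    --accept-regex ".*($pattern).*\.pdf" "$etsienURL"

wget then only saves files whose URL matches one of the listed standard numbers, so nothing has to be deleted afterwards.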

Get date from filename and sort into folders?

I'm running wget to get data from an FTP server like this:
wget -r -nH -N --no-parent ftp://username:password@example.com/ -P /home/data/
All of the files are in a format similar to this:
2016_07_10_bob-randomtext.csv.gz
2016_07_11_joe-importantinfo.csv.gz
Right now it's putting all of these files into /home/data/.
What I want to do is get the time from the filename and put it into their own folders based on the date. For example:
/home/data/2016_07_10/2016_07_10_bob-randomtext.csv.gz
/home/data/2016_07_11/2016_07_11_joe-importantinfo.csv.gz
Based on the answers here, it is possible to get the date from a filename. However, I'm not really sure how to turn that into a folder automatically...
Sorry if this is a bit confusing. Any help or advice would be appreciated.
Keeping the download of all the files in one directory, /home/files, you can then sort them into date directories:
destination=/home/data
for filename in /home/files/*; do
    if [[ -f "$filename" ]]; then            # ignore it if it's a directory (not a file)
        name=$(basename "$filename")
        datedir=$destination/${name:0:10}    # first 10 characters of the filename
        mkdir -p "$datedir"                  # create the directory if it doesn't exist
        mv "$filename" "$datedir"
    fi
done
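To see why the first 10 characters are the date, a quick check using a filename from the question:

name=2016_07_10_bob-randomtext.csv.gz
echo "${name:0:10}"    # prints 2016_07_10

Using mkdir -p makes the directory creation idempotent, so the loop is safe to rerun after each wget.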

Recursively copying a file into multiple directories, if a directory does not exist in Bash

So I need to copy the file /home/servers/template/craftbukkit.jar into every folder inside of /home/servers, e.g. /home/servers/server1, /home/servers/server2, etc.
But I only want to do it if /home/servers/whateverserveritiscurrentlyon/mods does not exist. This is what I came up with, and I was wondering if it will work:
echo " Script to copy a file to all server directories, only if mods does not exist in that directory"
for i in /home/servers/*/; do
if [ ! -d "$i/mods" ]; then
cp -f /home/servers/template/craftbukkit.jar "$i"
fi
done
echo " completed script ..."
Looks like it should work. To non-destructively test, change the cp -f ... line to say echo cp -f ... and review the output.
It could also be somewhat shortened, but it wouldn't affect efficiency much:
for i in /home/servers/*/
do
    [[ -d "${i}/mods" ]] || cp -f /home/servers/template/craftbukkit.jar "${i}/."
done
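Following the earlier suggestion, a dry run of the shortened version would look like this (the printed line is illustrative, assuming server1 lacks a mods directory):

for i in /home/servers/*/
do
    # echo prints the command instead of running it
    [[ -d "${i}/mods" ]] || echo cp -f /home/servers/template/craftbukkit.jar "${i}/."
done
# e.g. prints: cp -f /home/servers/template/craftbukkit.jar /home/servers/server1/.

Once the printed commands look right, drop the echo to perform the copies.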
