Unzip 7z archive with password using bash script

I want to unzip a folder with multiple *.7z archives, all with the same password.
Unfortunately, using this:
#!/bin/bash
password="12345678"
cd /server/disc/folders.../folderWithArchives
for package in ./*.7z;
do
7z -x -P{$password} $package
done
gives me
Error:
Incorrect command line
Do you have any ideas on how to fix it?
I've tried shellcheck and it gave me this:
#!/bin/bash
password="12345678"
cd /server/disc/folders.../folderWithArchives || exit
for package in ./*.7z;
do
7z -x -P$password "$package"
done
but it still doesn't work
OS: Ubuntu 16.04.6 LTS
shell: GNU bash, version 4.3.48

I found a solution.
Everything works after simply deleting the '-' before 'x'.
Now it works and looks like this:
#!/bin/bash
password="12345678"
cd /server/disc/folders.../folderWithArchives || exit
for package in ./*.7z;
do
7z x -P$password "$package"
done
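If you want to make the loop a little more defensive, here is an optional sketch (not required for the fix above) that quotes the password, skips the loop when no archives match, and stops on the first extraction error:
#!/bin/bash
shopt -s nullglob # skip the loop entirely if no *.7z files match
password="12345678"
cd /server/disc/folders.../folderWithArchives || exit 1
for package in ./*.7z;
do
# quoting the password protects passwords containing spaces or shell metacharacters
7z x -P"$password" "$package" || exit 1
done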

Related

Extracting file with folders then using bash completion

I don't work much with bash scripting and was trying to do something like this:
#!/bin/bash
wget https://johnvansickle.com/ffmpeg/builds/ffmpeg-git-i686-static.tar.xz
tar xvf ffmpeg-git-i686-static.tar.xz
cd ./ffmpeg-git-20210501-i686-static/
cp ./ffmpeg-git-20210501-i686-static/ffmpeg /etc/bin
Is there a variable or a way I can determine what that extracted folder is called during script execution? For example, with bash completion at the command line I would type cd ./ffmpeg and press Tab (since I know it starts with ffmpeg).
Make sense?
There is no way to know the directory structure beforehand, but you can use a wildcard in the copy command:
cp ./*/ffmpeg /etc/bin
However, I do not recommend installing a tarball-extracted executable into /etc/bin.
I'd put a symbolic link inside /usr/local/bin/ instead:
#!/usr/bin/env sh
__opwd="$PWD"
trap 'cd "$__opwd"' EXIT ABRT INT
wget https://johnvansickle.com/ffmpeg/builds/ffmpeg-git-i686-static.tar.xz || exit 1
# Create place where to install the unpacked archive
mkdir -p '/opt/ffmpeg-git-i686-static' || exit 1
# Unpack the archive
cd '/opt/ffmpeg-git-i686-static' || exit 1
tar xf "$__opwd/ffmpeg-git-i686-static.tar.xz" || exit 1
# Create a symbolic link from the ffmpeg command into `/usr/local/bin/`
ln -sf /opt/ffmpeg-git-i686-static/*/ffmpeg /usr/local/bin/
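If you do want the extracted directory name in a variable (the thing bash completion was giving you interactively), one approach, assuming the tarball contains a single top-level directory, is to read it from the archive listing before extracting:
#!/bin/bash
wget https://johnvansickle.com/ffmpeg/builds/ffmpeg-git-i686-static.tar.xz || exit 1
# The first entry in the listing is the top-level directory, e.g. "ffmpeg-git-20210501-i686-static/"
topdir=$(tar -tf ffmpeg-git-i686-static.tar.xz | head -n 1 | cut -d/ -f1)
tar xf ffmpeg-git-i686-static.tar.xz || exit 1
cp "./$topdir/ffmpeg" /usr/local/bin/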

how to unzip a file using unzip command?

I have a script which creates a folder named "data". It then downloads files using wget and moves them (.zip format) from the current directory to the folder "data". After that I want to unzip these files. I'm using unzip filename.zip and it works when I run it on the command line, but I don't know why it's not working in the script.
Here is the script:
#!/bin/bash
mkdir data
wget http://187.191.75.115/gobmx/salud/datos_abiertos/datos_abiertos_covid19.zip && mv datos_abiertos_covid19.zip data && unzip datos_abiertos_covid19.zip
wget http://187.191.75.115/gobmx/salud/datos_abiertos/diccionario_datos_covid19.zip && mv diccionario_datos_covid19.zip data && unzip diccionario_datos_covid19.zip
datos_abiertos_covid19.zip and diccionario_datos_covid19.zip are the files I want to unzip once they are in my folder "data". I would really appreciate it if someone could help me. Thanks in advance!
It fails because unzip foo.zip assumes foo.zip is in the current directory, but you just moved it to the subdirectory data. Interactively, you probably cd into data first, which is why it works there.
To make it work in your script, just have your script cd data as well:
#!/bin/bash
mkdir data
cd data || exit 1
wget http://187.191.75.115/gobmx/salud/datos_abiertos/datos_abiertos_covid19.zip && unzip datos_abiertos_covid19.zip
That way, the file is downloaded directly to the data directory so no mv is necessary, and the unzip command works as expected.
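Alternatively, if you'd rather not change directories at all, wget's -P option and unzip's -d option both take a target directory, so neither cd nor mv is needed. A sketch of the same first download step:
#!/bin/bash
mkdir -p data
# -P: directory wget saves into; -d: directory unzip extracts into
wget -P data http://187.191.75.115/gobmx/salud/datos_abiertos/datos_abiertos_covid19.zip && unzip data/datos_abiertos_covid19.zip -d data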
My approach:
#!/bin/bash
set -e # Exit if any command fails
mkdir data
pushd ./data >/dev/null
for i in 'datos_abiertos_covid19.zip' 'diccionario_datos_covid19.zip'; do
# Don't unzip (or exit) if 'wget' fails, don't exit if 'unzip' fails
wget "http://187.191.75.115/gobmx/salud/datos_abiertos/$i" -O "./$i" || continue
unzip "./$i" || true
done
popd >/dev/null
The file names don't need to be quoted in this case, but I did so anyway, to emphasise that you can (and should) do so when necessary.
You could of course use variables for the file list, URL, download directory, etc. if you wanted to build a more general script for downloading zip files.
I know it's tagged bash, but it's worth mentioning: pushd and popd are not defined in POSIX; you can change those to cd ./data and cd .. for more portability, as in the sketch below. Obviously wget is not POSIX either, but it is very common (see this thread for interesting info on that topic).
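For reference, a sketch of the same loop with pushd/popd swapped for plain cd (wget is still assumed to be installed):
#!/bin/sh
set -e # Exit if any command fails
mkdir data
cd ./data
for i in 'datos_abiertos_covid19.zip' 'diccionario_datos_covid19.zip'; do
# Skip unzip if wget fails; don't abort if unzip fails
wget "http://187.191.75.115/gobmx/salud/datos_abiertos/$i" -O "./$i" || continue
unzip "./$i" || true
done
cd ..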

wp-cli works in windows cmd, but not in Gitbash

I've installed wp-cli using Git Bash and created the relevant PATH variables.
I am now able to type 'wp' into the Windows CMD and it works, but Git Bash doesn't recognize the command.
What must I do to get this working with Git Bash, and why doesn't it work out of the box?
I ran into the same issue. For example, the command "wp cli version" works in cmd but not in Cygwin.
Check following tutorial: https://deluxeblogtips.com/install-wp-cli-windows/
If you are using Cygwin, you will need to create another wp file (without the .bat extension). Just name it wp with the following content:
#!/usr/bin/env bash
dir=$(d=${0%[/\\]*}; cd "$d"; pwd)
# See if we are running in Cygwin by checking for the cygpath program
if command -v 'cygpath' >/dev/null 2>&1; then
    # Cygwin paths start with /cygdrive/ which will break Windows PHP,
    # so we need to translate the dir path to Windows format. However,
    # we could be using Cygwin PHP, which does not require this, so we
    # test whether the path to PHP starts with /cygdrive/ rather than /usr/bin
    if [[ $(which php) == /cygdrive/* ]]; then
        dir=$(cygpath -m "$dir")
    fi
fi
"${dir}/wp-cli.phar" "$@"

Uninstalled Anaconda still shows up in PATH (Mac OS X)

I installed Anaconda a few months ago, but then uninstalled it and removed all Anaconda files using
rm -rf ~/anaconda
but when I run
echo $PATH
it still outputs a path that points to an Anaconda folder, but when I search for it, that folder doesn't even exist. Why is that happening?
What makes you think that non-existent directories are automatically removed from $PATH? They are not. As an example, I can make a new directory and go there:
$ mkdir /tmp/new-path-dir && cd /tmp/new-path-dir
Add it to the $PATH:
$ PATH=/tmp/new-path-dir:$PATH
$ echo $PATH
/tmp/new-path-dir:<REST_OF_PATH>
Make a new olleh.so (hello spelled backwards) executable inside it:
$ echo 'echo hi' > olleh.so && chmod +x olleh.so
Then go back to ~:
$ cd ~
And run olleh.so:
$ olleh.so
hi
Now I can safely remove /tmp/new-path-dir:
$ rm -r /tmp/new-path-dir/
And it still will be shown in my $PATH:
$ echo $PATH
/tmp/new-path-dir:<REST_OF_PATH>
But I won't be able to run olleh.so any more:
$ olleh.so
bash: /tmp/new-path-dir/olleh.so: No such file or directory
And since paths to executables are cached by bash, I can get rid of olleh.so permanently like this:
$ hash -r
$ olleh.so
bash: olleh.so: command not found
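To make the stale entry go away, remove the line the Anaconda installer added to your shell startup file (on macOS this is typically ~/.bash_profile or ~/.zshrc; the exact file and the line below are assumptions, so check yours), then open a new shell or reload the file:
# Hypothetical line added by the Anaconda installer; delete or comment it out:
# export PATH="$HOME/anaconda/bin:$PATH"
# Then reload the startup file and confirm the directory is gone:
source ~/.bash_profile
echo "$PATH"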

Cannot Create Directories On Ubuntu With Bash Shell Script

I'm trying to run this bash shell script to create directories for vim syntax highlighting on Ubuntu 13.04 (via Vagrant 1.4.1 on Windows 7).
#!/usr/bin/env bash
basevim="$HOME/.vim"
ftdetect="${basevim}/ftdetect"
indent="${basevim}/indent"
syntax="${basevim}/syntax"
echo "Setting up VIM for syntax highlighting"
#Create directories for vim syntax highlighting
if [ ! -d "$basevim" ]; then
echo "Adding VIM syntax highlighting dirs"
mkdir "$basevim"
mkdir "$ftdetect"
mkdir "$indent"
mkdir "$syntax"
else
if [ ! -d "$ftdetect" ]; then
mkdir "$ftdetect"
fi
if [ ! -d "$indent" ]; then
mkdir "$indent"
fi
if [ ! -d "$syntax" ]; then
mkdir "$syntax"
fi
fi
This executes as a provision.sh script for Vagrant, so as far as I know it should run as root. I can see the echoed message, so it's taking the first branch. But for the life of me I can't get this to work; there are no complaints, but the directories don't get created. If I set those variables at an interactive prompt, I need to do sudo mkdir ftdetect (etc.) to get the directories created. Strangely, I don't need sudo to get the .vim directory created, at least that's what I recall.
I tried
if [ ! -d "${basevim}" ]; then
but that didn't do anything. I also tried
basevim="{$HOME}/.vim"
but also no dice. Any thoughts on what I may be missing? As I said, as far as I know it shouldn't be necessary to use sudo in a Vagrant provisioning script. I can tell the script is being run because those echoed messages are getting output.
Your script could be replaced by
mkdir -p "$HOME/.vim"/{ftdetect,indent,syntax}
As for the directories not appearing... Where are you looking for them?
Running this as root would create them in root's home directory, /root/, and not in the user's home directory /home/username. When in doubt, use absolute path names (and chown as needed afterwards).
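If the script has to run as root (as a Vagrant provisioner typically does) but the directories should end up in the regular user's home, one sketch, assuming the box's default vagrant user, is:
#!/usr/bin/env bash
# Assumption: the target user on the box is "vagrant"; adjust to your box.
basevim="/home/vagrant/.vim"
mkdir -p "$basevim"/{ftdetect,indent,syntax}
chown -R vagrant:vagrant "$basevim"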
