append unique id to file then scp in bash

The bash script (thank you @Charles Duffy) stores each unique id in an array and then passes them through %q to build the unique path. That seems to work; what I am having trouble with is renaming each .png with the unique value from %q. I thought it was working, but upon closer inspection only one .png file is being sent to scp, and with the wrong unique id. It seems the first .png is being used by scp, but with the last unique id. In this example there are 2 ids, but there may be more or fewer. I added a loop and that did not seem to work, so I am at a loss. Thank you :).
I hope this helps, and thank you :).
├── /path/to/ --- common path after ssh ---
│   ├── ID1* --- %q represents the unique id and * represents random text after it ---
│   │   └── /%q*/folder
│   ├── ID2* --- %q represents the unique id and * represents random text after it ---
│   │   └── /%q*/folder
Description:
After ssh-ing to the common path on the server, each unique ID from %q is used to navigate further down to folder. In each folder there is a png (cn_results) that the unique ID from %q is appended to (ID-cn_results), and this appended file is scp'd to xxx@xxx.xx.xx.xxx:/path/to/%q*/destination.
declare -p array='([0]="ID1" [1]="ID2")' --- this is where the rename values are ---
current output in each /path/to/%q*/folder --- on the server ---
cn_results.png
desired output in each /path/to/%q*/destination after scp
uniqueid1-cn_results.png
uniqueid2-cn_results.png
I can manually ssh into the directory and the .png is there, though it is only cn_results before the scp step, where it should be renamed/appended with the array value and then scp'd. I tried to add the loop to rename and scp as follows:
printf -v cmd_q '(cd /path/to/%q*/*/folder && for ID in "${array[@]}" ; do exec sshpass -f file.txt scp "$ID" xxx@xxx.xx.xx.xxx:path/to/destination/${ID}-cn_results.png)\n' "${array[@]}" ; done
sshpass -f out.txt ssh -o strictHostKeyChecking=no -t xxx@xxx.xx.xx.xx "$cmd_q"

Here's a command that I think might produce the desired results. It copies all files named cn_results.png in the source-folder to the target folder, prepending a unique id generated by $(cat /proc/sys/kernel/random/uuid):
find ./source-folder -name 'cn_results.png' -exec sh -c 'cp "$1" "./target-folder/$(cat /proc/sys/kernel/random/uuid)-cn_results.png"' _ {} \;
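For the original remote rename-and-scp requirement, here is a minimal sketch of one way the loop might be moved to the client side instead of being folded into a single printf string; it reuses the array, the placeholder hosts, the sshpass files and the path layout from the question, and assumes exactly one directory matches each ID* glob on the server:
for ID in "${array[@]}"; do
    # build a remote command for this one ID: enter its folder, make a renamed
    # copy of cn_results.png, and scp that copy on to the destination host
    printf -v cmd_q 'cd /path/to/%q*/folder && cp cn_results.png %q-cn_results.png && sshpass -f file.txt scp %q-cn_results.png xxx@xxx.xx.xx.xxx:path/to/destination/' "$ID" "$ID" "$ID"
    sshpass -f out.txt ssh -o StrictHostKeyChecking=no -t xxx@xxx.xx.xx.xx "$cmd_q"
done
Because each iteration builds and runs its own remote command, every ID is paired with its own .png, which avoids the first-file/last-id mismatch described above.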


Creating variables in running script

I'm trying to convert some files to read-only in a backup environment. Data Domain has a retention-lock feature that can lock files via an external trigger, which is touch -a -t "dateuntillocked" /backup/foo.
In this situation there are also metadata files in the folder that should not be locked; otherwise the next backup job cannot update the metadata file and fails.
I extracted the metadata file names, but the file count can change. For example:
foo1.meta foo2.meta ... fooN.meta
Is it possible to create a variable for each entry and add them to the command dynamically?
Like:
var1=/backup/foo234.meta
var2=/backup/foo322.meta
.
.
varN=/backup/fooNNN.meta
<find command> | grep -v $var1 $var2....varN | while read line; do touch -a -t "$dateuntillocked" "$line"; done
Another elaboration of the case:
For example, you run ls in a folder, but the number of files can differ over time. The script should create a variable for every file and use it in a touch command inside a while loop. If there are 3 files in the folder, the script creates 3 variables and uses all 3 with touch in the loop; if ls finds 4 files, it dynamically creates 4 variables for the files and uses them all, and so on. I am not a programmer, so my logic may be off. There may be an easier way to do this.
Just guessing what your intentions might be.
You can combine find | grep | command into a single command:
find /backup -type f ! -name 'foo*.meta' -exec touch -a -t "$dateuntillocked" {} +
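If the metadata names really do have to be collected first (for example because they don't follow a single pattern), here is a sketch of the same idea using a bash array instead of numbered variables; the /backup path and the touch invocation are from the question, the rest is illustrative and relies on GNU find's -printf:
# gather the metadata file names into one array (replaces var1, var2, ... varN)
mapfile -t meta < <(find /backup -maxdepth 1 -type f -name '*.meta' -printf '%f\n')
# turn every collected name into a "! -name <file>" exclusion for find
exclude=()
for m in "${meta[@]}"; do
    exclude+=( ! -name "$m" )
done
# lock everything except the collected metadata files
find /backup -type f "${exclude[@]}" -exec touch -a -t "$dateuntillocked" {} +
The array grows or shrinks with the number of .meta files, so nothing has to be declared per file.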

Why does Rsync of tree structure into root break filesystem on Raspberry Pi?

I have developed an application which I am trying to install on raspberry pi via a script. The directory structure I have is this:
pi@raspberrypi:~/inetdrm $ tree files.rpi/
files.rpi/
├── etc
│   └── config
│       └── inetdrm
├── lib
│   └── systemd
│       └── system
│           └── inetdrm.service
└── usr
    └── local
        └── bin
            └── inetdrm
When I try to install the tree structure onto the pi with this install.sh script:
#! /bin/bash
FILES="./files.rpi"
sudo rsync -rlpt "$FILES/" /
sudo chmod 644 /lib/systemd/system/inetdrm.service
sudo chmod +x /usr/local/bin/inetdrm
#sudo systemctl start inetdrm.service
#sudo systemctl enable inetdrm.service
the filesystem on the pi breaks. I lose all access to commands and the script fails, as shown in this transcript.
pi@raspberrypi:~/inetdrm $ ./install.sh
./install.sh: line 4: /usr/bin/sudo: No such file or directory
./install.sh: line 5: /usr/bin/sudo: No such file or directory
pi@raspberrypi:~/inetdrm $ ls
-bash: /usr/bin/ls: No such file or directory
pi@raspberrypi:~/inetdrm $ pwd
/home/pi/inetdrm
pi@raspberrypi:~/inetdrm $ ls /
-bash: /usr/bin/ls: No such file or directory
pi@raspberrypi:~/inetdrm $
Rebooting the pi results in kernel panic due to no init. Does anyone know what's going on?
I encountered the same issue. Turns out Rsync is not the right tool for the job. My solution was to deploy with the script below. Before writing the files to the target destination, it checks if the file contents are different. So it won't overwrite if the files are already there. You could even run this automatically on every reboot.
#!/usr/bin/env bash
FILES="files.rpi"
deploy_dir () {
    shopt -s nullglob dotglob
    for SRC in "${1}"/*; do
        # Strip files dir prefix to get destination path
        DST="${SRC#$FILES}"
        if [ -d "${SRC}" ]; then
            if [ -d "${DST}" ]; then
                # Destination directory already exists,
                # go one level deeper
                deploy_dir "${SRC}"
            else
                # Destination directory doesn't exist,
                # copy SRC dir (including contents) to DST
                echo "${SRC} => ${DST}"
                cp -r "${SRC}" "${DST}"
            fi
        else
            # Only copy if contents aren't the same
            # File attributes (owner, execution bit etc.) aren't considered by cmp!
            # So if they change somehow, this deploy script won't correct them
            if ! cmp --silent "${SRC}" "${DST}"; then
                echo "${SRC} => ${DST}"
                cp "${SRC}" "${DST}"
            fi
        fi
    done
}
deploy_dir "${FILES}"
Ok, so after a good night's sleep, I worked out what is going on.
Rsync doesn't just do a simple in-place overwrite. By default it writes each transferred file to a temporary file at the destination and then moves that temporary file into place. When doing a folder merge it seems to do something similar, causing (in my case) the binaries in the /usr/* tree to be replaced while some were still in use.
The solution:
use --inplace
ie:
sudo rsync --inplace -rlpt "$FILES/" /
which causes rsync to work on the files (and directories, it seems) in their existing location rather than doing a copy-and-move.
I have tested the solution and confirmed it works, but I cannot find any explicit mention of how rsync handles a directory merge without the --inplace flag, so if someone can provide more info, that'd be great.
UPDATE: I found that when using --inplace the issue still occurs if rsync is interrupted for some reason. I'm not entirely certain about the inner workings of directory merge in rsync, so I have concluded that it may not be the best tool for this job. Instead I wrote my own deployment function. Here it is in case anyone stumbling across this post finds it useful:
#! /bin/bash
FILES="files.rpi"
installFiles(){
    FILELIST=$(find "$1" -type f)
    for SRC in $FILELIST; do
        DEST="/$(echo "$SRC" | cut -f 2- -d/)"
        DIR=$(dirname "$DEST")
        if [ ! -d "$DIR" ]; then
            sudo mkdir -p "$DIR"
        fi
        echo "$SRC => $DEST"
        sudo cp "$SRC" "$DEST"
    done
}
installFiles "$FILES"

youtube-dl get channel / playlist name and pass it to bash one line script

I'm trying to download an entire YouTube channel, and that worked.
But the directories end up named like below, so I have to rename them all manually.
I need a way to pass the channel / playlist name and id to the script instead of fetching the url.
Script I used:
# get working/beginning directory
l=$(pwd)
clear
# get playlist data from the channel's playlists page
youtube-dl -j --flat-playlist \
    "https://www.youtube.com/channel/UC-QDfvrRIDB6F0bIO4I4HkQ/playlists" \
    | cut -d ' ' -f4 \
    | cut -c 2-73 \
    | while IFS= read -r line; do
        # loop: do this for every playlist link
        # make a directory named by the playlist identifier in the url
        mkdir "${line:38:80}"
        # change directory to the new directory
        cd "$l/${line:38:80}"
        # download the playlist
        youtube-dl -f mp4 "$line"
        # print the playlist's absolute dir to the user
        pwd
        # change directory back to the beginning directory
        cd "$l"
    done
Names of the directories:
.
├── PLxl69kCRkiI0oIqgQW3gWjDfI-e7ooTUF
├── PLxl69kCRkiI0q0Ib8lm3ZJsG3HltLQDuQ
├── PLxl69kCRkiI1Ebm-yvZyUKnfoi6VVNdQ7
├── ...
└── PLxl69kCRkiI3u-k02uTpu7z4wzYLOE3sq
This is not working:
https://github.com/ytdl-org/youtube-dl/issues/23442
# any playlist is seen as private
youtube-dl -J \
https://m.youtube.com/playlist?list=PL3GeP3YLZn5jOiHM8Js1_S0p_5HeS7TbY \
| jq -r '.title'
How to use youtube-dl from a python program?
I need it for a bash script, not for Python.
Edit: simply explained:
How do I get the channel / playlist name from youtube-dl in bash and use it in place of the list id as the directory name in this script?
Consider the following:
#!/usr/bin/env bash
# if we don't delete slashes from titles there are serious security issues here
slash=/
# note that this url, being in quotes, needs no backslash escaping.
url='https://www.youtube.com/playlist?list=PLXmMXHVSvS-CoYS177-UvMAQYRfL3fBtX'
# First, get the name for the whole playlist
playlist_title=$(youtube-dl -J --flat-playlist "$url" | jq -r .title) || exit
# ...and, for the rest of the script, work under a directory named by that title
mkdir -p -- "${playlist_title//$slash/}" || exit
cd "${playlist_title//$slash/}" || exit
# Finally, loop over the individual videos and download them one at a time.
# ...arguably, you could tell youtube-dl to do this itself; call it an exercise.
youtube-dl -j --flat-playlist "$url" |    # one JSON document per playlist entry
  jq -r '[.id, .title] | @tsv' |          # write id, then title, tab-separated
  while IFS=$'\t' read -r id title; do (  # read that id and title from jq
    # because of the ()s, this is a subshell; exits just go to the next item
    # ...which is also why exec doesn't end the entire script.
    dir=${title//$slash/}    # take slashes out of the title to form directory
    mkdir -p -- "$dir" || exit
    cd -- "$dir" || exit     # if cd fails, do not download anything
    exec youtube-dl "$id"    # exec is a minor perf optimization; consume subshell
  ); done
Note:
We're using jq to convert the JSON to a more safely-readable format that we can parse without byte offsets.
The extra backslashes were removed from the URL to prevent the 404 error described in the comments to the question.
Putting the body of the loop in a subshell with parentheses means that the cd inside that subshell is automatically reversed when the parenthesized section is exited.
We don't trust titles not to contain slashes -- you don't need someone naming their video /etc/sudoers.d/hi-there or otherwise placing files in arbitrarily-chosen places.
Note the use of "$dir" instead of just $dir. That's important; see shellcheck warning SC2086.
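To tie this back to the channel-wide loop in the question, the per-playlist title lookup from the first half of this answer can replace the id-based mkdir. Here is a sketch that keeps the question's own link extraction (the byte-offset cuts are kept only for brevity; the jq-based parsing above is the more robust route) and names each directory after the playlist title, slashes stripped as before:
#!/usr/bin/env bash
slash=/
youtube-dl -j --flat-playlist \
    "https://www.youtube.com/channel/UC-QDfvrRIDB6F0bIO4I4HkQ/playlists" \
    | cut -d ' ' -f4 \
    | cut -c 2-73 \
    | while IFS= read -r line; do (
        # look up the playlist's human-readable title, as earlier in this answer
        title=$(youtube-dl -J --flat-playlist "$line" | jq -r .title) || exit
        dir=${title//$slash/}
        mkdir -p -- "$dir" || exit
        cd -- "$dir" || exit
        exec youtube-dl -f mp4 "$line"
    ); done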

What is the reason my expansion is not working

I have this code to generate a random directory structure given some parameters, and I am running into the eval limits, so I am trying to use xargs to work around them as per @ThatOtherGuy, but I am doing something wrong.
DIRCMD="mkdir -p $OUTDIR/\"$FLDIR\"/$FLCHILDREN"
In the VS Code watch window:
declare -- DIRCMD="mkdir -p ./rndpath/\"LUl\"/{\"KYh\",\"NQ \",\"NU\",\"Hjn\",\"lS\",\"TEW\"}/{\"Rbf\",\"DU\",\"N4\",\"Da7o\",\"aNK\",\"2oS\"}"
And do
eval "$DIRCMD"
Everything works unless I hit the eval expansion limits.
As per @ThatOtherGuy, trying to work around the limitation, I tried
dircmd1="printf "%s\0 " $OUTDIR/\"$FLDIR\"/$FLCHILDREN"
and
eval "$dircmd1" | xargs -0 mkdir -p
[admin@119 rndpath]$ tree -a --dirsfirst -s ./
./
└── [ 4096] LUl
└── [ 4096] {KYh,NQ,NU,Hjn,lS,TEW}
└── [ 4096] {Rbf,DU,N4,Da7o,aNK,2oS}
What am I doing wrong?
Here is the answer:
DIRCMD="printf \"%s\0\" ./\'$FLDIR\'/$FLCHILDREN" then eval "$DIRCMD" | xargs -0 mkdir -p
Somehow, without the backslashes it did not work.
I have no idea why, and it would be nice to know.
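A likely explanation, illustrated below with throwaway stand-ins for $FLDIR and $FLCHILDREN: inside a double-quoted assignment, \" stores a literal quote and \0 is kept as-is (the backslash is only special before $, `, ", \ and newline), so eval later runs printf "%s\0" ... and emits NUL separators for xargs -0. Without the backslashes, the inner quotes close the quoted sections early, so %s\0 ends up outside any quotes; there the backslash is simply stripped, printf never sees a \0 escape, and no NUL-delimited output reaches xargs.
# throwaway stand-ins for the question's variables
FLDIR='LUl'
FLCHILDREN='{KYh,NQ}'
# with the escapes, the inner quotes and the \0 survive into the variable ...
DIRCMD="printf \"%s\0\" ./'$FLDIR'/$FLCHILDREN"
declare -p DIRCMD                    # stored text: printf "%s\0" ./'LUl'/{KYh,NQ}
# ... so eval brace-expands the paths and printf NUL-terminates each one
eval "$DIRCMD" | xargs -0 mkdir -p   # creates ./LUl/KYh and ./LUl/NQ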

Makefile: Reading/splitting an array

Say you had the following directory structure:
# directory structure
├── GIT-REPO
│   ├── dev
│   ├── production
│   ├── mgmt
I'm looking for a way in a Makefile to find the environment based on what directory it is living in. I found a way to do this in bash with the following:
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
IFS='/' read -r -a DIR_ARRAY <<< "$DIR"
GIT_REPO=some-repo
for ((i=0; i < ${#DIR_ARRAY[@]}; i++)); do
    if [ "${DIR_ARRAY[$i]}" = "$GIT_REPO" ]; then
        echo "${DIR_ARRAY[$i+1]}"
    fi
done
But I'm having a hard time translating this into a Makefile. Each of these environment directories will have a Makefile as well as subdirectories. I want to be able to dynamically look up what environment it is under by finding the name of the directory to the right of the $GIT_REPO directory.
So here's an example:
/home/user/git_repo/mgmt
/home/user/git_repo/prod
/home/user/git_repo/prod/application/
/home/user/git_repo/dev/
/home/user/my_source_files/git_repo/prod/application
You'll see there are some similarities, but the number of path components differs. They all share the git_repo directory and all contain an environment (prod, dev, mgmt). At the top level of each directory above is a Makefile where I want to pull the environment. My bash example was a lot more complicated than it needed to be; I could use sed instead. This is what is in my Makefile now:
GIT_REPO=my_repo
ENV=$(shell pwd | sed "s/^.*\/$(GIT_REPO)\///" | cut -d / -f 1)
What this does is look for the Git repository name in the path and strip it along with everything before it. Then we apply cut, split on '/', and grab the first remaining element. This will always return the environment folder.
I have a very specific use case where I want to dynamically get the environment in my Makefile rather than statically defining it each time.
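As a minimal usage sketch (the show-env and deploy targets are illustrative, not from the question), the derived value behaves like any other make variable:
GIT_REPO=my_repo
ENV=$(shell pwd | sed "s/^.*\/$(GIT_REPO)\///" | cut -d / -f 1)
# recipe lines below must be indented with a tab
show-env:
	@echo "environment: $(ENV)"
deploy: show-env
	@echo "deploying $(ENV) ..."
Running make deploy from, say, /home/user/my_repo/prod/application prints prod, because $(shell pwd) is evaluated in the directory make was invoked from.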
