How to move all files in a subfolder except certain files [closed] - bash

I usually download with FileZilla into the directory /Public/Downloads on my NAS.
I made a script that FileZilla executes when the download queue is finished, so all my downloads are moved to /Public/Downloads/Completed. My /Public/Downloads directory also contains two files and three directories that must not be moved:
folder.jpg
log.txt
Temp
Cache
Completed
I tried this command:
find /Public/Downloads/* -maxdepth 1 | grep -v Completed | grep -v Cache | grep -v Temp | grep -v log.txt | grep -v folder.jpg | xargs -i mv {} /Public/Downloads/Completed
This works for downloaded files and folders whose names have no special characters: they are moved to /Public/Downloads/Completed.
But when a name contains a space, an à, or something else special, xargs complains: "unmatched single quote; by default quotes are special to xargs unless you use the -0 option".
I've searched for a solution myself but haven't found anything for my needs combining find, grep and xargs for both files and directories.
How do I have to modify my command?

This is just a suggestion to change your strategy, and it is not about xargs. You only need the bash shell, with mv as the only external tool.
#!/usr/bin/env bash
shopt -s nullglob extglob
# Names that must never be moved out of /Public/Downloads.
array=(
  folder.jpg
  log.txt
  Temp
  Cache
  Completed
)
# Join the array with | into one extended glob: *@(name1|name2|...)
to_skip=$(IFS='|'; printf '%s' "*@(${array[*]})")
for item in /Public/Downloads/*; do
  [[ $item == $to_skip ]] && continue
  echo mv -v "$item" /Public/Downloads/Completed/ || exit
done
Remove the echo once you have confirmed that the output is correct.
To see what the code is doing, add set -x after the shebang, or run it as bash -x my_script, assuming my_script is the name of your script.
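For reference, given the array above, to_skip ends up holding this single extended glob (shown here just for illustration):
*@(folder.jpg|log.txt|Temp|Cache|Completed)
Any item whose path ends in one of those five names matches the pattern and is skipped by the loop.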

So I've changed my strategy to a loop using a temporary text file:
#!/bin/bash
ls -p /Public/Downloads | grep -v "Cache/" | grep -v "Temp/" | grep -v "Completed/" | grep -v 'log.txt' | grep -v 'folder.jpg' > /Public/Downloads/Completed/temp.txt
while IFS='' read -r CURRENT || [ -n "$CURRENT" ]; do
  mv /Public/Downloads/"$CURRENT" /Public/Downloads/Completed
done < /Public/Downloads/Completed/temp.txt
rm /Public/Downloads/Completed/temp.txt
1) I write the list of directories and files to be moved into "temp.txt" with ls.
2) Each line of "temp.txt" is read into the $CURRENT variable, so each file and directory is moved one by one with mv. $CURRENT is double-quoted in the mv command in case a directory or file name contains a space.
3) "temp.txt" is deleted.

Based on the OP's input, the main issue is file names with "special" characters, including spaces. Two options:
If no input file has an embedded newline in its name (which is the case here), the problem can be addressed by explicitly setting the delimiter to newline (-d '\n'). See below.
If any file name can contain a newline, the whole pipeline has to use zero-terminated strings; a sketch follows the command below. That does not seem to be the case here.
find /Public/Downloads/* -maxdepth 1 |
grep -v Completed |
grep -v Cache |
grep -v Temp |
grep -v log.txt |
grep -v folder.jpg |
xargs -d '\n' -i mv {} /Public/Downloads/Completed
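For completeness, a minimal sketch of the zero-terminated variant (option 2), assuming GNU find, grep and xargs; it also survives newlines in file names:
find /Public/Downloads/ -mindepth 1 -maxdepth 1 -print0 |
  grep -zv -e Completed -e Cache -e Temp -e log.txt -e folder.jpg |
  xargs -0 -I{} mv {} /Public/Downloads/Completed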

Related

How to oneline two variables via echo?

I am trying to search for files and separate the path and the version into variables, because each will be needed later to create a directory and to unzip a .jar into the desired path.
file=$(find /home/user/Documents/test/ -path *.jar)
version=$(echo "$file" | grep -P -o '[0-9].[0-9].[0-9].[0-9]')
path=$(echo "$file" | sed 's/\(.*\)[/].*/\1/')
newpath=$(echo "${path}/${version}")
echo "$newpath"
result
> /home/user/Documents/test/gb0500
> /home/user/Documents/test/gb0500 /home/user/Documents/test/gb0500
> /home/user/Documents/test /home/user/Documents/test/1.3.2.0
> 1.3.2.1
> 1.3.2.2
> 1.2.0.0
> 1.3.0.0
It's hilarious that it only works when there is just one line.
what else I tried:
file=$(find /home/v990549/Dokumente/test/ -path *.jar)
version=$(grep -P -o '[0-9].[0-9].[0-9].[0-9]')
path=$(sed 's/\(.*\)[/].*/\1/')
while read $file
do
echo "$path$version"
done
I have no experience in scripting. That's what I figured out over the past few days. I am just practicing and trying to make life easier.
find output:
/home/user/Documents/test/gb0500/gb0500-koetlin-log4j2-web-1.3.2.0-javadoc.jar
/home/user/Documents/test/gb0500/gb0500-koetlin-log4j2-web-1.3.2.1-javadoc.jar
/home/user/Documents/test/gb0500/gb0500-koetlin-log4j2-web-1.3.2.2-javadoc.jar
/home/user/Documents/test/gb0500-co-log4j2-web-1.2.0.0-javadoc.jar
/home/user/Documents/test/gb0500-commons-log4j2-web-1.3.0.0-javadoc.jar
As both variables version and path are newline-separated, how about:
file=$(find /home/user/Documents/test/ -path '*.jar')
version=$(echo "$file" | grep -P -o '[0-9].[0-9].[0-9].[0-9]')
path=$(echo "$file" | sed 's/\(.*\)[/].*/\1/')
paste -d "/" <(echo "$path") <(echo "$version")
Result:
/home/user/Documents/test/gb0500/1.3.2.0
/home/user/Documents/test/gb0500/1.3.2.1
/home/user/Documents/test/gb0500/1.3.2.2
/home/user/Documents/test/1.2.0.0
/home/user/Documents/test/1.3.0.0
By the way, I do not recommend storing multiple filenames in a single
newline-separated variable, for several reasons:
Filenames may contain a newline character.
It is not easy to manipulate the value of each line.
For instance, if file contained just one name, you could write
the third line simply as path=${file%/*}.
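A minimal sketch of processing each file as it is found instead, assuming GNU find and grep; it tolerates spaces and even newlines in file names:
while IFS= read -r -d '' file; do
  version=$(grep -Po '[0-9]+(\.[0-9]+){3}' <<<"${file##*/}")
  path=${file%/*}
  echo "$path/$version"
done < <(find /home/user/Documents/test/ -name '*.jar' -print0)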
Hope this helps.

How to write a Bash script to edit many text files using the same commands? [duplicate]

This question already has answers here: Run script on multiple files (3 answers).
I'm very new to bash. I have ten text files that I want to edit with the same line of code.
#!/bin/bash
sed -i -e 's/.\{6\}/&\n/g' -e 's/edit/edit2/g' | tr -d "\n" | sed 's/edit2/edit/g'| grep -o "here.*there" | sed -r '/^.{,100}$/d'
< files 1-10
I know I could use sed -f sed.sh <file1 >file1 but that only works with sed commands and it only works one file at a time?
Do I have to run a loop?
There are some great existing answers on the Unix Stack Exchange that help deal with your problem. Specifically, from this post, they use a loop to recursively visit all the files under a particular directory, as follows:
( shopt -s globstar dotglob;
  for file in **; do
    if [[ -f $file ]] && [[ -w $file ]]; then
      sed -i -- 's/foo/bar/g' "$file"
    fi
  done
)
Note the line shopt -s globstar dotglob;, which enables the ** recursive glob (globstar) and makes globs match hidden files as well (dotglob). We also enclose the code in parentheses, which runs it in a subshell and prevents the shopt settings from leaking into the rest of the session.
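For instance, a quick illustration of why the parentheses matter (hypothetical interactive session):
( shopt -s globstar )   # globstar is enabled only inside the subshell
shopt globstar          # back outside, this still reports: globstar off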
If you would like to apply this example to your files, you can just place them in the current directory, and the code would probably look something like this. Note that sed -i cannot sit in the middle of a pipeline the way your one-liner does, so here the pipeline reads from "$file" and the result is written back to the file afterwards:
( shopt -s globstar dotglob
  for file in **; do
    if [[ -f $file ]] && [[ -w $file ]]; then
      out=$(sed -e 's/.\{6\}/&\n/g' -e 's/edit/edit2/g' "$file" | tr -d "\n" | sed 's/edit2/edit/g' | grep -o "here.*there" | sed -r '/^.{,100}$/d')
      printf '%s\n' "$out" > "$file"
    fi
  done
)
There is another example given in the linked post that lets you pick which files to run on, rather than all the files in a directory, which you can also re-purpose for your code, as given here:
( shopt -s globstar dotglob
  sed -i -- 's/foo/bar/g' **baz*
  sed -i -- 's/foo/bar/g' **.baz
)
To answer your question about working line by line, you would put a while read loop inside the for loop, like so:
while read -r line; do
  printf '%s\n' "$line" | sed -e 's/.\{6\}/&\n/g' -e 's/edit/edit2/g' | tr -d "\n" | sed 's/edit2/edit/g' | grep -o "here.*there" | sed -r '/^.{,100}$/d'
done < "$file"
Although the for loop can be useful for dealing with files in recursive directories, I would recommend against also using another loop to grab lines, since it muddies your code, and it’s possible there is a better way to do it without parsing line by line.
The linked question is a fairly complete guide to many of the cases you may come across, and is also worth a read if you want to learn more.
Hope that helps!
You could use a for loop.
You could use the tool parallel.
Example
Create a set of test files using a for-loop
mkdir -p /tmp/so58333536
cd /tmp/so58333536
for i in 1.txt 2.txt 3.txt 4.txt 5.txt; do echo "The answer is 41" > "$i"; done
cat /tmp/so58333536/*
Now correct the mistake in every file using parallel [1].
mkdir /tmp/so58333536.new
ls /tmp/so58333536/* |parallel "sed 's/41/42/' {} > /tmp/so58333536.new/{/}"
cat /tmp/so58333536.new/*
{} refers to the current file
{/} refers to the basename of the current file (the path is removed)
Read: list all files in so58333536, apply the sed command to each one, and write the output to so58333536.new.
[1] Another option is to use sed -i for in-place editing.
Be very careful with this!! Mistakes can cause serious damage!
# !! Do not use -i option regularly !!
ls /tmp/so58333536/* |parallel "sed -i 's/41/42/'"

How to escape space in file path in a bash script

I have a bash script which needs to go through files in a directory in an iOS device and remove files one by one.
To list files from command line I use the following command:
ios-deploy --id UUID --bundle_id BUNDLE -l | grep Documents
and to go one by one on each file I use the following for loop in my script
for line in $(ios-deploy --id UUID --bundle_id BUNDLE -l | grep Documents); do
echo "${line}"
done
Now the problem is that some file names have spaces in them, and in such cases the for loop treats a single name as two separate entries.
How can I escape that whitespace in the for loop definition so that I get exactly one entry per file?
This might solve your issue:
while IFS= read -r -d $'\n'
do
echo "${REPLY}"
done < <(ios-deploy --id UUID --bundle_id BUNDLE -l | grep Documents)
Edit, per Charles Duffy's recommendation, the same loop with a named variable (-d $'\n' is read's default, so it can be dropped):
while IFS= read -r line
do
echo "${line}"
done < <(ios-deploy --id UUID --bundle_id BUNDLE -l | grep Documents)
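For context, a small demonstration of why the original for loop splits names on spaces while the while read loop does not (hypothetical file names):
files=$'one file.txt\nanother file.txt'
for f in $files; do echo "[$f]"; done                    # 4 words: [one] [file.txt] [another] [file.txt]
while IFS= read -r f; do echo "[$f]"; done <<<"$files"   # 2 lines, spaces preserved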

how to print names of files being downloaded

I'm trying to write a bash script that downloads all the .txt files from a website 'http://www1.ncdc.noaa.gov/pub/data/ghcn/daily/'.
So far I have wget -A txt -r -l 1 -nd 'http://www1.ncdc.noaa.gov/pub/data/ghcn/daily/' but I'm struggling to find a way to print the name of each file to the screen (when downloading). That's the part I'm really stuck on. How would one print the names?
Thoughts?
EDIT: This is what I have done so far, but I'm trying to remove a lot of stuff like ghcnd-inventory.txt</a></td><td align=...
wget -O- $LINK | tr '"' '\n' | grep -e .txt | while read line; do
echo Downloading $LINK$line ...
wget $LINK$line
done
LINK='http://www1.ncdc.noaa.gov/pub/data/ghcn/daily/'
wget -O- $LINK | tr '"' '\n' | grep -e .txt | grep -v align | while read line; do
echo Downloading $LINK$line ...
wget -nv $LINK$line
done
Slight optimization of Sundeep's answer:
LINK='http://www1.ncdc.noaa.gov/pub/data/ghcn/daily/'
wget -q -O- $LINK | sed -E '/.*href="[^"]*\.txt".*/!d;s/.*href="([^"]*\.txt)".*/\1/' | wget -nv -i- -B$LINK
The sed command deletes every line that does not contain href="xxx.txt" and extracts just the xxx.txt part from the lines that remain. The result is passed to a second wget, which reads it as the list of files to retrieve (-i-) and resolves each name against the base URL (-B). The -nv option tells wget to be as quiet as possible without being completely silent: it prints the name of each file as it is downloaded, but almost nothing else. Warning: this works only for this particular web site and does not descend into subdirectories.
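To see what the sed program does, here is a hypothetical index line run through it:
echo '<td><a href="ghcnd-inventory.txt">ghcnd-inventory.txt</a></td>' |
  sed -E '/.*href="[^"]*\.txt".*/!d;s/.*href="([^"]*\.txt)".*/\1/'
# prints: ghcnd-inventory.txt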

Why is grep displaying command as part of output?

I have written a bash script that finds any executable files in our scripts directory, then performs a grep on the resulting files to display a description, if it was included in the file.
A "description" is identified in each file as a line beginning with "# DESC:"
For some reason, the script also includes the grep command that is being run (but only once). Does anyone know why this is?
Script and output shown below. Why does the second line in the output happen?
Script
#!/bin/bash
# Find any FILES that are EXECUTABLE in the SCRIPTS
# directory and display any description, if there is one
find /opt/scripts/. -perm -111 -type f -maxdepth 1 | while read line ;
do
file=$(basename "$line")
printf "\033[1m%10s\033[0m : " $file
grep "# DESC:" "$line" | cut -c 9-
done
Output
desc : Displays all the scripts and their descriptions
DESC:" "$line" | cut -c 9-
showhelp : Displays the script help file
test : Script to perform system testing
Reason
Presumably your listing script is itself in /opt/scripts? So find finds the script, and grep then finds the literal string "# DESC:" on the script's own grep line and prints it.
You could fix that by adding a # DESC: line to the top of the script and outputting only the first match found by each grep, using grep -m1:
grep -m1 '# DESC' "$line" | cut -c 9-
<humour>Otherwise it's just turtles all the way down... ;-)</humour>
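For illustration, the top of the listing script itself might then look like this (hypothetical description text), so that -m1 stops at the real description before it ever reaches the line containing the grep command:
#!/bin/bash
# DESC: Displays all the scripts and their descriptions
# ... rest of the script as before, with grep -m1 doing the matching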
Alternative Fix
You could also tighten the grep by using an extended regular expression anchored to the beginning of the line:
egrep -m1 '^# DESC' "$line" | cut -c 9-
