FINAL EDIT: The $DATE variable was what was screwing me up. For some reason, once I reformatted it, it worked fine. Does anyone know why that was an issue?
Here's the final backup script:
#!/bin/bash
#Vars
OUTPATH=/root/Storage/Backups
DATE=$(date +%d-%b)
#Deletes backups that are more than 2 days old
find "$OUTPATH"/* -mtime +2 -type f -delete
#Actual backup operation
dd if=/dev/mmcblk0 | gzip -1 - | dd of="$OUTPATH"/bpi-"$DATE".img.gz bs=512 count=60831745
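As for why the old $DATE format caused trouble: most likely it was the colon from %H:%M. The backup target is an exFAT drive (see the fstab line further down), and exFAT does not allow : in file names, so the output file could not be created. If a time component is wanted, a colon-free format such as the following should be safe:
DATE=$(date +%d-%b_%H-%M)    # hyphens instead of ':' keep the name exFAT-safe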
OLD SCRIPT:
#!/bin/bash
#Vars
OUTPATH=~/Storage/Backups
DATE=$(date +%d-%b_%H:%M)
#Deletes backups that are more than 2 days old
find "$OUTPATH"/* -mtime +2 -type f -delete
#Actual backup operation
dd if=/dev/mmcblk0 | gzip -1 - | dd of="$OUTPATH"/bpi_"$DATE".img.gz bs=512 count=60831745
This is a script to back up my Banana Pi image to an external hard drive. I am new to bash scripting, so I know this will most likely be an easy fix, but here is my issue:
I am running the script from ~/scripts
and the output path is ~/Storage/Backups (the mount point for the external HDD, specified in my /etc/fstab).
The commands work fine when OUTPATH=., i.e. it just backs up to the current directory that the script is running from. I know I could just move the script to the backup folder and run it from there, but I am trying to add this to my crontab, so it would be good to keep all scripts in one directory for organizational purposes.
Just wondering how to correctly make the script write my image to that $OUTPATH variable.
EDIT: I tried changing the $OUTPATH variable to a test directory located on /dev/root/ (the same device the script itself is on) and it worked, so I'm thinking it's an issue with writing the image to a device different from the one the script is located on.
My /etc/fstab line relating to the external HDD I would like to use is as follows:
/dev/sdb1 /root/Storage exfat defaults 0 0
The /root/Storage/Backups folder is where I am trying to write the image to
Populate OUTPATH with the full pathname of your backups directory.
In
OUTPATH=~/Storage/Backups
tilde expansion is not performed when "$OUTPATH" is later used in
find "$OUTPATH"/* ....
You can either replace the ~ with the full path in OUTPATH, or put the actual path directly in the find command.
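For example, a minimal version of the variable (the /root/Storage/Backups path matches the fstab entry mentioned above):
OUTPATH=/root/Storage/Backups        # full path, or: OUTPATH="$HOME/Storage/Backups"
find "$OUTPATH"/* -mtime +2 -type f -delete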
I am trying to move files that are older than one hour, and that are being created very rapidly (almost every minute), to another folder whose name specifies the particular hour, on AIX.
The script I was trying to run is:
find /log/traces/ -type f -mmin +59 -exec mv '{}' /Directory \;
The above script gives me an error:
find: bad starting directory
I am a newbie to shell scripting.
Any help will be highly appreciated.
------------------Edited-----------------------
I have been able to move the files older than 1 hour, but if the specified folder does not exist, mv creates a file with the name given in the command and dumps all the files into it. The script I am running now is:
find /log/traces -type f -mmin +59 -exec mv '{}' /Directory/ABC-$(date +%Y%m%d_%H) \;
It creates a file named ABC-[current hour]. I want it to create a directory and move all the files into that directory.
If you are not running as the root user, you may be getting this problem because of read permissions on /log/traces/.
To see the permission settings of this directory, run ls -ld /log/traces/; the left-most column will display something like drwxr-xr-x, which describes what permissions the directory has.
You need to ensure that the user you are executing your command as has read access to /log/traces/; that should fix your error.
Does the directory /Directory/ABC-<timestamp> exist before the script runs? I am guessing the directory is not there when you move the files. Make sure the directory exists before you start moving.
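For instance, creating the hour-stamped directory from the question before moving (same paths as in the question):
target="/Directory/ABC-$(date +%Y%m%d_%H)"
mkdir -p "$target"     # make sure the target exists as a directory before moving
find /log/traces -type f -mmin +59 -exec mv '{}' "$target"/ \;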
You can create a small shell script for moving the files that takes the target directory as a parameter along with the file name. If the target directory does not exist, the script creates it; after that it runs mv to move the file into the target directory.
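A minimal sketch of such a wrapper (the script name move_to_dir.sh and its argument order are just illustrative):
#!/bin/sh
# move_to_dir.sh <target-directory> <file>
target=$1
file=$2
[ -d "$target" ] || mkdir -p "$target"   # create the target directory if it is missing
mv "$file" "$target"/
It could then be called from find, e.g. find /log/traces -type f -mmin +59 -exec /path/to/move_to_dir.sh "/Directory/ABC-$(date +%Y%m%d_%H)" '{}' \;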
Don't bother moving the files, just create them in the right folder directly. How?
Let crontab run each hour and create a dir /Directory/ABC-$(date +%Y%m%d_%H).
And now make a symbolic link between /log/traces and the new directory.
If the link /log/traces already exists, you must replace the link itself (and not create a new link inside the directory it points to).
newdir="/Directory/ABC-$(date +%Y%m%d_%H)"
mkdir -p "${newdir}"
ln -snf "${newdir}" /log/traces
The first time, you will need to mv /log/traces /log/traces_old and create the first link during a brief moment when no new files are being created.
Please test first with /log/testtraces, checking that the crontab user has the correct rights.
Over the weekend I decided to try out zsh and have a bit of fun with it. Unfortunately I'm an incredible newbie to shell scripting in general.
I have a folder containing one file, whose filename is a hash (4667e85581f80b6936f8811f0a7493c70eae4ee7) without a file extension.
What I would like to do is copy this file to another folder and rename it to "screensaver.png".
I've tried with the following code:
#!/usr/bin/zsh
KUVVA_CACHE="$HOME/Library/Containers/com.kuvva.Kuvva-Wallpapers/Data/Library/Application Support/Kuvva"
DEST_FOLDER="/Library/Desktop Pictures/Kuvva/$USERNAME/screensaver.png"
for wallpaper in ${KUVVA_CACHE}; do
cp -f ${wallpaper} ${DEST_FOLDER}
done
This returns the following error:
cp: /Users/Morten/Library/Containers/com.kuvva.Kuvva-Wallpapers/Data/Library/Application Support/Kuvva is a directory (not copied).
And when I try to echo the $wallpaper variable instead of doing the cp, it just echoes the folder path.
The name of the file changes every 6 hours, which is why I'm doing the for-loop. I never know what the name of the file will be, but I know that there is always only ONE file in the folder.
Any ideas how I can manage to do this? :)
Thanks a lot!
Morten
It should work with regular filename expansion (globbing).
KUVVA_CACHE="$HOME/Library/Containers/com.kuvva.Kuvva-Wallpapers/Data/Library/Application Support/Kuvva/"
And then copy
cp -f "${KUVVA_CACHE}"/* "${DEST_FOLDER}"
You can add the script to your crontab so it will be run at a certain interval. Edit it using 'crontab -e' and add
30 */3 * * * /location/of/your/script
This will run it every third hour. The first field is minutes; a star matches any value. Exit the editor by pressing the Escape key, then Shift+: and type wq and press Enter (these are vi commands).
Don't forget to 'chmod 0755 file-name' the script so it becomes executable.
Here is the script.
#!/bin/zsh
KUVVA_CACHE="$HOME/Library/Containers/com.kuvva.Kuvva-Wallpapers/Data/Library/Application Support/Kuvva"
DEST_FOLDER="/Library/Desktop Pictures/Kuvva/$USERNAME/screensaver.png"
cp "${KUVVA_CACHE}/"* "${DEST_FOLDER}"
I have a lot of files named the same, with a directory structure (simplified) like this:
../foo1/bar1/dir/file_1.ps
../foo1/bar2/dir/file_1.ps
../foo2/bar1/dir/file_1.ps
.... and many more
As it is extremely inefficient to view all of those ps files by going to the
respective directory, I'd like to copy all of them into another directory, but include
the name of the first two directories (which are those relevant to my purpose) in the
file name.
I have previously tried the following, but then I cannot tell which file came from where, as they are all named consecutively:
#!/bin/bash -xv
cp -v --backup=numbered {} */*/dir/file* ../plots/;
Where ../plots is the folder where I copy them. However, they are now of the form file.ps.~x~ (x is a number) so I get rid of the ".ps.~*~" and leave only the ps extension with:
rename 's/\.ps.~*~//g' *;
rename 's/\~/.ps/g' *;
Then, as the ps files have hundreds of points sometimes and take a long time to open, I just transform them into jpg.
for file in * ; do convert -density 150 -quality 70 "$file" "${file/.ps/}".jpg; done;
This is not really a working bash script, as I have to change the directory manually.
I guess the best way to do it is to copy the files from the beginning with the names of
the first two directories incorporated in the copied filename.
How can I do this last thing?
If you just have two levels of directories, you can use
for file in */*/*.ps
do
ln "$file" "${file//\//_}"
done
This goes over each ps file, and hard links them to the current directory with the /s replaced by _. Use cp instead of ln if you intend to edit the files but don't want to update the originals.
For arbitrary directory levels, you can use the bash specific
shopt -s globstar
for file in **/*.ps
do
ln "$file" "${file//\//_}"
done
But are you sure you need to copy them all to one directory? You might be able to open them all with yourreader */*/*.ps, which depending on your reader may let you browse through them one by one while still seeing the full path.
You should run a find command and print the names first like
find . -name "file_1.ps" -print
Then iterate over each of them and do a string replacement of / to '-' or any other character, like
${filename//\//-}
The general syntax is ${string/substring/replacement}; doubling the first slash, as in ${string//substring/replacement}, replaces every occurrence instead of just the first, which is what is needed here. Then you can copy it to the required directory. The complete script can be written as follows. Haven't tested it (not on Linux at the moment), so you might need to tweak the code if you get any syntax error ;)
for filename in $(find . -name "file_1.ps" -print)
do
    newFileName=${filename//\//-}
    cp "$filename" "YourNewDirectory/$newFileName"
done
You will need to run the script from the root of the directory tree, or change the path given to the find command if you place the script somewhere else.
References
string manipulation in bash
find man page
Is it possible to run a bash script in a temporary folder other than the one in which it actually resides?
My script uses a lot of filenames. I am concerned that one of the many names may coincide with others in the folder. I have named the files according to the data they contain, with reusability in mind.
Do mktemp -d and tempfile -d do the same thing? If so, can someone please illustrate their usage with an example.
Thanks in advance for the replies.
You can switch directories in a running script easily. Bash has a notion of the present working directory, which you can change at any time. For example:
dir=$(mktemp -d)
cd "$dir"
echo "Current directory changed: $PWD"
cd "$OLDPWD"
echo "Back in the old directory: $PWD"
Is it possible to run a bash script in a temporary folder other than the
one in which it actually resides?
Yes, you can use cd in your script to change the current directory.
Do mktemp -d and tempfile -d do the same thing? If so, can someone
please illustrate their usage with an example.
mktemp does not look at file contents; it simply generates a random name and makes sure no directory with that name already exists:
tmpdir=$(mktemp -d)
cd "$tmpdir"
I don't see tempfile on a standard Linux system, and I'm not familiar with the command.
In the old days, we simply appended $$ at the end of file and directory names:
mkdir "mydir.$$"
But mktemp replaces that with a much safer, more secure method.
Usage is generally:
my_temp_dir=$(mktemp -d --tmpdir="$temp_dir" "$template")
The $template is optional; it lets you control the name. A template ends in a run of X characters (e.g. mydir.XXXXXX), which mktemp replaces to guarantee a unique name. If you don't specify $temp_dir, it will generally put the directory under /tmp.
The syntax takes advantage of the fact that mktemp creates the temporary directory and then echoes its name, so you can capture the name of the directory it created.
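For example, a minimal sketch (the cleanup trap is just a common companion pattern, not required):
#!/bin/bash
workdir=$(mktemp -d) || exit 1       # create a unique temporary directory
trap 'rm -rf "$workdir"' EXIT        # remove it again when the script exits
cd "$workdir"
echo "Working in $PWD"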
I'm trying to recover a mate's hard drive. There is no structure whatsoever, so music and images are everywhere, in named folders sometimes more than 5 folders deep. I've managed to write a one-liner that finds the files and copies them to a mounted drive, but it preserves the file structure completely. What I'm after is a bit of code that searches the drive and copies to another location just the parent folder with the mp3/jpg files within it, not the complete path. The other issue is that the music is laid out as /folder/folder/folder/Artist/1.mp3..2.mp3..10.mp3 etc., so I have to preserve the 'Artist' folder to give him any hope of finding his tracks again.
What I have working currently:
find /media/HP/ -name *.mp3 -fprintf /media/HP/MUSIC/Script.sh 'mkdir -p "/media/HP/MUSIC/%h" \n cp "%h/%f" "/media/HP/MUSIC/%h/"\n'
I then run the script.sh and it does all the copying.
Many Thanks
What you probably want to do will be along the lines of:
mkdir "$dest/$(basename $(dirname $source))"
OK folks, thanks for the input. It did make me think more deeply about this, and I've come up with a result with the help of a colleague (thanks SiG):
This one-liner finds the files and writes a script file to run separately, and it does copy across just the last folder, as I wanted initially.
The Code:
find /some/folder/ -name *.mp3 | awk 'BEGIN{FS="/"}{print "mkdir -p \"/some/new/place/" $(NF-1) "\"\ncp -v -p \"" $0 "\" \"/some/new/place/" $(NF-1) "/" $NF "\""}' > script.sh
The output is:
mkdir -p "/media/HP/MUSIC/Satize" cp -v -p "/media/HP/Users/REBEKAH/Music/Satize/You Don't Love Me.mp3" "/media/HP/MUSIC/Satize/You Don't Love Me.mp3"
When script.sh is run it does all the work and I end up with a very reduced file structure I can copy to a new drive.
Thanks again folks much appreciated.
KjF
If you are doing the operation recursively (entering directory by directory), what you can do is save your path each time as Road_Dir=$(pwd) (let's say dir1/dir2/dir3/).
Then, when you detect your artist_name directory, you save it as Music_Dir=$(pwd).
Finally you can extract your artist_name directory with the simple command:
Final_Dir=${Music_Dir##$Road_Dir/} (which removes the prefix $Road_Dir/ from the beginning of $Music_Dir)
With this, Final_Dir will contain artist_name, and you can copy your music file as $Final_Dir/Music.mp3 ...
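A quick illustration with paths borrowed from the earlier output (purely for demonstration):
Road_Dir=/media/HP/Users/REBEKAH/Music
Music_Dir=/media/HP/Users/REBEKAH/Music/Satize
Final_Dir=${Music_Dir##$Road_Dir/}           # -> Satize
mkdir -p "/media/HP/MUSIC/$Final_Dir"
cp -v -p "$Music_Dir"/*.mp3 "/media/HP/MUSIC/$Final_Dir/"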