I have a business scenario where a Unix user FTPs files to a Unix box in the format 'BusinessData_date.dat'. Please note that the date part is dynamic and changes daily, e.g. 'BusinessData_20131210.dat'.
How can I run a copy command to copy the file to a different directory daily, and also archive the previous day's file so that it is not read twice?
Trying out the following... getting an error:
$ cp -pr /Tickets/data/BusinessData_"$(date+%Y%m%d)".dat /sftpdata/dataloader/data/BusinessData_"$(date+%Y%m%d)".csv
You need a space between the command and its argument: date +%Y%m%d, not date+%Y%m%d. Also, you don't need the quotes.
cp -pr ..../BusinessData_$(date +%Y%m%d).dat ..../BusinessData_$(date +%Y%m%d).csv
cp -p /Tickets/data/BusinessData_"$(date +%Y%m%d)".dat /sftpdata/dataloader/data/BusinessData_"$(date +%Y%m%d)".csv
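If you also need the archiving step from the question, here is a minimal sketch; the archive directory /sftpdata/dataloader/archive is an assumption, and date -d is GNU-specific:
#!/bin/bash
# Minimal sketch: copy today's file, then move yesterday's copy aside
# so it is not read twice. The archive path is an assumption.
today=$(date +%Y%m%d)
yesterday=$(date -d "yesterday" +%Y%m%d)  # GNU date; other systems need a different approach

cp -p "/Tickets/data/BusinessData_${today}.dat" \
      "/sftpdata/dataloader/data/BusinessData_${today}.csv"

if [ -e "/sftpdata/dataloader/data/BusinessData_${yesterday}.csv" ]; then
    mv "/sftpdata/dataloader/data/BusinessData_${yesterday}.csv" \
       /sftpdata/dataloader/archive/
fi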
Related
I am using the command cp -a <source>/* <destination> to copy files into one particular destination. This command only replaces files in the destination that are also present in the source; any other files already in the destination are left untouched. Before the copy, I want to take a backup of the files that are about to be replaced. Is there an option in the cp command that does this?
There is no such option in the cp command; you need to write a shell script. First run ls in your destination directory and store the output in a file like history.txt. Then, just before each cp, grep the history file for the file you want to copy, to check whether it is already present in the destination directory. If it is (that is, the name appears in the history file), first back up the file in the destination directory with today's datestamp, and then copy the same file from source to destination.
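A rough sketch of that approach, assuming placeholder paths src and dst and a history file named history.txt:
#!/bin/bash
# Sketch of the ls/grep/backup approach described above; paths are placeholders.
src=/path/to/source
dst=/path/to/destination

ls "$dst" > history.txt
for f in "$src"/*; do
    name=$(basename "$f")
    # If the file already exists in the destination, back it up with today's datestamp
    if grep -qx "$name" history.txt; then
        cp -p "$dst/$name" "$dst/$name.$(date +%Y%m%d)"
    fi
    cp -a "$f" "$dst/"
done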
If you want to back up the files that would be overwritten, use the -b option, available in GNU cp:
cp -ab <source>/* <destination>
There are two caveats you should know about.
To my knowledge, this option is not available on non-GNU systems (like BSD systems).
It will ask for confirmation for each existing file in the target. You can reduce the problem with the -u option, but this is unusable in a script.
It appears to me that you are trying to make a backup (copy files to another location, without erasing them or overwriting those already there), so you probably want to take a look at the rsync command. The same operation would be written:
rsync -ab --suffix=".bak" <source>/ <destination>
and rsync is much more flexible for handling this sort of thing.
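For example, with --suffix=".bak", an existing file such as <destination>/report.txt (a hypothetical name) is first renamed to report.txt.bak before the new copy is written.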
I like to create tar files to distribute some scripts, using bash.
For every script, certain configuration files and libraries (or toolboxes) are needed;
e.g. a script called CheckTool.py needs Checks.ini, CheckToolbox.py and CommonToolbox.py to run, which are stored in specific folders on my hard disk and need to be copied in the same layout onto the user's hard disk.
I can create a tar file manually for each script, but I would like something simpler.
For this I have the idea of defining a list of all needed files and their paths for a specific script, and reading it in a bash script which creates the tar file.
I started with:
#!/bin/bash
while read line
do
    echo "$line"
done < "$1"
This reads the files and paths. In my example the lines are:
./CheckTools/CheckMesh.bs
./Configs/CheckMesh.ini
./Toolboxes/CommonToolbox.bs
./Toolboxes/CheckToolbox.bs
My question is: how do I have to organize the data to make a tar file containing the specified files using bash?
Or does someone have a better idea?
No need for a complicated script; use tar's -T option. Every file listed in the given file will be added to the tar archive:
-T, --files-from FILE
get names to extract or create from FILE
So your script becomes:
#!/bin/bash
tar -cvpf something.tar -T listoffiles.txt
The listoffiles.txt format is super easy: one file per line. You might want to use full paths to ensure you get the right files:
./CheckTools/CheckMesh.bs
./Configs/CheckMesh.ini
./Toolboxes/CommonToolbox.bs
./Toolboxes/CheckToolbox.bs
You can add tar commands to the script as needed, or you could loop over the list files; from that point on, your imagination is the limit!
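For instance, if you keep one list file per tool (a hypothetical naming scheme such as CheckTool.list), a short loop builds one tarball per list:
#!/bin/bash
# One tar file per list file; the *.list naming scheme is an assumption.
for list in *.list; do
    tar -cvpf "${list%.list}.tar" -T "$list"
done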
I am new to bash scripting and I have to create a script that will run on all computers within my group at work (so it's not just checking one computer). We have a spreadsheet that keeps certain file information, and I am working to automate the updating of that spreadsheet. I already have an existing python script that gathers the information needed and writes to the spreadsheet.
What I need is a bash script (cron job, maybe?) that is activated anytime a user deletes a file that matches a certain extension within the specified file path. The script should hold on to the file name before it is completely deleted. I don't need any other information besides the name.
Does anyone have any suggestions for where I should begin with this? I've searched a bit but not found anything useful yet.
It would be something like:
for folders and files in path:
    if file ends in .txt and is being deleted:
        save file name
To save the name of every .txt file deleted in some directory path, or any of its subdirectories, run:
inotifywait -m -e delete --format "%w%f" -r "path" 2>stderr.log | grep '\.txt$' >>logfile
Explanation:
-m tells inotifywait to keep running. The default is to exit after the first event.
-e delete tells inotifywait to only report on file delete events.
--format "%w%f" tells inotifywait to print only the name of the deleted file
path is the target directory to watch.
-r tells inotifywait to monitor subdirectories of path recursively.
2>stderr.log tells the shell to save stderr output to a file named stderr.log. As long as things are working properly, you may ignore this file.
>>logfile tells the shell to redirect all output to the file logfile. If you leave this part off, output will be directed to stdout and you can watch in real time as files are deleted.
grep '\.txt$' limits the output to files with .txt extensions.
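If you want to act on each deleted name as it arrives (say, to feed your existing Python script), you can pipe into a while loop; update_spreadsheet.py is a placeholder for your script:
#!/bin/bash
# Sketch: log each deleted .txt file and hand the name to another program.
inotifywait -m -e delete --format "%w%f" -r "path" 2>stderr.log \
    | grep --line-buffered '\.txt$' \
    | while read -r deleted; do
          echo "$deleted" >> logfile
          # python update_spreadsheet.py "$deleted"   # placeholder
      done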
Mac OSX
Similar programs are available for OSX. See "Is there a command like “watch” or “inotifywait” on the Mac?".
I have a requirement where I need to copy some files from one location to another (where the files may already exist). While doing so:
I need to take a backup if the file already exists.
Copy the new file to the same location
I am facing a problem with point 2. While trying to build the destination path for copying files, I am unable to extract the directory of the file. I tried various options of the find command, but was unable to crack it.
I need to trim the file name from the full file path so that the result can be used in the cp command. I am new to shell scripting; any pointers are appreciated.
You can use
cp --backup
-b, --backup[=METHOD]
    Make a backup of each file that would otherwise be overwritten or
    removed. As a special case, cp makes a backup of SOURCE when the
    force and backup options are given and SOURCE and DEST are the same
    name for an existing, regular file. One useful application of this
    combination of options is this tiny Bourne shell script:

#!/bin/sh
# Usage: backup FILE...
# Create a GNU-style backup of each listed FILE.
for i; do
    cp --backup --force -- "$i" "$i"
done
If you need only the filename, why not do
basename /root/wkdir/index.txt
and assign the result to a variable? It returns only the filename.
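Conversely, if it is the directory part you need for point 2, dirname is the complement of basename; a quick illustration using the same example path:
file=/root/wkdir/index.txt
dir=$(dirname "$file")    # /root/wkdir
name=$(basename "$file")  # index.txt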
Need some help with this as my shell scripting skills are somewhat less than l337 :(
I need to gzip several files and then copy newer ones over the top from another location. I need to be able to call this script in the following manner from other scripts.
exec script.sh $oldfile $newfile
Can anyone point me in the right direction?
EDIT: To add more detail:
This script will be used for monthly updates of some documents uploaded to a folder. The old documents need to be archived into one compressed file, and the new documents, which may have different names, copied over the top of the old ones. The script needs to be called on a document-by-document basis from another script. The basic flow for this script should be:
The script should create a new gzip archive with a specified name (built from a prefix constant in the script plus the current month and year, e.g. prefix.september.2009.tar.gz) only if it does not already exist; otherwise, it should add to the existing one.
Copy the old file into the archive.
Replace the old file with the new one.
Thanks in advance,
Richard
EDIT: Added more detail on the archive filename.
Here's the modified script based on your clarifications. I've used tar archives, compressed with gzip, to store the multiple files in a single archive (you can't store multiple files using gzip alone). This code is only superficially tested - it probably has one or two bugs, and you should add further code to check for command success etc. if you're using it in anger. But it should get you most of the way there.
#!/bin/bash
oldfile=$1
newfile=$2
month=$(date +%B)
year=$(date +%Y)
prefix="frozenskys"
archivefile=$prefix.$month.$year.tar
# Check for an existing compressed archive matching the naming convention
if [ -e "$archivefile.gz" ]
then
    echo "Archive file '$archivefile.gz' already exists..."
    echo "Adding file '$oldfile' to existing tar archive..."
    # Uncompress the archive, because you can't append to a
    # compressed archive
    gunzip "$archivefile.gz"
    # Add the file to the archive
    tar --append --file="$archivefile" "$oldfile"
    # Recompress the archive
    gzip "$archivefile"
else
    # No existing archive - create a new one and add the file
    echo "Creating new archive file '$archivefile'..."
    tar --create --file="$archivefile" "$oldfile"
    gzip "$archivefile"
fi
# Replace the old file with the new one
mv "$newfile" "$oldfile"
Save it as script.sh, then make it executable:
chmod +x script.sh
Then run like so:
./script.sh oldfile newfile
Something like frozenskys.September.2009.tar.gz will be created, and newfile will replace oldfile. You can also call this script with exec from another script if you want. Just put this line in your second script:
exec ./script.sh "$1" "$2"
A good reference for any bash scripting is the Advanced Bash-Scripting Guide.
This guide explains everything about bash scripting.
The basic approach I would take is:
Move the files you want to archive to a directory you create (commands mv and mkdir).
Compress the directory (command gzip, I assume; see the sketch after this list).
Copy the new files to the desired location (command cp).
In my experience, bash scripting is mainly knowing how to use these commands well; if you can run something on the command line, you can run it in your script.
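A minimal sketch of those steps; oldfile, newfile and archive_dir are placeholders, and tar is used for the bundling step since gzip alone compresses single files rather than directories:
#!/bin/bash
# Sketch of the approach above; all names are placeholders.
mkdir -p archive_dir
mv oldfile archive_dir/
tar -czf archive_dir.tar.gz archive_dir   # tar bundles the directory, -z gzips it
cp newfile oldfile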
Another command that might be useful is pwd, which returns the current directory.
Why don't you use version control? It's much easier; just check out, and compress.
(Apologies if it's not an option.)