Grouping files in tarballs - bash

I need your help creating some tarballs that group files by year. I am using the following script, but I get this error message:
tar: 2067_*.inp: Cannot stat: No such file or directory
tar: Error exit delayed from previous errors
Code:
for i in `seq 1960 2100`; do
    tar cvf ${i}_74_1.tar ${i}_*.inp
done
Where the *.inp files have the following structure: 1960_smt.inp, 1960_smt1.inp, etc.
I understand that the problem is with the * symbol, which does not seem to match any characters the way I expect. Could someone please help me fix it?

tar: 2067_*.inp: Cannot stat: No such file or directory
Sounds more like you don't actually have any files named 2067_XXXX.inp for tar to archive.
You'll likely want to check for a matching file to the pattern before you attempt to tar it up:
#!/bin/bash
for i in {1960..2100}; do
    # compgen -G succeeds only when the glob matches at least one existing file,
    # and it keeps working when a year has several .inp files
    compgen -G "${i}_*.inp" > /dev/null && tar cvf "${i}_74_1.tar" ${i}_*.inp
done
P.S.
Does anybody know why writing the test with [[ ]], as in [[ -f ${i}_*.inp ]], breaks the pattern matching?
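The short answer: [ is an ordinary command, so its arguments go through glob expansion before the test runs, while [[ ]] is shell syntax whose operands are not glob-expanded. A minimal demonstration (assuming a file named 1960_smt.inp exists and is the only match):
touch 1960_smt.inp
[ -f 1960_*.inp ] && echo matched     # glob expands to 1960_smt.inp before [ runs: prints matched
[[ -f 1960_*.inp ]] && echo matched   # [[ ]] tests the literal name 1960_*.inp: prints nothing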

Using a variable as an argument to tar in a bash script

This is a really short question. But is there something syntactically wrong with placing a variable $example as an argument to tar in a bash script?
I have the file written as
# the only portion that really matters
#!/bin/bash
...
tar -cvpzf $filename $backup_source
# here's the actual code
#!/bin/bash
backup_source="~/momobobo"
backup_dest="~/momobobo_backup/"
dater=`date '+%m-%d-%Y-%H-%M-%S'`
filename="$backup_dest$dater.tgz"
echo “Backing Up your Linux System”
tar -cvpzf $filename $backup_source
echo tar -cvpzf $filename $backup_source
echo “Backup finished”
# and here's the error
“Backing Up your Linux System”
tar: ~/momobobo: Cannot stat: No such file or directory
tar (child): ~/momobobo_backup/07-02-2013-18-34-12.tgz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
tar -cvpzf ~/momobobo_backup/07-02-2013-18-34-12.tgz ~/momobobo
Notice the "echo tar ..." line. When I copy and paste its output and run it in my terminal, there is no problem tarring the files. I'm currently running Xubuntu and I have already updated.
~ doesn't expand to your home directory in double quotes.
Just remove the double quotes:
backup_source=~/momobobo
backup_dest=~/momobobo_backup/
In cases where you have things you would want to quote, you can use ~/"momobobo"
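A quick way to see the difference (a minimal sketch; the expanded home directory shown is just an example):
backup_source="~/momobobo"    # quoted: the ~ stays literal
echo "$backup_source"         # prints: ~/momobobo
backup_source=~/momobobo      # unquoted: ~ expands during the assignment
echo "$backup_source"         # prints e.g.: /home/joe/momobobo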

How to check whether a file exists in shell, and create it if it does not or delete it if it does

In shell, I want to check whether a file exists, then create it if it doesn't or delete it if it does. For this I need a one-liner and am trying something like:
ls | awk '\filename\' <if exist delete else create>
I need the ls because my actual problem involves a command that outputs a list of strings which needs to be piped to awk and then possibly to touch/mkdir.
#!/bin/bash
if [ -f "$1" ]     # $1 is the input filename; -f checks that it is a regular file
then
    rm "$1"        # the file exists: delete it
else
    touch "$1"     # the file does not exist: create it
fi
Save the file as filecreator.sh.
Make it executable with chmod a+rx filecreator.sh.
Run the script as ./filecreator.sh yourfile.extension.
You can then see the file in your directory.
Using oc projects and oc new-project instead of ls and touch, as indicated in a comment:
oc projects |
while read -r proj; do
    if [ -d "$proj" ]; then
        rm -rf "$proj"
    else
        oc new-project "$proj"
    fi
done
I don't think there is a useful way to write this as a one-liner. If you like, you can replace the newlines with semicolons, except after then and else.
You really should put your actual requirements in the question itself. ls is a superbly useless example because it cannot list a file which doesn't already exist, and you should not use ls in scripts at all.
rm yourfile 2>/dev/null || touch yourfile
If the file existed before, rm will succeed and erase the file, and the touch won't be executed. You end up with no file afterwards.
If the file did not exist before, rm will fail (but the error message is not visible, since it is directed to the bitbucket), and due to the non-zero exit code of rm, the touch will be executed. You end up with an empty file afterwards.
Caveat: If the file exists, but you don't have permissions to remove it, you won't notice this error, due to the redirection of stderr. Hence, for debugging and later diagnosis, it might be better to redirect stderr to some file instead.
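For instance, a variant that keeps rm's diagnostics around for later inspection (the log path is just an illustration):
rm yourfile 2>>/tmp/rm-errors.log || touch yourfile   # check the log if a file refuses to disappear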

Error in Bash script to check existing file in Solaris

I'm going to compile Oracle Forms on Solaris and create a script. The script should check whether the .fmx file was created and, if so, remove the .err file. Here is my script, but I received the error below.
Code to remove error files
export FORMS_PATH=export FORMS_PATH=/apps/apps/frmcompile/cmteam/hla
for FILE in `ls $FORMS_PATH/*.fmx`; do
if exist "$FILE/*.fmx";
then
rm $FILE/err
fi
done
Error Encountered
rmerr.sh[3]: exist: not found [No such file or directory]
A regular-file test is done using "-f":
FORMS_PATH=/apps/apps/frmcompile/cmteam/hla
for FILE in "$FORMS_PATH"/*.fmx; do
    # true if the file exists and is a regular file
    if [ -f "$FILE" ]; then
        rm "${FILE%.fmx}.err"
    fi
done
This might be what you want to do, but it is unclear where .fmx and .err files are located:
export FORMS_PATH=/apps/apps/frmcompile/cmteam/hla
for FILE in "$FORMS_PATH"/*.fmx; do
    b=$(basename "$FILE")
    [ -f "$b" ] && rm "${b%fmx}err"
done
".err" is a file, but you list "err" here.
Some other problems here:
export FORMS_PATH=export FORMS_PATH=/apps/apps/frmcompile/cmteam/hla
Replace this with "FORMS_PATH=/apps/apps/frmcompile/cmteam/hla".
for FILE in `ls $FORMS_PATH/*.fmx`; do
Now FILE holds each file ending in ".fmx" in turn.
if exist "$FILE/*.fmx";
After shell expansion this becomes e.g. "/apps/apps/frmcompile/cmteam/hla/blaba.fmx/*.fmx" - and "exist" is not a command; try "test" or "[ ]".
rm $FILE/err
This results in e.g. "/apps/apps/frmcompile/cmteam/hla/blaba.fmx/err", i.e. a file named "err" in a subfolder - and that is not what you want, is it?
So it is best to use this:
#!/bin/sh
FORMS_PATH=/apps/apps/frmcompile/cmteam/hla
for fmx in $FORMS_PATH/*.fmx; do
    # remove the matching file ending in .err instead of .fmx
    /bin/rm "${fmx%.fmx}.err"
done
Tom
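Putting the answers together, a defensive variant that also copes with a directory containing no .fmx files at all (a sketch using the paths from the question):
#!/bin/sh
FORMS_PATH=/apps/apps/frmcompile/cmteam/hla
for fmx in "$FORMS_PATH"/*.fmx; do
    [ -e "$fmx" ] || continue     # nothing matched: the glob stayed literal, skip it
    err="${fmx%.fmx}.err"         # blaba.fmx -> blaba.err
    [ -f "$err" ] && rm "$err"    # remove the .err only if it is really there
done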

Unix Bash Alias Command

I am trying to simplify my work with the help of Alias commands in my bash shell.
Problem Statement:
I want to copy different files from different directories into one single folder. The syntax I am using is as below:
cp <folder>/<file> <path>/file.dir
Here I want to save the destination file as filename.directory for easy identification. To achieve this, I have written the alias below.
Alias Script
cp $Folder/$fileName ~/<path>/$fileName.$Folder
OR
cp $1/$2 ~/<path>/$2.$1
Expected output:
cp bin/file1 ~/Desktop/personal/file1.bin
cp etc/file2 ~/Desktop/personal/file2.etc
However, it fails when parsing the source file, i.e. $Folder is not replaced with my first argument:
cp: cannot stat `/file1': No such file or directory
I am writing the above script only to shorten my commands. As I am not an expert in this, I am seeking help in resolving the issue.
Rather than using an alias, you could use a function defined in some suitable location such as .profile or .bashrc.
For example:
mycp()
{
    if [ $# -ne 2 ]
    then
        echo "Two parameters not entered"
        return 1
    fi
    folder=$1
    filename=$2
    if [ -d "$folder" ] && [ -r "$folder/$filename" ]
    then
        cp "$folder/$filename" ~/playpen/"$filename.$folder"
    else
        echo "Invalid parameter"
    fi
}
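Usage would then look like this (assuming the function has been sourced and ~/playpen exists):
$ mycp bin file1    # runs: cp bin/file1 ~/playpen/file1.bin
$ mycp etc file2    # runs: cp etc/file2 ~/playpen/file2.etc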
There is no way a bash alias can use arguments as you are trying to do. However, perl based rename can probably help you here. Note that it will effectively mv the files, not cp them.
rename 's|([^/]*)/(.*)|/home/user/path/$2.$1|' */*
Limitation: you can only process files one sub-directory level deep.
So the alias below works (with the above limitation):
$ alias backupfiles="rename 's|([^/]*)/(.*)|/home/user/path/\$2.\$1|'"
$ backupfiles */*
You can build a more sophisticated perl expression if you want to work with a multi-directory-level file structure.
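If you would rather keep cp semantics (rename moves the files) and handle deeper paths, a shell function is another option. A sketch, where ~/path stands in for your destination and slashes in the directory part become dots:
backupfiles() {
    local f dir name
    for f in "$@"; do
        [[ $f == */* ]] || continue       # skip arguments with no directory part
        dir=${f%/*}                       # everything before the last slash
        name=${f##*/}                     # the bare filename
        cp "$f" ~/path/"$name.${dir//\//.}"
    done
}
# e.g. backupfiles bin/file1 sub/dir/file2 copies to ~/path/file1.bin and ~/path/file2.sub.dir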
Say a directory ~/Documents/file1.d contains some files, e.g. newfile.txt:
joe@indiana:~/Documents$ ls -l $file
total 1
-rw-r--r-- 1 joe staff 0 May 5 11:39 newfile.txt
Add the variable 'file' in .bashrc; for example, my .bashrc looks like this:
alias ll='ls -la'
file=~/Documents/file1.d
Now whenever you copy to '$file' it will copy to the file1.d directory under ~/Documents :)

tar: file changed as we read it

I am using make and tar to back up. When executing the makefile, the tar command shows "file changed as we read it". In this case:
- the tar package is OK when the warning comes up
- but the warning stops the tar commands for the backups that follow
- the file producing the warning in fact doesn't change - it is really strange that the warning comes up at all
- the files producing the warning are random, i.e. every time I run my makefile, the files showing the warning are different
- --ignore-failed-read doesn't help; I am using tar 1.23 in MinGW
I just changed my computer to WIN7 64-bit. The script worked well on my old WIN7 32-bit machine, but the tar version there was older than 1.23.
How can I stop tar's warning from stopping the backups that follow it?
Edit 2: this might be the reason
As I said above, the bash shell script worked well on my old computer. Compared with the old computer, the msys version is different, and so is the version of the tar command: on the old computer tar is 1.13.19, on the new one 1.23. I copied the old tar command to the new computer (without copying its dependency msys-1.0.dll) and renamed it tar_old. I also updated the tar command in the shell script and ran the script. Then everything was OK. So it seems the problem is the tar command itself. I am sure no file changed while tar was running. Is it a bug in the newer version of tar? I don't know.
Edit 1: more details
The backup is invoked by a bash shell script. It scans the target directory, builds a makefile, and then invokes make, which uses the tar command for the backup. Below is a typical makefile built by the script.
#--------------------------------------------
# backup VC
#--------------------------------------------
# the program for packing
PACK_TOOL=tar
# the options for the packing tool
PACK_OPTION=cjvf
# M$: C drive
WIN_C_DIR=c:
# M$: D drive
WIN_D_DIR=d:
# M$: where the software is
WIN_PRG_DIR=wuyu/tools
# WIN_PRG_DIR=
# where to save the backup files
BAKDIR=/home/Wu.Y/MS_bak_MSYS
VC_FRAMEWORK=/home/Wu.Y/MS_bak_MSYS/tools/VC/VC_framework.tar.bz2
VC_2010=/home/Wu.Y/MS_bak_MSYS/tools/VC/VC_2010.tar.bz2

.PHONY: all
all: $(VC_FRAMEWORK) $(VC_2010)

$(VC_FRAMEWORK): $(WIN_C_DIR)/$(WIN_PRG_DIR)/VC/Framework/*
	@$(PACK_TOOL) $(PACK_OPTION) "$@" --ignore-failed-read /c/$(WIN_PRG_DIR)/VC/Framework

$(VC_2010): $(WIN_C_DIR)/$(WIN_PRG_DIR)/VC/VS2010/*
	@$(PACK_TOOL) $(PACK_OPTION) "$@" --ignore-failed-read /c/$(WIN_PRG_DIR)/VC/VS2010
As you can see, the tar package is stored in ~/MS_bak_MSYS/tools/VC/VC_2010.tar.bz2. I run the script in ~/qqaa, and ~/MS_bak_MSYS is excluded from the tar command. So the tar file I am creating is not inside a directory I am trying to put into the tar file. This is why I find it strange that the warning comes up.
I also encountered the tar message "file changed as we read it". For me these messages occurred when I was making a tar file of a Linux file system in a bitbake build environment, and the error was sporadic.
In my case it was not due to creating the tar file from the same directory; I assume some file was actually overwritten or changed during tar file creation.
The message is a warning and tar still creates the tar file. We can suppress such warnings by setting the option
--warning=no-file-changed
(http://www.gnu.org/software/tar/manual/html_section/warnings.html)
Still, the exit code returned by tar is 1 in the warning case (http://www.gnu.org/software/tar/manual/html_section/Synopsis.html).
So if we are calling tar from a function in a script, we can handle the exit code something like this:
set +e
tar -czf sample.tar.gz dir1 dir2
exitcode=$?
if [ "$exitcode" != "1" ] && [ "$exitcode" != "0" ]; then
    exit $exitcode
fi
set -e
Although it's very late, I recently had the same issue.
The issue arises because the directory . changes while xyz.tar.gz is being created inside it. There are two solutions:
Solution 1:
tar will not mind if the archive is created inside a directory under . that is excluded from the archive. There can be reasons why you can't create the archive outside the workspace; I worked around it by creating a temporary directory to hold the archive:
mkdir artefacts
tar -zcvf artefacts/archive.tar.gz --exclude=./artefacts .
echo $?
0
Solution 2:
This one I like: create the archive file before running tar:
touch archive.tar.gz
tar --exclude=archive.tar.gz -zcvf archive.tar.gz .
echo $?
0
If you want help debugging a problem like this you need to provide the make rule or at least the tar command you invoked. How can we see what's wrong with the command if there's no command to see?
However, 99% of the time an error like this means that you're creating the tar file inside a directory that you're trying to put into the tar file. So, when tar tries to read the directory it finds the tar file as a member of the directory, starts to read it and write it out to the tar file, and so between the time it starts to read the tar file and when it finishes reading the tar file, the tar file has changed.
So for example something like:
tar cf ./foo.tar .
There's no way to "stop" this, because it's not wrong. Just put your tar file somewhere else when you create it, or find another way (using --exclude or whatever) to omit the tar file.
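Concretely, either of these avoids archiving the archive itself (illustrative paths):
tar cf /tmp/foo.tar .                # write the archive outside the tree being packed
tar cf foo.tar --exclude=foo.tar .   # or keep it local but exclude it by name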
Here is a one-liner for ignoring the tar exit status if it is 1. There is no need to set +e as in sandeep's script. If the tar exit status is 0 or 1, this one-liner will return with exit status 0. Otherwise it will return with exit status 1. This is different from sandeep's script where the original exit status value is preserved if it is different from 1.
tar -czf sample.tar.gz dir1 dir2 || [[ $? -eq 1 ]]
To enhance Fabian's one-liner; let us say that we want to ignore only exit status 1 but to preserve the exit status if it is anything else:
tar -czf sample.tar.gz dir1 dir2 || ( export ret=$?; [[ $ret -eq 1 ]] || exit "$ret" )
This does everything sandeep's script does, on one line.
Simply using an outer directory for the output solved the problem for me.
sudo tar czf ./../31OCT18.tar.gz ./
Exit codes for tar are restricted, so you don't get too much information.
You can assume that ec=1 is safe to ignore, but it might trip you up - e.g. the gzip example in other posts (an exit code coming from an external program).
The reason for the "file changed as we read it" error/warning can vary:
- a log file inside the directory
- writing the tar file into the same directory you are trying to back up
- etc.
Possible workarounds involve:
- excluding known files (log files, tar files, etc.)
- ensuring log files are written to other directories
This can be quite involved, so you might still want to just run the tar command and safely ignore some errors/warnings. To do this you will have to:
- save the tar output
- save the exit code
- check the output against known warnings and errors, not unlike tar's own ignore options
- conditionally pass another exit code to the next program in the pipe
In the OP's case this would have to be wrapped in a script and run as PACK_TOOL.
# List of errors and warnings from tar which we will safely ignore.
# Adapt to your findings and needs.
IGNORE_ERROR="^tar:.*(Removing leading|socket ignored|file changed as we read it)"
# Save stderr from tar
RET=$(tar zcf "$BACKUP" --exclude Cache --exclude output.log --exclude "*cron*sysout*" "$DIR" 2>&1)
EC=$?   # save tar's exit code
echo "$RET"
if [ $EC -ne 0 ]
then
    # Check the RET output, removing (grep -v) any errors/warnings you wish to ignore
    REAL_ERRORS=$(echo "$RET" | grep "^tar: " | grep -Ev "${IGNORE_ERROR:?}")
    # If there is any output left, you actually got an error to check
    if [ -n "$REAL_ERRORS" ]
    then
        echo "ERROR during backup of ${DIR:?} to ${BACKUP:?}"
    else
        echo "OK backup of ${DIR:?} (warnings ignored)"
        EC=0
    fi
else
    echo "OK backup of ${DIR:?}"
fi
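A minimal way to drive this snippet (hypothetical paths; the final exit passes the adjusted code back to make or whatever called the wrapper):
#!/bin/bash
DIR=/data/myapp              # directory to back up (hypothetical)
BACKUP=/backups/myapp.tgz    # archive to create (hypothetical)
# ... snippet from above runs here ...
exit $EC                     # 0 for success or ignored warnings, tar's code otherwise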
Adding a simple sleep of 20 seconds worked for me.
This can happen if your source directory is still being written to. The sleep lets the writes finish so that tar then works fine; it also helped me get the right exit status.
sleep 20
tar -czf ${DB}.${DATE}.tgz ./${DB}.${DATE}
I am not sure whether it suits you, but I noticed that tar does not fail on changed/deleted files in pipe mode. See what I mean below.
Test script:
#!/usr/bin/env bash
set -ex
tar cpf - ./files | aws s3 cp - s3://my-bucket/files.tar
echo $?
While it runs, I delete random files manually...
Output:
+ aws s3 cp - s3://my-bucket/files.tar
+ tar cpf - ./files
tar: ./files/default_images: File removed before we read it
tar: ./files: file changed as we read it
+ echo 0
0
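Conversely, if you do want the pipeline to fail when tar reports a change, bash's pipefail option propagates tar's non-zero status instead of the last command's (a sketch):
set -o pipefail
tar cpf - ./files | aws s3 cp - s3://my-bucket/files.tar
echo $?   # now 1 when tar saw "file changed as we read it", even if the upload succeeded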
The answer should be very simple: don't save your tar file in the same directory you are tarring.
Just do: tar -cvzf resources/docker/php/php.tar.gz .
Eventually it will tar the current directory and save the archive into another directory.
That's easy peasy, lemon squeezy, fellas.
