How to compress the output of the cat command - bash

The title is not great, but here is an example of the output of cat:
/var/oracle/oradata/DB11G/system01.dbf
/var/oracle/oradata/DB11G/sysaux01.dbf
/var/oracle/oradata/DB11G/undotbs01.dbf
/var/oracle/oradata/DB11G/users01.dbf
/var/oracle/oradata/DB11G/example01.dbf
/var/oracle/oradata/jabba/jabba01.dbf
/var/oracle/oradata/DB11G/control01.ctl
/var/oracle/flash_recovery_area/DB11G/control02.ctl
/var/oracle/oradata/DB11G/redo03.log
/var/oracle/oradata/DB11G/redo02.log
/var/oracle/oradata/DB11G/redo01.log
The cat command gives the paths to these files.
I need to compress these files into a tar.gz.
How can I do it?

As a simple workaround, if your version of tar does not support the --files-from option, you can use tr to produce a command line of files.
Given:
$ cat files.txt
/var/oracle/oradata/DB11G/system01.dbf
/var/oracle/oradata/DB11G/sysaux01.dbf
/var/oracle/oradata/DB11G/undotbs01.dbf
/var/oracle/oradata/DB11G/users01.dbf
/var/oracle/oradata/DB11G/example01.dbf
/var/oracle/oradata/jabba/jabba01.dbf
/var/oracle/oradata/DB11G/control01.ctl
/var/oracle/flash_recovery_area/DB11G/control02.ctl
/var/oracle/oradata/DB11G/redo03.log
/var/oracle/oradata/DB11G/redo02.log
/var/oracle/oradata/DB11G/redo01.log
You can do:
$ cat files.txt | tr '\n' ' '
/var/oracle/oradata/DB11G/system01.dbf /var/oracle/oradata/DB11G/sysaux01.dbf /var/oracle/oradata/DB11G/undotbs01.dbf /var/oracle/oradata/DB11G/users01.dbf /var/oracle/oradata/DB11G/example01.dbf /var/oracle/oradata/jabba/jabba01.dbf /var/oracle/oradata/DB11G/control01.ctl /var/oracle/flash_recovery_area/DB11G/control02.ctl /var/oracle/oradata/DB11G/redo03.log /var/oracle/oradata/DB11G/redo02.log /var/oracle/oradata/DB11G/redo01.log
Then use that on the command line of tar.
The shell's default word splitting treats newlines like spaces, so tar never sees the \n characters and you can do this directly:
$ tar -c $(cat files.txt)
Or try:
$ tar -c $(cat files.txt | tr '\n' ' ')
Of course, if your tar supports --files-from, use that. (Note that the command substitution forms above break on filenames containing spaces.)
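For example, with GNU tar, -T is the short form of --files-from and -z adds gzip compression, so the whole task from the question reduces to one line (archive.tar.gz is just an example output name):
$ tar -czf archive.tar.gz -T files.txt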

How do I unzip a ".zip" archive and redirect the output to a specified file using Perl?

Let's say I have a zip file called myPartial.zip and I want to unzip this myPartial.zip archive and redirect the output to a file called output.txt.
To achieve that I could use the following shell script:
unzip -Z1 myPartial.zip | grep -v "/$" >> my.log
My question is how do I do the same in Perl?
There are a number of options for unzipping in Perl.
The first is to just run the shell command from Perl.
system 'unzip -Z1 myPartial.zip | grep -v "/$" >> my.log';
Next is to use one of the zip modules. There are a few, including Archive::Zip, IO::Uncompress::Unzip and Archive::Zip::SimpleUnzip.
Whether you need to get into using one of the modules depends on the requirements of what you are doing.
Okay, so unzip -Z1 foo.zip appears to list the filenames of the files in the zip archive, not extract the files. That makes more sense for wanting everything in a single file. And you just want files, not directories.
So a Perl one-liner:
perl -MArchive::Zip -E '$, = "\n"; say grep { m![^/]$! } Archive::Zip->new($ARGV[0])->memberNames' foo.zip >> my.log
But really, it's easier to just use unzip/zipinfo like you already are.
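For reference, zipinfo -1 is the standalone equivalent of unzip -Z1 and prints one member name per line, so the pure-shell version stays as short as:
zipinfo -1 myPartial.zip | grep -v '/$' >> my.log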
You could try running the shell command directly using the system function:
system('unzip -Z1 myPartial.zip | grep -v "/$" >> my.log');
Note the single quotes: a double-quoted version would collide with the inner double quotes around /$.
If you'd like the command to replace your running script entirely (exec never returns on success), use the exec function:
exec('unzip -Z1 myPartial.zip | grep -v "/$" >> my.log');
If you want to handle the program's output directly in Perl, use backticks to execute the command and capture its output. Drop the >> my.log redirection (otherwise the output lands in the log file and the backticks capture nothing) and escape the $ so Perl does not interpolate it:
$response = `unzip -Z1 myPartial.zip | grep -v "/\$"`;
You could then use print to preview the output, like this:
print $response;
Good luck.

how to print names of files being downloaded

I'm trying to write a bash script that downloads all the .txt files from a website 'http://www1.ncdc.noaa.gov/pub/data/ghcn/daily/'.
So far I have wget -A txt -r -l 1 -nd 'http://www1.ncdc.noaa.gov/pub/data/ghcn/daily/' but I'm struggling to find a way to print the name of each file to the screen (when downloading). That's the part I'm really stuck on. How would one print the names?
Thoughts?
EDIT: this is what I have done so far, but I'm trying to remove a lot of junk like ghcnd-inventory.txt</a></td><td align=...
wget -O- $LINK | tr '"' '\n' | grep -e .txt | while read line; do
    echo Downloading $LINK$line ...
    wget $LINK$line
done
LINK='http://www1.ncdc.noaa.gov/pub/data/ghcn/daily/'
wget -O- $LINK | tr '"' '\n' | grep -e .txt | grep -v align | while read line; do
    echo Downloading $LINK$line ...
    wget -nv $LINK$line
done
Slight optimization of Sundeep's answer:
LINK='http://www1.ncdc.noaa.gov/pub/data/ghcn/daily/'
wget -q -O- $LINK | sed -E '/.*href="[^"]*\.txt".*/!d;s/.*href="([^"]*\.txt)".*/\1/' | wget -nv -i- -B$LINK
The sed command deletes all lines not matching href="xxx.txt" and extracts only the xxx.txt part of the remaining ones. It then passes the result to a second wget that uses it as the list of files to retrieve. The -nv option tells wget to be as quiet as possible: it still prints the name of each file it downloads, but almost nothing else. Warning: this works only for this particular web site and does not descend into subdirectories.

Creating a file using parameters of another file using a UNIX shell script

I have a series of files
484_mexico_201401.dat
484_mexico_201402.dat
484_mexico_201403.dat
… and so on
I want to make files
484_mexico_201401.mft, which will have the content below:
484 | datfile name | line count for the .dat file
Example:
484|484_mexico_201401.dat|6000
Can anyone help with a shell script for this?
You can try this in bash:
for file in 484_*
do
    new_file=${file%.*}
    echo "$(sed 's/^\([^_]\+\)_.*/\1/' <<< "$file")|$file|$(wc -l "$file" | cut -d' ' -f1)" > "$new_file.mft"
done
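Assuming the example data from the question, the result for the first file would then look like this (the line count obviously depends on the real .dat file):
$ cat 484_mexico_201401.mft
484|484_mexico_201401.dat|6000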
You can also try this:
location="./";for file in $(ls $location);do echo "$(echo $file|cut -d '_' -f 1)|$file|$(wc -l $file | cut -d ' ' -f 1)" >> output.txt;done
Then you will be able to read the new file by typing cat output.txt.
If you want a full script for it, add #!/bin/bash as the first line of the script.
#!/bin/bash
location="./";for file in $(ls $location);do echo "$(echo $file|cut -d '_' -f 1)|$file|$(wc -l $file | cut -d ' ' -f 1)" >> output.txt;done
Save that into a file where you want the script to be, then run chmod 555 scriptname.sh and you should be able to run it.
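A sample session might look like this, reusing the line count from the question's example (note that >> appends, so remove output.txt before re-running to avoid duplicate lines):
$ chmod 555 scriptname.sh
$ ./scriptname.sh
$ cat output.txt
484|484_mexico_201401.dat|6000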

Self-extracting script in sh shell

How would I go about making a self-extracting archive that can be executed on sh?
The closest I have come to is:
extract_archive () {
printf '<archive_contents>' | tar -C "$extract_dir" -xvf -
}
Where <archive_contents> contains a tarball with null characters, %, ' and \ characters escaped and enclosed between single quotes.
Is there any better way to do this so that no escaping is required?
(Please don't point me to shar, makeself etc. I want to write it from scratch.)
An alternative is to use a marker for the end of the shell script and use sed to cut the script itself off the front of the file.
Script selfextract.sh:
#!/bin/bash
sed '0,/^#EOF#$/d' "$0" | tar zx; exit 0
#EOF#
How to use:
# create sfx
cat selfextract.sh data.tar.gz >example_sfx.sh
# unpack sfx
bash example_sfx.sh
Since shell scripts are not compiled, but executed statement by statement, you can mix binary and text content using a pattern like this (untested):
#!/bin/sh
sed -e '1,/^exit$/d' "$0" | tar -C "${1-.}" -zxvf -
exit
<binary tar gzipped content here>
You can add those two lines to the top of pretty much any tar+gzip file to make it self-extracting.
To test:
$ cat header.sh
#!/bin/sh
sed -e '1,/^exit$/d' "$0" | tar -C "${1-.}" -zxvf -
exit
$ tar -czf header.tgz header.sh
$ cat header.sh header.tgz > header.tgz.sh
$ sh header.tgz.sh
header.sh
Some good articles on how to do exactly that can be found at:
http://www.linuxjournal.com/node/1005818.
https://community.linuxmint.com/tutorial/view/1998
Yes, you can do it natively with xtar.
Build the xtar elf64 tar self-extractor header (you are free to modify it to support elf32, PE and other executable formats); it is based on a lightweight bsdtar untar and the standard ELF library.
cc contrib/xtar.c -o ./xtar
Copy the xtar binary to yourTar.xtar:
cp ./xtar yourTar.xtar
Append the yourTar.tar archive to the end of yourTar.xtar, then make it executable:
cat yourTar.tar >> yourTar.xtar
chmod +x yourTar.xtar
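Assuming xtar behaves as described above, executing the result should then unpack the embedded archive into the current directory:
./yourTar.xtar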

Using DOS file contents as command line arguments in BASH

This is a follow-up to this question's answer.
How can I modify the code so that the annoying CRLF of a DOS-created file can be stripped away before being passed to xargs?
Example file 'arglist.dos'.
# cat > arglist.unix
src/file1 dst/file1
src/file2 dst/file2
src/file3 dst/file3
^c
# sed 's/$/\r/' arglist.unix > arglist.dos
The unix variant of the file works with this:
$ xargs -n2 < arglist.unix echo cp
cp src/file1 dst/file1
cp src/file2 dst/file2
cp src/file3 dst/file3
For my own education, how can I change it to accept either the 'arglist.unix' or 'arglist.dos' files on the same command line?
cat arglist.dos | tr -d "\r" | xargs -n2 echo cp
gives you the same result as
cat arglist.unix | tr -d "\r" | xargs -n2 echo cp
so it works on both files.
tr -d "\r" removes all the CR characters
Use d2u (dos2unix) to remove the CRs before passing the file to xargs.
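d2u is a short alias for dos2unix on some systems; dos2unix can also act as a filter on stdin, so assuming it is installed the whole pipeline becomes:
dos2unix < arglist.dos | xargs -n2 echo cp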
