Edit file contents of a folder before compressing it - shell

I want to compress the contents of a folder. The catch is that I need to modify the content of a file before compressing it. The modification should not alter the contents of the original folder, but it should be there in the compressed file.
So far I was able to figure out how to alter file contents using the sed command:
sed 's:/site_media/folder1/::g' index.html >index.html1
where /site_media/folder1/ is the string I want to replace with an empty string. Currently this creates another file named index.html1, since I don't want to make the changes in place in index.html.
I tried piping this command into the zip command as follows:
sed 's:/site_media/folder1/::g' folder1/index.html > index1.html |zip zips/folder1.zip folder1/
but I get no contents when I unzip folder1.zip. Also, the modified file in the compressed folder should be named index.html (and not index.html1).

You want to do two things in sequence, so use command1 && command2. In your case:
sed '...' folder1/index.html > index1.html && zip zips/folder1.zip folder1/
If you pipe commands, you feed the output of the first into the second, which is not what you want in this case.
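A fuller sketch of that idea, assuming the goal is for the archive's index.html to carry the edit while folder1/ on disk stays untouched (mktemp, the working copy, and zip -r are my additions; the sed expression and the zips/ output path come from the question):
tmp=$(mktemp -d)                                          # throwaway working copy
cp -r folder1 "$tmp/folder1"
sed 's:/site_media/folder1/::g' folder1/index.html > "$tmp/folder1/index.html"
(cd "$tmp" && zip -r "$OLDPWD/zips/folder1.zip" folder1)  # zips/ must already exist
rm -rf "$tmp"
This way the archive contains folder1/index.html under its original name, already modified, and the original folder is never touched.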

Linux Bash - modifying extracted text from stdout

I would like to recursively scan a given directory for all .zip files, extract the text from each such file using Apache Tika (in my case via the /opt/solr/bin/post script) into a single text file, and put that text file into the same directory as the original zip file.
To find all zip files recursively and extract all the content I use:
find . -name "*zip" -exec sh -c 'f="{}"; /opt/solr/bin/post "$f" \
-params="...params..." > "$f.txt"' \;
The content of the extracted file is:
java -classpath /opt/solr/dist/solr-core-8.7.0.jar -Dauto=yes -Dout=yes -Dparams=literal.search_area=test&extractOnly=true&extractFormat=text&defaultField=text -Dc=mycoll -Ddata=files org.apache.solr.util.SimplePostTool zip.zip
SimplePostTool version 5.0.0
Posting files to [base] url http://localhost:8983/solr/mycoll/update?literal.search_area=test&extractOnly=true&extractFormat=text&defaultField=text...
Entering auto mode. File endings considered are xml,json,jsonl,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
POSTing file zip.zip (application/octet-stream) to [base]/extract
{
  "responseHeader":{
    "status":0,
    "QTime":1614},
  "":"**EXTRACTED TEXT**",
  "null_metadata":[
    "stream_size",["79855"],
    "X-Parsed-By",["org.apache.tika.parser.DefaultParser",
      "org.apache.tika.parser.pkg.PackageParser"],
    "stream_content_type",["application/octet-stream"],
    "resourceName",["/mnt/remote/users/zhilov/!tmp/zip.zip"],
    "Content-Type",["application/zip"]]}
1 files indexed.
COMMITting Solr index changes to http://localhost:8983/solr/mycoll/update?literal.search_area=test&extractOnly=true&extractFormat=text&defaultField=text...
Time spent: 0:00:03.495
From that output I would like to cut away the beginning and the end, leaving only the **EXTRACTED TEXT** in the generated file for further indexing.
Is it possible to do all of those operations in one bash command line, or at least with a bash script?
Try this:
sed -n '/QTime/{N;s/.*\n.*:.//;s/.,$//p;}'
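Putting the two together, a sketch that pipes each post run through that sed before writing the .txt file (the -params value is elided exactly as in the question; passing the file name to the script as $1 avoids splicing {} into the sh -c body):
find . -name "*.zip" -exec sh -c '
    f="$1"
    /opt/solr/bin/post "$f" -params="...params..." \
        | sed -n "/QTime/{N;s/.*\\n.*:.//;s/.,\$//p;}" > "$f.txt"
' sh {} \;
The sed script keys on the line containing QTime, pulls in the following line (N), and strips everything except the extracted text between the quotes.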
This question addresses the UTF-8 problem.

Shell script - replace a string in the all the files in a directory based on file name

I want to replace a constant string in multiple files based on the name of the file.
Example:
In a directory I have many files named like 'X-A01', 'X-B01', 'X-C01'.
In each file there is a string 'SS-S01'.
I want to replace string 'SS-S01' in the first file with 'X-A01', second file with 'X-B01' and third file with 'X-C01'.
Please help me with how to do this: I have hundreds of files like this and do not want to edit them all manually, one by one.
Remember to back up your files(!) before running this command, since I have not actually tried it myself:
You could do something like:
for file in <DIR>/*; do sed -i "s/SS-S01/${file##*/}/" "$file"; done
This will loop over each file in <DIR>, assigning the file name to $file on each iteration. For each file, sed replaces the first occurrence of SS-S01 on each line with the bare file name (${file##*/} strips the directory part; add the g flag to the s command to replace every occurrence on a line).
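For reference, the ${file##*/} parameter expansion strips the longest prefix matching */, i.e. the directory part (hypothetical path):
file=some/dir/X-A01
echo "${file##*/}"    # prints X-A01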

Want to grep in .tar.gz file in Solaris

I have a file in .tar.gz format on Solaris and I want to grep some lines from it. I am using the zipgrep command but am unable to find the line. Below are my sample files and the command I am using.
Sample files:
BD201701.tar.gz
BD201702.tar.gz
BD201703.tar.gz
I am using the below command to search for a line that contains bangladesh:
zipgrep 'bangladesh' BD2017*
But it's showing the below error:
[BD201701.tar.gz]
End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
zipinfo: cannot find zipfile directory in one of BD201701.tar.gz or
BD201701.tar.gz.zip, and cannot find BD201701.tar.gz.ZIP, period.
/usr/bin/zipgrep: test: argument expected
zipgrep is made for processing PKZIP archive files. In a PKZIP archive the compression is applied individually to each contained file, so the process boils down to a sequence of operations like this (not actual code!):
foreach file in archive:
    unzip file to tmpfile
    grep tmpfile
A compressed tar archive is different: first you pack a large bunch of files into an archive, and then compression is applied to the whole thing. So to search inside it, the whole archive has to be unpacked first, i.e. something like this (pseudocode again):
make and change to temporary directory
tar -xzf ${archive}
grep -R ${searchstring} ./
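A concrete version of that pseudocode (a sketch; gzip -dc | tar -xf - replaces tar -xzf so it also works where tar lacks the -z flag, and the find ... /dev/null idiom stands in for grep -R, which older Solaris greps do not support):
tmpdir=$(mktemp -d) || exit 1
gzip -dc BD201701.tar.gz | (cd "$tmpdir" && tar -xf -)
find "$tmpdir" -type f -exec grep 'bangladesh' {} /dev/null \;
rm -rf "$tmpdir"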
However, a tar archive itself is just a bunch of files "glued" together, with some information about their file names and sizes in between. So you can simply decompress the archive into a pipe, disregarding file separation, and search through that:
zcat BD201701.tar.gz | grep 'bangladesh'
(On Solaris, use gzcat or gzip -dc in place of zcat: the stock zcat only understands compress(1) .Z files, not gzip.)
zgrep does not work on Solaris servers. If the requirement is to find a pattern in one (or all) gzipped file(s) inside a directory, the following command can be used:
gzgrep 'pattern' filename
or
gzgrep 'bangladesh' BD2017*
use zgrep:
zgrep 'bangladesh' BD2017*

How to combine url with filename from file

A text file (filename: listing.txt) contains names of files:
ace.pdf
123.pdf
hello.pdf
I want to download these files from the URL http://www.myurl.com/. In bash, I tried to merge the two together and download each file using wget, e.g.:
http://www.myurl.com/ace.pdf
http://www.myurl.com/123.pdf
http://www.myurl.com/hello.pdf
I tried variations of the following, but without success:
for i in $(cat listing.txt); do wget http://www.myurl.com/$i; done
No need to use cat and a loop. You can use xargs for this:
xargs -I {} wget http://www.myurl.com/{} < listing.txt
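With many files you can also let xargs run several downloads in parallel (a sketch; -P is a GNU/BSD xargs extension, not POSIX):
xargs -P 4 -I {} wget http://www.myurl.com/{} < listing.txt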
Actually, wget has options which can avoid loops and external programs completely. From its man page:
-i file
--input-file=file
    Read URLs from a local or external file. If - is specified as file, URLs are read from the standard input. (Use ./- to read from a file literally named -.)
    If this function is used, no URLs need be present on the command line. If there are URLs both on the command line and in an input file, those on the command lines will be the first ones to be retrieved. If --force-html is not specified, then file should consist of a series of URLs, one per line.
    However, if you specify --force-html, the document will be regarded as html. In that case you may have problems with relative links, which you can solve either by adding "<base href="url">" to the documents or by specifying --base=url on the command line.
    If the file is an external one, the document will be automatically treated as html if the Content-Type matches text/html. Furthermore, the file's location will be implicitly used as base href if none was specified.
-B URL
--base=URL
    Resolves relative links using URL as the point of reference, when reading links from an HTML file specified via the -i/--input-file option (together with --force-html, or when the input file was fetched remotely from a server describing it as HTML). This is equivalent to the presence of a "BASE" tag in the HTML input file, with URL as the value for the "href" attribute.
    For instance, if you specify http://foo/bar/a.html for URL, and Wget reads ../baz/b.html from the input file, it would be resolved to http://foo/baz/b.html.
Thus,
$ cat listing.txt
ace.pdf
123.pdf
hello.pdf
$ wget -B http://www.myurl.com/ -i listing.txt
This will download all three files.
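An equivalent without -B is to prepend the base URL yourself and have wget read the result from standard input (per the -i documentation quoted above, - means stdin):
sed 's|^|http://www.myurl.com/|' listing.txt | wget -i -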

Unable to create the md5sum file I need. Doing it manually would be far too labour-intensive

I need to create/recreate an md5sum file for all files in a directory and all files in all sub-directories of that directory.
I am using a RocketTheme template that requires a valid md5sum document, and I have made changes to the files, so the originally included md5sum file is no longer valid.
There are over 300 files that need to be checksummed, with the md5 hash of each added to a single file.
The basic structure of the file is as follows:
1555599f85c7cd6b3d8f1047db42200b admin/forms/fields/imagepicker.php
8a3edb0428f11a404535d9134c90063f admin/forms/fields/index.html
8a3edb0428f11a404535d9134c90063f admin/forms/index.html
8a3edb0428f11a404535d9134c90063f admin/index.html
8a3edb0428f11a404535d9134c90063f admin/presets/index.html
b6609f823ffa5cb52fc2f8a49618757f admin/presets/preset1.png
7d84b8d140e68c0eaf0b3ee6d7b676c8 admin/presets/preset2.png
0de9472357279d64771a9af4f8657c2a admin/presets/preset3.png
5bda28157fe18bffe11cad1e4c8a78fa admin/presets/preset4.png
2ff2c5c22e531df390d2a4adb1700678 admin/presets/preset5.png
4b3561659633476f1fd0b88034ae1815 admin/presets/preset6.png
8a3edb0428f11a404535d9134c90063f admin/tips/index.html
2afd5df9f103032d5055019dbd72da38 admin/tips/overview.xml
79f1beb0ce5170a8120ba65369503bdc component.php
caf4a31db542ca8ee63501b364821d9d css/grid-responsive.css
8a3edb0428f11a404535d9134c90063f css/index.html
8697baa2e31e784c8612e2c56a1cd472 css/master-gecko.css
0857bc517aa15592eb796553fd57668b css/master-ie10.css
a4625ce5b8e23790eacb7704742bf735 css/master-ie8.css
This is just a snippet, but the logic is there.
hash path/to/file/relative/to/MD5SUM_file
Can anyone help me write a shell script (bash) that I can add to my path, which will generate a file called "MD5SUM_new"? I want the output file to be named "MD5SUM_new" so I can review its contents before issuing mv MD5SUM_new MD5SUM.
FYI, the MD5SUM_new file needs to be saved at the root level of the template.
Thanks
This is quite easy, really. To hash all files under the current directory:
find . -type f | xargs md5sum > md5sums
Then, you can make sure it's correct:
md5sum -c md5sums
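A variant closer to the question's requirements: write to MD5SUM_new, skip the checksum files themselves, survive file names with spaces, and strip the leading ./ so paths are relative to the file (a sketch; -print0 and xargs -0 assume GNU findutils):
find . -type f ! -name 'MD5SUM*' -print0 \
    | xargs -0 md5sum \
    | sed 's:  \./:  :' > MD5SUM_new
md5sum -c MD5SUM_new    # verify, then: mv MD5SUM_new MD5SUM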
