Linux Scripting - bash

Can you give me a sample of how to filter for a certain keyword, for example "error", in /var/log/messages and then send an email whenever that word shows up in real time?
I would just like to watch for the "error" keyword in /var/log/messages and then send it to my email address.

Simply grep it:
tail -f log.log | grep error
This will list all the errors; you can then mail them.
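For example, here is a minimal sketch that mails each matching line as it arrives, assuming a working mail command and a hypothetical address you@example.com (--line-buffered stops grep from sitting on matches in its output buffer):
# Watch the log, grep each new line, and mail every match as it appears.
tail -F -n 0 /var/log/messages | grep --line-buffered -i error | while read -r line; do
    echo "$line" | mail -s "error in /var/log/messages" you@example.com
done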

What you can do is this:
On a regular basis (at whatever interval you decide), you:
copy the main file to another file
DIFF against that copy, taking only the newly added parts (if the file is written sequentially, this will be a nice clean block of lines at the end of the file)
copy the main file to the other file again (this sets the new reference for the next check)
then GREP for whatever you want in the block of lines found two steps back
report the found lines using whatever method you want (mail, ...)
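A minimal sketch of that approach, assuming /var/log/messages as the log, a hypothetical reference copy /var/tmp/messages.ref, and a hypothetical address you@example.com; run it from cron (as a user that can read the log) at the interval you chose:
touch /var/tmp/messages.ref                  # make sure the reference copy exists on the first run
# Lines added since the last run (the block at the end of the file).
new_lines=$(diff /var/tmp/messages.ref /var/log/messages | sed -n 's/^> //p')
cp /var/log/messages /var/tmp/messages.ref   # set the new reference for the next check
# Grep the new block and mail anything that matches.
matches=$(echo "$new_lines" | grep -i error)
[ -n "$matches" ] && echo "$matches" | mail -s "errors in /var/log/messages" you@example.com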

Related

The assigned bam file is not recognised

I am currently aligning paired-end reads to the reference genome (genome index created) and the goal is to end up with a single bam file. This is the code that I am using, and everything works fine until the last line. I get an error message that the file 'SRR5882797_10M.bam' doesn't exist. This file doesn't exist yet, of course, but it is what I am trying to send my output to, and it is therefore supposed to be created by this code. I am not sure how to fix this, since it seems to be asking me to have the file in the folder already. Thanks :)
bwa mem -t 2 Refs/Athaliana/Arabidopsis_thaliana_TAIR10 02_trimmedData/fastq/SRR5882797_10M_1.fastq.gz 02_trimmedData/fastq/SRR5882797_10M_2.fastq.gz |
samtools view -bhS -F4 - > 03_alignedData/bam/SRR5882797_10M.bam
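One thing worth checking (an assumption, since the question doesn't show the directory tree): the shell sets up the > redirection before samtools runs, so if the 03_alignedData/bam/ directory doesn't exist yet the redirection itself fails with a 'No such file or directory' message. Creating it first would look like:
mkdir -p 03_alignedData/bam    # make sure the output directory exists before redirecting into it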

Combine CSV files with condition

I need to combine all the csv files in some directory (.csv), provided that there is another file with the same name in this directory but with a different extension (.csv.done).
If a csv file doesn't have a corresponding .done file, then I don't need it for the combine process.
What is the best way to do it using Bash ?
This approach is a solution to your problem. I see you've commented that it "didn't work", but whatever the reason for it not working, it's likely simple to fix, e.g. if you forgot to include key details or didn't adapt it to suit your specific situation. If you need further help troubleshooting, add more info to your question.
The approach:
for f in *.csv.done
do
cat "${f%.*}" >> combined_file.csv
done
How it works:
In your example, you have 3 files named 1.csv 2.csv 3.csv and two 'done' files named 1.csv.done 2.csv.done.
This script begins by making a list of all files that end in .csv.done (two files: 1.csv.done 2.csv.done).
It then uses a parameter expansion, specifically ${parameter%word}, to 'shorten' the name of the two files in the list to .csv (instead of .csv.done).
Then it 'prints' the content of the two 'shortened' filenames (1.csv and 2.csv) into a 'combined' file.
It doesn't 'print' the content of 1.csv.done or 2.csv.done, or 3.csv, because these files weren't in the original 'list'.
If you run this script multiple times, it will keep appending the contents of 1.csv and 2.csv to the 'combined' file (only run it once, or delete the 'combined' file before running it again).
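A minimal sketch of a re-run-safe variant, keeping the combined_file.csv name from the answer above:
> combined_file.csv              # truncate (or create) the combined file before each run
for f in *.csv.done
do
cat "${f%.*}" >> combined_file.csv
done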

Wanted to use results of the find command in a custom script I am building

I want to validate my XMLs for well-formedness, but some of my files do not have a single root, which is fine per business requirements (e.g. <ri>...</ri><ri>..</ri> is valid xml in my context). xmlwf can do this check, but it flags a file if it doesn't have a single root, so I wanted to build a custom script which internally uses xmlwf. My custom script should do the below:
iterate through the list of files passed as input (e.g. sample.xml, or s*.xml, or *.xml)
for each file, prepare a temporary file as <A> + contents of file + </A>
and call xmlwf on that temp file.
Can someone help with this?
You could add text to the beginning and end of the file using cat and bash, so that your file has a root added to it for validation purposes.
cat <(echo '<root>') sample.xml <(echo '</root>') | xmlwf
This way you don't need to write temporary files out.
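If you do want to iterate over a list of files (e.g. *.xml) rather than a single one, here is a minimal sketch along the same lines, assuming the script is saved as a hypothetical checkwf.sh and that your xmlwf reads standard input when no file argument is given (which the one-liner above already relies on):
#!/bin/bash
# Usage: ./checkwf.sh sample.xml   (or s*.xml, or *.xml)
for f in "$@"; do
    echo "Checking: $f"
    # Wrap the file in a synthetic root element before checking well-formedness.
    cat <(echo '<A>') "$f" <(echo '</A>') | xmlwf
done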

wGet using sed output in POST request

Fairly new to the world of UNIX and trying to get my head round its quirks.
I am trying to make a fairly simple shell script that uses wget to send an XML file that has been pre-processed with sed.
I thought about using a pipe, but it caused some weird behaviour where it just output my XML to the console.
This is what I have so far:
File_Name=$1
echo "File name being sent to KCIM is : " $1
wget "http://testserver.com" --post-file `sed s/XXXX/$File_Name/ < template.xml` |
--header="Content-Type:text/xml"
From the output I can see I am not doing this right, as it's creating a badly formatted HTTP request:
POST data file `<?xml' missing: No such file or directory
Resolving <... failed: node name or service name not known.
wget: unable to resolve host address `<'
Bonus points for explaining what the problem is as well as the solution.
For wget, the --post-file option sends the contents of the named file. In your case you seem to be passing the data directly, so you probably want --post-data.
The way you are doing it right now, bash gets the output from sed and wget gets something like:
wget ... --post-file <?xml stuff stuff stuff
So wget goes looking for a file called <?xml instead of using that text verbatim.
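So the fix is either to write the sed output to a temporary file and point --post-file at that, or to pass the text itself with --post-data. A minimal sketch of the latter, keeping the template.xml file and XXXX placeholder from the question (the quotes around the command substitution keep the XML as a single argument):
File_Name=$1
# Substitute the placeholder and send the result as the POST body.
wget "http://testserver.com" \
    --header="Content-Type:text/xml" \
    --post-data="$(sed "s/XXXX/$File_Name/" < template.xml)"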

Finding and Removing Unused Files Through Command Line

My website's file structure has gotten very messy over the years from uploading random files to test different things out. I have a list of all my files, such as this:
file1.html
another.html
otherstuff.php
cool.jpg
whatsthisdo.js
hmmmm.js
Is there any way I can input my list of files via the command line and search the contents of all the other files on my website, then output a list of the files that aren't mentioned anywhere in my other files?
For example, if cool.jpg and hmmmm.js weren't mentioned in any of my other files then it could output them in a list like this:
cool.jpg
hmmmm.js
And then any of those other files mentioned above aren't listed because they are mentioned somewhere in another file. Note: I don't want it to just automatically delete the unused files, I'll do that manually.
Also, of course I have multiple folders so it will need to search recursively from my current location and output all the unused (unreferenced) files.
I'm thinking the command line would be the fastest/easiest way, unless someone knows of another. Thanks in advance for any help you guys can give!
Yep! This is pretty easy to do with grep. In this case, you would run a command like:
$ for orphan in $(cat orphans.txt); do
    echo "Checking for presence of ${orphan} in present directory..."
    grep -rl "$orphan" .
done
And orphans.txt would look like your list of files above, one file per line. You can add -i to the grep above if you want to match case-insensitively. And you would want to run that command in /var/www or wherever your distribution keeps its webroots. If you see the "Checking for..." line for a file and no matches listed below it, then nothing else references that file.
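If you'd rather have the script print only the unreferenced files, here is a sketch assuming GNU grep and that the list lives in orphans.txt (the --exclude options stop the list itself and the file under test from counting as references):
# Print each name from orphans.txt that no other file mentions.
while read -r name; do
    grep -rq --exclude=orphans.txt --exclude="$name" "$name" . || echo "$name"
done < orphans.txt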
