grep command in shell is taking too long

Below is the part of my shell program where the grep command is taking too long to produce a result:
logdir="/logs/orderjob"
recentorderjobFile=$(ls -t $logdir/orderjob* | head -n1)
start=`grep 'TWS has called script' $recentorderjobFile`
But I am not seeing any slowness when I run the grep command manually on the log file:
grep 'TWS has called script' orderjob_12042018.log
Why is the grep command in the start variable taking a long time / not producing results? How can I modify the command to get the expected results quickly?
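For reference, here is a quoting-safe version of the snippet that can be tried in isolation; the temporary directory below is just a stand-in for /logs/orderjob so the logic can be run anywhere:

```shell
#!/bin/sh
# stand-in for /logs/orderjob so the snippet is self-contained
logdir=$(mktemp -d)
printf 'TWS has called script (old run)\n' > "$logdir/orderjob_old.log"
sleep 1   # make sure the second file has a strictly newer mtime
printf 'TWS has called script (new run)\n' > "$logdir/orderjob_new.log"

# same logic as the question, with the expansions quoted
recentorderjobFile=$(ls -t "$logdir"/orderjob* | head -n 1)
start=$(grep 'TWS has called script' "$recentorderjobFile")
echo "$start"
```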

Related

using cat in a bash script is very slow

I have very big text files (~50,000) over which I have to do some text processing, basically running multiple grep commands.
When I run them manually they return in an instant, but when I do the same in a bash script it takes a lot of time. What am I doing wrong in the bash script below? I pass the names of the files as command-line arguments to the script.
Example Input data :
BUSINESS^GFR^GNevil
PERSONAL^GUK^GSheila
Output that should come in a file - BUSINESS^GFR^GNevil
It starts printing out the whole file on the terminal after quite a while. How do I suppress that?
#!/bin/bash
cat $2 | grep BUSINESS
Do NOT use cat with a program that can read the file itself.
It slows things down and you lose functionality:
grep BUSINESS test | grep -E '\^GFR|\^GDE'
(Note the -E for the | alternation, and the escaped carets: the carets in the data are literal field delimiters, so they must be escaped to stop grep treating them as line anchors.)
Or you can do the same with awk:
awk '/BUSINESS/ && /\^GFR|\^GDE/' test
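A quick check of both commands against the sample data from the question (written to a temporary file here instead of a file named test):

```shell
#!/bin/sh
f=$(mktemp)
printf 'BUSINESS^GFR^GNevil\nPERSONAL^GUK^GSheila\n' > "$f"

# both should print only the BUSINESS^GFR line
grep BUSINESS "$f" | grep -E '\^GFR|\^GDE'
awk '/BUSINESS/ && /\^GFR|\^GDE/' "$f"
```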

Bash statement meaning

I'm working on a project, and it's being run by an autoscript. The script has the following line:
./executable ./dev | grep -i "GET.*index.*200" > ./dev/logs/log1
I have my code writing to stdout, but it never gets written to log1. If I change it though and remove the grep command, it writes just fine. Any help would be appreciated, as I seemingly don't understand grep as well as I should.
You might try redirecting standard output in your script "executable" using these commands:
exec > ./dev/logs/log1
exec 2> ./dev/logs/errlog1
With those in place, there is no need for the ">" redirection in the line
./executable ./dev | grep -i "GET.*index.*200"
I also recommend using only absolute paths in scripts.
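A minimal sketch of the exec approach (the log paths here are temporary placeholders, not the ones from the question):

```shell
#!/bin/sh
log=$(mktemp)
errlog=$(mktemp)

# from here on, stdout and stderr of every command in this
# script go to the two log files
exec > "$log"
exec 2> "$errlog"

echo "normal output"
echo "error output" >&2
```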

appending file contents as parameter for unix shell command

I'm looking for a unix shell command to append the contents of a file as the parameters of another shell command. For example:
command << commandArguments.txt
xargs was built specifically for this:
cat commandArguments.txt | xargs mycommand
If you have multiple lines in the file, you can use xargs -L1 -P10 to run ten copies of your command at a time, in parallel.
xargs reads its standard input and formats it as positional parameters for a shell command. It was originally meant to work around short command-line limits, but it is useful for other purposes as well.
For example, within the last minute I've used it to connect to 10 servers in parallel and check their uptimes:
echo server{1..10} | tr ' ' '\n' | xargs -n 1 -P 50 -I ^ ssh ^ uptime
Some interesting aspects of this command pipeline:
The names of the servers to connect to were taken from the incoming pipe
The tr is needed to put each name on its own line. This is because xargs expects line-delimited input
The -n option controls how many incoming lines are used per command invocation. -n 1 says make a new ssh process for each incoming line.
By default, the parameters are appended to the end of the command. With -I, one can specify a token (^) that will be replaced with the argument instead.
The -P option controls how many child processes run concurrently, greatly widening the space of interesting possibilities.
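The same flags can be tried without any servers by letting echo stand in for ssh:

```shell
#!/bin/sh
# each input line replaces the ^ token in the command
printf 'server%s\n' 1 2 3 | xargs -I ^ echo connecting to ^
```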
command `cat commandArguments.txt`
Backticks substitute the output of the enclosed command into the outer command line, where it is split into words and passed as arguments to the outer command.
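Both approaches can be compared side by side with a throwaway arguments file and echo as the command:

```shell
#!/bin/sh
args=$(mktemp)
printf 'alpha\nbeta\ngamma\n' > "$args"

# xargs turns the lines into positional parameters
xargs echo < "$args"

# command substitution splits the file contents on whitespace
echo $(cat "$args")
```

Both print the same thing here; they differ once arguments contain whitespace or shell metacharacters, where the unquoted substitution becomes fragile.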

Bash: How to redirect the output of set of commands piped together to a file?

perf record | perf inject -b | perf report > tempfile 2>&1
I am running the above set of commands and trying to capture the output in tempfile, but sometimes the output of each command doesn't get fully written to tempfile. To be more precise, I am running this command from a script, and I tried putting the commands in parentheses like
(perf record | perf inject -b | perf report) > tempfile 2>&1
but this also didn't work.
A pipe redirects the output of one program to the input of another. To log the output to a file while also passing it on to the next program, use the tee command:
http://en.wikipedia.org/wiki/Tee_(command)
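A sketch of that idea with placeholder stages (seq and awk stand in for the perf commands; the real pipeline would substitute those): tee captures each stage's stdout before it feeds the next stage.

```shell
#!/bin/sh
stage1=$(mktemp)
final=$(mktemp)

# tee logs the first stage's output and still forwards it
# to the next stage of the pipeline
seq 3 | tee "$stage1" | awk '{ print $1 * 2 }' > "$final" 2>&1
```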

Bash: piped argument to open command fails. Open commands excutes too early?

I'm pretty much a novice at shell scripting. I'm trying to send the output of some piped commands to the open command in bash on OS X.
My ultimate goal is to compile a Flex/ActionScript application from TextWrangler by calling a bash script with a little AppleScript, and have the result played directly in a Flash Player. The AppleScript is pretty much doing its job, but the bash script doesn't work as I expect. I get the same results when I omit the AppleScript and simply put the command directly in the terminal.
This is what the AppleScript is sending to the terminal:
mxmlc -warnings=false DocumentClass.as | tail -n 1 | sed 's/[[:space:]].*$//' | open -a 'Flash Player'
So basically, I read the last line of the output of mxmlc, which usually looks something like this:
/Users/fireeyedboy/Desktop/DocumentClass.swf (994 bytes)
and I strip everything after the first space it encounters. I know it's hardly bulletproof yet, it's still just a proof of concept. When I get this roughly working I'll refine. It returns the desired result so far:
/Users/fireeyedboy/Desktop/DocumentClass.swf
But as you can see, I then try to pipe the sed result to the Flash Player, and that's where it fails: the Flash Player opens way too early, instead of waiting until the script has finished the sed command.
So my question is twofold:
1. Is it even possible to pipe an argument to the open command this way?
2. Do I need to use some type of delay command to get this working, since the open command doesn't seem to wait for the input?
You're trying to give the name of the swf file as input to stdin of the open command, which it doesn't support.
It expects the file name as an argument (similar to -a).
You can do something like this:
FILENAME=`mxmlc -warnings=false DocumentClass.as | tail -n 1 | sed 's/[[:space:]].*$//'`
open -a 'Flash Player' "$FILENAME"
or on a single line:
open -a 'Flash Player' `mxmlc -warnings=false DocumentClass.as | tail -n 1 | sed 's/[[:space:]].*$//'`
If you're using bash (or another modern POSIX shell), you can replace the rather unreadable backticks with $( and ):
open -a 'Flash Player' $(mxmlc -warnings=false DocumentClass.as | tail -n 1 | sed 's/[[:space:]].*$//')
All commands in a pipe are started at the same time. During this step, their input/outputs are chained together.
My guess is that open -a 'Flash Player' doesn't wait for input but simply starts the Flash Player. I suggest running the player with the file name as an argument instead:
name=$(mxmlc -warnings=false DocumentClass.as | tail -n 1 | sed 's/[[:space:]].*$//')
open -a 'Flash Player' "$name"
I'm not familiar with the open command, as it seems to be a Mac thing, but I think what you want to do is:
open -a 'Flash Player' $(mxmlc -warnings=false DocumentClass.as | tail -n 1 | sed 's/[[:space:]].*$//')
In general you can't pipe arguments to a command; you have to specify that the output of the previous command should be treated as arguments, either as in my example or with the xargs command. Note that there is a limit on the maximum length of a command line, though.
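Both patterns side by side, with echo standing in for the macOS-only open command and a small function faking the mxmlc output line from the question:

```shell
#!/bin/sh
# fake compiler output line, as in the question
compile() { printf '/tmp/DocumentClass.swf (994 bytes)\n'; }

# 1) command substitution: capture the name, pass it as an argument
name=$(compile | tail -n 1 | sed 's/[[:space:]].*$//')
echo "$name"

# 2) xargs: turn the pipeline's output into an argument
compile | tail -n 1 | sed 's/[[:space:]].*$//' | xargs echo
```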
