Pipe dcraw output as input to cjpeg.exe - windows

Here is the command I'm trying in Windows PowerShell:
.\dcraw-9.27-ms-64-bit.exe -c "C:\Downloads\RAW_CANON_10D.CRW" | cjpeg\cjpeg.exe temp.jpg
But cjpeg isn't getting the input and shows this message.

You need to call cjpeg.exe with both input and output arguments.
cjpeg.exe <input-file> <output-file>
You are calling cjpeg.exe with a single argument, which generates the error:
cjpeg.exe temp.jpg
What you are actually trying to do is feed the result of the first command to cjpeg as its input. If dcraw returned the path of the input file, then you would execute the following:
.\dcraw-9.27-ms-64-bit.exe -c "C:\Downloads\RAW_CANON_10D.CRW" |
%{ cjpeg\cjpeg.exe $_ temp.jpg }
If dcraw returns the raw binary image on standard output, you need to save it to a temporary file first. Note that Out-File ends its pipeline (it passes nothing downstream), so run the conversion as a second statement. Also be aware that Windows PowerShell treats piped native output as text and re-encodes it, which can corrupt binary data.
.\dcraw-9.27-ms-64-bit.exe -c "C:\Downloads\RAW_CANON_10D.CRW" | Out-File temp.tiff
cjpeg\cjpeg.exe temp.tiff temp.jpg


How can I pipe output into another command?

I have a script located at /usr/local/bin/gq which is returned by the command whereis gq, well, almost. What is actually returned is gq: /usr/local/bin/gq. But the following gives me just the file path (with some whitespace):
whereis gq | cut -d ":" -f 2
What I'd like to do is pipe that into cat, so I can see the contents. However, the pipe isn't working. Any suggestions?
If you want to cat the contents of gq, then how about:
cat $(which gq)
The command which gq will result in /usr/local/bin/gq, and the cat command will act on that.
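Why doesn't the pipe work? cat with no file argument reads its standard input, so the piped path is merely displayed as text instead of being opened as a file. A minimal sketch, using a throwaway stand-in for gq rather than the real script at /usr/local/bin/gq:

```shell
# Set up a stand-in executable named gq in a temp dir (illustrative only):
dir=$(mktemp -d)
printf '#!/bin/sh\necho hello from gq\n' > "$dir/gq"
chmod +x "$dir/gq"
PATH="$dir:$PATH"

# Piping into cat only echoes the path text, because cat reads stdin:
which gq | cat

# Turning the path into an argument opens the file itself:
cat "$(which gq)"        # command substitution
which gq | xargs cat     # or xargs
```

Note that the whereis output would need its gq: prefix stripped first, whereas which prints the bare path, which is why the answer uses which.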

Find the newest JSON file in my directory and confirm the contents Mac Terminal

I want to be able to find the newest .json file in my directory, and then look through the file and search for "status": "PASSED" inside of it. Is this possible to do inside of Mac terminal?
So far I have the following:
find data/results -type f -mmin -60 | grep '.json'
Which will print in my terminal the name of the latest .json file, but I can't figure out how to then read it.
Thank you in advance!
JSON File looks like:
[
{
...,
"fieldName": "Answer",
"status": "PASSED",
"otherFieldName": "OtherAnswer",
...
}
]
To find the newest json file in a directory, you can do ls -t *.json | head -n1. Your command (the find … grep) is similar, although it can output more than one file if the dates are close together. The command I suggest will always find only the newest.
To grep that file for the passing status, of course you would do grep PASSED <file>.
To combine these commands, as with all things Terminal, there are many ways to do it. One option would be:
ls -t *.json | head -n1 | xargs grep PASSED
Another option would be:
grep PASSED "$(ls -t *.json | head -n1)"
The result will be the same.
The techniques I'm introducing you to here are xargs, which takes lines of input and appends them to the end of a command as parameters, and "command substitution" (the $() construct), which replaces that part of the command line with the output of the command inside it.
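A self-contained sketch of the whole pipeline, with two throwaway files standing in for the real results directory (file names and contents are illustrative):

```shell
# Work in a scratch directory with one old and one new JSON file:
dir=$(mktemp -d)
cd "$dir"
printf '{ "status": "FAILED" }\n' > old.json
printf '{ "status": "PASSED" }\n' > new.json
touch -t 202001010000 old.json   # backdate old.json so new.json is newest

# Newest .json by modification time, then grep it for the status:
newest=$(ls -t *.json | head -n1)
echo "$newest"                   # new.json
grep PASSED "$newest"            # { "status": "PASSED" }
```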

grep command in shell is taking too long

Below is the part of my shell program where the grep command is taking too long to produce a result:
logdir="/logs/orderjob"
recentorderjobFile=$(ls -t $logdir/orderjob* | head -n1)
start=`grep 'TWS has called script' $recentorderjobFile`
But I am not seeing any slowness when I run the grep command manually against the log file:
grep 'TWS has called script' orderjob_12042018.log
Why is the grep command in the start variable taking a long time / not producing results? How can I modify the command to get the expected results quickly?

How to get unique results with grep?

The scenario below is part of the logic I want to implement in a Jenkins job. I am trying to write a shell script.
I am using the grep command to recursively search for a particular string. A sample result that grep returns looks like this:
./src/test/java/com/ABC/st/test/pricing/Test1.java: @Tags({ "B-05256" })
./src/test/java/com/ABC/st/test/pricing/Test1.java: @MapToVO(storyID = "B-05256: prices in ST")
./src/test/java/com/ABC/st/test/pricing/Test1.java: @Tags({ "B-05256" })
./src/test/java/com/ABC/st/test/pricing/Test2.java: @Tags({ "B-05256" })
./src/test/java/com/ABC/st/test/pricing/Test2.java: @MapToVO(storyID = "B-05256:Lowest Price of the Season")
./src/test/java/com/ABC/st/test/pricing/Test2.java: @Tags({ "B-05256" })
I want to extract unique file paths such as:
/src/test/java/com/ABC/st/test/pricing/Test1.java
/src/test/java/com/ABC/st/test/pricing/Test2.java
and then use each unique path in a maven command. So:
How can I extract unique file paths from the result set given by the grep command?
How do I run a loop where in every iteration I execute the mvn command with a unique file path?
If you need only the name of the matching files, grep has a command line switch for this:
-l, --files-with-matches
Suppress normal output; instead print the name of each input file from which output
would normally have been printed. The scanning will stop on the first match. (-l is
specified by POSIX.)
Pipe your text into
sed 's/:.*//' | sort -u | while read -r path
do
echo now execute your command using "$path"
done
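Putting both pieces together, here is a sketch that builds a small source tree, uses grep -l so each matching file appears only once, and loops over the paths; the echo stands in for the real mvn invocation, and the file names and search string are illustrative:

```shell
# Build a tiny tree with two files that match the tag:
dir=$(mktemp -d)
mkdir -p "$dir/src/test"
printf '@Tags({ "B-05256" })\n@Tags({ "B-05256" })\n' > "$dir/src/test/Test1.java"
printf '@MapToVO(storyID = "B-05256")\n' > "$dir/src/test/Test2.java"

# -l prints each matching file at most once, so no sort -u is needed:
grep -rl 'B-05256' "$dir/src" | while read -r path
do
    echo "would run: mvn test for $path"
done
```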

Size of the sorted file is double than original file in powershell

I have a powershell script, that reads file content, sorts it and writes output to new file. Following is the script:
get-content $inputFile | sort > $sortedFile
The output file is sorted properly, but the output file ($sortedFile) is twice as large as the input file ($inputFile). Note: there are no duplicate or extra lines in the output file.
Any help or ideas regarding this will be helpful.
Most likely the input file is ASCII-encoded, while the default output encoding when redirecting with > in Windows PowerShell is Unicode (UTF-16), which stores two bytes per character and therefore doubles the file size.
Instead of redirecting with > you can use Out-File and specify an encoding:
get-content $inputFile | sort | out-file -encoding ASCII $sortedFile
