I am still new to shell scripting. In JavaScript it is super easy to put output into a new column: all you need is a comma. But I am still struggling to do the same in shell. I've been through most of the answers on Stack Overflow and still couldn't get it to work; most of them are about cutting from an existing file and pasting into a new one, etc. I'm pretty sure I'm making a simple syntax error somewhere.
At the moment I have this:
echo "Mq1:" >> ~/Desktop/howmanySKUs.csv
cd /Volumes/Hams\ Hall\ Workspace/Mannequin_1_WIP && ls | grep '_01\.tif$' | wc -l | sed "s/,//" >> ~/Desktop/howmanySKUs.csv
It counts the number of files in the specified directory.
I get this:
But now I am trying to output Mq1: in one column and the count of found files in a second column.
Desired Output:
Any help would be much appreciated.
You can write both parts on the same line directly:
cd /Volumes/Hams\ Hall\ Workspace/Mannequin_1_WIP && echo "Mq1:,"`ls | grep '_01\.tif$' | wc -l` > ~/Desktop/howmanySKUs.csv
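A sketch of the same idea using the $( ... ) form and grep -c, which counts matching lines itself so wc -l isn't needed (and sidesteps the leading whitespace wc -l prints on some systems):
cd "/Volumes/Hams Hall Workspace/Mannequin_1_WIP" && printf 'Mq1:,%s\n' "$(ls | grep -c '_01\.tif$')" > ~/Desktop/howmanySKUs.csv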
I have a .txt file with some information, and I need to grep the "Report:" line and save each line in a different .txt file.
It should result in something like this in the end:
case1.txt
case2.txt
case3.txt
I tried:
cat cases.txt| grep Report: | while read Report; do echo $Report | > /home/kali/Desktop/allcases/case.txt done
but it didn't work and just created one file called case.txt containing the last grepped "Report:".
I don't know if I was very clear, so I'll show this screenshot:
I want to split all these reports into a different .txt file for each report!
This case information is from a game, so don't worry!
awk would be better suited than grep and a while loop here. If acceptable, you can try:
awk '/^Report/{cnt++;close(report); report="case"cnt".txt"}/./{print > report}' file.txt
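Spelled out with comments, that same program reads:
awk '
  /^Report/ {                 # a line starting with "Report" begins a new case
      cnt++                   # bump the case counter
      close(report)           # close the previous output file, if any
      report = "case" cnt ".txt"
  }
  /./ { print > report }      # send every non-empty line to the current case file
' file.txt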
perl -ne '++$i && `printf "$_" > case$i.txt` if /Report:/' cases.txt
This loops over cases.txt and shells out to printf "$_" > case$i.txt whenever the line matches /Report:/.
Because it's Perl, there are some syntax and precedence tricks in here that make it terse and confusing.
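If you'd rather stay inside Perl than shell out, a sketch with the same effect (one file per matching line):
perl -ne 'if (/Report:/) { open my $fh, ">", "case" . ++$i . ".txt"; print $fh $_ }' cases.txt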
Basically I have a flat file that is set up like so:
/this/is/a/log/directory | 30
/this/also/is/having/logs | 45
/this/logs/also | 60
What I'm trying to do is extract the first column of this flat file which is the directory path and check if it has more than 500 log files in it. If it does, remove all but the newest 500 files.
I was trying to do something like this
#!/bin/ksh
for each in "$(awk '{ print $1 }' flat_file)"; do
cd "$each";
if [ "ls -l | wc -l >= 500" ]; then
rm `ls -t | awk 'NR>500'`
else
:
fi
done
However, from what I've read, I cannot cd from within the for loop the way I was trying to, though you can do it from within a function; so I basically just made a function and copied that code into it, and of course it didn't work (I'm not too familiar with shell scripting). Something similar to Python's os module, where I could just use os.listdir() and pass in the directory names, would be perfect, but I have yet to figure out an easy way to do this.
OK, you're on the right track, but you'll confuse the csh programmers who look at your code with for each. Why not:
for dir in $( awk '{ print $1 }' flat_file ) ; do
cd "$dir"
if (( $(ls -l | wc -l) >= 500 )); then
rm $( ls -t | awk 'NR>500' )
fi
cd -
done
Lots of little things in your original code. Why use backticks sometimes when you are using the preferred form of command substitution, $( cmd ), other times?
Enclosing your "$(awk '{print $1}' file)" in double quotes turns the complete output of the command substitution into one long string; it won't find a dir named "dir1 dir2 dir3 .... dirn", right?
You don't need a null (:) else. You can just eliminate that block of code.
ksh supports math operations inside (( .... )) pairs (just like bash).
cd - will take you back to the previous directory.
Learn to use the shell debug/trace option, set -vx. It will first show you what is going to be executed (sometimes a very large loop structure), then show each line as it actually executes, preceded by + and with variables expanded to their values. You might also want to use export PS4='$LINENO >' so debugging shows the current line number being executed.
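For example, a quick sketch of that idiom wrapped around the loop above:
export PS4='$LINENO >'    # prefix each traced command with its line number
set -vx                   # -v: echo lines as read; -x: echo commands as executed
for dir in $( awk '{ print $1 }' flat_file ) ; do
    echo "$dir"
done
set +vx                   # tracing off again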
IHTH
New to UNIX, currently learning UNIX via secure shell in a class. We've been given a few basic assignments, such as creating loops and finding files. Our last assignment asked us to
write code that will estimate the number of shell scripts in the current directory and then print out that total number as "Estimated number of shell script files in this directory:"
Unlike in our previous assignments, we are now allowed to use conditional loops, and we are encouraged to use grep and wc statements.
On a basic level I know I can enter
ls *.sh
to find all shell scripts in the current directory. Unfortunately, this doesn't estimate the total number or use grep. Hence my question: I imagine he wants us to go
grep -f .sh (or something)
but I'm not exactly sure if I am on the right path and would greatly appreciate any help.
Thank You
You can do it like this:
echo "Estimated number of shell script files in this directory:" `ls *.sh | wc -l`
I'd do it this way:
find . -executable -execdir file {} + | egrep '\.sh: | Bourne| bash' | wc -l
Find all files in the current directory (.) which are executable.
For each file, run the file(1) command, which tries to guess what type of file it is (not perfect).
Grep for known patterns: filenames ending with .sh, or file types containing "Bourne" or "bash".
Count lines.
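Note that find . also descends into subdirectories; if only the current directory should be counted, a sketch of the same pipeline using GNU find's -maxdepth:
find . -maxdepth 1 -type f -executable -execdir file {} + | egrep '\.sh: | Bourne| bash' | wc -l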
Careful, there's a trap: a .sh file is not always a shell script, as the extension is not mandatory.
What tells you a file is a shell script is the shebang #!/bin/*sh (I put a * as it could be bash, csh, tcsh, or zsh, which are all shells) at the top of the file, hence the hint to use grep. So the best answer would be:
grep '^#!/bin/.*sh' * | wc -l
This give output:
sensible-pager:#!/bin/sh
service:#!/bin/sh
shelltest:#!/bin/bash
smbtar:#!/bin/sh
grep works with regular expressions by default, so the pattern ^#!/bin/.*sh matches lines starting (the ^) with #!/bin/, followed by zero or more characters (.*), followed by sh.
You can test regexes and get an explanation of them at http://regex101.com
Piping the result to wc -l gives the number of files containing this.
To display the result, backticks or $() in an echo line is fine.
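For example, a sketch using grep -l so each file counts once (2>/dev/null hides grep's complaints about subdirectories):
echo "Estimated number of shell script files in this directory: $(grep -l '^#!/bin/.*sh' * 2>/dev/null | wc -l)"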
grep -l <string> *
will return a list of all files in the current directory that contain <string>. Pipe that output into wc -l and you have your answer.
Easiest way:
ls | grep .sh > tmp
wc tmp
That will print the number of lines, words, and bytes of the tmp file. But tmp has a line for each *.sh file in your working directory, so the number of lines gives an estimated count of the shell scripts you have.
wc tmp | awk '{print $1}' # use awk to keep just the line count from that output, or...
wc -l tmp # which returns the number of lines followed by the file name
But as many people say, the only certain way to know a file is a shell script is to look at its first line and see if there is #!/bin/bash. If you want to develop it that way, keep in mind:
head -n 1 possible_script.x # that will give you the first line
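A sketch building on that, counting files whose first line looks like a shell shebang:
count=0
for f in *; do
    [ -f "$f" ] || continue                                       # skip directories
    head -n 1 "$f" | grep -q '^#!/bin/.*sh' && count=$((count+1))
done
echo "Estimated number of shell script files in this directory: $count"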
Below, I am trying to find the latest version of a file that could be in multiple directories.
Example Directory:
~inventory/emails/2012/06/InventoryFeed-Activev2.csv 2012/06/05
~inventory/emails/2012/06/InventoryFeed-Activev1.csv 2012/06/03
~inventory/emails/2012/06/InventoryFeed-Activev.csv 2012/06/01
Here's the bash script:
#!/bin/bash
FILE = $(find ~/inventory/emails/ -name INVENTORYFEED-Active\*.csv | sort -n | tail -1)
#echo $FILE #For Testing
cp $FILE ~/inventory/Feed-active.csv;
The error I am getting is:
./inventory.sh: line 5: FILE: command not found
The script should copy the newest file as attempted above.
Two questions:
First, is this the best method to achieve what I want?
Second, what's wrong above?
It looks good, but you have spaces around the = sign. This won't work. Try:
#!/bin/bash
FILE=$(find ~/inventory/emails/ -name INVENTORYFEED-Active\*.csv | sort -n | tail -1)
#echo $FILE #For Testing
cp "$FILE" ~/inventory/Feed-active.csv
... What's wrong above?
Variable assignment. You are not supposed to put extra spaces around the = sign. The following should work:
FILE=$(find ~/inventory/emails/ -name INVENTORYFEED-Active\*.csv | sort -n | tail -1)
... is this the best method to achieve what I want?
Probably not, but the best way depends on many factors. Perhaps whoever writes those files can put them in the right location in the first place. You can also check the file modification time, but that could fail too... So as long as it works for you, I'd say go for it :)
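For instance, if "newest" should mean the newest modification time rather than the highest version number in the name, a sketch assuming GNU find's -printf:
FILE=$(find ~/inventory/emails/ -name 'INVENTORYFEED-Active*.csv' -printf '%T@ %p\n' | sort -n | tail -1 | cut -d' ' -f2-)
cp "$FILE" ~/inventory/Feed-active.csv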
I get a large log file which I have to process.
After a week, I'll get a new one. It will be the same file with new lines (logs) appended.
I just need the new added lines.
How do I do that?
EDIT: I've tried sed so far but haven't been successful
diff will allow you to find any and all differences between these files, as long as the changes are restricted to added and/or removed lines. On most Linux distributions it is part of GNU diffutils, but it exists on pretty much every Unix-like system.
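For example, in diff's default output, lines present only in the second file are prefixed with "> ", so something like this (a sketch, with old.log and new.log as placeholder names) pulls out just the additions:
diff old.log new.log | sed -n 's/^> //p'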
If lines are appended to the log file, and assuming you still have the old one, you could try:
tail -n "$(( $(wc -l < newLogFileName) - $(wc -l < oldLogFileName) ))" newLogFileName
comm -13 oldfile newfile will get you the lines that appear only in newfile (note that comm expects both files to be sorted).
# print only the lines added to new.log since old.log was saved
tail -n+$(($(wc -l < old.log)+1)) new.log
mv new.log old.log