How to touch multiple files using range in terminal - shell

I use zsh. I want to touch 0023.rb 0024.rb .... 0040.rb.
Is there a better way to touch files using a range in the terminal?
# something like this
$ touch (0023..0040).rb

tested under zsh:
kent$ touch {0010..0015}.foo
kent$ l
total 0
-rw-r--r-- 1 kent kent 0 May 15 16:20 0010.foo
-rw-r--r-- 1 kent kent 0 May 15 16:20 0011.foo
-rw-r--r-- 1 kent kent 0 May 15 16:20 0012.foo
-rw-r--r-- 1 kent kent 0 May 15 16:20 0013.foo
-rw-r--r-- 1 kent kent 0 May 15 16:20 0014.foo
-rw-r--r-- 1 kent kent 0 May 15 16:20 0015.foo
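Applied to the original question, the same zero-padded brace expansion produces the requested range directly; zsh (and bash 4 or later) preserves the leading zeros:

```shell
# Zero-padded range expansion; leading zeros are kept in zsh and bash >= 4.
touch {0023..0040}.rb
```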

You can use seq to generate a series of numbers, then pipe the output to xargs:
seq 23 40 | xargs -Inumber touch 00number.rb
You can format the numbers using a standard printf format specifier. For a minimum-4-digit, zero-padded number, use %04.f:
seq -f %04.f 99 101 | xargs -Inumber touch number.rb
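For a strictly POSIX shell without brace expansion or GNU seq, a plain loop with printf does the same job (a sketch; unnecessary if you have zsh or a recent bash):

```shell
# POSIX-portable: zero-pad each number with printf, then touch it.
i=23
while [ "$i" -le 40 ]; do
    touch "$(printf '%04d.rb' "$i")"
    i=$((i + 1))
done
```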

Related

How to crop an image into several rectangular grids using imagemagick

How can I cut a large image into a grid so that the smaller images can be uploaded to Instagram, to make up the large image in the grid view?
I think imagemagick can be used for this.
I have no idea what an Instagram grid is or what size constraints it might have, but given an input image, you can divide it into a grid, 3 tiles wide by 2 high, like this:
magick input.jpg -crop 3x2# tile-%d.png
And here are the 6 tiles:
-rw-r--r--@ 1 mark staff 62199 2 Jun 16:26 tile-0.png
-rw-r--r--@ 1 mark staff 75180 2 Jun 16:26 tile-1.png
-rw-r--r--@ 1 mark staff 69615 2 Jun 16:26 tile-2.png
-rw-r--r--@ 1 mark staff 108443 2 Jun 16:26 tile-3.png
-rw-r--r--@ 1 mark staff 121714 2 Jun 16:26 tile-4.png
-rw-r--r--@ 1 mark staff 121384 2 Jun 16:26 tile-5.png
If you are cropping into lots of smaller parts, you are better off using a zero-padded tile name like this, so that the tiles are listed in order if you wish to reassemble them:
magick input.jpg -crop 5x4# tile-%04d.png
-rw-r--r-- 1 mark staff 5976 2 Jun 16:33 tile-0000.png
-rw-r--r-- 1 mark staff 15138 2 Jun 16:33 tile-0001.png
-rw-r--r-- 1 mark staff 17625 2 Jun 16:33 tile-0002.png
-rw-r--r-- 1 mark staff 15640 2 Jun 16:33 tile-0003.png
-rw-r--r-- 1 mark staff 12695 2 Jun 16:33 tile-0004.png
-rw-r--r-- 1 mark staff 30138 2 Jun 16:33 tile-0005.png
-rw-r--r-- 1 mark staff 32371 2 Jun 16:33 tile-0006.png
-rw-r--r-- 1 mark staff 30280 2 Jun 16:33 tile-0007.png
-rw-r--r-- 1 mark staff 33469 2 Jun 16:33 tile-0008.png
-rw-r--r-- 1 mark staff 29507 2 Jun 16:33 tile-0009.png
-rw-r--r-- 1 mark staff 34697 2 Jun 16:33 tile-0010.png
-rw-r--r-- 1 mark staff 36322 2 Jun 16:33 tile-0011.png
-rw-r--r-- 1 mark staff 36616 2 Jun 16:33 tile-0012.png
-rw-r--r-- 1 mark staff 40337 2 Jun 16:33 tile-0013.png
-rw-r--r-- 1 mark staff 37466 2 Jun 16:33 tile-0014.png
-rw-r--r-- 1 mark staff 30444 2 Jun 16:33 tile-0015.png
-rw-r--r-- 1 mark staff 36170 2 Jun 16:33 tile-0016.png
-rw-r--r-- 1 mark staff 39400 2 Jun 16:33 tile-0017.png
-rw-r--r-- 1 mark staff 38850 2 Jun 16:33 tile-0018.png
-rw-r--r-- 1 mark staff 36439 2 Jun 16:33 tile-0019.png
To make any image into a grid of squares with ImageMagick, you need to decide the number of tiles in advance. A command like this will start by cropping the input image to an exact square, then crop that square into a 3x3 grid of smaller squares...
convert in.png -gravity center -extent 1:1 -crop 3x3# out%02d.png
That "-extent" crops the input to the largest possible square so when it's cut into a 3x3 grid the finished images are square, too. To crop the image into a 3x4 grid you'll use a command more like this...
convert in.png -gravity center -extent 3:4 -crop 3x4# out%02d.png
In that example the "-extent" crops the input image to an exact aspect ratio of 3:4 so when you crop it into 3 pieces by 4 pieces they'll all be squares.
Both examples will produce output images with sequentially numbered file names like "out01.png", "out02.png", etc.
If you want to number the output images in the order you need to upload them, you'll probably want that numbering in reverse. You can add "-reverse -scene 1" to the command just before writing the outputs to get the file names of those cropped squares numbered in the order you'll use for uploading.
If you're using IM7, change the "convert" to "magick" in those commands.
Note: The syntax that allows "-extent" to use an aspect ratio like "3:4" has only been available since early 2018. Using older versions of ImageMagick might require manually calculating for that first crop to get the input image to the proper aspect ratio (... or using FX expressions to set a viewport and "-distort" to simulate the crop).
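For those older versions, the geometry of that first crop can be computed by hand; here is a sketch (aspect_crop is a hypothetical helper, and the width/height would come from `identify -format '%w %h' in.png`):

```shell
# Print the WxH geometry of the largest centred region with a given aspect
# ratio, suitable for -crop. Integer arithmetic truncates odd dimensions.
aspect_crop() {     # usage: aspect_crop width height aspect_w aspect_h
    w=$1; h=$2; aw=$3; ah=$4
    if [ $((w * ah)) -gt $((h * aw)) ]; then
        printf '%dx%d' $((h * aw / ah)) "$h"    # image too wide: limit width
    else
        printf '%dx%d' "$w" $((w * ah / aw))    # image too tall: limit height
    fi
}
# e.g., for an 800x600 input to be tiled 3x4:
#   convert in.png -gravity center -crop "$(aspect_crop 800 600 3 4)+0+0" \
#       +repage -crop 3x4# out%02d.png
```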

Using diff command in unix to find the difference

I have two text files (new.txt and old.txt) which contain the recursively listed directories.
new.txt
338465485 16 drwxr-x--- 26 encqa2 encqa2 16384 Nov 13 06:04 ./
338465486 4 drwxr-x--- 4 encqa2 encqa2 4096 Sep 19 08:38 ./excalibur
338465487 8 drwxr-x--- 3 encqa2 encqa2 8192 Nov 11 14:33 ./excalibur/data_in
338465488 4 drwxr-x--- 2 encqa2 encqa2 4096 Nov 9 23:16 ./excalibur/data_in/archive
old.txt
338101011 40 drwxr-x--- 26 encqa2 encqa2 36864 Nov 13 06:05 ./
338101012 4 drwxr-x--- 4 encqa2 encqa2 4096 Dec 14 2016 ./manual
338101013 4 drwxr-x--- 2 encqa2 encqa2 4096 Aug 25 2016 ./manual/sorted
338101014 4 drwxr-x--- 2 encqa2 encqa2 4096 Aug 25 2016 ./manual/archive
338101015 4 drwxr-x--- 4 encqa2 encqa2 4096 Aug 25 2016 ./adp
338101016 4 drwxr-x--- 6 encqa2 encqa2 4096 Aug 25 2016 ./adp/0235
What I need is for it to give me only the directories, i.e. the expected output after diff should be:
./
./excalibur
./excalibur/data_in
./excalibur/data_in/archive
./excalibur/archive
./shares
./shares/data_in
./shares/data_in/archive
./shares/sorted
Please provide me the command.
If I understand correctly, you want to get those lines from the two text files which differ, but from these lines you want to output only the directory names, not the full information.
If you do a
diff {old,new}.txt
the differing lines are marked in the output with either a '>' or a '<' in the first column, so you get the desired lines by grepping for these characters:
diff {old,new}.txt | grep '^[<>]' | ....
Now you need only the file names. This is easiest if you know for sure that your paths won't contain any spaces. In this case, you can, for instance, pipe your data into:
... | grep -oE ' [^ ]+$' | cut -d ' ' -f 2 | ...
If, however, the file names can contain spaces, you need to follow a different strategy. For instance, if you know that the number of characters in each line up to the file name is always the same, you can use cut -c ... to select the last portion of the line. Otherwise, you would need to process each line using a regular expression that describes the portion you want to throw away. In that case I would use Perl or Ruby, because I'm most familiar with them, but it can also be done with other tools: zsh, awk, sed.
After this, you need to remove duplicates. These may occur for instance if a line differs between new.txt and old.txt not in the filename part, but in the file information part. This can be done by finally piping everything into
.... | sort -u
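Putting the pieces together for the no-spaces case (awk '{print $NF}' stands in for the grep/cut pair above; either works). The two printf lines below create stand-in files just to make the sketch self-contained:

```shell
# Stand-ins for the real old.txt/new.txt:
printf '338101011 40 drwxr-x--- ./\n338101012 4 drwxr-x--- ./manual\n' > old.txt
printf '338101011 16 drwxr-x--- ./\n338101012 4 drwxr-x--- ./excalibur\n' > new.txt
# Differing lines only, last field (the path), duplicates removed:
diff old.txt new.txt | grep '^[<>]' | awk '{print $NF}' | sort -u
```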

How can I iterate over .log files, process them through awk, and replace them with output files with different extensions?

Let's say that we have multiple .log files on a prod unix machine (SunOS) in a directory:
For example:
ls -tlr
total 0
-rw-r--r-- 1 21922 21922 0 Sep 10 13:15 file2017-01.log
-rw-r--r-- 1 21922 21922 0 Sep 10 13:15 file2016-02.log
-rw-r--r-- 1 21922 21922 0 Sep 10 13:15 todo2015-01.log
-rw-r--r-- 1 21922 21922 0 Sep 10 13:15 fix20150223.log
The purpose here is that via nawk I extract specific info from the logs (parse the logs) and "transform" them to .csv files in order to load them into Oracle tables afterwards.
Although the nawk has been tested and works like a charm, how could I write a bash script that automates the following:
1) For a list of given files in this path
2) nawk (to do my extraction of specific data/info from the log file)
3) Output separately each file to a unique .csv to another directory
4) remove the .log files from this path
What concerns me is that the loadstamp/timestamp at the end of each filename is different. I have implemented a script that works only for the latest date (e.g. last month). But I want to load all the historical data, and I am a bit stuck.
To visualize, my desired/target output is this:
bash-4.4$ ls -tlr
total 0
-rw-r--r-- 1 21922 21922 0 Sep 10 13:15 file2017-01.csv
-rw-r--r-- 1 21922 21922 0 Sep 10 13:15 file2016-02.csv
-rw-r--r-- 1 21922 21922 0 Sep 10 13:15 todo2015-01.csv
-rw-r--r-- 1 21922 21922 0 Sep 10 13:15 fix20150223.csv
How could this bash script be achieved? The loading will only take place once; it's historical, as mentioned.
Any help could be extremely useful.
An implementation written for readability rather than terseness might look like:
#!/usr/bin/env bash
for infile in *.log; do
  outfile=${infile%.log}.csv
  if awk -f yourscript <"$infile" >"$outfile"; then
    rm -f -- "$infile"
  else
    echo "Processing of $infile failed" >&2
    rm -f -- "$outfile"
  fi
done
To understand how this works, see:
Globbing -- the mechanism by which *.log is replaced with a list of files with that extension.
The Classic for Loop -- The for infile in syntax, used to iterate over the results of the glob above.
Parameter expansion -- The ${infile%.log} syntax, used to expand the contents of the infile variable with any .log suffix pruned.
Redirection -- the syntax used in <"$infile" and >"$outfile", opening stdin and stdout attached to the named files; or >&2, redirecting logs to stderr. (Thus, when we run awk, its stdin is connected to a .log file, and its stdout is connected to a .csv file).
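To also satisfy point 3 of the question (writing each .csv into another directory), the same loop needs only a destination prefix; `converted` is a placeholder name here, and the guard skips the loop body when the glob matches nothing:

```shell
#!/usr/bin/env bash
outdir=converted                # placeholder destination directory
mkdir -p "$outdir"
for infile in *.log; do
  [ -e "$infile" ] || continue                # glob matched no files
  outfile=$outdir/${infile%.log}.csv          # strip .log, append .csv
  if awk -f yourscript <"$infile" >"$outfile"; then
    rm -f -- "$infile"
  else
    echo "Processing of $infile failed" >&2
    rm -f -- "$outfile"
  fi
done
```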

bash: get file names from string variable

I have the following content in ${f} variable (this being a string):
-rw-r--r-- 1 ftp ftp 407 Jul 20 04:46 abc.zip
-rw-r--r-- 1 ftp ftp 427 Jul 20 05:59 def.zip
-rw-r--r-- 1 ftp ftp 427 Jul 20 06:17 ghi.zip
-rw-r--r-- 1 ftp ftp 427 Jul 20 06:34 jkl.zip
-rw-r--r-- 1 ftp ftp 227820 Jul 20 08:47 mno.zip
What I would like to get is only the files names out of it, such as:
abc.zip
def.zip
ghi.zip
jkl.zip
mno.zip
You can use awk to print the last field in each line. Use <<< input redirection to use the variable value as input.
awk '{print $NF}' <<<"$f"
Note that this won't work if any of the filenames contain spaces; you'll just get the part after the last space. Unfortunately, parsing ls output is not trivial.
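If the names can contain spaces, one workaround (assuming the fixed nine-column long-listing format shown above) is to strip the first eight fields instead of keeping the last one:

```shell
# Drop the first eight whitespace-separated fields; whatever remains is the
# file name, spaces included.
sed -E 's/^([^ ]+ +){8}//' <<<"$f"
```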

Keep line formatting in browser with bash CGI

I'm using this bash CGI:
#!/usr/bin/sh
echo "Content-type: text/html"
echo ""
echo `ls -al`
And it produces, e.g.:
total 52 drwxrwxrwx. 2 root root 4096 Feb 2 18:34 . drwxr-xr-x. 8 root
root 4096 Feb 2 17:58 .. -rw-r--r--. 1 root root 36310 Feb 2 17:45
dds.jpg -rw-rw-rw-. 1 user user 50 Feb 2 18:03 dds_panel.htm
-rwxrwxrwx. 1 user user 460 Feb 2 18:34 test-cgi.cgi
In a terminal each entry appears neatly on its own line, but in the browser they all appear on the same line. What's the best way to keep the formatting?
If you do not need any HTML formatting, simply change the content type to text/plain.
If you need HTML formatting, your output should contain a complete HTML page. In this case, surround your output with <pre>, replace newlines with <br>, or convert your output into something like a list or table.
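A minimal corrected version of the script above: the backtick substitution is what collapses the newlines in the first place (the unquoted expansion word-splits them into single spaces), so run ls directly and wrap it in <pre>:

```shell
#!/usr/bin/sh
echo "Content-type: text/html"
echo ""
echo "<pre>"
ls -al          # run directly; its newlines reach the browser intact
echo "</pre>"
```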
