I am new to shell scripting. I keyed in the command
$ ls -l >out.txt
and then viewed the output:
$ vi out.txt
The contents of the file were
total 8
-rw-rw-r-- 1 arun arun 0 May 5 19:55 out.txt
I now do this:
$ ls -l
total 12
-rw-rw-r-- 1 arun arun 54 May 5 19:55 out.txt
Why is there a discrepancy between the output I received on the terminal and the output that was saved in the file out.txt?
The first time you ran ls, out.txt was empty.
The second time you ran ls, out.txt contained the results of the first ls, hence it was not empty.
As soon as the shell parsed the command and saw that stdout was redirected to out.txt, it opened out.txt in your directory with size 0 bytes. When you ran ls -l later in the shell, out.txt already had content, and that is the size it showed.
When you ran
ls -l >out.txt
the sequence of events was:
1. Open the file out.txt for writing. Initially, the file size is 0 bytes.
2. Run ls -l, which sees the empty file out.txt.
3. Write the output of ls -l to out.txt.
After step 3, out.txt is a 54-byte file, which you observe with your second invocation of ls -l.
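A quick way to see that the shell opens and truncates the target before the command even runs (an illustrative session; nosuchcommand is a deliberately nonexistent name):
$ echo hello >out.txt
$ nosuchcommand >out.txt   # the command fails, but the shell has already truncated out.txt
$ wc -c out.txt
0 out.txt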
... because your first command put data into out.txt. Its size is necessarily larger after that.
I am trying to run an executable compiled with g++ in the terminal, with separate input and output files. I want to put a limit on the output: when the output file reaches a specific number of lines, the program should stop. I saw the head command used for this in bash.
./a.out | head --lines 100 <input.txt >output.txt
But when executed, it takes its input from the input.txt file, truncates it to 100 lines, and prints those to the output.txt file. What I want is to run the a.out executable with input taken from input.txt and the results printed to output.txt. How can I get this done?
But what I want it to do is, run the a.out executable taking input from input.txt file, and then printing the results to output.txt file.
In your command, the <input.txt redirection applies to head, not to a.out, so head reads the file directly and the pipe from a.out is never consumed. The correct arrangement of redirections is:
./a.out <input.txt | head --lines 100 >output.txt
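As a quick sanity check of that pipeline shape, with seq standing in for a.out (purely illustrative):
$ seq 1 1000 | head --lines 100 | wc -l
100
Once head has written 100 lines it exits, and the writer receives SIGPIPE on its next write, so a long-running a.out is terminated rather than left producing unread output.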
When I execute the command ls on my system, I get the following output:
System:~ user# ls
asd goodfile testfile this is a test file
However, when I pipe ls to another program (such as cat or gawk), the following is output:
System:~ user# ls | cat
asd
goodfile
testfile
this is a test file
How do I get ls to read the terminal size and output the same over a pipe as it does when printing directly to the terminal?
Since I'm using bash, I used the following to achieve the desired output:
System:~ user# ls -C -w "$(tput cols)" | cat
Use ls -C to get columnar output again.
When ls detects that its output isn't a terminal, it assumes that its output is being processed by some other process that wants to parse it, so it switches to -1 (one-entry-per-line) mode to make parsing easier. To make it format in columns as when it's outputting directly to a terminal, use -C to switch back to column mode.
(Note: you may also have to use --color if you care about color output, which is likewise suppressed when the output goes to a pipe.)
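For reference, a script can make the same terminal-versus-pipe distinction itself with test's -t operator, which checks whether a file descriptor is attached to a terminal; this mirrors the isatty() check that ls performs. A minimal sketch:
if [ -t 1 ]; then
    echo "stdout is a terminal"      # ls would default to columns here
else
    echo "stdout is a pipe or file"  # ls would default to one entry per line
fi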
Maybe -x ("list entries by lines instead of by columns"), possibly combined with -w ("assume screen width instead of current value"), is what you need.
When the output goes to a pipe or non-terminal, the output format is like ls -1. If you want the columnar output, use ls -C instead.
The reason for the discrepancy is that it is usually easier to parse one-line-per-file output in shell scripts.
Please explain the output of this shell command:
ls >file1 >file2
Why does the output go to file2 instead of file1?
bash keeps only one redirection per file descriptor. If multiple redirections are provided for the same descriptor, as in your example, they are processed from left to right and only the last one takes effect. (Notice, though, that each file is still created, or truncated if it already exists; the others just won't be used by the process.)
Some shells (like zsh) have an option to allow multiple redirections. In bash, you can simulate this with a series of calls to tee:
ls | tee file1 file2 > /dev/null
Each call to tee writes its input to the named file(s) and to its standard output.
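To see both copies land (an illustrative session):
$ printf 'a\nb\n' | tee file1 file2 >/dev/null
$ cat file1 file2
a
b
a
b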
If the shell finds multiple redirections of the same output, it redirects it to the last file given, in your case file2, since redirections are evaluated from left to right.
While it works, you should not do something like that!
You first redirected the STDOUT stream to go to file1 but then immediately redirected it to go to file2. So, the output goes to file2.
Every process initially has three file descriptors: 0 for STDIN, 1 for STDOUT, and 2 for STDERR. >file1 means: open file1 and assign it file descriptor 1. Note that the process about to execute doesn't care where its STDOUT actually goes; it just writes to whatever is described by file descriptor 1.
For a more technical description of how this works, see this answer.
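On Linux you can inspect a shell's descriptor table directly under /proc (Linux-specific; descriptor 3 and the target path shown are illustrative):
$ exec 3>file1            # open file1 on descriptor 3 in the current shell
$ readlink /proc/$$/fd/3  # the descriptor is just a reference to the open file
/home/arun/file1
$ exec 3>&-               # close descriptor 3 again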
The redirect operator > is short for a stdout redirection (1>). Since the command line is evaluated left to right, the last stdout redirection is the one in effect when ls runs.
ls 1> file1 1>file2
is equivalent to
ls >file1 >file2
If you're trying to redirect stderr, use
ls > file1 2> file2
0 = stdin
1 = stdout
2 = stderr
Try this; you'll notice that file2 receives the stderr message (the bogus ------ argument makes ls fail):
ls ------ > file1 2> file2
Then try these; in both cases the output is on stdout and goes to file1:
ls >file1 2>file2
ls 1>file1 2>file2
Because the first redirection gets overridden by the second. Note, though, that an empty file1 is still created, since file1 was opened for output.
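You can confirm this yourself; file2's size depends on the directory contents, so the counts are left for you to run:
$ rm -f file1 file2
$ ls >file1 >file2
$ wc -c file1 file2   # file1 is 0 bytes; file2 holds the listing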
Suppose I have a very simple ASCII file that contains only
11111111
Now I want to use a command to find how many bytes it really has, not how many bytes the system allocated for it. I tried
ls -s
and
du
but they only output
4
I think that's how many blocks the system allocates for this file. How can I use a command to find the real size of such a small file?
You need to use du -b to see the size of the file in bytes.
$ du -b file
9 file
wc -c will do:
$ echo "11111111" > file
$ wc -c file
9 file
You can use the stat command to get information about a file. For instance, to get the size of a file in bytes:
$ echo "11111111" > file
$ stat -c %s file
9
Type man stat to see all of the other useful things it can tell you about a file.
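Putting the approaches side by side (a GNU coreutils session on a filesystem with 4 KiB blocks, matching the question's output):
$ echo "11111111" > file
$ du file        # allocated size, in 1 KiB units
4       file
$ du -b file     # apparent size in bytes (a GNU extension)
9       file
$ stat -c %s file
9
$ wc -c file
9 file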
I'm basically asking why:
head -c 2 > /tmp/first-two-bytes
cat /tmp/first-two-bytes -
doesn't copy the first two bytes of stdin to /tmp/first-two-bytes and then dump the entire contents of stdin to stdout.
[Edit] Just to be clear, here's what happens on my machine:
$ uname -a
Darwin Myles-Byrnes-iMac.local 11.3.0 Darwin Kernel Version 11.3.0: Thu Jan 12 18:47:41 PST 2012; root:xnu-1699.24.23~1/RELEASE_X86_64 x86_64
$ echo "hello, world" | (head -c 2 > /tmp/first-two-bytes; cat /tmp/first-two-bytes -)
he$ cat /tmp/first-two-bytes
he$
Your commands do exactly what they should. Remember that a stream is not a file. Whatever is read from the stream is removed from it; there is no rewinding (unless you implement it yourself with a buffer in your app, but then it is a property of the app, not of the stream). The first command reads 2 bytes from stdin. The other outputs the file from /tmp plus the "entire contents of stdin", but at the point it is called, the "entire contents" of stdin is already two bytes shorter than before the previous command ran.
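You can watch successive reads consuming the stream; each dd below performs a single 2-byte read, and the second one picks up exactly where the first stopped (illustrative):
$ printf 'abcdef' | { dd bs=2 count=1 2>/dev/null; dd bs=2 count=1 2>/dev/null; }
abcd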
As you can see from the following session on my machine, the behaviour is exactly what you asked for:
$ echo "hello, world" | (head -c 2 > /tmp/first-two-bytes; cat /tmp/first-two-bytes -)
hello, world
$ cat /tmp/first-two-bytes
he$
Note that the trailing $ after "he" is the shell prompt; the file has no final newline.
Each byte in a stream can only be read once. It is entirely possible that head could be implemented so that it reads only 2 bytes, but it is also possible that head could be implemented to read the entire stream and output only the first 2 bytes. If the latter implementation is used, stdin will be exhausted before cat ever sees any data.
If you want head-like behaviour that is guaranteed to read exactly 2 bytes of data from the input stream and is maximally portable, you probably want to use dd. Just replace head -c 2 with dd bs=2 count=1.
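A minimal sketch of that substitution, reusing the filename from the question (on GNU dd you can add iflag=fullblock to guarantee the full 2 bytes even if a single read returns short):
$ echo "hello, world" | (dd bs=2 count=1 of=/tmp/first-two-bytes 2>/dev/null; cat /tmp/first-two-bytes -)
hello, world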
If I understand correctly, you want to write only the first two bytes from standard input into a special file and then print the entire input (including the first two bytes) to standard output.
Here's how I'd do it:
while IFS= read -r -n2 c2; do          # -n2: read two characters at a time
  if [ ! -f tmp.txt ]; then
    printf '%s' "$c2" > tmp.txt        # save only the first chunk
  fi
  # read strips the newline it consumes, so a short chunk means a line ended
  if [ ${#c2} -lt 2 ]; then
    printf '%s\n' "$c2"
  else
    printf '%s' "$c2"
  fi
done < input.txt
Basically you read the input two characters at a time, write only the first chunk to tmp.txt, and echo every chunk back to standard output (restoring the newlines that read strips).