Bash script that behaves like cat - bash

I have an exercise where I need to write a file named my_cat.sh.
It has to work in the same way as the cat command.
Ex:
~/C-DEV-110> echo Hello > test
~/C-DEV-110> bash my_cat.sh test
Hello
I tried to search everywhere on the Internet but couldn't find an answer.
"Copy your file my_cat.sh and modify it so it takes the file to show as its first parameter."
That is the exercise statement, in case it suggests another way to find an answer.
(I'm new, so it may be really simple.)
I tried simply putting a cat in the script with nano, but it doesn't give anything back.
Thank you.

The cat command copies the contents of the files named by its arguments, or the standard input if none were specified, to the standard output.
Now, if you had to implement it using cat, your script would look like this:
#!/bin/sh
cat "$#"
However, that would be trivial. Now, if you had to implement it without resorting to external commands, you can use the shell's internal (built-in) read and echo commands combined with a loop to do the job. In order to read from a file instead of the standard input, you'll need to use redirections.
It should look similar to this:
while IFS= read -r x
do
    echo "$x"
done
Something like that would be the inner loop; there will be another, outer loop for iterating over the input files. For that you can either use "$1" and shift with a while loop, or a for loop over "$@". A combined sketch follows.
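For reference, here is one minimal way to put the two loops together (a sketch, not the only correct arrangement; printf '%s\n' is used instead of echo because echo can mangle lines that look like options):

#!/bin/sh
# my_cat.sh - minimal cat clone: show each file named on the command line
for f in "$@"
do
    while IFS= read -r line
    do
        printf '%s\n' "$line"
    done < "$f"
done

Like the exercise, this ignores edge cases such as reading standard input when no arguments are given, and files whose last line lacks a trailing newline.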

Related

bash command substitution freezes script (output too long?)--how to cope

I have a bash script that includes a line like this:
matches="`grep --no-filename $searchText $files`"
In other words, I am assigning the result of a grep to a variable.
I recently found that that line of code seems to have a vulnerability: if the grep finds too many results, it annoyingly simply freezes execution.
First, if anyone can confirm that excessive output (and exactly what constitutes excessive) is a known danger with command substitution, please provide a solid link for me. I web searched, and the closest reference that I could find is in this link:
"Do not set a variable to the contents of a long text file unless you have a very good reason for doing so."
That hints that there is a danger, but is very inadequate.
Second, is there a known best practice for coping with this?
The behavior that I really want is for excessive output in command substitution
to generate a nice human readable error message followed by an error exit code so that my script will terminate instead of freeze. (Note: I always run my scripts with "set -e" as one of the initial lines). Is there any way that I can get this behavior?
Currently, the only solution that I know of is a hack that sorta works just for my immediate case: I can limit the output from grep using its --max-count option.
Ideally, you shouldn't capture data of unknown length into memory at all; if you read it as you need it, grep simply waits until you're ready for more content.
That is:
while IFS= read -r match; do
    echo "Found a match: $match"
    # example: maybe we want to look at whether a match exists on the filesystem
    [[ -e $match ]] && { echo "Got what we needed!" >&2; break; }
done < <(grep --no-filename "$searchText" "${files[@]}")
That way, grep only writes a line when read is ready to consume it; if it produces output faster than the loop consumes it, it blocks once the relatively small pipe buffer fills. The lines you don't need never even get generated, so there's no need to allocate memory for them or deal with them in any other way.
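If you really do need the results in a variable, one possible guard (a sketch only; the 1 MiB cap and the two-pass grep are arbitrary illustrative choices, not a known best practice) is to measure the output first and refuse to capture it past a limit:

# arbitrary 1 MiB cap on captured output
cap=$((1024 * 1024))
# running grep twice is wasteful, but it keeps the sketch simple
size=$(grep --no-filename "$searchText" "${files[@]}" | head -c "$((cap + 1))" | wc -c)
if [ "$size" -gt "$cap" ]; then
    echo "error: grep output exceeds $cap bytes; refusing to capture it" >&2
    exit 1
fi
matches=$(grep --no-filename "$searchText" "${files[@]}")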

Is there a way to save output from bash commands to a "file/variable" in bash without creating a file in your directory

I'm writing commands that do something like ./script > output.txt so that I can use the files in later scripts, like ./script2 output.txt otherFile.txt > output2.txt. I remove them all at the end of the script, but when I'm testing or debugging, it's tricky to search through all the subdirectories and files that the script has created.
Is the best option just to create a hidden file?
As always, there are numerous ways to do so. If you want to avoid files altogether, you can save the output (STDOUT) of a command in a variable and pass it to the next command as a file using process substitution, <( ):
output=$(cat /usr/include/stdio.h)
cat <(echo "$output")
Alternatively, you can do so in a single command line:
cat <(cat /usr/include/stdio.h)
This assumes that the next command strictly requires a file for input.
I tend to avoid temporary files whenever possible, because they demand a cleanup step that has to run in all cases; I only fall back to them when large amounts of data have to be processed.
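If a temporary file is unavoidable, a common pattern (a sketch built from the commands in the question; mktemp and trap are standard) is to let an EXIT trap do the cleanup so it runs no matter how the script ends:

#!/bin/bash
# create a temporary file outside the working directory and
# guarantee its removal however the script exits
tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT

./script > "$tmpfile"
./script2 "$tmpfile" otherFile.txt > output2.txt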

Bash read file into script

Ok I have a really weird question, and a lot of you are probably going to ask why.
I need to read a file into a script and immediately output it back to a new file. More precisely, I'm streaming into a script and outputting to a file.
A java process from our dev team is going to call my script (script.sh), and it's going to read in a "file" and some variables.
I know you can do the following:
./script.sh var1 var2 var3
and that will allow you to access those variables via $1, $2 and $3.
That's great, and that's how they will pass a few things to the script, but they also need to read in an XML file (it's not really a file just yet; it's just output from the java process). I think this is how it will work:
./script var1 var2 var3 < file
Basically, the first thing my script needs to do is write the "file" out to, say, file.xml; then the script will go about its merry way and start working with the variables and doing the other things it needs to do.
I assume the file being passed into the script arrives on stdin, so I tried doing things like:
0> file.xml
or
/dev/stdin > file.xml
but nothing works. I think I'm just making large conceptual mistakes. Can someone please help?
Thanks!
Use cat:
cat > file.xml
With no arguments, cat reads from stdin and writes to stdout.
As for your conceptual mistake, you didn't consider that file descriptors are merely integers and can't do any work (like moving data) on their own. You need a process that can read from one and write to the other.
Another familiar process for moving data is cp, and you can indeed do cp /dev/stdin file.xml, but this is non-idiomatic and has some pitfalls.
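Putting it together, script.sh could start out something like this (a sketch; the variable names are placeholders for whatever the java process actually passes):

#!/bin/sh
# script.sh - save the XML arriving on stdin, then work with the arguments
cat > file.xml

var1=$1
var2=$2
var3=$3
# ...go about your merry way with $var1, $var2, $var3 and file.xml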

Is it possible to start a program from the command line with input from a file without terminating the program?

I have a program that users can run using the command line. Once running, it receives and processes commands from the keyboard. It's possible to start the program with input from disk like so: $ ./program < startScript.sh. However, the program quits as soon as the script finishes running. Is there a way to start the program with input from disk using < that does not quit the program and allows additional keyboard input to be read?
(cat foo.txt; cat) | ./program
I.e., create a subshell (that's what the parentheses do) which first outputs the contents of foo.txt and after that just copies whatever the user types. Then, take the combined output of this subshell and pipe it into stdin of your program.
Note that this also works for other combinations. Say you want to start a program that always asks the same questions. The best approach would be to use "expect" and make sure the questions didn't change, but for a quick workaround, you can also do something like this:
(echo y; echo $file; echo n) | whatever
Make the program wait before exiting, the way system("pause") does in a Windows console program (in a shell script, a plain read serves the same purpose), so that it does not exit immediately.
There are alternatives such as a dummy read, an infinite loop, sigsuspend, and many more; a dummy-read sketch follows.
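For instance, if the program is itself a shell script, a dummy read can reopen the controlling terminal once the redirected input runs out (a sketch; /dev/tty is standard, the surrounding loop is illustrative):

#!/bin/bash
# consume the scripted commands arriving on the redirected stdin
while read -r cmd
do
    echo "processing: $cmd"
done
# stdin is exhausted; reopen the keyboard and keep reading
exec < /dev/tty
while read -r cmd
do
    echo "processing: $cmd"
done

This only works when a controlling terminal exists, i.e. when the script was launched from an interactive session.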
Why not try something like this:
BODY=$(./startScript.sh)
if [ -n "$BODY" ]
then printf '%s\n' "$BODY" | ./program
fi
That depends on how the program is coded. This cannot be achieved by writing code in startScript.sh, if that is what you're trying to do.
What you could do is write a callingScript.sh that asks for the input first and then calls the program < startScript.sh.
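A hypothetical callingScript.sh along those lines (the prompt text and the extra variable are made up for illustration):

#!/bin/bash
# gather the extra keyboard input up front, then replay
# startScript.sh followed by those answers into the program
read -r -p "Commands to run after the script: " extra
{ cat startScript.sh; printf '%s\n' "$extra"; } | ./program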

why does redirect (<) not create a subshell

I wrote the following code
var=0
cat $file | while read line
do
    var=$line
done
echo $var
Now, as I understand it, the pipe (|) will cause a subshell to be created, and therefore the variable var on line 1 will have the same value on the last line.
However, this will solve it:
var=0
while read line
do
    var=$line
done < $file
echo $var
My question is: why does the redirect not cause a subshell to be created, or, if you like, why does the pipe cause one to be created?
Thanks
The cat command is an external command, which means it runs in its own process with its own STDIN and STDOUT. You're basically taking the STDOUT produced by the cat command and piping it into the process of the while loop.
When you use redirection, you're not using a separate process. Instead, you're merely redirecting the STDIN of the while loop from the console to the file.
Needless to say, the second way is more efficient. In the old Usenet days, before all of you little whippersnappers got ahold of our Internet (Hey you kids! Get off of my Internet!) and destroyed it with your fancy graphics and all them web pages, some people used to give out the Useless Use of Cat award on the comp.unix.shell group for posts with a spurious cat command, because the use of cat is almost never necessary and is usually more inefficient.
If you're using a cat in your code, you probably don't need it. The cat command's name comes from concatenate, and it is supposed to be used only to concatenate files together. For example, back when we used SneakerNet on 800K floppies, we would have to split up long files with the Unix split command and then use cat to merge them back together.
A pipe is there to hook the stdout of one program to the stdin of another one: two processes, possibly two shells. When you do redirection (> and <), all you're doing is remapping stdin (or stdout) to a file. Reading or writing a file can be done without another process or shell.
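As an aside (a sketch assuming bash 4.2 or later, not part of either answer above): the lastpipe option runs the last command of a pipeline in the current shell, so the piped version keeps the variable too.

#!/bin/bash
# requires bash 4.2+; lastpipe only takes effect when job control
# is off, which is the default for non-interactive scripts
shopt -s lastpipe
var=0
cat "$file" | while read -r line
do
    var=$line
done
echo "$var"    # prints the last line of $file instead of 0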
