Passing a string to the open command on macOS

I want to pipe a string to the open command in a zsh terminal.
This command: open https://www.google.com opens the web browser correctly.
However, running this command: echo https://www.google.com | open does not work. What's the correct way to pipe a string to this command?

As far as I can see from the man page, the open command does not take input from stdin, so piping into it does not make much sense; your piping syntax is correct, though. You probably want to pass the result of echo as an argument.
Try:
open $(echo https://www.google.com)
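If you do want a pipeline, a common idiom (a general shell trick, not something specific to open) is to let xargs turn stdin into arguments:

echo https://www.google.com | xargs open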

Related

Bash script that creates a command, then stores it in the user's history, so they can just press Up to edit the command before running it?

I have a script that generates a command and prints it to stdout. Fairly simple.
I want to put that command in the user's bash history, so they can just press Up on their keyboard to get access to it and edit it. Is this possible? How can I do this?
I tried doing the following
history -s "ls -la"
echo "ls -la" >> ~/.bash_history
The first one did not work, and the second command put the desired command in the user's .bash_history file, but typing history did not show it. I even tried using history -w, and that did not work either.
If I am going about this the wrong way, let me know; maybe there is another way to do this.
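For what it's worth, history -s does add its argument to the history list, but only for the shell that executes it; a script run normally executes in a child shell, which would explain why the first attempt seemed to do nothing. A minimal sketch under that assumption (generate_command.sh is a hypothetical stand-in for your generator):

# define this in ~/.bashrc so it runs inside the interactive shell itself
putcmd() {
    local cmd
    cmd=$(./generate_command.sh)   # hypothetical: prints the generated command
    history -s "$cmd"              # pressing Up now recalls $cmd for editing
}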

.sh output to .txt file - what am I doing wrong?

I am running Windows 10 and am trying to save the error output of a test.sh file to a text file.
So I created the test.sh file and wrote an unknown command in it (i.e. "blablubb").
After that I open the terminal (cmd.exe), switch to the directory and type test.sh 2>> log.txt.
Another window opens with "/usr/bin/bash --login -i \test.sh" in the title bar, shows me "bash: blablubb: command not found" and then closes immediately.
I want to save that output because the bash window only opens for a split second. Every Google search brings me to websites talking about redirecting the output, saying that stream 2 is STDERR and that I should therefore use test.sh 2>> log.txt or something similar that takes care of the STDERR stream.
If I try the same with a test.sh file and the content:
#!/bin/bash
echo hi there
I get this output in the bash window that briefly opens:
bash: #!/bin/bash: No such file or directory
hi there
But the log.txt file is empty.
If I only have echo hi there in the test.sh file, I get bash: echo: command not found in the bash window.
The log.txt is also empty.
If I type the following directly in the terminal, the output is written in the log.txt:
echo hi > log.txt 2>&1
If I type directly in the terminal:
echdo hi > log.txt 2>&1
I get the German CMD error 'Der Befehl "echdo" ist entweder falsch geschrieben oder konnte nicht gefunden werden.' ('The command "echdo" is either misspelled or could not be found.') in the log.txt file.
So I guess the redirecting of the output works fine until I use test.sh.
I know that .sh files come from the Unix world and that the problem might lie there, but I don't know why I cannot redirect the output briefly shown in the bash console to a text file.
The 2>> redirection syntax only works if the command line containing that syntax is interpreted by bash. So it won't work from the Windows command prompt, even if the program you are running happens to be written in bash. By the time bash is running, it's too late; it gets the arguments as they were interpreted by CMD or whatever your Windows command interpreter is. (In this case, I'm guessing that means the shell script will find it has a command line argument [$1] with the value "2".)
If you open up a bash window (or just type bash in the command one) and then type the test.sh 2>>log.txt command line in that shell, it will put the error message in the file as you expect.
I think you could also do it in one step by typing bash -c "test.sh 2>>log.txt" at the Windows command prompt, but I'm not sure; Windows quoting is different than *nix quoting, and that may wind up passing literal quotation marks to bash, which won't work.
Note that CMD does have the 2>> syntax as well, and if you try to run a nonexistent windows command with 2>>errlog.txt, the "is not recognized" error message goes to the file. I think the problem comes from the fact that CMD and bash disagree on what "standard error" means, so redirecting the error output from Windows doesn't catch the error output by bash. But that's just speculation; I don't have a bash-on-Windows setup handy to test.
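An untested sketch of both approaches (assuming test.sh sits in the current directory; Windows quoting may behave differently on your setup):

rem one step, typed at the cmd.exe prompt:
bash -c "./test.sh 2>> log.txt"

rem or: start bash first, then redirect inside it
bash
./test.sh 2>> log.txt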
It would help to know whether you are running Windows Subsystem for Linux (Beta) or something else; I'm assuming that's what you are using on Windows 10.
If this is the case are you using bash to run the script?
Are you using win-bash?
If it is win-bash, I'm not very familiar with it and would recommend Windows Subsystem for Linux (Beta) for reasons like this: win-bash, while cool, might not be compatible with redirection operators like 2>>.
You have stdout and stderr; by default (if you don't specify), >> (append) will only append standard output to the txt file.
If you use 2, it will append standard error to the txt file instead. Example: test.sh 2>> log.txt
This is described in more detail at this site.
For the exact command to append both stdout and stderr, go to this page.
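For reference, the usual bash idiom to append both streams to the same file (generic shell syntax, nothing specific to this setup):

./test.sh >> log.txt 2>&1   # stderr (2) is sent wherever stdout (1) points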
Please tell me if this doesn't answer your question. Also, it could be more valuable to attempt a search for the answer first and explain why your search found nothing, or to clarify more extensively what the problem is. I love answering questions and helping, but creating a new forum page for what might be an easy answer can be ineffective. I've had a bunch of fun with your question, and I hope that I've helped.
That makes a lot of sense. Thanks, Mark!
Taking what Mark says into account, I would get Windows Subsystem for Linux (Beta). There are instructions here. Then run your script from there.

Capture output of last executed command into a variable without affecting Vim and line returns

From this question: bash - automatically capture output of last executed command into a variable, I used this command:
PROMPT_COMMAND='LAST="`cat /tmp/x`"; exec >/dev/tty; exec > >(tee /tmp/x)'
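# roughly: save the previous command's captured output into $LAST, point
# stdout back at the real terminal, then tee it into /tmp/x again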
It works, but when I use Vim I get this:
# vim
Vim: Warning: Output is not to a terminal
Then Vim opens. But it takes a while. Is there a way to get rid of this message and the slowdown?
Also, when I list a directory and echo $LAST, the newlines (\n) are removed. Is there a way to keep the newlines (\n)?
I think what you're asking for is hard to achieve. Vim tests whether its output is a terminal. The command you've provided redirects the output to the tee command. tee saves its input (which is also the command's output) to the file and writes it to the terminal. But vim knows nothing about that; it only knows its output is not a terminal, so it prints the warning. And from vim's source code:
[...]
if (scriptin[0] == NULL)
    ui_delay(2000L, TRUE);
TIME_MSG("Warning delay");
which means this redirection will always get you a two-second delay.
Also, for example, man vim will not work with such redirection, because terminal output has attributes (e.g. width and height) that a generic file doesn't. So... it won't work.
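As for the lost newlines, that part is ordinary shell behavior rather than anything tee-specific: expanding $LAST unquoted word-splits it, which collapses the newlines, so quoting the expansion should keep them:

echo $LAST      # unquoted: word splitting collapses newlines to spaces
echo "$LAST"    # quoted: embedded newlines survive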

Pipe direct tty output to sed

I have a script used for building a program, whose output I redirect to sed to highlight errors and such during the build.
This works great, but the problem is that at the end of the build the script starts an application which usually writes to the terminal, and stdout and stderr redirection doesn't seem to capture its output. I'm not exactly sure how this output gets printed, and it's kind of complicated to figure out.
buildAndStartApp # everything outputs correctly
buildAndStartApp 2>&1 | colorize # Catches build output, but not server output
Is there any way to capture all terminal output? The "script" command catches everything, but I would like the output to still print to my terminal rather than redirecting to a file.
I found out script has a -c option which runs a command, printing all of its output to stdout as well as to a file.
My command ended up being:
script -c "buildAndStartApp" /dev/null | colorize
First, when you use script, the output does still go to the terminal (as well as redirecting to the file). You could do something like this in a second window to see the colorized output live:
tail -f typescript | colorize
Second, if the output of a command is going to the terminal even though you have both stdout and stderr redirected, it's possible that the command is writing directly to /dev/tty, in which case something like script that uses a pseudo-terminal is the only thing that will work.
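A small illustration of the difference, assuming the util-linux script (the same variant as the -c command above; BSD/macOS script takes its arguments differently):

sh -c 'echo direct > /dev/tty' > captured.txt 2>&1             # captured.txt stays empty
script -q -c "sh -c 'echo direct > /dev/tty'" /dev/null | cat  # the write is caught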

Switch from file contents to STDIN in piped command? (Linux Shell)

I have a program (that I did not write) which is not designed to read commands from a file. Entering commands on STDIN is pretty tedious, so I'd like to automate it by writing the commands in a file for re-use. Trouble is, if the program hits EOF, it loops infinitely trying to read the next command, dropping an endless torrent of menu options on the screen.
What I'd like to be able to do is cat a file containing the commands into the program via a pipe, then use some sort of shell magic to have it switch from the file to STDIN when it hits the file's EOF.
Note: I've already considered using cat with the '-' for STDIN. Unfortunately (I didn't know this before), piped commands wait for the first program's output to terminate before starting the second program -- they do not run in parallel. If there's some way to get the programs to run in parallel with that kind of piping action, that would work!
Any thoughts? Thanks for any assistance!
EDIT:
I should note that my goal is not only to prevent the system from hitting the end of the commands file. I would like to be able to continue typing in commands from the keyboard when the file hits EOF.
I would do something like
(cat your_file_with_commands; cat) | sh your_script
That way, when the file with commands is done, the second cat will feed your script with whatever you type on stdin afterwards.
Same as Idelic's answer, with simpler syntax ;)
cat your_file_with_commands - | sh your_script
I would think expect would work for this.
Have you tried using something like tail -f commandfile | command? I think that should pipe the lines of the file to command without closing the file descriptor afterwards. Use -n to specify the number of lines to be piped if tail -f doesn't catch all of them.
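If you go that route, a hedged sketch (note that plain tail -f starts from the last ten lines by default, and the program's stdin stays tied to the pipe, so unlike the (cat file; cat) trick it won't hand control back to your keyboard):

tail -n +1 -f commandfile | your_program   # -n +1: start from the first line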
