So, I'm running the Moses machine translation system on my server. I access the terminal over SSH, and I came across an interesting problem.
The script I'm running uses > to specify an output file, and it looks like this:
~/mosesdecoder/bin/moses -f /home/tin/working/filtered/moses.ini -i /home/tin/working/filtered/input.29242 > final
Now, since the translation will take a while to finish (around 10 hours), I want to run it with nohup, but when I do that, even with & at the end, I end up with a file named "final" filled with stdout stuff.
Any idea how to avoid this?
If you're running the commands inside an actual script file, you could get rid of the > inside the script and run nohup ./scriptname.sh.
Without that redirection the script would print its output to the terminal, but nohup will redirect it to "nohup.out" in the current directory.
Source: according to the nohup manpage, if the standard output is a terminal, the standard output is appended to the file nohup.out in the current directory.
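For example, if the Moses command lives in a script (the name translate.sh below is just a made-up example) without the > final redirection, you could start it and check on it later like this:
nohup ./translate.sh &
tail -f nohup.out    # follow the progress, even after reconnecting over ssh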
Give it a shot :)
Of course, we can feed the output of any command to a file, using command > /tmp/filename.
Or, even better, use command | tee /tmp/filename to have the standard output fed to the terminal as well as to the file.
However, if I just executed command, is there a way for iTerm to reprint the output that command already fed to the console, without re-running it? (Example use case: the command is not idempotent and I want to grep something without having to touch the mouse.)
You could use the script command, which records your input + the output your commands generate.
To use it, just run script at the beginning, before you start any execution; this will drop you into a new shell whose session gets recorded in a file called typescript in the current directory.
Once you are done, exit that shell, and you'll have all of the input and output in that typescript log file.
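A minimal example session (some_command and the grep pattern are just placeholders):
script                      # start recording into ./typescript
some_command                # run your non-idempotent command as usual
exit                        # stop recording and leave the captured shell
grep 'pattern' typescript   # search the captured output without re-running anything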
So, I have a problem. I have downloaded a program from the web, and it's a command-line app. I have written code which generates some n-k commands for the app and writes them into an output file. I can write an app in Python, but it freezes on some of the commands. I have tested them manually and it seems there are two issues:
Commands must be run one by one;
Some of the commands produce output like bla-bla-bla, and this is not written into the output file. So, if I run a command ./app -p /file1 -o /file2 -s -a smth- > /fileOutput.txt, then fileOutput.txt is empty, even though in the terminal there is this bla-bla-bla message stating that something is wrong. If a command gives bla-bla-bla, the app may freeze for a while.
Here is what I want to do:
cd into the folder containing the app;
For each command in fileWithCommands, run the command and start the next one only when the previous one finishes;
If a command gives a message containing bla-bla-bla (it may look like file1 bla-bla-bla), write the command and this strange output into a file badOutputs.txt.
I have never done AppleScript before. However, this is what I've done so far:
set theFile to "/Users/MeUser/Desktop/firstCommand"
set fileHandle to open for access theFile
set arrayCommand to paragraphs of (read fileHandle)
#I have found the previous code here: http://alvinalexander.com/mac-os-x/applescript-read-file-into-list-array-examples
close access fileHandle
tell application "Terminal"
    activate
    do script "cd /Users/MeUser/Desktop/anApp/"
    repeat with command in arrayCommand
        do script command
    end repeat
end tell
However, there's a problem: if everything goes into one window, the commands pile up into a huge queue, and without "window 1" the cd and the command end up in different windows. And I am still unable to save the output.
UPDATE
I updated the script in accordance with @Mark Setchell's recommendations, so now I have this code:
set theFile to "/Users/meUser/Desktop/firstCommand"
set fileHandle to open for access theFile
set arrayCommand to paragraphs of (read fileHandle)
close access fileHandle
repeat with command in arrayCommand
    do shell script "cd /Users/meUser/Desktop/App/; " & command
end repeat
To the command I have added the following:
2>&1 /Users/meUser/Desktop/errorOut.txt
However, AppleScript treats an error from the app as an error of the script itself: if a file is corrupted and the app fails, the whole script fails. I want it to write into the error file which command failed and then move on to the next command, but instead the script just stops.
Maybe not a complete solution, but more than a comment and easier to format this way...
First Issue
Your command-line app that writes to the Terminal may be writing to stderr rather than stdout. Try redirecting stderr to the same place as stdout by using
./app -p ... > /FileOutput.txt 2>&1
Second Issue
You cannot do:
do shell script "cd somewhere"
do shell script "do_something"
because each do shell script will execute in a separate, unrelated process. So your first process will start - in the default directory like all processes - and correctly change directory and then exit. Then your second process will start - in the default directory like all processes - and try to run your command. Rather than that, you can do this:
do shell script "cd somewhere; do_something"
which starts a single process which changes directory and then runs your command line program there.
Issue Three
Why do you want to send your commands to Terminal anyway? Does the user need to see something in Terminal - seems unlikely because you want to capture the output, don't you? Can't you just run your commands using do shell script?
Issue Four
If you want to keep your normal output separate from your error output, you can do:
./app ... params ... > OutputFile.txt 2> errors.txt
Suggestion 1
You can retain all the errors from all the scripts and accumulate them in a single file like this:
./app .. params .. >> results.txt 2>&1
That may enable you to deal with errors separately later.
Suggestion 2
You can capture the output of your shell script into an AppleScript variable, say ScriptOutput, like this, and then you can parse it:
set ScriptOutput to do shell script "..."
Suggestion 3
If errors caused by your script are stopping your loop, you can enclose them in a try block like this so they are handled and everything continues:
try
    do shell script "..."
on error errMsg
    display dialog "ERROR: " & errMsg
end try
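If you'd rather not drive Terminal at all, the whole loop could also be done as a plain shell script instead of AppleScript. Here is a rough, untested sketch using the file names from your question (firstCommand, anApp, badOutputs.txt), so adjust the paths to suit:
#!/bin/bash
cd /Users/MeUser/Desktop/anApp/ || exit 1
while IFS= read -r cmd; do
    # run each command in turn, capturing stdout and stderr together;
    # the next command only starts when the previous one has finished
    out=$(eval "$cmd" 2>&1)
    # if the output mentions bla-bla-bla, log the command and its output
    if printf '%s' "$out" | grep -q 'bla-bla-bla'; then
        { echo "$cmd"; printf '%s\n' "$out"; } >> badOutputs.txt
    fi
done < /Users/MeUser/Desktop/firstCommand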
I have a Java program that I'm running using a Bash script on Mac OS X. I have two files - a FIFO that allows me to pipe commands into the program, and an output log file.
The Bash script consists of the following code:
#!/bin/bash
java -jar file.jar <./run/command-fifo >>./run/server.log 2>&1 &
echo $! >| ./run/server.pid
I honestly can't remember why I used >| in the third line (I just know that it works). In the java line, the first < redirects the fifo file to standard input. The >> should redirect standard output to the file, and the 2>&1 should redirect standard error to it as well. It then runs in the background.
The problem is that nothing is ever written to the server.log file. The command-fifo file is read, but the log is not written. The program IS writing to standard output (if I run it on its own it works fine).
I also tried script as suggested in this question but it didn't work either:
script -q /dev/null java -Xmx4G -Xms4G -jar current.jar --nogui <./run/command-fifo >>./run/server.log 2>&1 &
Anyone have ideas to get this to write to the log properly?
FOLLOWUP: I should explain a bit more of how the software works for this explanation to make sense. There are three parts at work here:
A plist that launchd uses to start the program at boot by calling the launcher script
A launcher script that handles kill signals and waits for the pid of the java process
A start script, called by the launcher script, that launches the program and saves its pid
The script given above is the start script. This launches the java process, echoes its pid to a file, then returns. The launcher script (not given here) then waits for the pid to exit before terminating. If it terminates, launchd automatically relaunches the launcher script.
Launchd has a feature that can set the standard output path for the program it launches. Essentially, it redirects stdout of the launcher script to the given file.
I did this, and lo and behold, it works. By changing the start script line to the following:
java -jar file.jar <>./run/command-fifo &
it allows standard output to be captured by launchd and written to the file. It's a slightly different solution, but it does in fact work. It's strange, because the launcher script technically has nothing to output since the java process is in the background, but however it works, it does in fact work somehow.
Of course, I'd prefer to explicitly redirect the program's standard output into a file myself (in other scripts there may be more than one output stream and I need to keep them separate). I'm still going to experiment and try to find a solution.
I think @torek's comment about buffering is probably right on the money. You can force your java process to line-buffer its output using the stdbuf utility:
#!/bin/bash
stdbuf -oL java -jar file.jar <./run/command-fifo >>./run/server.log 2>&1 &
echo $! >| ./run/server.pid
Regarding the >| operator, @torek is also correct: it forces the redirection to clobber the target file even when bash's noclobber option is set, which plain > would refuse to do.
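A quick way to see the difference for yourself (existing.file is just a throwaway example):
touch existing.file
set -o noclobber
echo test > existing.file     # refused: bash won't overwrite an existing file while noclobber is set
echo test >| existing.file    # >| forces the overwrite anyway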
There are plenty of threads here discussing how to do this for scripts or for the cmdline (mostly involving pipes, redirections, tee).
What I didn't find is a solution which can be set up once and then just works globally, without manipulating single scripts or adding something to every command line.
What I want to achieve is something like what is described in the top answer to
How do I write stderr to a file while using "tee" with a pipe?
Isn't it possible to configure the bash session so that all stderr output is logged to a file, while still writing it to console? Something I could add to .bashrc and thus automatically set up every time I login?
Software: Bash 4.2.24(1)-release (x86_64-pc-linux-gnu), xterm, Ubuntu 12.04
Try this variation on @0xC0000022L's previous solution (put it in your .bash_profile):
exec 2> >( tee log.file > /dev/tty )
A couple of caveats:
The prompt and anything you type at the command line are printed to stderr, and so will be logged in your file.
There could be an issue with the newline that terminates a command not being displayed in your terminal; I observe it on my Linux host, but not on my Mac OS X laptop. Perhaps someone else can explain and/or fix the issue. For example, if I type "echo stdout", I see the following:
$ echo stdoutstdout
$
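If you want this only in interactive shells (as something you can drop into .bashrc), a guarded variant along these lines should work; the log path is just an example:
case $- in
  *i*)
    # interactive shell: copy stderr to a per-session log while still showing it on the terminal
    exec 2> >( tee -a "$HOME/stderr-$$.log" > /dev/tty )
    ;;
esac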
I have a Lisp program that needs to run for a long, long time. I wanted to make a bash script so that I could just run ./script.sh & on my school's computer and then check the output periodically, without having to stay logged in watching the process. All I want to do is call the program "clisp" and have it execute these commands:
(load "ll.l")
(make)
and save all output to a file. How do I make this script?
Look at the nohup command:
From Wikipedia
nohup is most often used to run commands in the background as daemons. Output that would normally go to the terminal goes to a file called nohup.out if it has not already been redirected. This command is very helpful when there is a need to run numerous batch jobs which are inter-dependent.
You can launch the script with nohup, and when you log back in, see the progress in the nohup.out file.
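For example (using the script.sh name from your question, and assuming the script itself doesn't redirect its output):
nohup ./script.sh &
# ...log out, come back later...
tail nohup.out    # check how far it has got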
You just want something like this:
#!/bin/sh
clisp > OUTPUTFILE 2>&1 << EOF
(load "11.1")
(make)
EOF