Redirecting output of stack build --exec - haskell-stack

I'm currently building and running my Haskell program like this:
stack build --exec "myprog with some-args" --file-watch
which nicely rebuilds and runs the program again when I make a change.
However, I can't figure out how to redirect the program output to a file, overwriting the file with each restart. I can do something like
stack build --exec "myprog with some-args" --file-watch > out.log
(or, because I'm on PowerShell, stack build --exec [args as before] | Out-File out.log)
but that only truncates the file once, when stack starts; every execution after that appends, so the results of the first execution remain in the file. I tried
stack build --exec "myprog with some-args > out.log" --file-watch
but that just sends > and out.log as additional arguments to my program, instead of redirecting output.
How can I redirect output of my program to a file, overwriting the file on each execution, when using stack build --file-watch --exec to run my program?
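One workaround sketch (not from the thread, and `myprog with some-args` is a stand-in for your actual program): since stack passes the `--exec` arguments straight to the program without involving a shell, the redirection has to live inside something that is itself an executable, such as a small wrapper script:

```shell
# Hypothetical wrapper: the shell's > redirection truncates out.log at
# the start of each run, so every rebuild overwrites the previous log
cat > run.sh <<'EOF'
#!/bin/sh
exec myprog with some-args > out.log
EOF
chmod +x run.sh

# Then point --exec at the wrapper instead of the program itself:
#   stack build --exec ./run.sh --file-watch
```

On Windows the same idea works with a `.bat`/`.ps1` wrapper; each restart re-invokes the wrapper, so the redirection is re-evaluated and the file is overwritten.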

Related

How to show the output for the execution for any command and/or script in stdout and file but keeping the color in the stdout?

Some commands print colored output to stdout in the terminal, for example:
git status
mvn help:help -Ddetail=true
gradle build
any Linux command (ls [-...], etc.)
Note: the same applies to scripts that contain:
executions of Linux commands
executions of tool commands
executions of other scripts
Therefore the following is possible:
./mvnw help:help -Ddetail=true
./gradlew build
./customscript.sh
So far nothing is new and everything works as expected, for:
linux_command
tool_command (maven, git, gradle, etc.)
script.sh (executes Linux/tool commands and other scripts)
So if any of them prints some colors in the terminal (stdout), that is the default behavior of each command/tool.
Now, if I want to see the output in the terminal (as above) and also write it to some file, then according to:
How to redirect output to a file and stdout
the following is possible in general:
"linux_command" | tee [-a] "/some/path/log_file.log"
"tool_command" | tee [-a] "/some/path/log_file.log"
"script.sh" | tee [-a] "/some/path/log_file.log"
This works as expected, but the output in the terminal (stdout) no longer includes the colors.
Question:
How to show the output for the execution for any command and/or script in stdout and file but keeping the color in the stdout?
I want the same behaviour as when the pipe and tee are not included, while of course still writing the content to the .log file.
Note: I did research the script command
How to trick an application into thinking its stdout is a terminal, not a pipe
but it overrides the script.sh content.
I need a general approach, for any command and/or script.sh.
unbuffer is your command
unbuffer git status | tee => I keep my color
git status | tee => back in no-color world
You can get it:
on Debian/Ubuntu, from the expect package
on Fedora, from the expect package
on macOS, from the brew.sh expect package
on macOS, from the macports.org expect package
Source of expect package is https://core.tcl-lang.org/expect/index
FYI, if your problem is limited to Git,
Git has a config option to force color:
git -c color.status=always status
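If unbuffer is not available, another common trick on Linux (a sketch; `ls` and the log path stand in for any command and file) is util-linux's script command, which runs the command under a pseudo-terminal so it keeps emitting color codes, and then pipes through tee:

```shell
# Run the command under a pseudo-terminal so it still thinks stdout is a
# TTY and keeps its colors; tee then writes the color-code-laden output
# to both the screen and the log file
if command -v script >/dev/null; then
    script -qec "ls" /dev/null | tee -a log_file.log
fi
```

Note that the `-e`/`-c` flags are from the util-linux version of script; the BSD/macOS variant takes different options.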

Determining all the processes started with a given executable in Linux

I have a need to collect/log all the command lines that were used to start a process on my machine during the execution of a Perl script, which happens to be a test automation script. This Perl script starts the executable in question (MySQL) multiple times with various command lines, and I would like to inspect the command lines of all those invocations. What would be the right way to do this? One possibility I see is to run something like "ps -aux | grep mysqld | grep -v grep" in a loop in a shell script and capture the results in a file, but then I would have to do some post-processing to remove duplicates etc., and I could possibly miss some process command lines because of timing issues. Is there a better way to achieve this?
Processing the ps output can always miss some processes. It will only capture the ones currently existing. The best way would be to modify the Perl script to log each command before or after it executes it.
If that's not an option, you can get the child pids of the perl script by running:
pgrep -P $pid -a
-a gives the full process command. $pid is the pid of the perl script. Then process just those.
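As a sketch of that approach (the `sleep` child and log file name are stand-ins for the processes the Perl script would spawn):

```shell
# Poll the script's children; -a includes the full command line.
# Deduplicate with sort -u (note: fast-exiting children can still be
# missed between polls, which is the timing issue the question mentions).
PID=$$            # stands in for the perl script's pid
sleep 2 &         # stands in for a child the script started
pgrep -P "$PID" -a >> children.log
sort -u children.log -o children.log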
You could use strace to log calls to execve.
$ strace -f -o strace.out -e execve perl -e 'system("echo hello")'
hello
$ egrep ' = 0$' strace.out
11232 execve("/usr/bin/perl", ["perl", "-e", "system(\"echo hello\")"], 0x7ffc6d8e3478 /* 55 vars */) = 0
11233 execve("/bin/echo", ["echo", "hello"], 0x55f388200cf0 /* 55 vars */) = 0
Note that strace.out will also show the failed execs (where execve returned -1), hence the egrep command to find the successful ones. A successful execve call does not return, but strace records it as if it returned 0.
Be aware that this is a relatively expensive solution because it is necessary to include the -f option (follow forks), as perl will be doing the exec call from forked subprocesses. This is applied recursively, so it means that your MySQL executable will itself run through strace. But for a one-off diagnostic test it might be acceptable.
Because of the need to use recursion, any exec calls done from your MySQL executable will also appear in strace.out, and you will have to filter those out. But the PID is shown for all calls, and if you were to also log any fork or clone calls (i.e. strace -e execve,fork,clone), you would see both the parent and child PIDs, in the form <parent_pid> clone(......) = <child_pid>, so you should hopefully have enough information to reconstruct the process tree and decide which processes you are interested in.

How to use tar + pbzip2 with Jenkins

I've been trying to find ways to cut my Jenkins build time as much as possible, and thanks to this helpful SO post, I found pbzip2: Utilizing multi core for tar+gzip/bzip compression/decompression
Works great! A 6 min compression time was brought down to 2 mins on my machine with the following:
tar -v -c --use-compress-program=pbzip2 -f parallel.tar.bzip2 myapplication.app
But Jenkins just barfs with an Execute Shell task where I put in the above command:
+ tar -v -c --use-compress-program=pbzip2 -f parallel.tar.bz2 myapplication.app
a myapplication.appBuild step 'Execute shell' marked build as failure
The fact that the "Build step" line is getting mashed together with the output from the tar tells me it might be a background process issue that tar/pbzip2 is introducing.
I've tried introducing a #!/bin/bash -xe and get the same results. I've tried wrapping the tar command in an if statement. I've also tried putting tar in a background thread itself with & and waiting for it. Same result.
Is there any techniques I could implement to help the Jenkins process out?
Found out that even though I can run this command as the jenkins user through the command line, pbzip2 wasn't on the PATH for the Jenkins job. Pretty misleading, since there wasn't any useful output.
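A defensive version of the Execute Shell step can make that failure mode visible (a sketch; `/usr/local/bin` is an assumed install location, adjust to wherever pbzip2 lives on the build node):

```shell
# Jenkins jobs often run with a minimal PATH; add the directory that
# holds pbzip2 explicitly, and complain loudly if it still isn't found
export PATH="/usr/local/bin:$PATH"
command -v pbzip2 >/dev/null || echo "pbzip2 not found on PATH" >&2
# Once pbzip2 resolves, the original command works unchanged:
#   tar -v -c --use-compress-program=pbzip2 -f parallel.tar.bz2 myapplication.app
```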

PSQL: How can I prevent any output on the command line?

My problem: I'm trying to run a database generation script at the command line via a batch file as part of a TFS build process to enable nightly testing on a known dataset.
The scripts we run are outputting Notices, Warnings and some Errors on the command line. I would like to suppress at least the Notices and Warnings, and if possible the Errors as they don't seem to have an impact on the overall success of the scripts. This output seems to be affecting the success or failure of the process as far as the TFS build process is concerned. It's highlighting every line of output from the scripts as errors and failing the build.
As our systems are running on Windows, most of the potential solutions I've found online don't work as they seem to target Linux.
I've changed the client_min_messages to error in the postgresql.conf file, but when looking at the same configuration from pgAdmin (tools > server configuration) it shows the value as Error but the current value as Notice.
All of the lines in the batch file that call psql use the -q flag as well but that only seems to prevent the basics such as CREATE TABLE and ALTER TABLE etc.
An example line from the batch file is:
psql -d database -q < C:\Database\scripts\script.sql
Example output line from this command:
WARNING: column "identity" has type "unknown"
DETAIL: Proceeding with relation creation anyway.
Specifying the file with the -f flag makes no difference.
I can manually run the batch file on my development machine and it produces the expected database regardless of what errors or messages show on the command prompt.
So ultimately I need all psql commands in my batch files to run silently.
psql COMMAND &> output.txt
Or, using your example command:
psql -d database -q < C:\Database\scripts\script.sql &> output.txt
(Note that &> is bash syntax; in cmd.exe, use > output.txt 2>&1 instead.)
Use the psql -o flag to send the command output to the filename you wish, or /dev/null if you don't care about it.
The -q option will not prevent the query output.
-q, --quiet run quietly (no messages, only query output)
To avoid the output, you have to send the query result to a file:
psql -U username -d db_name -pXXXX -c "SELECT * FROM table_name LIMIT 5;" > C:\test.csv
use > to create a new file each time
use >> to create the file and keep appending
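Putting the pieces together (a sketch; the database name, script path, and log file are placeholders), the NOTICE/WARNING chatter can also be cut off at the source per invocation with the PGOPTIONS environment variable, with everything left over redirected:

```shell
# Raise client_min_messages for just this psql invocation, and send
# both stdout and stderr to a log file; || true ignores the exit code,
# since the errors here are deliberately being suppressed
if command -v psql >/dev/null; then
    PGOPTIONS='-c client_min_messages=error' \
        psql -q -d database -f script.sql > out.log 2>&1 || true
fi
# Windows batch equivalent:
#   set PGOPTIONS=-c client_min_messages=error
#   psql -q -d database -f script.sql > out.log 2>&1
```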

getting Bash to execute external commands through proxy

I want to add some extra logging, so I'd like bash to run "myevaluator cmdline" after expanding all the environment variables in cmdline, is that possible?
Update: basically I want to extend my bash history logging to include PID of the main process started by the command, and things from /proc/ tree.
For instance, if I run "java xyz" from bash command line, I want to log PID of the java process started by that command line.
The only way I can see to implement this would be to have bash call my custom evaluator, giving it the final command line; my evaluator would then take care of starting the process and doing the logging.
So the question is: how do I get bash to call "myevaluator cmdline" whenever bash tries to execute an external process?
Use set -x in your script (or /bin/bash -x your_script.sh) to print every line, prepended with PS4, to stderr.
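If the goal is specifically to hand each command to an evaluator just before bash runs it, a DEBUG trap is closer to the "myevaluator" hook described above (a sketch; note that $BASH_COMMAND contains the command text before variable expansion, and since the trap fires before the process starts, the child PID is not yet available at that point):

```shell
# Demo: a bash script that installs a DEBUG trap; before every simple
# command, $BASH_COMMAND holds the command about to be executed
rm -f /tmp/cmdlog
cat > /tmp/logdemo.sh <<'EOF'
#!/bin/bash
log_cmd() { printf '%s\n' "$1" >> /tmp/cmdlog; }
trap 'log_cmd "$BASH_COMMAND"' DEBUG
echo hello
EOF
bash /tmp/logdemo.sh
```

Replacing log_cmd with a call to "myevaluator" would get the command line to an external logger; capturing the child PID and /proc details would still require the evaluator to start the process itself.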
