mplayer fails to get stdin stream from golang

I want to write a simple command-line m3u8 player for Linux. (Let me know if one already exists.)
An m3u8 file contains several ts file URLs, and the m3u8 file itself is updated dynamically over the network. Usually one ts file holds only a few seconds of media, so I need to download the m3u8 file and the ts files inside it again and again, then use mplayer to play the stream continuously. I suppose this is like a network radio.
Here is what I have done:
First, I launch the mplayer process and get its stdin:
mplayer_cmd := exec.Command("sh", "-c", "mplayer -msglevel all=9 -cache 80 -")
mplayer_writer, mplayer_err := mplayer_cmd.StdinPipe()
Then I fetch the m3u8 file, extract the ts URLs from it, wget the content of each ts file, and write it to mplayer's stdin. I repeat this step again and again:
out, err = exec.Command("sh", "-c", "wget " + m3u8_url + " -qO - | grep '.ts'").Output()
...
out, err = exec.Command("sh", "-c", "wget " + ts_url + " -qO -").Output()
...
n, err = mplayer_writer.Write(out)
fmt.Println("wrote ", n)
No sound comes out of mplayer. Compared with a successful run from the command line, there is this related error message:
Cache empty, consider increasing -cache and/or -cache-min. [performance issue]
A suspicious detail: mplayer forks a child process when launched. Will the stdin/stdout pipes break in this situation?
| | \-+- 03027 hgneng mplayer -msglevel all=9 -cache 80 -
| | \--- 03033 hgneng mplayer -msglevel all=9 -cache 80 -

Sorry, it's my fault. I grabbed the stdout pipe of mplayer somewhere for debugging; however, the code hangs there because there is no output. I found this with godebug.
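For reference, here is a minimal sketch of that setup in Go (assuming mplayer is on the PATH): the key point is to drain or discard the child's stdout/stderr so it can never block on a full pipe while you keep writing segments to its stdin.

```go
package main

import (
	"fmt"
	"io"
	"os/exec"
)

// startPlayer launches a command that reads from stdin, and discards its
// stdout/stderr so the child never blocks on a full, unread pipe buffer.
func startPlayer(name string, args ...string) (io.WriteCloser, *exec.Cmd, error) {
	cmd := exec.Command(name, args...)
	stdin, err := cmd.StdinPipe()
	if err != nil {
		return nil, nil, err
	}
	// Discard the player's output instead of leaving the pipes unread.
	cmd.Stdout = io.Discard
	cmd.Stderr = io.Discard
	if err := cmd.Start(); err != nil {
		return nil, nil, err
	}
	return stdin, cmd, nil
}

func main() {
	stdin, cmd, err := startPlayer("mplayer", "-msglevel", "all=9", "-cache", "80", "-")
	if err != nil {
		fmt.Println("start:", err)
		return
	}
	// Feed each downloaded .ts segment here, e.g. stdin.Write(segment).
	stdin.Close()
	cmd.Wait()
}
```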

Related

In Go, when running exec.Command with /usr/bin/script before a command, an error is thrown: /usr/bin/script: invalid option -- 'P'

I am using Go to automate IBM Aspera uploads and would like to capture the stdout/stderr percentage progress while the transfer runs.
I am running this on an ubuntu:latest Docker image.
This is the entire command that I am trying to run in Go:
/usr/bin/script -e -c 'ascp -P <port> -m 3g -l 3g -i <secret> local_folder <user>#<host>:<remote-folder>' > percentage.txt 2>&1
This is how I am calling this command in my Go project
cmd := exec.Command("/usr/bin/script", "-e", "-c", "'ascp", "-P <port>", "-m 3g", "-l 3g", "-i <secret>", "local_folder", "<user>#<host>:<remote-folder>'", "> percentage.txt", "2>&1")
I have found I have to use /usr/bin/script to capture the output of the ascp command, otherwise I am unable to capture the upload percentage from the stdout/stderr. I am only able to capture the success message and total bytes transferred.
Example percentage.txt output WITHOUT using /usr/bin/script:
Completed: 2389K bytes transferred in 1 seconds
(10275K bits/sec), in 3 files, 1 directory.
Example percentage.txt output WITH using /usr/bin/script. As you can see, I am able to preserve the upload percentage:
Script started, output log file is 'typescript'.
<sample_file> 100% 2194KB 13.3Mb/s 00:01
<sample_file> 100% 192KB 13.3Mb/s 00:01
<sample_file> 100% 3329 13.3Mb/s 00:01
Completed: 2389K bytes transferred in 1 seconds
(12268K bits/sec), in 3 files, 1 directory.
Script done.
When I take the raw command above, and run it directly on the cli of my docker instance, I have no issue and it works as expected.
However, when I attempt to run the command through the exec.Command() function, I receive this output
/usr/bin/script: invalid option -- 'P'
Try 'script --help' for more information.
exit status 1
panic: exit status 1
goroutine 1 [running]:
main.main()
/project_dir/main.go:40 +0x3e7
exit status 2
When I run println(cmd.String()) this is my output:
/usr/bin/script -e -c 'ascp -P <port> -m 3g -l 3g -i <secret> local_folder <user>#<host>:<remote-folder>' > percentage.txt 2>&1
And if I copy and paste the command string from cmd.String() into my terminal, instead of running it through exec.Command(), it works and percentage.txt captures the upload status.
What am I doing wrong here? Why is Go's exec.Command() unable to run this when my shell can?
Full code:
func main() {
    f, err := os.OpenFile("percentage.txt", os.O_RDWR|os.O_CREATE|os.O_APPEND, 0666)
    if err != nil {
        panic(err)
    }
    defer f.Close()
    mwriter := io.MultiWriter(f, os.Stdout)
    err = os.Setenv("SHELL", "bash")
    if err != nil {
        return
    }
    cmd := exec.Command("/usr/bin/script", "-e", "-c", "'ascp", "-P <port>", "-m 3g", "-l 3g", "-i <secret>", "local_folder", "<user>#<host>:<remote-folder>'", "> percentage.txt", "2>&1")
    println(cmd.String())
    cmd.Stderr = mwriter
    cmd.Stdout = mwriter
    err = cmd.Run()
    if err != nil {
        println("\n\n" + err.Error() + "\n\n")
        panic(err)
    }
}
You have mis-tokenized your command line. If this is the shell command:
/usr/bin/script -e -c 'ascp -P <port> -m 3g -l 3g -i <secret> local_folder <user>#<host>:<remote-folder>' > percentage.txt 2>&1
Then the corresponding exec.Command would almost be:
cmd := exec.Command("/usr/bin/script", "-e", "-c", "ascp -P <port> -m 3g -l 3g -i <secret> local_folder <user>#<host>:<remote-folder>")
That is, everything enclosed by single quotes on the command line is a single argument.
Using this version of the command, you will need to handle output redirection yourself (by reading the output from the command and writing it to a file).
If you really don't want to deal with reading the output from the command, you could explicitly call out to a shell:
cmd := exec.Command("/bin/sh", "-c", "/usr/bin/script -e -c 'ascp -P <port> -m 3g -l 3g -i <secret> local_folder <user>#<host>:<remote-folder>' > percentage.txt 2>&1")
...but this is typically less efficient and less robust, because now you have to be careful about characters (like >) that have special meaning to the shell, and about quoting everything properly.
Using ascp directly brings extra complexity, and you end up parsing logs that may change at any time anyway.
Have a look at the Aspera Transfer SDK.
This SDK provides a gRPC interface and Go stubs.
The SDK provides auto-resume of failed transfers, progress monitoring, multi-session, etc.
The SDK is free and can be downloaded from the IBM developer site.

PHP Composer pipe the output to the grep

How do I pipe the output of composer to grep?
I've tried:
composer install --dry-run | grep -o 'test'
but I get the same output as if there were no pipe to grep.
Your composer install command writes exclusively to standard error.
Only standard output is piped to grep, so grep receives nothing and the standard-error text is still displayed as usual.
To grep the output, you can redirect standard error (2) to standard output (1):
composer install --dry-run 2>&1 | grep -o 'test'
Edit: see shell redirection and file descriptors for more information.
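As an aside for Go users (the language used elsewhere on this page), the programmatic equivalent of 2>&1 when running a command is to merge the two streams, for example with exec.Cmd's CombinedOutput. A minimal sketch, assuming composer is installed:

```go
package main

import (
	"fmt"
	"os/exec"
)

// combinedOutput runs a command and returns stdout and stderr merged,
// the Go counterpart of the shell's "2>&1" before a pipe.
func combinedOutput(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	// composer writes its progress to stderr, so capture both streams.
	out, err := combinedOutput("composer", "install", "--dry-run")
	if err != nil {
		fmt.Println("composer:", err)
	}
	fmt.Print(out)
}
```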

tcpdump suppress console output in script & write to file

In a bash script I need to run a tcpdump command and save the output to a file. However, when I do that via > /tmp/test.txt, I still get the following output in the console:
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 1500 bytes
1 packet captured
1 packet received by filter
0 packets dropped by kernel
However, I do want the script to wait for the command to complete before continuing.
Is it possible to suppress this output?
The output you're seeing is written to stderr, not stdout, so you can redirect it to /dev/null if you don't want to see it. For example:
tcpdump -nn -v -i eth0 -s 1500 -c 1 'ether proto 0x88cc' > /tmp/test.txt 2> /dev/null

How to combine 3 commands into a single process for runit to monitor?

I wrote a script that grabs a set of parameters from two sources using wget, stores them in variables, and then starts a video-transcoding process based on the retrieved parameters. Runit was installed to monitor the process.
The problem is that when I try to stop the process, runit doesn't know that only the last transcoding process needs to be stopped, so it fails to stop it.
How can I combine all the commands in the bash script so they act as a single process/app?
The commands are something as follows:
wget address/id.html
res=$(cat res_id | grep id.html)
wget address/length.html
time=$(cat length_id | grep length.html)
/root/bin -i video1.mp4 -s $res.....................
Try wrapping them in a shell, and exec the final command so that it replaces the shell and receives runit's stop signal directly:
sh -c '
wget address/id.html
res=$(grep id.html res_id)
wget address/length.html
time=$(grep length.html length_id)
exec /root/bin -i video1.mp4 -s $res.....................
'

Can s3cmd be used to download a file and upload to s3 without storing locally?

I'd like to do something like the following but it doesn't work:
wget http://www.blob.com/file | s3cmd put s3://mybucket/file
Is this possible?
I can't speak for s3cmd, but it's definitely possible.
You can use https://github.com/minio/mc . Minio Client, aka mc, is written in Go and released under the Apache License, Version 2.0.
It implements an mc pipe command that streams data directly to Amazon S3 from incoming data on a pipe/os.Stdin. mc pipe can also pipe to multiple destinations in parallel; internally, it streams the output and performs a multipart upload in parallel.
$ mc pipe
NAME:
mc pipe - Write contents of stdin to files. Pipe is the opposite of cat command.
$ mc cat
NAME:
mc cat - Display contents of a file.
Example
#!/bin/bash
mc cat https://s3.amazonaws.com/mybucket/1.txt | mc pipe https://s3-us-west-2.amazonaws.com/mywestbucket/1.txt
To answer the question regarding s3cmd: no, it cannot (currently) read from STDIN.
It does support multipart upload, and it can stream a download to STDOUT, but apparently not the other way around.
Piping output from s3cmd works like this:
s3cmd get s3://my-bucket/some_key - | gpg -d | tar -g /dev/null -C / -xvj
Please be aware that there may be an issue with streaming gzip files: https://github.com/s3tools/s3cmd/issues/811
