Reroute File Output to stdout in Bash Script - bash

I have a script, wacaw (http://webcam-tools.sourceforge.net/), that outputs video from my webcam to a file. I am trying to stream that to some sort of display, i.e. VLC, QuickTime, etc., to get a "mirror" type effect.
Aside from altering the source code for wacaw, is there any way to force a script's file output to stdout so I can pipe it to something like VLC? Is it even possible to stream video like that?
Thanks for your help!
UPDATE: just to clarify:
running the wacaw script is formatted as follows:
./wacaw --video --duration 5 --VGA myFile
and it outputs a file myFile.avi. If I try to use a named pipe:
mkfifo pipe
./wacaw --video --duration 5 --VGA pipe
it outputs a file pipe.avi instead of writing to the pipe.

You can use named pipes. Use mkfifo to create the pipe, hand that filename to the writing process, and then read from that file with the other process. I have no idea whether video in particular would work that way, but many other things do.
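A minimal sketch of the mechanics, with generic commands standing in for the recorder and the player (/tmp/mypipe is a made-up name for illustration):
mkfifo /tmp/mypipe
# reader: consumes whatever arrives on the pipe (stands in for the player)
wc -c < /tmp/mypipe &
# writer: told to "write to a file", but the file is really the pipe
dd if=/dev/zero of=/tmp/mypipe bs=1k count=4
rm /tmp/mypipe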

At least in bash you can do it like this:
Original command:
write-to-file-command -f my-file -c
Updated command:
write-to-file-command -f >(pipe-to-command) -c
write-to-file-command will think >(pipe-to-command) is a write-only file and pipe-to-command will receive the file data on its stdin.
(If you just want the output on stdout you could use
write-to-file-command -f >(cat)
)
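To see the mechanics with ordinary tools (a sketch; dd stands in for the file-writing command):
# dd is handed >(wc -c) where it expects an output file; bash substitutes
# a path like /dev/fd/63, and wc -c receives the bytes on its stdin
dd if=/dev/zero bs=1k count=4 of=>(wc -c)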

You may also try using tail -F myFile.avi:
# see the documentation of tail's -F option
man tail | less -p '-F option'

# save a copy of the output to the file stdout.avi
(rm -f myFile.avi stdout.avi; touch myFile.avi; exec tail -F myFile.avi > stdout.avi) &
rm -f myFile.avi; wacaw --video --duration 1 --VGA myFile

# verify that the two files are identical
md5 -q myFile.avi stdout.avi
stat -f "bytes: %z" myFile.avi stdout.avi

# pipe the output to mplayer (didn't work for me, though:
# "[mov,mp4,m4a,3gp,3g2,mj2 ...] moov atom not found")
# Terminal window 1
#rm -f myFile.avi; touch myFile.avi; tail -F myFile.avi | mplayer -cache 8192 -
# Terminal window 2
#rm -f myFile.avi; wacaw --video --duration 1 --VGA myFile

Related

How to change sample rate for all files in a directory

How do I change the sample rate for every file in the folder?
I have the following code and it just erases the files -- the file size becomes 0.
for i in wav/*.wav; do
sox -r 8000 -e unsigned -b 16 -c 1 "$i" "$i"
done
Why is that?
sox -r 8000 -e unsigned -b 16 -c 1 "$i" "$i"
You're reading from and writing to the same file "$i". sox opens the output file for writing (truncating it to zero bytes) before it has read the input, which is why the files end up empty.
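The truncation is easy to reproduce with shell redirection, which clobbers the output file the same way (a sketch):
echo hello > demo.txt
cat demo.txt > demo.txt   # the shell truncates demo.txt before cat reads it
wc -c demo.txt            # 0 bytes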
One approach would be to write to a temporary output file and then move that over the top of the input file:
for fname in wav/*.wav; do
    TMP_OUT=$(mktemp --suffix=.wav)   # --suffix needs GNU mktemp; sox infers the output format from the extension
    sox -r 8000 -e unsigned -b 16 -c 1 "$fname" "$TMP_OUT"
    mv "$TMP_OUT" "$fname"
done
The danger here is that you lose the input file if something goes wrong. I would check the result of sox and only overwrite the input file if sox was successful.
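A minimal sketch of that safer variant (same sox flags as above; --suffix needs GNU mktemp):
for fname in wav/*.wav; do
    TMP_OUT=$(mktemp --suffix=.wav)   # sox infers the output format from the extension
    if sox -r 8000 -e unsigned -b 16 -c 1 "$fname" "$TMP_OUT"; then
        mv "$TMP_OUT" "$fname"        # overwrite the input only on success
    else
        rm -f "$TMP_OUT"
        echo "sox failed on $fname; original left untouched" >&2
    fi
done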

Script to convert sample rate of audio file in Ubuntu

ls -lrt *wav | wc -l --> 2160
I have around 2k audio files with an 8k sample rate and need a script to convert all of them to a 16k sample rate. For now I am using SoX to convert one file at a time.
For example:
sox 9560850166.wav -r 16000 -b 16 -c 1 file1.wav
I need a script that picks the next audio file from the directory, runs SoX on it to change the sample rate, and saves it under a new file name like file1.wav, file2.wav, etc.
Run the below for loop from the directory containing the wave files:
a=0
for i in *.wav; do
    let a++
    echo "Processing file $i"
    sox "$i" -r 16000 -b 16 -c 1 "file$a.wav"
done
(Iterating over the glob *.wav instead of parsing ls output keeps file names with spaces intact.)
You do not need a script for this; a combination of find and exec will do the job. Use the following command:
find ./ -name "*wav" -exec sox {} -r 16000 -b 16 -c 1 {}.16000.wav \;
With this, a new audio file is created with .16000.wav appended to the original file name.
#!/bin/bash
i=0
for filename in /home/mrityunjoy/myWork/audio_files/*.wav; do
    i=$((i+1))
    sox "$filename" -r 16000 -b 16 -c 1 "file$i.wav"
done
This will write the output files into the directory from which the script is run.

youtube-dl problems (scripting)

Okay, so I've got this small problem with a bash script that I'm writing.
This script is supposed to be run like this:
bash script.sh https://www.youtube.com/user/<channel name>
OR
bash script.sh https://www.youtube.com/user/<random characters that make up a youtube channel ID>
It downloads an entire YouTube channel to a folder named
<uploader>{<uploader_id>}/
Or, at least it SHOULD...
The problem I'm getting is that the archive.txt file that youtube-dl creates is not created in the same directory as the videos; it's created in the directory from which the script is run.
Is there a grep or sed command that I could use to get the archive.txt file to the video folder?
Or maybe create the folder FIRST, then cd into it, and run the command from there?
I dunno
Here is my script:
#!/bin/bash
pwd
sleep 1
echo "You entered: $1 for the URL"
sleep 1
echo "Now downloading all videos from URL "$1""
youtube-dl -iw \
--no-continue $1 \
-f bestvideo+bestaudio --merge-output-format mkv \
-o "%(uploader)s{%(uploader_id)s}/[%(upload_date)s] %(title)s" \
--add-metadata --download-archive archive.txt
exit 0
I ended up solving it with this:
uploader="$(youtube-dl -i -J $URL --playlist-items 1 | grep -Po '(?<="uploader": ")[^"]*')"
uploader_id="$(youtube-dl -i -J $URL --playlist-items 1 | grep -Po '(?<="uploader_id": ")[^"]*')"
uploaderandid="$uploader{$uploader_id}"
echo "Uploader: $uploader"
echo "Uploader ID: $uploader_id"
echo "Folder Name: $uploaderandid"
echo "Now downloading all videos from URL "$URL" to the folder "$DIR/$uploaderandid""
Basically I had to parse the JSON with grep, since the youtube-dl devs said that making -o-style output template variables available elsewhere would bloat the code.
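A sketch of combining the parsed names with the "create the folder first, then cd into it" idea from the question, so that archive.txt lands next to the videos (flags copied from the original script):
mkdir -p "$uploaderandid"
cd "$uploaderandid" || exit 1
youtube-dl -iw --no-continue "$URL" \
    -f bestvideo+bestaudio --merge-output-format mkv \
    -o "[%(upload_date)s] %(title)s" \
    --add-metadata --download-archive archive.txt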

How to get files list downloaded with scp -r

Is it possible to get files list that were downloaded using scp -r ?
Example:
$ scp -r $USERNAME@HOSTNAME:~/backups/ .
3.tar 100% 5 0.0KB/s 00:00
2.tar 100% 5 0.0KB/s 00:00
1.tar 100% 4 0.0KB/s 00:00
Expected result:
3.tar
2.tar
1.tar
The output that scp generates does not seem to come out on any of the standard streams (stdout or stderr), so capturing it directly may be difficult. One way you could do this would be to make scp output verbose information (by using the -v switch) and then capture and process this information. The verbose information is output on stderr, so you will need to capture it using the 2> redirection operator.
For example, to capture the verbose output do:
scp -rv $USERNAME@HOSTNAME:~/backups/ . 2> scp.output
Then you will be able to filter this output with something like this:
awk '/Sending file/ {print $NF}' scp.output
The awk command simply prints the last word on the relevant line. If you have spaces in your filenames then you may need to come up with a more robust filter.
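For example, assuming the verbose lines look like "Sending file modes: C0644 7864 a.sql" (as quoted in the last answer below), stripping the fixed prefix with sed keeps names containing spaces intact:
sed -n 's/^Sending file modes: C[0-7]* [0-9]* //p' scp.output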
I realise that you asked the question about scp, but I will give you an alternative solution to the same problem: copy files recursively from a server using ssh, and get the names of the files that are copied.
The scp solution has at least one problem: if you copy lots of files, it takes a while as each file generates a transaction. Instead of scp, I use ssh and tar:
ssh $USERNAME@HOSTNAME "cd ~/backups/ && tar -cf - ." | tar -xf -
With that, adding a tee and a tar -t gives you what you need:
ssh $USERNAME@HOSTNAME "cd ~/backups/ && tar -cf - ." | tee >(tar -xf -) | tar -tf - > file_list
Note that it might not work in all shells (bash is OK), as the >(...) construct (process substitution) is not universally available. If your shell does not have it, you can use a fifo instead (process substitution is basically a shorter way of doing the same thing):
mkfifo tmp4tar
(tar -xf tmp4tar ; rm tmp4tar;) &
ssh $USERNAME@HOSTNAME "cd ~/backups/ && tar -cf - ." | tee -a tmp4tar | tar -tf - > file_list
scp -v -r yourdir orczhou@targethost:/home/orczhou/ \
  2> >(awk '{if($0 ~ "Sending file modes:")print $6}')
With -v, lines like "Sending file modes: C0644 7864 a.sql" are output to stderr; the awk command picks the file name out of each one.

Using mplayer to determine length of audio/video file

The following works very nicely to determine the length of various audio/video files:
mplayer -identify file.ogg 2>/dev/null | grep ID_LENGTH
However, I want to kill mplayer's output so I can determine the length of many files more efficiently. How do I do that?
The MPlayer source ships with a sample script called midentify, which looks like this:
#!/bin/sh
#
# This is a wrapper around the -identify functionality.
# It is supposed to escape the output properly, so it can be easily
# used in shellscripts by 'eval'ing the output of this script.
#
# Written by Tobias Diedrich <ranma+mplayer@tdiedrich.de>
# Licensed under GNU GPL.
if [ -z "$1" ]; then
echo "Usage: midentify.sh <file> [<file> ...]"
exit 1
fi
mplayer -vo null -ao null -frames 0 -identify "$#" 2>/dev/null |
sed -ne '/^ID_/ {
s/[]()|&;<>`'"'"'\\!$" []/\\&/g;p
}'
The -frames 0 makes mplayer exit immediately, and the -vo null -ao null prevent it from trying to open any video or audio devices. These options are all documented in man mplayer.
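As the header comment says, the script's output is designed to be eval'ed; a sketch of reading one file's length this way (midentify.sh and file.ogg are placeholder paths):
eval "$(./midentify.sh file.ogg)"
echo "Length in seconds: $ID_LENGTH"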
FFmpeg can give you the same information in a different format (and doesn't attempt to play the file):
ffmpeg -i <myfile>
There's another FF-way in addition to @codelogic's method, which doesn't exit with an error:
ffprobe <file>
and look for the duration entry.
Or grep for it directly in the error stream:
ffprobe <file> 2> >(grep Duration)
Looks like there are a few other libs available; see time length of an mp3 file.
Download your .mp3 file, play it with your player (e.g. Windows Media Player), and the player will show the total time at the end of playback.
