Save picture with mjpg-streamer on Arduino Yún

I'm using mjpg-streamer to stream video to a webpage through the Yún. The stream works fine, but since it only streams live and doesn't record, I thought of having it capture a picture from time to time (maybe at 3-minute intervals) and of adding a button to the webpage that captures a picture when pressed.
I decided to approach the button first, and the problem I found is that the device can't take pictures while it is live streaming; I have to stop the stream in order to capture a picture. I found that I can take a single picture by manually typing the following commands:
/etc/init.d/mjpg-streamer stop
mjpg_streamer -i "./input_uvc.so -d /dev/video0 -r 640x480 -yuv -n -f 1 -q 80" -o "./output_file.so -f ./tests/ -d 5000"
/etc/init.d/mjpg-streamer stop
/etc/init.d/mjpg-streamer start
but when a .cgi file runs all of this, the stream stops and the device keeps capturing pictures until it is rebooted...
I'm not fully aware of what all the parameters do here... Without a delay (-d), does the Yún take only one picture, or is a delay value really necessary even if I only want one picture?
Is there a better way to achieve my goal?
Thanks in advance!

Installing fswebcam and using:
#!/bin/ash
# CGI response: send the browser straight back to the stream page
echo "Content-type: text/html"
echo
echo "<html><head><meta http-equiv='refresh' content='0;URL=http://arduino/stream-url' /><title>Take Picture</title></head><body></body></html>"
# Free the camera, grab one timestamped frame, then resume streaming
/etc/init.d/mjpg-streamer stop
sleep 0.4; fswebcam /mnt/sda1/pictures/$(date +"%Y.%m.%d.%H.%M.%S").png
/etc/init.d/mjpg-streamer start
exit 0
was the best way around it :)
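For the periodic captures mentioned in the question (one every 3 minutes or so), the same stop/capture/start sequence can be driven by cron instead of a CGI request. A minimal sketch, assuming the three camera lines above are saved without the CGI/HTML part as /mnt/sda1/capture.sh (a hypothetical path) and made executable:
*/3 * * * * /mnt/sda1/capture.sh
Added with crontab -e, this runs the capture every 3 minutes; note that each run interrupts the live stream for a moment while fswebcam grabs the frame.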

Related

Run script while having browser focused

I am currently working on a home project with a Raspberry Pi and the 7" display. Almost everything works; I am only confused about the last bit. A Chromium window in kiosk mode is open, which refreshes on mouse movement. Also on mouse movement, I want to turn the backlight up to full for a few seconds.
The script below works so far stand-alone:
#!/bin/bash
while true; do
    pos1=$(xdotool getmouselocation)
    sleep 0.5
    pos2=$(xdotool getmouselocation)
    if [[ $pos1 != $pos2 ]]; then
        sudo /usr/local/bin/rpi-backlight -b 100
        sleep 10
        sudo /usr/local/bin/rpi-backlight -b 0 -d 2
    fi
done
I already tried to make it happen by:
- putting it in one script together with the chromium call,
- opening both in autostart,
- creating a systemd service for the script above.
It does not seem to work in the background. Can anyone tell me where I am mistaken?
I made it happen by putting both in autostart. My syntax seemed to be wrong.
/path/script.sh &
chromium-browser --kiosk http://website.xyz
works like a charm, where the ampersand ("&") makes the script a background process.
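For reference, on a stock Raspberry Pi/LXDE image the autostart file in question is usually ~/.config/lxsession/LXDE-pi/autostart (or the system-wide /etc/xdg/lxsession/LXDE-pi/autostart); a sketch of what the whole file might look like, with the default panel/desktop entries assumed:
@lxpanel --profile LXDE-pi
@pcmanfm --desktop --profile LXDE-pi
/path/script.sh &
chromium-browser --kiosk http://website.xyz
The @ prefix tells lxsession to restart an entry if it crashes; plain lines are simply run once at session start.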

VLC Player start recording and stop using linux command

I am trying to record a live stream and stop it after a few minutes, but I am not able to stop the recording. I can create a new recording, but the script does not stop it. I am using the following commands on Ubuntu:
cvlc -vvv http://cab.mpeg --sout "#transcode{}:duplicate{dst=std{access=file,mux=ts,dst=CabTest_$NOW.ts}}" > video_log 2>&1 &
echo "Start recording the test case...."
run_uiautomator 'test.jar demo.jar'
com.epl.test.mini.RankingAlgoTest
echo "Stop recording the test case...."
stop
This stop command does not stop the live recording.
stop is not a valid command in this case. Try killall -SIGTERM vlc instead.
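A more targeted variant is to remember the PID of the cvlc started by the script and signal only that process after the desired duration; a minimal sketch, with the 5-minute duration assumed:
cvlc http://cab.mpeg --sout "#transcode{}:duplicate{dst=std{access=file,mux=ts,dst=CabTest_$NOW.ts}}" > video_log 2>&1 &
VLC_PID=$!             # PID of the cvlc we just started
sleep 300              # let it record for 5 minutes
kill -SIGTERM "$VLC_PID"
Unlike killall, this leaves any other running VLC instances alone.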

Running mpg123 with FIFO control?

I need to run mpg123 with a single file, such that it will autostart and autoclose like it normally does. However, I need to be able to override this default behavior with commands sent to a FIFO file.
I had been running mpg123 filename.mp3 from a script, and simply waiting for it to finish before moving on. However, I'd like another script to be able to pause playback, control volume, or kill the process early, depending on the user's input.
mpg123 -R --fifo /srv/http/newsctl filename.mp3 seems to start mpg123 and create the pipe, but does not start playback.
How do I make this work?
Unfortunately, mpg123 is unable to play a file specified on the command line when the -R argument is used.
To start playback you have to load the file through the created FIFO:
FIFO_MPG='/srv/http/newsctl'
mpg123 -R --fifo "$FIFO_MPG" &   # background it, or the echo below never runs
echo 'load filename.mp3' >> "$FIFO_MPG"
I also suggest silencing the verbose output by using
echo 'silence' >> "$FIFO_MPG"
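For the rest of the stated goal (pausing, volume control, killing the process early), further commands can be written to the same FIFO. A short sketch, assuming your mpg123 build's generic control interface accepts the documented commands:
echo 'pause' >> "$FIFO_MPG"        # toggle pause/resume
echo 'volume 50' >> "$FIFO_MPG"    # set volume to 50%
echo 'quit' >> "$FIFO_MPG"         # stop playback and exit mpg123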
I hope it is not too late. Good luck! ;)

Need Help with Unix bash "timer" for mp3 URL Download Script

I've been using the following Unix bash script:
#!/bin/bash
mkdir -p ~/Desktop/URLs
n=1
while read mp3; do
    curl "$mp3" > ~/Desktop/URLs/$n.mp3
    ((n++))
done < ~/Desktop/URLs.txt
to download and rename a bunch of mp3 files from URLs listed in "URLs.txt". It works well (thanks to StackOverflow users), but due to a suspected server quantity/time download limit, it only lets me access a range of 40-50 files from my URL list.
Is there a way to work around this by adding a "timer" inside the while loop so it downloads 1 file per "X" seconds?
I found another related question, here:
How to include a timer in Bash Scripting?
but I'm not sure where to add the "sleep [number of seconds]"... or even if "sleep" is really what I need for my script...?
Any help enormously appreciated — as always.
Dave
curl has some pretty awesome command-line options (see its documentation); for example, --limit-rate will limit the amount of bandwidth that curl uses, which might completely solve your problem.
For example, replace the curl line with:
curl --limit-rate 200K "$mp3" > ~/Desktop/URLs/$n.mp3
This would limit the transfer to an average of 200K per second, which would download a typical 5MB MP3 file in 25 seconds, and you could experiment with different values until you find the maximum speed that works.
You could also try a combination of --retry and --retry-delay so that when and if a download fails, curl waits and then tries again after a certain amount of time.
For example, replace the curl line with:
curl --retry 30 "$mp3" > ~/Desktop/URLs/$n.mp3
This will transfer the file. If the transfer fails, it will wait a second and try again. If it fails again, it will wait two seconds. If it fails again, it will wait four seconds, and so on, doubling the waiting time until it succeeds. The "30" means it will retry up to 30 times, and it will never wait more than 10 minutes. You can learn this all at the documentation link I gave you.
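If you would rather wait a fixed time between attempts instead of the default doubling backoff, --retry-delay overrides it; a small sketch, with the 10-second delay chosen arbitrarily:
curl --retry 30 --retry-delay 10 "$mp3" > ~/Desktop/URLs/$n.mp3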
#!/bin/bash
mkdir -p ~/Desktop/URLs
n=1
while read mp3; do
    curl "$mp3" > ~/Desktop/URLs/$n.mp3 &
    ((n++))
    if ! ((n % 4)); then
        wait
        sleep 5
    fi
done < ~/Desktop/URLs.txt
This will spawn at most 4 instances of curl and then wait for them to complete before it spawns 4 more.
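As for the question's original ask, the simplest timer is a sleep placed inside the loop right after the curl line, so each download is followed by a fixed pause; a minimal sketch of the same loop with an arbitrary 10-second gap:
while read mp3; do
    curl "$mp3" > ~/Desktop/URLs/$n.mp3
    ((n++))
    sleep 10    # wait 10 seconds before starting the next download
done < ~/Desktop/URLs.txt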
A timer? Like your crontab? See man cron.
You know what they let you download; just count the disk usage of the files you did get. That gives you the transfer you are allowed. You need that, and you will need the PID of your script:
ps aux | grep "$progname" | awk '{print $2}'
or something like that (the PID is the second column of ps aux output). The secret sauce here is that you can suspend the script with
kill -SIGSTOP PID
and resume it with
kill -SIGCONT PID
So the general method would be:
- Put the URLs in an array, a queue, or whatever bash lets you have.
- Process a URL.
- Increment a transfer counter.
- When the transfer counter gets close to the limit, kill -SIGSTOP MYPID. You are suspended.
- From your crontab, resume the script (kill -SIGCONT) after a minute/hour/day, whatever.
- Continue processing.
- Repeat until done.
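A rough bash sketch of that method, untested as the disclaimers below say, with the quota size purely assumed and GNU stat used to count bytes:
#!/bin/bash
QUOTA=$((200 * 1024 * 1024))    # assumed allowance per session: ~200MB
total=0
n=1
while read mp3; do
    curl "$mp3" > ~/Desktop/URLs/$n.mp3
    total=$((total + $(stat -c%s ~/Desktop/URLs/$n.mp3)))    # bytes fetched so far
    ((n++))
    if ((total >= QUOTA)); then
        total=0
        kill -SIGSTOP $$    # suspend ourselves; cron can kill -SIGCONT us later
    fi
done < ~/Desktop/URLs.txt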
Just don't log out or you'll need to do the whole thing over again, although if you used perl it would be trivial.
Disclaimer: I am not sure if this is an exercise in bash or whatnot; I confess freely that I see the answer in perl, which is always my choice outside of a REPL. Code in Bash long enough, or heaven forbid, Zsh (my shell), and you will see why Perl was so popular. Ah, memories...
Disclaimer 2: Untested, drive-by, garbage methodology here, only made possible because you have an idea what that transfer limit might be. Obviously, if you have ssh, use ssh -D PORT you@host and pull the mp3s out of the proxy half the time.
In my own defense, if you slow-pull the URLs with sleep you'll be connected for a while. Perhaps "they" might notice that. Suspend and resume, and you should only be connected while grabbing tracks, and gone otherwise.
Not so much an answer as an optimization: if you can consistently get the first few URLs but it times out on the later ones, perhaps you could trim your URL file as the mp3s are successfully received?
That is, as 1.mp3 is successfully downloaded, strip it from the list:
tail -n +2 url.txt > url2.txt; mv -f url2.txt url.txt
Then the next time the script runs, it will begin from 2.mp3.
If that works, you might just set up a cron job to periodically execute the script over and over, taking bites at a time.
It just occurred to me that you're programmatically numbering the mp3s, and a restarted script might clobber some of them, since every run starts counting at 1.mp3 again. Something to be aware of if you go this route.
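One hypothetical guard against that clobbering, as an alternative to trimming the list: keep URLs.txt intact and skip any index whose file already exists and is non-empty, so each rerun resumes where the previous one stopped (a partially downloaded file would also be skipped, so delete the last file if a run was cut off mid-transfer):
n=1
while read mp3; do
    # only fetch this index if we don't already have it
    [ -s ~/Desktop/URLs/$n.mp3 ] || curl "$mp3" > ~/Desktop/URLs/$n.mp3
    ((n++))
done < ~/Desktop/URLs.txt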

grep 5 seconds of input from the serial port inside a shell-script

I've got a device that I'm operating next to my PC, and as it runs it spits log lines out of its serial port. I have this wired to my PC, and I can see the log lines fine if I'm using either minicom or something like:
ttylog -b 115200 -d /dev/ttyS0
I want to write 5 seconds of the device's serial output to a temp file (or assign it to a variable) and then grep that file for keywords that will let me know how the device is operating. I've already tried redirecting the output to a file while running the command in the background, then sleeping 5 seconds and killing the process, but the log lines never get written to my temp file. Example:
touch tempFile
ttylog -b 115200 -d /dev/ttyS0 >> tempFile &
serialPID=$!
sleep 5
#kill ${serialPID} #does not work, gets wrong PID
killall ttylog
cat tempFile
The file gets created but never filled with any data. I can also replace the ttylog line with:
ttylog -b 115200 -d /dev/ttyS0 |tee -a tempFile &
In neither case do I ever see any log lines logged to stdout or the log file, unless I have multiple instances of ttylog running by mistake (see the commented-out line, D'oh).
I have no idea what's going on here. It seems to be a failure of redirection within my script.
Am I on the right track? Is there a better way to sample 5 seconds of the serial port?
It sounds like maybe ttylog is buffering its output. Have you tried running it with -f or --flush?
You might try the unbuffer script that comes with expect.
ttylog has a --timeout option with which you can simply specify how many seconds it should run.
So, in your case, you could do
ttylog --baud 115200 --device /dev/ttyS0 --timeout 5
and it would just run for 5 seconds and then stop.
It also has the -f option mentioned above, which flushes the output, but if you use --timeout you will not need it.
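Putting the pieces together for the original goal, a minimal sketch (the keyword is a placeholder):
ttylog --baud 115200 --device /dev/ttyS0 --timeout 5 > tempFile
grep -i "keyword" tempFile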
