Unable to record mono audio on Raspberry Pi 3 - raspberry-pi3

I've been trying to record audio using the ReSpeaker 2-Mics Pi HAT on my Raspberry Pi 3. Stereo recording works fine, but I can't make mono recording work; all I get is white noise.
Screenshot: stereo recording in Audacity
Screenshot: mono recording in Audacity
This is my ALSA info:
http://www.alsa-project.org/db/?f=bfd07e437056c0a9560290340fa9b4c0a5ade7e1
ReSpeaker 2-Mics Pi HAT - http://wiki.seeedstudio.com/ReSpeaker_2_Mics_Pi_HAT/
Any help would be appreciated.

arecord -L
plughw:CARD=ArrayUAC10,DEV=0
    ReSpeaker 4 Mic Array (UAC1.0), USB Audio
    Hardware device with all software conversions
arecord -D plughw:0 -f S16_LE -r 16000 -c 1 audio-1ch-16k.pcm

The issue was that my left-channel mic was broken. Now I record in stereo and extract a mono channel using the audioop library.
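For reference, the same stereo-to-mono extraction can also be done from the shell with ffmpeg's pan filter (a sketch of an alternative to the audioop approach; the file names are placeholders):
# keep only the working right channel (c1) of the stereo recording as mono
ffmpeg -i stereo-recording.wav -af "pan=mono|c0=c1" mono.wav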

Related

Is there any way to take a screenshot of a specific window on Mac?

I wanted to capture only a certain window using ImageMagick or FFmpeg, but I heard that the X11 id needed for that is not supported on Mac.
The purpose is to capture the application window so that all areas are included, even those that extend beyond the display and are not visible.
In general, applications on macOS use the native Cocoa framework to generate their GUI rather than X11. I'll address each possibility in turn below.
With X11
You can install an X11 server called XQuartz on macOS using homebrew with:
brew cask install xquartz
However, only applications written against the X11 interface will create XQuartz windows, and only those can be captured the X11 way, using xwininfo and passing the id to ImageMagick. Relatively few apps use X11.
Let's just show that quickly with an example:
xeyes & # start "xeyes" which is an X11 app and get our prompt back
xwininfo -name xeyes # so we can get its id
...
...
xwininfo: Window id: 0xa0000a "xeyes"
...
...
# Tell ImageMagick to grab the "xeyes" window by its id and save it as "xeyes.png"
magick import -window 0xa0000a xeyes.png
Another issue is that the homebrew version of ImageMagick doesn't support X11, so you'll have to either edit the homebrew formula or run ./configure yourself and include X11 support - which is non-trivial.
With native macOS Cocoa and 'screencapture'
If your app uses the native Cocoa interface, you can get its "Cocoa id" using a script I shared here. Slightly more simply, you can run some AppleScript to get the window id, e.g. to get the id of a window belonging to the Terminal app:
osascript -e 'tell app "Terminal" to id of window 1'
You can then use that id with the screencapture command supplied with macOS to hopefully do what you want with whatever application you are using. For example:
/usr/sbin/screencapture -l <WINDOWID> image.png
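The two steps can be combined into a single command (a sketch, assuming a Terminal window is open; substitute your own app and output file name):
# grab the frontmost Terminal window in one go
/usr/sbin/screencapture -l $(osascript -e 'tell app "Terminal" to id of window 1') image.png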
With 'ffmpeg'
On macOS, ffmpeg uses AVFoundation under the covers, so first you need to get the index that AVFoundation assigns to your screen, like this:
ffmpeg -hide_banner -f avfoundation -list_devices true -i ""
[AVFoundation indev @ 0x131e05df0] AVFoundation video devices:
[AVFoundation indev @ 0x131e05df0] [0] FaceTime HD Camera
[AVFoundation indev @ 0x131e05df0] [1] Capture screen 0 <-- THIS LINE
[AVFoundation indev @ 0x131e05df0] AVFoundation audio devices:
[AVFoundation indev @ 0x131e05df0] [0] MacBook Pro Microphone
Look at the listing above and you can see that I must use device [1] if I want to record the screen. As I don't want sound recorded, I use none for the sound channel, so the basic ffmpeg command to record the screen on my Mac at 30fps will be:
ffmpeg -r 30 -f avfoundation -i "1:none" ...
Now, you want to record a specific window, but ffmpeg doesn't know about windows, only coordinates, so we need to find the coordinates of our window. Imagine I want to record Safari's main window; first I get its location with:
osascript -e 'tell application "Safari" to get the bounds of the front window'
87, 43, 1290, 538 # left, top, right, bottom
AppleScript bounds are left, top, right and bottom, while ffmpeg's crop filter takes width:height:x:y, so the width is 1290-87=1203 and the height is 538-43=495. Now I tell ffmpeg to record that:
ffmpeg -y -r 30 -f avfoundation -i "1:none" -vf "crop=1203:495:87:43" screen.mp4
and it appears to work, but doesn't! Apparently the moov atom is missing, and ffplay can't play the file without it. It transpires that the moov atom gets written at the end of the video - unless you Control-C out of ffmpeg, in which case it never gets written. So, make sure you don't need to Control-C by adding a duration of 5 seconds:
ffmpeg -y -r 30 -f avfoundation -i "1:none" -t 5 -vf "crop=1203:495:87:43" screen.mp4
That also appears to work, but doesn't either: you can play it with ffplay, but not with Apple's QuickTime, which only likes the yuv420p pixel format. So you can actually do what you want with:
ffmpeg -y -r 30 -f avfoundation -i "1:none" -t 5 -vf "crop=1203:495:87:43,format=yuv420p" screen.mp4
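To avoid copying the numbers by hand, you can feed the bounds straight into the crop filter (a sketch; assumes bash or zsh and a front Safari window):
# read left/top/right/bottom from AppleScript and compute the crop on the fly
read x y r b <<< "$(osascript -e 'tell application "Safari" to get the bounds of the front window' | tr -d ',')"
ffmpeg -y -r 30 -f avfoundation -i "1:none" -t 5 -vf "crop=$((r-x)):$((b-y)):$x:$y,format=yuv420p" screen.mp4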

Cec-client is very slow and gives segmentation fault on Raspberry Pi

I am using a Raspberry Pi and I want to be able to change the input of the TV using cec-client.
I tried these two scripts:
echo "scan" | cec-client -s -d 1
and
echo "as" | cec-client -s -d 1
When I tried them on a monitor without a TV attached, they ran fine, although obviously didn't do anything meaningful.
Afterwards I tried them on a TV. Scanning usually takes around a minute and returns proper info, but at the end it also prints "segmentation fault". The "as" command always resulted in a segmentation fault without switching the input, and it took several minutes. I tried updating the firmware of the LG TV, but it did not help. I later tried a Philips TV and still had the same problems. I tried uninstalling and reinstalling cec-client, and restarting the Raspberry Pi. Nothing has worked so far.
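One way to narrow down where it crashes (my suggestion, not something from the question) is to raise cec-client's log level so libcec prints its traffic and debug messages right up to the segfault:
# 31 enables all libcec log levels (error, warning, notice, traffic, debug)
echo "as" | cec-client -s -d 31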

Recording from online stream and listening to it at the same time (ffmpeg / ffplay)

Sometimes I like to record programmes from online radio channels, both live and archived streams. When there is no interesting programme currently on air, I would also like to listen to the recording while it is being made. I use command lines like the ones below, called from a Ruby script that helps parse the radios' timetables / programme pages and construct the proper URLs of archived programmes, which usually contain a timecode such as 20160616_083000.mp3.
So my command line to call from Ruby script looks like:
programmes.each { |datepart, programme_length|
  cmd = %Q{ffmpeg -y -i http://example.com/stream/#{datepart}.mp3 -t #{programme_length} -c:a libmp3lame -b:a 160k "#{fname}" -c copy -t #{programme_length} -f mp3 -f rtp rtp://127.0.0.1:8888}
  system cmd
}
It resides in a loop that records the previously parsed and selected programmes. Of course the programmes are recorded properly, and at the same time ffmpeg also serves them as an MP3 RTP stream on localhost at the given port. In another terminal window I connect to the streamed data with a one-liner as follows:
while true; do ffplay -i rtp://127.0.0.1:8888 -autoexit; done
I am using the -autoexit switch, which should stop playback when the stream ends so that the while loop can connect again to the new stream served by the recording loop. Unfortunately ffplay keeps playing after the end and doesn't initiate a new connection to the newly started stream. How do I use ffplay properly so that it stops playing after the RTP stream has ended and reconnects to the new one?
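One workaround sketch (my assumption, not a tested fix): since each programme's length is known to the recording loop, the player could run under a watchdog that kills it after that duration, forcing a reconnect. Here $programme_length is assumed to be exported by whatever drives the loop, and timeout comes from GNU coreutils:
# kill ffplay after the known programme length so the loop reconnects
while true; do timeout "$programme_length" ffplay -i rtp://127.0.0.1:8888 -autoexit; done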

Save picture with mjpg-streamer on Arduino Yun

I'm using mjpg-streamer to stream video to a webpage through the Yun. The stream is working fine, but since it's only live streaming and not recording, I thought of having it capture pictures from time to time (maybe at 3-minute intervals) and of adding a button to the webpage that captures a picture when pressed.
I decided to approach the button first, and the problem I found was that the device can't take pictures while it is live streaming; I have to stop the stream in order to capture a picture. I found that I can take a single picture when manually typing the following commands:
/etc/init.d/mjpg-streamer stop
mjpg_streamer -i "./input_uvc.so -d /dev/video0 -r 640x480 -yuv -n -f 1 -q 80" -o "./output_file.so -f ./tests/ -d 5000"
/etc/init.d/mjpg-streamer stop
/etc/init.d/mjpg-streamer start
but when all of this is run from a .cgi file, the stream stops and the device keeps capturing pictures until it is rebooted...
I'm not fully aware of what all the parameters do here... Without a delay (-d), does the Yun take only one picture, or is a delay value really necessary even if I only want one picture?
Is there a better way to achieve my goal?
Thanks in advance!
Installing fswebcam and using the following CGI script:
#!/bin/ash
# CGI response: send the browser straight back to the stream page
echo "Content-type: text/html"
echo
echo "<html><head><meta http-equiv='refresh' content='0;URL=http://arduino/stream-url' /><title>Take Picture</title></head><body></body></html>"
# free the camera, grab one timestamped frame, then restart the stream
/etc/init.d/mjpg-streamer stop
sleep 0.4; fswebcam /mnt/sda1/pictures/$(date +"%Y.%m.%d.%H.%M.%S").png
/etc/init.d/mjpg-streamer start
exit 0
was the best way around :)
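For the periodic capture also mentioned in the question, the same stop/capture/start sequence could be put in a script and run from cron every 3 minutes (a sketch; the script name and path are hypothetical):
# crontab entry: take a picture every 3 minutes
*/3 * * * * /root/take_picture.sh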

Dealing with 302 Redirection in FFMPEG

I am using the FFmpeg library to stream audio from an RTSP URL. My sample streaming URL is:
rtsp://username:password@machine-ip/708314c4eba2a041
And I am using the following command to stream this RTSP_URL:
ffmpeg -i RTSP_URL -f mov C:\FFMPEG_Recordings\bay.mov
The above ffmpeg command captures the media stream from RTSP_URL and stores it in the bay.mov media file.
Sometimes I get a 302 redirect from the server that is actually serving the streams, such as:
[rtsp @ 0000000002cc8100] Status 302: Redirecting to rtsp://server-ip:server-port/708314c4eba2a041?token=708114c4e99dbcd1^LA^0^0^0^1427248150^e77149b2a2c209982a74367d0f72c2e11ba6636c
After this, the process gets stuck (on the command prompt) until I press CTRL+C twice to terminate it, while it should start streaming from the redirected URL automatically.
I read that it's an FFmpeg bug on the FFmpeg trac, and also read about it in the FFmpeg discussion community, but didn't find any solution or workaround.
Please advise how to overcome this scenario if anyone has ever encountered it. Thanks.
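One workaround sketch (my own suggestion, not an official FFmpeg fix; the variable names are mine, and it assumes a POSIX shell with GNU coreutils): scrape the redirect target out of ffmpeg's log and re-run against it, with timeout so the stuck first attempt can't hang forever:
# first attempt: give it 30 seconds, keep the log
LOG=$(timeout 30 ffmpeg -i "$RTSP_URL" -f mov bay.mov 2>&1)
# pull the URL out of the "Redirecting to ..." line and retry with it
NEW_URL=$(printf '%s\n' "$LOG" | grep -o 'Redirecting to rtsp://[^ ]*' | head -1 | cut -d' ' -f3)
[ -n "$NEW_URL" ] && ffmpeg -i "$NEW_URL" -f mov bay.mov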
