How to take a picture using the command line on webOS on an HP TouchPad? - webos

On webOS, I have openssh running and would like to take a picture from a command-line script.
I suspect this is going to involve some luna-send command, or alternatively gst-launch,
but I am not having any luck with the docs.
webOS doesn't have any of the expected capture tools, but I can access the /dev/video0 device.
Edit: I noticed that the TouchPad has the ffmpeg utility installed, but it doesn't recognise the video4linux2 format.
So far, I am trying Gopherkhan's suggestion with the following code:
luna-send -n 1 palm://com.palm.mediad.MediaCapture/startImageCapture '{"path":"/media/internal/foo1.png","options":[{"quality":100,"flash":2,"reviewDuration":0,"exifData":{}}]}'
but it just hangs there doing nothing; after a while it says this:
{"serviceName":"com.palm.mediad.MediaCapture","returnValue":false,"errorCode":-1,"errorText":"com.palm.mediad.MediaCapture is not running."}
(process:8534): LunaService-CRITICAL **: AppId msg type: 17

So, doing this with luna-send is a bit tricky, and technically not supported.
You're probably going to want to hit the MediaCapture library, which can be found on the device here:
/usr/palm/frameworks/enyo/0.10/framework/lib/mediacapture
To include it in your enyo app drop the following in your depends.js:
"$enyo-lib/mediacapture/"
There are three main steps involved:
1. Initializing the component
2. Capturing the image
3. Unloading the device
Here's a sample:
Declare the component in your scene
{
kind: "enyo.MediaCapture", name:"mediaCaptureObj",
onLoaded:"_setUpLoadedState", onInitialized:"_setUpInitializedState",
onImageCaptureStart:"_onImageCaptureStart", onImageCaptureComplete:"_onImageCaptureComplete",
onAutoFocusComplete:"_onAutoFocusComplete", onError:"_handleError",
onElapsedTime:"_onElapsedTime", onVuData:"_onVuDataChange", onDuration:"_onDuration"
}
Call the initialize method:
this.$.mediaCaptureObj.initialize(this.$.ViewPort);
In your onInitialized callback
Use the property bag to locate the number of devices that are available. Typically, the descriptions are "Camera/Camcorder", "Front Microphone", and "User facing camera"
var keyString;
for (var i = 0; i < this.pb.deviceKeys.length; i++) {
    if (this.pb.deviceKeys[i].description.indexOf("Camera/Camcorder") >= 0) {
        keyString = this.pb.deviceKeys[i].deviceUri;
        break;
    }
}
if (keyString) {
    var formatObj = {
        imageCaptureFormat: this.pb[keyString].supportedImageFormats[0]
    };
    this.$.mediaCaptureObj.load(keyString, formatObj);
}
Take a photo.
var obj = {"exifData":"{\"make\": \"Palm\", \"model\": \"Pre3\", \"datetime\": \"2011:05:19 10:39:18\", \"orientation\": 1, \"geotag\": {}}","quality":90,"flash":"FLASH_ON"};
this.$.mediaCaptureObj.startImageCapture("", obj);
Unload the device:
this.$.mediaCaptureObj.unload();
To do this with the old JS frameworks, see:
https://developer.palm.com/content/api/reference/javascript-libraries/media-capture.html
Now, you can do something similar with luna-send, but again, I don't think it's technically supported. You might have trouble starting up or keeping alive the media capture service, etc. But if you want to try, you could do something along the lines of:
1. Get the media server instance; this returns a port instance number:
luna-send -a your.app.id -i palm://com.palm.mediad/service/captureV3 '{"subscribe":true}'
This will return a location of the capture service with a port number, a la:
{"returnValue":true, "location":"palm://com.palm.mediad.MediaCaptureV3_7839/"}
Since this is a subscription, don't kill the request. Just open a new terminal.
2. Open a new terminal. Use the "location" returned in step 1 as your new service URI:
luna-send -a your.app.id -i palm://com.palm.mediad.MediaCaptureV3_7839/load '{"args":["video:1", {"videoCaptureFormat":{"bitrate":2000000,"samplerate":44100,"width":640,"height":480,"mimetype":"video/mp4","codecs":"h264,mp4a.40"},"imageCaptureFormat":{"bitrate":0,"samplerate":1700888,"width":640,"height":480,"mimetype":"image/jpeg","codecs":"jpeg"},"deviceUri":"video:1"}]}'
You should see:
{"returnValue":true}
if the call completed correctly. You can safely ctrl+c out of this call.
3. Take your picture. (you can ctrl+c out of the last call, and just supply the args here)
luna-send -a your.app.id -i palm://com.palm.mediad.MediaCaptureV3_7839/startImageCapture '{"args":["", {"exifData":"{\"orientation\": 1, \"make\": \"HP\", \"model\": \"TouchPad\", \"datetime\": \"2011:09:22 15:34:36\", \"geotag\": {}}","quality":90,"flash":"FLASH_DISABLED","orientation":"faceup"}]}'
Again, you should see:
{"returnValue":true}
if the call completed correctly.
You should hear a shutter click, and the image will show up in the Photos app, in your Photo Roll.
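If you want to drive all three steps from a single script, here is a rough, untested shell sketch; the sleep, the sed-based location parsing, and the trimmed format/capture args are my assumptions:
#!/bin/sh
# 1. Subscribe in the background and save the response,
#    e.g. {"returnValue":true, "location":"palm://com.palm.mediad.MediaCaptureV3_7839/"}
luna-send -a your.app.id -i palm://com.palm.mediad/service/captureV3 \
    '{"subscribe":true}' > /tmp/capture.out &
SUB_PID=$!
sleep 2  # give the service a moment to answer (assumption)

# Pull the per-boot service location out of the response
LOCATION=$(sed -n 's/.*"location":"\([^"]*\)".*/\1/p' /tmp/capture.out)

# 2. Load the device (trimmed to the image format; see step 2 above for the full object)
luna-send -n 1 -a your.app.id "${LOCATION}load" \
    '{"args":["video:1", {"imageCaptureFormat":{"bitrate":0,"samplerate":1700888,"width":640,"height":480,"mimetype":"image/jpeg","codecs":"jpeg"},"deviceUri":"video:1"}]}'

# 3. Take the picture (trimmed args; see step 3 above for the full set)
luna-send -n 1 -a your.app.id "${LOCATION}startImageCapture" \
    '{"args":["", {"quality":90,"flash":"FLASH_DISABLED"}]}'

# Drop the subscription once we're done
kill $SUB_PID 2>/dev/null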

An alternative, which might have the benefit of using cross-platform tools, is to use a gst-launch pipeline. So far I have managed to start the webcam from the command line:
gst-launch camsrc .src ! video/x-raw-yuv,width=320,height=240,framerate=30/1 ! \
    palmvideoencoder ! avimux name=mux ! filesink location=test1.avi \
    alsasrc ! palmaudioencoder
but not to take a single image:
gst-launch -v camsrc .src_still take-picture=1 flash-ctrl=2 ! fakesink dump=true
but I can't get it to recognise the .src_still pad. I will update this answer with this alternative method as I proceed.

Related

Problem uploading image to twitter using bash script with twurl. Keep getting "code: 44" "media_ids parameter is invalid"

I am having an issue when trying to upload an image to Twitter using a bash script and twurl.
When I use a variable (which stores the media_id of the image I want to upload) as the parameter for the "media_ids" appended to my status update, no photo gets posted and I receive the error message shown below:
twurl -j -X POST -H upload.twitter.com "/1.1/media/upload.json" -f /root/$imgName -F media > mediaID.txt
local mediaID=$(egrep -o " [0-9]{19}" mediaID.txt)
twurl -X POST -H api.twitter.com "/1.1/statuses/update.json?status=The Astronomy Photo Of The Day (courtesy of NASA) for the day $date is \"$title\". For more details and info check out: https://apod.nasa.gov/apod/astropix.html&media_ids=$mediaID" | jq
"errors": [
  {
    "code": 44,
    "message": "media_ids parameter is invalid."
  }
]
However, when I use the actual media_id value instead of the variable ($mediaID) of the image as the parameter for the "media_ids" that gets appended to the status update, everything works as expected.
Is there some issue with using a variable as a parameter for media_id?
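For reference, here is a minimal variant I could switch to, assuming the upload response is JSON containing a media_id_string field and that jq is available (untested):
# same upload call as above, but parse the ID with jq instead of egrep
# (media_id_string avoids capturing the leading space from the regex)
mediaID=$(twurl -j -X POST -H upload.twitter.com "/1.1/media/upload.json" \
    -f "/root/$imgName" -F media | jq -r '.media_id_string')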
I am brand new to twurl and APIs and therefore might be missing some basic point.
I would really appreciate any help or suggestions on this topic.
Thank you!

How to see print() results in Tarantool Docker container

I am using the tarantool/tarantool:2.6.0 Docker image (the latest at the moment) and writing Lua scripts for the project. I am trying to find out how to see the results of calling the print() function. It's quite difficult to debug my code without print() working.
In the Tarantool console, print() also has no effect.
Using simple print()
The docs say that print() writes to stdout, but I don't see any results when I watch the container's logs with docker logs -f <CONTAINER_NAME>
I also tried setting the container's log driver to local. Then one print made it into the container's logs, but only once...
The container's /var/log directory is always empty.
Using box.session.push()
box.session.push() works fine in the console, but when I use it in a Lua script:
-- app.lua
function log(s)
  box.session.push(s)
end

-- No effect
log('hello')

function say_something(s)
  log(s)
end

box.schema.func.create('say_something')
box.schema.user.grant('guest', 'execute', 'function', 'say_something')
And then call say_something() from the Node.js connector like this:
const TarantoolConnection = require('tarantool-driver');
const conn = new TarantoolConnection(connectionData);
const res = await conn.call('say_something', 'hello');
I get an error.
Any suggestions?
Thanx!
I suppose you've missed io.flush() after the print call.
After I added io.flush() after each print call, my messages started showing up in the logs (docker logs -f <CONTAINER_NAME>).
I'd also recommend the log module for this purpose. It writes to stderr without buffering.
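A minimal sketch of both approaches in app.lua (the message strings are just placeholders):
-- flush stdout explicitly so docker logs picks the line up
print('hello from print')
io.flush()

-- or use the built-in log module, which writes to stderr unbuffered
local log = require('log')
log.info('hello from log')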
Regarding the error in the connector, I think the Node.js connector simply doesn't support pushes.

MacOS - detect when camera is turned on/off

I want to automate a personal workflow that is based on camera usage on my MBP.
Basically I want to know if any of the cameras (built-in or USB) has been turned on or off, so I can run a program or script I'll create.
I think it's OK if I need to poll for the cameras' statuses, but an event- or callback-based solution would be ideal.
This seems to work.
❯ log stream | grep "Post event kCameraStream"
2020-12-01 14:58:53.137796-0500 0xXXXXXX Default 0x0 XXX 0 VDCAssistant: [com.apple.VDCAssistant:device] [guid:0xXXXXXXXXXXXXXXXX] Post event kCameraStreamStart
2020-12-01 14:58:56.431147-0500 0xXXXXXX Default 0x0 XXX 0 VDCAssistant: [com.apple.VDCAssistant:device] [guid:0xXXXXXXXXXXXXXXXX] Post event kCameraStreamStop
2020-12-01 14:58:56.668970-0500 0xXXXXXX Default 0x0 XXX 0 VDCAssistant: [com.apple.VDCAssistant:device] [guid:0xXXXXXXXXXXXXXXXX] Post event kCameraStreamStart
Some of the numbers in the output are redacted with Xs because I don't know what they mean. :)
log stream --predicate 'eventMessage contains "Post event kCameraStream"' works up to macOS Big Sur, but not in macOS Monterey. You'll have to use a slightly different predicate:
$ log stream --predicate 'subsystem contains "com.apple.UVCExtension" and composedMessage contains "Post PowerLog"'
Filtering the log data using "subsystem CONTAINS "com.apple.UVCExtension" AND composedMessage CONTAINS "Post PowerLog""
Timestamp Thread Type Activity PID TTL
2021-10-27 12:21:13.366628+0200 0x147c5 Default 0x0 353 0 UVCAssistant: (UVCExtension) [com.apple.UVCExtension:device] UVCExtensionDevice:0x1234005d7 [0x7fe3ce008ca0] Post PowerLog {
"VDCAssistant_Device_GUID" = "00000000-1432-0000-1234-000022470000";
"VDCAssistant_Power_State" = On;
}
2021-10-27 12:21:16.946379+0200 0x13dac Default 0x0 353 0 UVCAssistant: (UVCExtension) [com.apple.UVCExtension:device] UVCExtensionDevice:0x1234005d7 [0x7fe3ce008ca0] Post PowerLog {
"VDCAssistant_Device_GUID" = "00000000-1432-0000-1234-000022470000";
"VDCAssistant_Power_State" = Off;
}
As far as I know, you can poll for the camera usage with:
$ lsof -n | grep "AppleCamera"
or replace "AppleCamera" with the driver name of an external camera.
Other relevant names to try are: "USBVDC" or "VDCAssistant" or "FaceTime" (or "iSight" in older Macs).
You should get one line with the name and PID of the process using the webcam, or nothing, which means the camera is not in use.
You could check for all of the keywords and decide that the camera is in use if any of them gives you something back.
The -n option skips resolving DNS names of IP connections, which speeds up the command a lot.
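A minimal polling sketch built on that, assuming the same keywords (the interval and the echo placeholders are mine):
#!/bin/bash
# poll every 5 seconds; replace the echos with your own scripts
while true; do
  if lsof -n 2>/dev/null | grep -Eq "AppleCamera|VDCAssistant|USBVDC|FaceTime"; then
    echo "camera in use"
  else
    echo "camera idle"
  fi
  sleep 5
done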
As a side note, I use this app to know when any app is using the microphone and/or webcam: OverSight
In macOS Ventura I find this incantation works:
log stream --predicate 'sender contains "appleh13camerad" and (composedMessage contains "PowerOnCamera" or composedMessage contains "PowerOffCamera")'
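If you want the event-based approach the question asks for, you can feed one of these streams into a loop; a sketch assuming the Big Sur predicate from above (swap in the right predicate for your macOS version, and your own handler commands):
#!/bin/bash
log stream --predicate 'eventMessage contains "Post event kCameraStream"' |
while read -r line; do
  case "$line" in
    *kCameraStreamStart*) echo "camera on"  ;;  # run your "on" script here
    *kCameraStreamStop*)  echo "camera off" ;;  # run your "off" script here
  esac
done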

ffmpeg creates an mp4 stream which results in an error in Firefox

I'm trying to play a fragmented MP4 in an HTML5 video element.
I successfully created a websocket to pass ffmpeg's output into MSE.
However, when I open the page in Firefox (72.0.1, 64-bit, under Ubuntu 18.04 LTS), it always results in an error:
Media resource blob:http://localhost/XXXX could not be decoded, error: Error Code: NS_ERROR_FAILURE (0x80004005)
Details: virtual mozilla::MediaResult mozilla::MP4ContainerParser::IsInitSegmentPresent(const mozilla::MediaSpan &): Invalid Top-Level Box:f
This is my ffmpeg command line:
ffmpeg -r 5 -i rtsp://IPCAMERA -c:v copy -an -movflags +frag_keyframe+empty_moov+default_base_moof -f mp4 pipe:1
This is how the server-side Java (with the Tomcat engine) handles the output of this command (this might be inefficient, but it's OK for now):
ProcessBuilder b = new ProcessBuilder(FFMPEGCOMMAND.split("\\s+"));
try {
    p = b.start();
} catch (IOException e) {
    e.printStackTrace();
}

InputStream input = p.getInputStream();
int bytes_read = 0;
byte[] buffer = new byte[512];
try {
    while (0 < (bytes_read = input.read(buffer, 0, 512))) {
        System.out.println("Bytes read:" + bytes_read);
        this.session.getBasicRemote().sendBinary(ByteBuffer.wrap(buffer));
    }
} catch (IOException e) {
    e.printStackTrace();
}
Then the client side is a websocket-MSE like in this repo.
Results:
When I debug the server side with a breakpoint before the sendBinary call, and I wait a few seconds before letting the server side run, a first picture is shown in the browser, but then it immediately hits the error above.
If I run the server side without any breakpoints, the browser does not show any picture; it immediately hits the error.
The "Invalid Top-Level Box" error message is always followed by one or more random garbage characters.
The proof that this should be working is in point 1: if I wait some time before letting the data roll to the client, it can decode one (or maybe more) frames before reaching that error.
This might be an error in my ffmpeg command line.
However, I could not really find any good resources on this topic (only ones relating to older releases of Firefox).
Update1
Here is the FFMPEG log when the same command creates an mp4 file instead of the pipe: https://pastebin.com/Gjq2vxeT
Here is the detailed Firefox log with the wrong box:
Details: virtual mozilla::MediaResult mozilla::MP4ContainerParser::IsInitSegmentPresent(const mozilla::MediaSpan &): Invalid Top-Level Box:f
Please note, the Top-Level Box is always 'f' in the scenario described in point 2 (running without breakpoints).
Update2:
Here is ffmpeg's current output (in numeric and alphanumeric representation, for the first 128 bytes): https://pastebin.com/DeJMfNYs
The interesting thing is that the first 4 bytes seem invalid to me. However, starting from byte 4 (the 5th byte), it looks OK ("ftyp").
Could you please confirm this?
The problem was on the server side, in this line:
this.session.getBasicRemote().sendBinary(ByteBuffer.wrap(buffer));
This is not good because ByteBuffer.wrap(buffer) wraps the entire 512-byte array, so whenever read() returns fewer bytes than that, stale bytes from earlier reads (or zeros) are sent along too, corrupting the stream.
The good one is this:
this.session.getBasicRemote().sendBinary(ByteBuffer.wrap(buffer, 0, bytes_read));
What a silly mistake, now it works fine :)
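For completeness, the corrected read/send loop from the question then looks like this (same fields as above):
// only forward the bytes that this read() actually produced
while (0 < (bytes_read = input.read(buffer, 0, 512))) {
    this.session.getBasicRemote().sendBinary(ByteBuffer.wrap(buffer, 0, bytes_read));
}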

Google Assistant SDK on Raspberry Pi 3: Audio setup does not work

I have been trying to install the Google Assistant SDK on a Raspberry Pi 3. I have a question about the following link:
https://developers.google.com/assistant/sdk/prototype/getting-started-pi-python/configure-audio
Partial text from the above link:
# Record a short audio clip. If you get an error, go to step 2.
$ arecord --format=S16_LE --duration=5 --rate=16k --file-type=raw out.raw
As expected, I got an error in this step, so I tried step 2 and created a new file (.asoundrc) with all the hardware info. Then I tried the following:
speaker-test -t wav
But I got the following error (if I rename .asoundrc I don't see this error, but then I cannot record):
speaker-test 1.0.28
Playback device is default
Stream parameters are 48000Hz, S16_LE, 1 channels
WAV file(s)
ALSA lib conf.c:1697:(snd_config_load1) toplevel:9:17:Unexpected char
ALSA lib conf.c:3417:(config_file_open) /home/pi/.asoundrc may be old or corrupted: consider to remove or fix it
ALSA lib conf.c:3339:(snd_config_hooks_call) function snd_config_hook_load returned error: Invalid argument
ALSA lib conf.c:3788:(snd_config_update_r) hooks failed, removing configuration
Playback open error: -22,Invalid argument
How can I fix this?
Thanks!
That happens if your .asoundrc doesn't have the correct structure. Warning: if you use the RPi GUI (desktop) volume control to change inputs, it will modify .asoundrc for you, breaking ALSA for Google Assistant. You'll have to go and fix it up. The instructions on Google's website are correct.
To fix it, delete the newly created entries.
Then, at the top, look for the line 'type hw'. It has been sneakily modified; you'll have to change it back to 'type asym' to match the config Google specifies.
I leave the input/output for the GUI (desktop) volume set to analogue and don't touch it once I start using Google Assistant, so it won't go and mess with .asoundrc again.
I am using a Logitech USB headset which has both a mic and a speaker (I don't use an external speaker).
So, given that my audio input and output both go to the headset, my .asoundrc looks like this:
pcm.!default {
  type asym
  capture.pcm "mic"
  playback.pcm "speaker"
}
pcm.mic {
  type plug
  slave {
    pcm "hw:1,0"
  }
}
pcm.speaker {
  type plug
  slave {
    pcm "hw:1,0"
  }
}
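Once .asoundrc is back in shape, you can re-test both directions; a quick check using the same arecord flags as Google's guide (the matching aplay flags are my assumption):
# record 5 seconds of raw audio, then play it back through the headset
arecord --format=S16_LE --duration=5 --rate=16k --file-type=raw out.raw
aplay --format=S16_LE --rate=16k --file-type=raw out.raw
speaker-test -t wav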
Lastly, if you reboot your Pi, you'll have to re-activate the virtual environment for the assistant binary, otherwise it won't be able to find the command that starts the assistant demo.
Do this by typing "source env/bin/activate".
Then you can run it again by typing "google-assistant-demo".
Good luck!
Yes, I was seeing this error. Weirdly, after I had everything working fine, I never thought that the .asoundrc file would have been corrupted:
Invalid value card arecord: main:722: audio open error: no such file or directory
I can confirm what Xeneck Stoher says about the Raspbian GUI volume/audio in/out selection messing up your ~/.asoundrc file; replacing it fixed this issue and recording/playback now works fine.
