Working capture cards for Processing on a Mac - macos

I recently bought a USB capture card for my Mac (EzCap: http://www.amazon.com/Easycap-Version-Capturer-Camcorder-Compatible/dp/B0044XIQIW) and I'm not all that shocked to find out it doesn't work with Processing. (I've tried the Capture library and GSVideo).
My app needs to take in video from an external source (i.e. not just the built-in iSight camera, which is super simple) for processing.
I was wondering if anyone has a working video capture implementation, and could let me know which capture devices worked for them?
Thought I'd ask before I start wasting a tonne of time and money buying more expensive devices that also might not work.
Thanks in advance.

You can start off by checking whether your USB camera is seen in Processing. Using GSVideo, for example:
import codeanticode.gsvideo.*;

GSCapture cam;

void setup() {
  size(640, 480);
  String[] cameras = GSCapture.list();
  if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    println("Available cameras:");
    for (int i = 0; i < cameras.length; i++) {
      println(cameras[i]);
    }
  }
  cam = new GSCapture(this, 640, 480, cameras[0]);
  cam.start();
}
If it does see the camera, you can add the draw() function:
void draw() {
  if (cam.available() == true) {
    cam.read();
    cam.loadPixels();
    image(cam, 0, 0);
  }
}
That works for me.

Related

macOS Camera extension is not reported as a video device by `kCMIOHardwarePropertyDevices`

I have a macOS app which detects whether a camera is in use via the kCMIOHardwarePropertyDevices property:
typedef void (^DeviceIterator)(CMIOObjectID cameraObject, BOOL *stop);

static void iterateThroughAllInputDevices(DeviceIterator deviceIterator)
{
    // Check the number of devices.
    CMIOObjectPropertyAddress address = makeGlobalPropertyAddress(kCMIOHardwarePropertyDevices);
    UInt32 devicesDataSize;
    auto status = CMIOObjectGetPropertyDataSize(kCMIOObjectSystemObject, &address, 0, nil, &devicesDataSize);
    if (isError(status))
    {
        return;
    }

    // Get the devices.
    UInt32 devicesDataUsed;
    int count = devicesDataSize / sizeof(CMIOObjectID);
    if (!count)
    {
        LOG_INFO("video device list is empty");
        return;
    }

    // Fetch the device IDs and hand each one to the caller.
    std::vector<CMIOObjectID> devices(count);
    status = CMIOObjectGetPropertyData(kCMIOObjectSystemObject, &address, 0, nil, devicesDataSize, &devicesDataUsed, devices.data());
    if (isError(status))
    {
        return;
    }

    BOOL stop = NO;
    for (auto device : devices)
    {
        deviceIterator(device, &stop);
        if (stop)
        {
            break;
        }
    }
}
I am using virtual machines for my automated end-to-end tests (the tests are written in Python).
These virtual machines don't have a camera device available, so I quickly wrote a macOS Camera Extension (based on https://developer.apple.com/videos/play/wwdc2022/10022) in the hope that it could act like a real camera.
Unfortunately, this virtual camera is not detected by the code above; I am getting the "video device list is empty" message.
How could I create a virtual (software) camera that would be listed by the API above?

Processing can't access built in webcam

I've written the following code and I get an error:
IllegalStateException: Could not find any devices
import processing.video.*;

Capture unicorn;

void setup() {
  size(640, 480);
  unicorn = new Capture(this, 640, 480);
  unicorn.start();
  background(0);
}

void captureEvent(Capture video) {
  video.read();
}

void draw() {
  for (int i = 0; i < 100; i++) {
    float x = random(width);
    float y = random(height);
    color c = unicorn.get(int(x), int(y));
    fill(c);
    noStroke();
    ellipse(x, y, 16, 16);
  }
}
Just to be sure: did you add the video library for Processing already (it is the library named "Video | GStreamer-based video library for Processing.")? Installation is explained in step 1 of this Processing video tutorial, which contains much more interesting information and great video examples. Since you are able to run your sketch, this should already be okay.
As statox already mentioned, be sure that the camera is working for other programs; there might be some hardware or driver issue. To list the cameras that Processing can detect, you can use code from the Capture documentation. This is only the part for showing the available cameras; use the link for the complete example:
import processing.video.*;

String[] cameras = Capture.list();
if (cameras.length == 0) {
  println("There are no cameras available for capture.");
} else {
  println("Available cameras:");
  for (int cameraIndex = 0; cameraIndex < cameras.length; cameraIndex++) {
    println(cameras[cameraIndex]);
  }
}
On my system with two cameras, the output looks like this:
Processing video library using GStreamer 1.16.2
Available cameras:
<Camera 1>
<Camera 2>
If the code from the Capture documentation does not work for you, you can try this alternative approach suggested by Neil C Smith on the Processing forum (was already mentioned by statox):
import processing.video.*;

Capture camera;

void setup() {
  size(640, 480);
  // Suggestion from Neil C Smith on the Processing forum:
  camera = new Capture(this, "pipeline:autovideosrc");
  camera.start();
}

void draw() {
  if (camera.available()) {
    camera.read();
  }
  image(camera, 0, 0);
}

how to copy (resized) camera image and use it back in Processing

I have a small bit of code to get the camera:
void setup() {
  if (cam.available() == false) {
    cam.start();
  }
}

void draw() {
  if (cam.available() == true) {
    cam.read();
  }
  image(cam, w/2, h/2, w, 480.0/640.0*w); // resized according to size()
}
If I use cam.get(), the image is not resized, it keeps the camera resolution.
Is there any solution to get the "resized" camera image?
I tried
big = copy(cam, int(w/2), int(h/2), int(w), int(480/640*w), 0, 0, int(w), int(h));
but it doesn't seem to work (same for cam.copy(...)).
Thank you in advance!
Assuming you're using the video library and cam is a Capture, then I would expect these methods to work. Capture extends PImage, so you should be able to copy and resize it.
So the first thing I'd check is what values you're passing into these functions. The println() function is your best friend.
Or try passing hardcoded values so you know what to expect:
image(cam, 0, 0, 100, 100);
If that really doesn't work, then as a worst case scenario you could use the Capture.get() function to get the pixels yourself, then do the resizing manually. I really don't think you'll have to do that though.
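For reference, the manual fallback amounts to nearest-neighbor sampling over the pixel array. This plain-Java sketch illustrates the idea (the helper name is mine, not part of the Processing API; Capture/PImage expose their pixels as a row-major int[] in the same layout):

```java
import java.util.Arrays;

public class NearestNeighborResize {
    // Resize a row-major ARGB pixel array from (srcW x srcH) to (dstW x dstH)
    // by sampling the nearest source pixel for each destination pixel.
    static int[] resize(int[] src, int srcW, int srcH, int dstW, int dstH) {
        int[] dst = new int[dstW * dstH];
        for (int y = 0; y < dstH; y++) {
            int sy = y * srcH / dstH;        // nearest source row
            for (int x = 0; x < dstW; x++) {
                int sx = x * srcW / dstW;    // nearest source column
                dst[y * dstW + x] = src[sy * srcW + sx];
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        // 2x2 source image, upscaled to 4x4: each source pixel becomes a 2x2 block.
        int[] src = {1, 2,
                     3, 4};
        int[] dst = resize(src, 2, 2, 4, 4);
        System.out.println(Arrays.toString(dst));
        // [1, 1, 2, 2, 1, 1, 2, 2, 3, 3, 4, 4, 3, 3, 4, 4]
    }
}
```

In practice you would only need this if the built-in copy()/resize() paths fail, which they should not for a Capture.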

Networking rotation sync

My Unity version is 5.2.3f1. I'm trying to sync the rotation of a child GameObject; locally it works perfectly fine, but it doesn't show up on other clients. I've tried everything I could find, and nothing worked.
The reason for this is to rotate an FPS body, so I'm trying to rotate Spine2 (rotating the camera is not my best solution). I'm using a Mixamo character to test; in the end I will have Mixamo auto-rigged characters, so everything I make here will be compatible.
I tried the Network Transform Rigidbody 3D and it only syncs the character itself, not Spine2. I have also tried Network Transform Child, and an official skeleton sync.
In the script part I have tried a lot of things; the most promising one was this:
[SyncVar]
private Quaternion syncPlayerRotation;

[SerializeField]
private Transform playerTransform;

[SerializeField]
private float lerpRate = 15f;

// Update is called once per frame
void LateUpdate()
{
    TransmitRotations();
    LerpRotations();
}

void LerpRotations()
{
    if (!isLocalPlayer)
        playerTransform.localRotation = Quaternion.Lerp(playerTransform.localRotation, syncPlayerRotation, Time.deltaTime * lerpRate);
}

[Command]
void CmdProvideRotationsToServer(Quaternion playerRot)
{
    syncPlayerRotation = playerRot;
}

[Client]
void TransmitRotations()
{
    if (isLocalPlayer)
    {
        CmdProvideRotationsToServer(playerTransform.localRotation);
    }
}
It's from the UNET tutorial series on YouTube by the user Gamer To Game Developer.
I attached it to Spine2 and it still doesn't work, but when I attached it to the main character, it worked.
I also tried this:
void OnSerializeNetworkView(BitStream stream, NetworkMessageInfo info)
{
    Vector3 syncPosition = Vector3.zero;
    if (stream.isWriting)
    {
        syncPosition = Spine.GetComponent<Rigidbody>().position;
        stream.Serialize(ref syncPosition);
    }
    else
    {
        stream.Serialize(ref syncPosition);
        Spine.GetComponent<Rigidbody>().position = syncPosition;
    }
}
But I think that was for an older version of Unity.
To make the rotations I'm using A Free Simple Smooth Mouselook. I edited these lines:
if (Input.GetMouseButton(1))
{
    var xRotation = Quaternion.AngleAxis(-_mouseAbsolute.y, targetOrientation * Vector3.forward);
    transform.localRotation = xRotation;
}
else
{
    var xRotation = Quaternion.AngleAxis(-_mouseAbsolute.y, targetOrientation * Vector3.right);
    transform.localRotation = xRotation;
}
Basically, I changed Vector3.right to Vector3.forward while the right mouse button is pressed, and kept Vector3.right only when it is not pressed. The script is attached to Spine2 and is activated in Start() with an if (isLocalPlayer) check.
Here's a picture of the current hierarchy:
(some cameras are there only to test; the main camera is FirstPersonCamera, extracted from the Standard Assets)
I noticed that if I debug-log the Spine2 rotation, it only gives me values from 0 to 1.
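That 0-to-1 range is expected: localRotation is a Quaternion, and what gets logged are its four normalized components (x, y, z, w), each in [-1, 1], not Euler angles in degrees. A plain-Java sketch of the relationship (the 90-degree example rotation is mine, chosen only for illustration):

```java
public class QuaternionNorm {
    public static void main(String[] args) {
        // A unit quaternion for a rotation of angle A about an axis has
        // w = cos(A/2) and axis components scaled by sin(A/2).
        // Example: 90 degrees about the Y axis -> (x, y, z, w) = (0, sin 45, 0, cos 45).
        double half = Math.toRadians(90) / 2.0;
        double x = 0, y = Math.sin(half), z = 0, w = Math.cos(half);
        // Every component lies in [-1, 1], and the norm of a unit quaternion is 1,
        // which is why a debug log never shows angle-sized numbers like 90.
        double norm = Math.sqrt(x * x + y * y + z * z + w * w);
        System.out.printf(java.util.Locale.US, "y=%.3f w=%.3f norm=%.3f%n", y, w, norm);
        // y=0.707 w=0.707 norm=1.000
    }
}
```

So the small values are not a sync bug by themselves; to log degrees, use transform.localRotation.eulerAngles instead.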

Image Size With J2ME on an HTC Touch2

I'm trying to ascertain whether there is a limitation on camera access in the J2ME implementation on the HTC Touch2. The native camera is 3 MP, but the quality is notably reduced when accessed via J2ME; in fact, it seems the only size and format the getSnapshot() method is able to return is a 240x320-pixel JPEG. I'm trying to confirm that this is a limitation of the J2ME implementation and not my coding. Here's an example of some of the things I have tried:
private void showCamera() {
    try {
        mPlayer = Manager.createPlayer("capture://video");
        // mPlayer = Manager.createPlayer("capture://video&encoding=rgb565&width=640&height=480");
        mPlayer.realize();
        mVideoControl = (VideoControl)mPlayer.getControl("VideoControl");
        canvas = new CameraCanvas(this, mVideoControl);
        canvas.addCommand(mBackCommand);
        canvas.addCommand(mCaptureCommand);
        canvas.setCommandListener(this);
        mDisplay.setCurrent(canvas);
        mPlayer.start();
    }
    catch (Exception ex) {}
}

public void capture() {
    try {
        // Get the image.
        byte[] raw = mVideoControl.getSnapshot("encoding=jpeg&quality=100&width=640&height=480");
        // byte[] raw = mVideoControl.getSnapshot("encoding=png&quality=100&width=640&height=480");
        // byte[] raw = mVideoControl.getSnapshot(null);
        Image image = Image.createImage(raw, 0, raw.length);
        // Image thumb = createThumbnail(image);
        // Place it in the main form.
        if (mMainForm.size() > 0 && mMainForm.get(0) instanceof StringItem)
            mMainForm.delete(0);
        mMainForm.append(image);
    }
    catch (Exception ex) {}
}
If anyone could help it would be much appreciated.
I have received word from a number of sources that there is indeed a limitation on the camera access the JVM has, which is put in place by the operating system.
