Processing can't access built-in webcam

I've written the following code and I get this error:
IllegalStateException: Could not find any devices
import processing.video.*;

Capture unicorn;

void setup() {
  size(640, 480);
  unicorn = new Capture(this, 640, 480);
  unicorn.start();
  background(0);
}

// Called by the video library whenever a new camera frame is ready.
void captureEvent(Capture video) {
  video.read();
}

void draw() {
  // Sample 100 random pixels from the camera frame and paint a dot
  // of the sampled color at each position.
  for (int i = 0; i < 100; i++) {
    float x = random(width);
    float y = random(height);
    color c = unicorn.get(int(x), int(y));
    fill(c);
    noStroke();
    ellipse(x, y, 16, 16);
  }
}

Just to be sure: did you add the video library for Processing already (it is the library named "Video | GStreamer-based video library for Processing.")? Installation is explained in step 1 of this Processing video tutorial, which contains much more interesting information and great video examples. Since you are able to run your sketch, this should already be okay.
As statox already mentioned, be sure that the camera is working for other programs; there might be some hardware or driver issue. To list the cameras that Processing can detect, you can use code from the Capture documentation. This is only the part for showing the available cameras; use the link for the complete example:
import processing.video.*;

void setup() {
  String[] cameras = Capture.list();

  if (cameras.length == 0) {
    println("There are no cameras available for capture.");
  } else {
    println("Available cameras:");
    for (int cameraIndex = 0; cameraIndex < cameras.length; cameraIndex++) {
      println(cameras[cameraIndex]);
    }
  }
}
On my system with two cameras, the output looks like this:
Processing video library using GStreamer 1.16.2
Available cameras:
<Camera 1>
<Camera 2>
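Once you know the camera names, you can also pass one straight to the Capture constructor instead of relying on the default device. A minimal sketch, assuming the first listed camera is the one you want:

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  String[] cameras = Capture.list();
  // Pass the chosen camera name explicitly instead of relying on the default.
  cam = new Capture(this, 640, 480, cameras[0]);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  image(cam, 0, 0);
}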
If the code from the Capture documentation does not work for you, you can try this alternative approach suggested by Neil C Smith on the Processing forum (as statox already mentioned):
import processing.video.*;

Capture camera;

void setup() {
  size(640, 480);
  // Suggestion from Neil C Smith on the Processing forum:
  // let GStreamer choose a video source automatically.
  camera = new Capture(this, "pipeline:autovideosrc");
  camera.start();
}

void draw() {
  if (camera.available()) {
    camera.read();
  }
  image(camera, 0, 0);
}

How to fill a rectangle with some color along with printing a string in the Processing console?

I'm developing a GUI for an Arduino Mega 2560 using Processing (with the ControlP5 library).
My board reads analog pin A0 and continuously prints its value as a string to the console. If a specific digital pin goes high, the board sends an error string to the Processing console and waits for reset to be pressed, e.g.:
A1-B1 error press reset
When A1-B1 is in error, I want my GUI to fill the rectangle with red and display the string
"A1-B1 error press reset"
How do I do this?
Here's my Processing code:
import java.util.*;
import at.mukprojects.console.*;
import processing.serial.*;
import controlP5.*;

Console console;
Serial port;
ControlP5 cp5;
PFont font;

int myColorBackground = color(0, 0, 0);
float k, l;
String val;

void setup() {
  size(800, 600);
  frame.setResizable(true);
  smooth();
  noStroke();
  font = createFont("Arial", 16);
  printArray(Serial.list());
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil(10); // buffer serial input until newline
  cp5 = new ControlP5(this); // init GUI lib
  console = new Console(this); // init console
  console.start();
}

void draw() {
  background(myColorBackground);
  fill(250, 131, 3); // text color
  console.draw();
  k = width * 0.75;
  l = 0.25 * height - 50;
  fill(0);
  stroke(250, 131, 1);
  rect(k + 20, l + 20, 12, 12);
  fill(250, 131, 3);
  textFont(font, 16);
  text("A1-B1", k + 100, l + 20);
}

void serialEvent(Serial myPort) {
  while (port.available() > 0) {
    val = port.readStringUntil(10);
  }
  if (val != null) {
    println(val);
  }
}
The best advice we can give you is to break your problem down into smaller steps and tackle those pieces one at a time.
For example, can you create a simple sketch that displays a message after the mouse has been clicked? Forget about the Arduino for a minute and just get this working by itself. It might look something like this:
boolean mouseWasPressed = false;

void draw() {
  if (mouseWasPressed) {
    background(255, 0, 0);
  }
}

void mousePressed() {
  mouseWasPressed = true;
}
Separately from that, get a sketch working that just shows the Arduino message in the console. It sounds like you might already have a lot of that done, but try to isolate it in a small example program.
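A minimal isolated serial-echo sketch might look something like this (assuming the Arduino is the first listed serial port and sends newline-terminated lines at 9600 baud):

import processing.serial.*;

Serial port;

void setup() {
  // open the first serial port; adjust the index for your setup
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil(10); // fire serialEvent once per newline-terminated line
}

void draw() {
  // nothing to draw; serialEvent() does the work
}

void serialEvent(Serial p) {
  String line = p.readStringUntil(10);
  if (line != null) {
    println(line.trim());
  }
}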
When you have both of those working separately, you can start thinking about combining them into one program, as roughly sketched below. And if you get stuck, you can post a MCVE showing exactly which step you're stuck on. Good luck.
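For example, the combined program might look something like this rough sketch (assuming the Arduino sends newline-terminated messages and that an error message contains the word "error"):

import processing.serial.*;

Serial port;
String lastMessage = "";

void setup() {
  size(800, 600);
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil(10);
}

void draw() {
  background(0);
  // fill the rectangle red while the last message reports an error
  if (lastMessage.contains("error")) {
    fill(255, 0, 0);
  } else {
    fill(0, 255, 0);
  }
  rect(width * 0.75, height * 0.25, 50, 50);
  // show the message itself next to the rectangle
  fill(250, 131, 3);
  text(lastMessage, 20, height - 20);
}

void serialEvent(Serial p) {
  String line = p.readStringUntil(10);
  if (line != null) {
    lastMessage = line.trim();
  }
}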

Kinect infrared image not showing - why?

I have installed OpenNI 2.2, NiTE 2.2 and Kinect SDK 1.6 along with the SimpleOpenNI library for Processing. Everything is working fine except the infrared image - it is simply not there. That is really strange, since at the same time I can clearly see the depth image (and the depth image logically needs the infrared camera and projector to be working). So I assume there is a problem with drivers or software? I would like to use the Kinect as an infrared camera. Please help; I attach my test code below:
/* --------------------------------------------------------------------------
* SimpleOpenNI IR Test
* --------------------------------------------------------------------------
* Processing Wrapper for the OpenNI/Kinect library
* http://code.google.com/p/simple-openni
* --------------------------------------------------------------------------
* prog: Max Rheiner / Interaction Design / zhdk / http://iad.zhdk.ch/
* date: 02/16/2011 (m/d/y)
* ----------------------------------------------------------------------------
*/
import SimpleOpenNI.*;

SimpleOpenNI context;

void setup()
{
  context = new SimpleOpenNI(this);

  // enable depthMap generation
  if (context.enableDepth() == false)
  {
    println("Can't open the depthMap, maybe the camera is not connected!");
    exit();
    return;
  }

  // enable ir generation
  if (context.enableIR() == false)
  {
    println("Can't open the irMap, maybe the camera is not connected!");
    exit();
    return;
  }

  background(200, 0, 0);
  size(context.depthWidth() + context.irWidth() + 10, context.depthHeight());
}
void draw()
{
  // update the cam
  context.update();

  // draw depthImageMap
  image(context.depthImage(), 0, 0);

  // draw irImageMap
  image(context.irImage(), context.depthWidth() + 10, 0);
}
This does the job:
context.enableIR(1, 1, 1);
I have the exact same issue.
It's not a solution, but the closest I can get to an infrared image from the Kinect is the point cloud computed from the depth image.
That solution is here:
import SimpleOpenNI.*;
import processing.opengl.*;

SimpleOpenNI kinect;

void setup()
{
  size(1024, 768, OPENGL);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
}

void draw()
{
  background(0);
  kinect.update();
  image(kinect.depthImage(), 0, 0, 160, 120); // check depth image
  translate(width/2, height/2, -1000);
  rotateX(radians(180));
  stroke(255);
  PVector[] depthPoints = kinect.depthMapRealWorld();
  // Note: the program gets stuck in this loop; it iterates 307200 times
  // and I don't get any points as output.
  for (int i = 0; i < depthPoints.length; i += 4) // draw a point for every 4th pixel
  {
    PVector currentPoint = depthPoints[i];
    if (i == 0) println(currentPoint);
    point(currentPoint.x, currentPoint.y, currentPoint.z);
  }
}
Are you able to capture the infrared stream, but you just can't see it?
Then the issue might be the range of the pixel values (they should be in [0, 255]).
I had this issue in Python and C++; I solved it by dividing the array by its range (max - min) and then multiplying all entries by 255.
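Translated to Processing, that normalization could look something like this (a sketch, assuming the raw IR values are already available as a float array):

// Hypothetical helper: stretch raw sensor values to the displayable 0-255 range.
float[] normalizeToByteRange(float[] raw) {
  float lo = min(raw);
  float hi = max(raw);
  float range = hi - lo;
  if (range == 0) {
    range = 1; // avoid dividing by zero on a flat image
  }
  float[] out = new float[raw.length];
  for (int i = 0; i < raw.length; i++) {
    // shift to zero, divide by the range, scale up to 255
    out[i] = (raw[i] - lo) / range * 255.0;
  }
  return out;
}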
user3550091 is right!
For reference, here is my complete working code (Processing + OpenNI):
import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640 * 2 + 10, 480);
  context = new SimpleOpenNI(this);
  if (context.isInit() == false) {
    println("fail");
    exit();
    return;
  }
  context.enableDepth();
  // enable ir generation
  //context.enableIR(); // old line
  context.enableIR(1, 1, 1); // new line
  background(200, 0, 0);
}

void draw() {
  context.update();
  image(context.depthImage(), context.depthWidth() + 10, 0);
  image(context.irImage(), 0, 0);
}

Image processing for Windows Phone

I'm searching for a good imaging SDK for Windows Phone.
I tried the Nokia Imaging SDK, but it didn't work for me; it keeps throwing an exception:
"Operation Is Not Valid Due To The Current State Of The Object."
Here is my test code. The processImage method is used to apply the filter to the image.
private async void processImage()
{
    WriteableBitmap writeableBitmap = new WriteableBitmap((int)bitmapImage.PixelWidth, (int)bitmapImage.PixelHeight);
    try
    {
        using (var imageStream = new StreamImageSource(photoStream))
        {
            // Applying the custom filter effect to the image stream
            using (var customEffect = new NegateFilter(imageStream))
            {
                // Rendering the resulting image to a WriteableBitmap
                using (var renderer = new WriteableBitmapRenderer(customEffect, writeableBitmap))
                {
                    // Applying the WriteableBitmap to our XAML image control
                    await renderer.RenderAsync();
                    imageGrid.Source = writeableBitmap;
                }
            }
        }
    }
    catch (Exception exc)
    {
        MessageBox.Show(exc.Message + exc.StackTrace, exc.Source, MessageBoxButton.OK);
    }
}
This is the NegateFilter class:
namespace ImagingTest
{
    class NegateFilter : CustomEffectBase
    {
        public NegateFilter(IImageProvider source) : base(source) { }

        protected override void OnProcess(PixelRegion sourcePixelRegion, PixelRegion targetPixelRegion)
        {
            sourcePixelRegion.ForEachRow((index, width, pos) =>
            {
                for (int x = 0; x < width; ++x, ++index)
                {
                    targetPixelRegion.ImagePixels[index] = 255 - sourcePixelRegion.ImagePixels[index];
                }
            });
        }
    }
}
Any ideas for a good imaging SDK, something like ImageJ on Java, for example, or OpenCV? It would be better for me to keep using the Nokia SDK.
Thanks :)
I looked into your code and did a quick test.
The code worked fine once I made sure that bitmapImage.PixelWidth and bitmapImage.PixelHeight were > 0.
I did not get an image on the screen, but when I removed your custom filter the image was shown.
I hope you will continue to use the SDK, since it is a great product.
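One possible cause of the blank image (an assumption on my part, not confirmed in this thread): ImagePixels holds packed 32-bit ARGB values, so 255 - pixel also inverts the alpha byte and leaves every pixel effectively transparent. Inverting only the color channels and keeping the alpha opaque might look like this:

protected override void OnProcess(PixelRegion sourcePixelRegion, PixelRegion targetPixelRegion)
{
    sourcePixelRegion.ForEachRow((index, width, pos) =>
    {
        for (int x = 0; x < width; ++x, ++index)
        {
            uint pixel = sourcePixelRegion.ImagePixels[index];
            // invert only the R, G and B channels...
            uint r = 255 - ((pixel >> 16) & 0xff);
            uint g = 255 - ((pixel >> 8) & 0xff);
            uint b = 255 - (pixel & 0xff);
            // ...and force the alpha byte back to fully opaque
            targetPixelRegion.ImagePixels[index] = 0xff000000u | (r << 16) | (g << 8) | b;
        }
    });
}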
What about EmguCV?
I have not tried it yet, but it looks like it can work with the phone's camera.

Create more than one window of a single sketch in Processing

How can I create more than one window from a single sketch in Processing?
I want to detect and track a particular color (through the webcam) in one window and display the detected coordinates as a point in another window. So far I can display the points in the same window where the detection happens, but I want to split this into two different windows.
You need to create a new frame and a new PApplet... here's a sample sketch:
import javax.swing.*;

SecondApplet s;

void setup() {
  size(640, 480);
  PFrame f = new PFrame(width, height);
  frame.setTitle("first window");
  f.setTitle("second window");
  fill(0);
}

void draw() {
  background(255);
  ellipse(mouseX, mouseY, 10, 10);
  s.setGhostCursor(mouseX, mouseY);
}

public class PFrame extends JFrame {
  public PFrame(int width, int height) {
    setBounds(100, 100, width, height);
    s = new SecondApplet();
    add(s);
    s.init();
    show();
  }
}

public class SecondApplet extends PApplet {
  int ghostX, ghostY;

  public void setup() {
    background(0);
    noStroke();
  }

  public void draw() {
    background(50);
    fill(255);
    ellipse(mouseX, mouseY, 10, 10);
    fill(0);
    ellipse(ghostX, ghostY, 10, 10);
  }

  public void setGhostCursor(int ghostX, int ghostY) {
    this.ghostX = ghostX;
    this.ghostY = ghostY;
  }
}
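Note that the sketch above relies on Processing 2, where a PApplet is still an AWT component you can add to a JFrame; in Processing 3 that no longer works. One way to get a second window in Processing 3 is to run a second PApplet via PApplet.runSketch(). A minimal sketch of that idea (my assumption: Processing 3 with the default renderer):

SecondApplet s;

void setup() {
  size(640, 480);
  surface.setTitle("first window");
  s = new SecondApplet();
  // runSketch starts the second PApplet in its own window
  PApplet.runSketch(new String[] {"SecondApplet"}, s);
}

void draw() {
  background(255);
  fill(0);
  ellipse(mouseX, mouseY, 10, 10);
  s.setGhostCursor(mouseX, mouseY);
}

public class SecondApplet extends PApplet {
  int ghostX, ghostY;

  public void settings() {
    size(640, 480);
  }

  public void draw() {
    background(50);
    fill(0);
    ellipse(ghostX, ghostY, 10, 10);
  }

  public void setGhostCursor(int ghostX, int ghostY) {
    this.ghostX = ghostX;
    this.ghostY = ghostY;
  }
}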
One option might be to create a sketch twice the width of your original window and just offset the detected coordinates by the camera width.
Here's a very rough code snippet (assuming blob holds a detected color blob):
int camWidth = 320;
int camHeight = 240;
Capture cam;

void setup() {
  size(camWidth * 2, camHeight);
  // init cam/opencv/etc.
}

void draw() {
  // update cam and get data
  image(cam, 0, 0);
  // draw the detection in the right half, offset by the camera width
  rect(camWidth + blob.x, blob.y, blob.width, blob.height);
}
To be honest, it might be easier to overlay the tracked information. For example, if you're doing color tracking, just display the outlines of the bounding box of the tracked area.
If you really really want to display another window, you can use a JPanel.
Have a look at this answer for a running code example.
I would recommend using G4P, a GUI library for Processing that has some functionality built in for handling multiple windows. I have used it with a webcam before and it worked well. It comes with a GWindow object that spawns a window easily, and there is a short tutorial on the website that explains the basics.
I've included some old code of mine that shows the basic idea. It creates two GWindows and sends each one a PImage to display: one gets the webcam image and the other an effected image. To pass data to the windows, you subclass GWinData so it holds whatever the windows need; instead of making one object per window, I made a single object holding both PImages. Each GWindow gets its own draw handler (at the bottom of the example) that pulls its PImage out of the shared GWinData object and displays it. In the main draw loop I read the webcam, process the frame to produce the two images, and store them in the GWinData object.
Hopefully that gives you enough to get started.
import guicomponents.*;
import processing.video.*;

private GWindow window;
private GWindow window2;

Capture video;
PImage sorted;
PImage imgdif; // image with pixel thresholding
MyWinData data;

void setup() {
  size(640, 480, P2D); // Change size to 320 x 240 if too slow at 640 x 480

  // Uses the default video input, see the reference if this causes an error
  video = new Capture(this, 640, 480, 24);
  // numPixels, loadColors(), mycolors, colorlength and distances belong to
  // the color-processing code that was removed from this example
  numPixels = video.width * video.height;

  data = new MyWinData();

  window = new GWindow(this, "TEST", 0, 0, 640, 480, true, P2D);
  window.isAlwaysOnTop();
  window.addData(data);
  window.addDrawHandler(this, "Window1draw");

  window2 = new GWindow(this, "TEST", 640, 0, 640, 480, true, P2D);
  window2.isAlwaysOnTop();
  window2.addData(data);
  window2.addDrawHandler(this, "Window2draw");

  loadColors("64rev.csv");
  colorlength = mycolors.length;
  distances = new float[colorlength];
  noCursor();
}

void draw() {
  if (video.available()) {
    background(0);
    video.read();
    image(video, 0, 0);
    loadPixels();
    imgdif = get(); // clones the last image drawn to the screen v1.1
    sorted = get();

    /// Removed a lot of code here that did the processing

    // hand the data to our data class to pass to the other windows
    data.sortedimage = sorted;
    data.difimage = imgdif;
  }
}

class MyWinData extends GWinData {
  public PImage sortedimage;
  public PImage difimage;

  MyWinData() {
    sortedimage = createImage(640, 480, RGB);
    difimage = createImage(640, 480, RGB);
  }
}

public void Window1draw(GWinApplet a, GWinData d) {
  MyWinData data = (MyWinData) d;
  a.image(data.sortedimage, 0, 0);
}

public void Window2draw(GWinApplet a, GWinData d) {
  MyWinData data = (MyWinData) d;
  a.image(data.difimage, 0, 0);
}

Working capture cards for Processing on a Mac

I recently bought a USB capture card for my Mac (EzCap: http://www.amazon.com/Easycap-Version-Capturer-Camcorder-Compatible/dp/B0044XIQIW) and I'm not all that shocked to find that it doesn't work with Processing (I've tried the Capture library and GSVideo).
My app needs to take in video from an external source, i.e. not just the built-in iSight camera (which is super simple).
I was wondering if anyone has a working video capture implementation, and could you let me know which capture devices worked for you?
Thought I'd ask before I start wasting a ton of time and money buying more expensive devices that also might not work.
Thanks in advance.
You can start off by checking whether Processing sees your USB camera. Using GSVideo, for example:
import codeanticode.gsvideo.*;

GSCapture cam;

void setup() {
  size(640, 480);

  String[] cameras = GSCapture.list();
  if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    println("Available cameras:");
    for (int i = 0; i < cameras.length; i++) {
      println(cameras[i]);
    }
  }

  cam = new GSCapture(this, 640, 480, cameras[0]);
  cam.start();
}
If it does see the camera, you can add the draw() function:
void draw() {
  if (cam.available() == true) {
    cam.read();
    cam.loadPixels();
    image(cam, 0, 0);
  }
}
That works for me.
