Video with alpha channel in Processing - processing

I was wondering if anyone can be amazing and help me with something I'm working on in Processing. I need to play a video file with transparencies over a live feed so that the video isn't simply a rectangle. Here is the section of the code that I think I need to add something to or change. I'm extremely new to all of this and I'm extremely grateful to anyone that can help.

If your video has an alpha channel, that's great; otherwise, you should be able to blend() the other content.
Here's a basic proof-of-concept sketch. It overlays a grid of circles on top of a live feed. Use the space key to cycle through the blend modes. Some will work better than others depending on your content and what you're trying to achieve:
import processing.video.*;

Capture cam;
int w = 320;
int h = 240;
int np = w*h;

PImage overlay;

int blendMode = 1;
int[] blendModes = {BLEND, ADD, SUBTRACT, DARKEST, LIGHTEST, DIFFERENCE, EXCLUSION, MULTIPLY, SCREEN, OVERLAY, HARD_LIGHT, SOFT_LIGHT, DODGE, BURN};
String[] blendModesNames = {"BLEND", "ADD", "SUBTRACT", "DARKEST", "LIGHTEST", "DIFFERENCE", "EXCLUSION", "MULTIPLY", "SCREEN", "OVERLAY", "HARD_LIGHT", "SOFT_LIGHT", "DODGE", "BURN"};

void setup(){
  size(w, h);
  cam = new Capture(this, w, h);
  cam.start();
  // test content to overlay: a grid of circles
  background(0);
  fill(255);
  for(int y = 0; y < height; y += 30)
    for(int x = 0; x < width; x += 30)
      ellipse(x, y, 15, 15);
  overlay = get();
}

void draw(){
  image(cam, 0, 0);
  blend(overlay, 0, 0, width, height, 0, 0, width, height, blendModes[blendMode]);
}

void keyReleased(){
  if(key == ' ') {
    blendMode = (blendMode + 1) % blendModes.length;
    println("blendMode: " + blendModesNames[blendMode]);
  }
}

void captureEvent(Capture c){
  c.read();
}

I solved it (this can probably be improved) by using two videos: the first footage is the color map, with white on the background; the second footage is the matte mask: white for the "important" part and black everywhere else. Then apply the mask() function. Below is the important part of the code:
Movie mov1;
Movie mov2;

void setup() {
  ....code...
  mov1 = new Movie(this, "matte.mov");
  mov2 = new Movie(this, "alpha.mov");
  mov1.play();
  mov1.pause();
  mov2.play();
  mov2.pause();
}

void draw() {
  ...code...
  mov1.play();
  mov2.play();
  loadPixels();
  mov2.mask(mov1);
  image(mov2, 0, 0);
}
The video used for the test was 256x256; I always use power-of-two numbers for better performance (float maths). Hope this helps someone!
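For reference, here's a self-contained sketch of the same mask() idea (a sketch, not the exact code above): it assumes both clips share the same dimensions and duration, keeps the same mov1/mov2 roles and file names, and uses loop() plus the movieEvent() callback to read frames instead of repeated play() calls:
import processing.video.*;

Movie mov1; // the matte mask ("matte.mov"): white = keep, black = transparent
Movie mov2; // the color footage ("alpha.mov")

void setup() {
  size(256, 256);
  mov1 = new Movie(this, "matte.mov");
  mov2 = new Movie(this, "alpha.mov");
  mov1.loop();
  mov2.loop();
}

// called by the video library whenever either movie has a new frame ready
void movieEvent(Movie m) {
  m.read();
}

void draw() {
  background(0);
  if (mov1.width > 0 && mov2.width > 0) {
    // use the matte's grayscale values as the alpha channel of the color footage
    mov2.mask(mov1);
    image(mov2, 0, 0);
  }
}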

Related

Alpha channel in C++Builder

In Borland/Embarcadero C++Builder with VCL, I am trying to develop an application with an image where some parts (in fact, circles) fade in or out over time.
My code is mostly as follows:
void __fastcall TfmMain::FormCreate(TObject *Sender)
{
    img = new TBitmap;
    img->Width = 800;
    img->Height = 600;
    fmMain->DoubleBuffered = true;
    ...
}

void __fastcall TfmMain::tmMainTimer(TObject *Sender)
{
    for(int i = 0; i < nbParts; i++){
        ...
        img->Brush->Color = clRed | alpha(t_time) << 24;
        // alpha is a function returning 0 to 0xff, depending on the required level of fade at time t_time
        img->Canvas->Ellipse(....);
    }
    fmMain->Canvas->Draw(0, 0, img);
}
But the result is not at all what I want: for example, a part that is supposed to fade out has its color alternating between red and black. The same happens for a part that is supposed to fade in.
I tried DrawTransparent(), but had the error:
DrawTransparent is not accessible
And it has a transparency value for the whole bitmap, not for individual parts.
I tried a separate bitmap for each part, but I may have hundreds of them, and the animation becomes too slow.
Please, can someone help, and tell me what I should do?

Using Processing for image visualization: pixel color thresholds

I have an image to be manipulated, and I'm hoping to identify each white dot in the picture with a counter.
PImage blk;

void setup() {
  size(640, 480);
  blk = loadImage("img.png");
}

void draw() {
  loadPixels();
  blk.loadPixels();
  int i = 0;
  for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
      int loc = x + y*width;
      pixels[loc] = blk.pixels[loc];
      if (blk.pixels[loc] == 0) {
        if (blk.pixels[loc] + 1 != 0) {
          i++;
        }
      }
      float r = red(blk.pixels[loc]);
      float g = green(blk.pixels[loc]);
      float b = blue(blk.pixels[loc]);
      pixels[loc] = color(r, g, b);
    }
  }
  System.out.println(i);
  updatePixels();
}
The main problem is within my if statement; I'm not sure how to approach it logically.
I'm unsure where exactly this is going, but I can help you find the white pixels. In my test I counted 7457 "white" pixels (then I turned them red so you can see where they are and adjust the threshold if you want to get more or fewer of them).
Of course, this is just a proof of concept which you should be able to adapt to your needs.
PImage blk;

void setup() {
  size(640, 480);
  blk = loadImage("img.png");
  blk.loadPixels();
  int whitePixelsCount = 0;
  // I'm doing this in the 'setup()' method because I don't need to do it 60 times per second.
  // Once it's done once I can just use the image as modified, unless you want several
  // different versions (which you can calculate once anyway, then store in different PImages).
  for (int i = 0; i < blk.width * blk.height; i++) {
    float r = red(blk.pixels[i]);
    float g = green(blk.pixels[i]);
    float b = blue(blk.pixels[i]);
    // In RGB, the brightness of each color is represented by its intensity.
    // So here I'm checking the "average intensity" of the color to see how bright it is,
    // and I compare it to 100 since 255 is the max and I wanted this simple, but you can
    // play with this threshold as much as you like.
    if ((r+g+b)/3 > 100) {
      whitePixelsCount++;
      // Here I'm making those pixels red so you can see where they are.
      // It's easier to adjust the threshold if you can see what you're doing.
      blk.pixels[i] = color(255, 0, 0);
    }
  }
  println(whitePixelsCount);
  blk.updatePixels();  // update the modified PImage (not the sketch's own pixels[])
}

void draw() {
  image(blk, 0, 0);
}
In short (you'll read this in the comments too), we count the pixels according to a threshold we can adjust. To make things more obvious for you, I colored the "white" pixels red. You can lower or raise the threshold according to what you see this way, and once you know what you want you can get rid of the color.
There is a difficulty here, which is that the image isn't "black and white" but more greyscale - which is totally normal, but makes things harder for what you seem to be trying to do. You'll probably have to tinker a lot to get to the exact ratio which interests you. It could help a lot if you edited the original image in GIMP or another image editor which lets you adjust contrast and brightness. It's kind of cheating, but if it doesn't work right off the bat, this strategy could save you some work.
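If you'd rather keep everything in the sketch, a rough in-sketch alternative (just a sketch; the 0.4 cutoff is an arbitrary value to tune) is to binarize the image with Processing's THRESHOLD filter before counting, so the loop only ever sees pure black or pure white:
PImage prepared = blk.get();       // work on a copy so the original stays untouched
prepared.filter(THRESHOLD, 0.4);   // pixels brighter than 40% become white, the rest black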
Have fun!

Processing: How to add audio into a webcam effect?

How can I add a song to this code using Processing, and synchronize it with a PIR sensor in Arduino?
import processing.video.*;
import ddf.minim.*;
import ddf.minim.AudioPlayer;

// Size of each cell in the grid
int cellSize = 20;
// Number of columns and rows in our system
int cols, rows;
// Variable for capture device
Capture video;

Minim minim;
AudioPlayer song;

void setup() {
  size(1280, 720);
  frameRate(30);
  cols = width / cellSize;
  rows = height / cellSize;
  colorMode(RGB, 255, 255, 255, 100);
  // This is the default video input, see the GettingStartedCapture
  // example if it creates an error
  video = new Capture(this, width, height);
  // Start capturing the images from the camera
  video.start();
  background(0);
}

{
  // we pass this to Minim so that it can load files from the data directory
  minim = new Minim(this);
  // loadFile will look in all the same places as loadImage does.
  // this means you can find files that are in the data folder and the
  // sketch folder. you can also pass an absolute path, or a URL.
  song = minim.loadFile("untitled.wav");
}

void draw() {
  if (video.available()) {
    video.read();
    video.loadPixels();
    // Begin loop for columns
    for (int i = 0; i < cols; i++) {
      // Begin loop for rows
      for (int j = 0; j < rows; j++) {
        // Where are we, pixel-wise?
        int x = i*cellSize;
        int y = j*cellSize;
        int loc = (video.width - x - 1) + y*video.width; // Reversing x to mirror the image
        float r = red(video.pixels[loc]);
        float g = green(video.pixels[loc]);
        float b = blue(video.pixels[loc]);
        // Make a new color with an alpha component
        color c = color(r, g, b, 75);
        // Code for drawing a single rect
        // Using translate in order for rotation to work properly
        pushMatrix();
        translate(x+cellSize/2, y+cellSize/2);
        // Rotation formula based on brightness
        rotate((2 * PI * brightness(c) / 255.0));
        rectMode(CENTER);
        fill(c);
        noStroke();
        // Rects are larger than the cell for some overlap
        rect(0, 0, cellSize+6, cellSize+6);
        popMatrix();
      }
    }
  }
}
I am interested in detecting movement to activate or deactivate this feature.
Please, can you help me?
This is the error that I got:
The sketch path is not set.
==== JavaSound Minim Error ====
==== java.lang.reflect.InvocationTargetException
=== Minim Error ===
=== Couldn't load the file untitled.wav
Stack Overflow isn't really designed for general "how do I do this" type questions. It's for specific "I tried X, expected Y, but got Z instead" type questions. But I'll try to help in a general sense:
You need to break your problem down into smaller pieces and then take those pieces on one at a time. Get a simple example working. If you're asking about audio, then forget about the webcam for a second. Create a simple sketch that just plays a sound. Separately from that, create a simple sketch that just gets a webcam working. When you have those working perfectly, then you can think about combining them. But work your way forward in small steps. Write down exactly what you want to happen, in English, and that will be an algorithm that you can think about implementing with code.
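For example, a minimal audio-only sketch (a sketch under the assumption that "untitled.wav" sits in the sketch's data folder) might look like this; once it plays on its own, you can fold it back into the webcam code:
import ddf.minim.*;

Minim minim;
AudioPlayer song;

void setup() {
  size(200, 200);
  // create Minim and load the file inside setup(), not in a stray block outside it,
  // which is what typically triggers the "sketch path is not set" error
  minim = new Minim(this);
  // loadFile looks in the data folder, the sketch folder, absolute paths and URLs
  song = minim.loadFile("untitled.wav");
  song.play();
}

void draw() {
  background(0);
}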
Then if you get stuck, you can post a more specific question along with a MCVE. Good luck.

Getting two consecutive frames from camera in Processing

I've been trying a couple of ways to achieve this using Processing, but each time the frames I get aren't perfectly consecutive. Would anyone know the "right way" of doing this?
Thanks in advance!
In theory it should be a matter of listening to captureEvent() to get a new frame, keeping track of whether the first frame was already recorded, and if so, recording the second one right afterwards.
Here's a basic commented sketch to illustrate the point (press any key to grab another pair of frames):
import processing.video.*;

Capture camera;
PImage firstFrame;
PImage secondFrame;

void setup(){
  size(1920, 480);
  camera = new Capture(this, 640, 480);
  camera.start();
}

void draw(){
  image(camera, 0, 0);
  if(firstFrame != null){
    image(firstFrame, 640, 0);
  }
  if(secondFrame != null){
    image(secondFrame, 1280, 0);
  }
}

//this is the callback from the video library when a new camera frame is available
void captureEvent(Capture c){
  //read a new frame
  c.read();
  //if the first frame wasn't recorded yet, record (copy) its pixels
  if(firstFrame == null){
    firstFrame = c.get();
  }
  //same for the second frame, but only on a later capture event (else if),
  //so the two recorded frames are truly consecutive rather than the same frame twice
  else if(secondFrame == null){
    secondFrame = c.get();
  }
}

void keyPressed(){
  //reset consecutive frames on keypress
  firstFrame = secondFrame = null;
}
In theory (as you can see in the Processing Video Library's source code), captureEvent is fired only when a new camera sample is ready.
In practice, you will find that two consecutive frames may look identical (even though they are a split second apart in time), differing only by noise, as you pointed out in your comments.
It feels like what you're after is a frame that is consecutive, but different enough from the previous. If that's the case you can have a play with the FrameDifferencing example (Processing > Examples > Libraries > Video > Capture > FrameDifferencing)
Here's a modified version of the above sketch, using Golan Levin's FrameDifferencing code to only grab a second frame if it's different by a little bit:
import processing.video.*;

Capture camera;
PImage firstFrame;
PImage secondFrame;
PImage diff;

void setup(){
  size(1920, 960);
  camera = new Capture(this, 640, 480);
  camera.start();
  diff = createImage(640, 480, RGB);
}

void draw(){
  image(camera, 0, 0);
  if(firstFrame != null){
    image(firstFrame, 640, 0);
  }
  if(secondFrame != null){
    image(secondFrame, 1280, 0);
  }
  image(diff, 0, 480);
}

//this is the callback from the video library when a new camera frame is available
void captureEvent(Capture c){
  //read a new frame
  c.read();
  //if the first frame wasn't recorded yet, record (copy) its pixels
  if(firstFrame == null){
    firstFrame = c.get();
    println("recorded first frame at", new java.util.Date());
  }
  //same for the second frame, but check that the first frame has been recorded first
  if(firstFrame != null && secondFrame == null){
    //if the difference between the first frame and the current frame is even ever so slightly off, record the second frame
    if(difference(firstFrame, camera) > 100){
      secondFrame = c.get();
    }
  }
}

int difference(PImage first, PImage second){
  final int numPixels = 640*480;
  camera.loadPixels();
  int movementSum = 0; // Amount of movement in the frame
  for (int i = 0; i < numPixels; i++) { // For each pixel in the video frame...
    color currColor = first.pixels[i];
    color prevColor = second.pixels[i];
    // Extract the red, green, and blue components from current pixel
    int currR = (currColor >> 16) & 0xFF; // Like red(), but faster
    int currG = (currColor >> 8) & 0xFF;
    int currB = currColor & 0xFF;
    // Extract red, green, and blue components from previous pixel
    int prevR = (prevColor >> 16) & 0xFF;
    int prevG = (prevColor >> 8) & 0xFF;
    int prevB = prevColor & 0xFF;
    // Compute the difference of the red, green, and blue values
    int diffR = abs(currR - prevR);
    int diffG = abs(currG - prevG);
    int diffB = abs(currB - prevB);
    // Render the difference image to the screen
    diff.pixels[i] = color(diffR, diffG, diffB);
    // Add these differences to the running tally
    movementSum += diffR + diffG + diffB;
  }
  diff.updatePixels();
  return movementSum;
}

void keyPressed(){
  //reset consecutive frames on keypress
  firstFrame = secondFrame = null;
}
In the example above, 100 is an arbitrary value.
The maximum would be 255*3*640*480 (0-255 per channel * number of channels * width * height).
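If you want the threshold to be resolution-independent, one option (a small sketch of the idea; differenceRatio is a hypothetical helper, not part of the code above) is to divide the raw sum by that maximum and compare against a 0..1 ratio instead:
// hypothetical helper: turn the raw movement sum into a 0..1 ratio of the maximum possible change
float differenceRatio(int movementSum, int w, int h) {
  return movementSum / (255.0 * 3 * w * h);
}

// e.g. record the second frame only if at least 1% of the maximum possible change occurred
// if (differenceRatio(difference(firstFrame, camera), 640, 480) > 0.01) { ... }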
I looked at the KetaiCamera sources and issue reports, especially this one. Unfortunately this code is not built to provide real time camera frames.
You can try to take the AndroidCapture project as a starter, and modify the native Android class to achieve your goal.

Extract Pixel Data in Processing

Using the capture video in processing, I want to understand how to set up a small section of the camera feed that the camera will constantly scan. Within that defined section, I want the camera to look for a change in brightness (i.e the brightness now becomes dark.) If the brightness changes I just want it to return 'shadow detected.' Can anyone help me get started? I am very new to this language.
You can easily get a small section of the camera (or any image) using PImage's get() method, to which you pass the coordinates describing your section rectangle (x, y, width, height).
This is also known as a region of interest (ROI) in computer vision.
Once you retrieve this region, you can process it.
Here's a minimal example showing how to get the ROI and process it (in this case simply applying a threshold based on the mouse position):
import processing.video.*;

Capture cam;
int w = 320;
int h = 240;
int np = w*h;

int roiX = 80;
int roiY = 60;
int roiW = 160;
int roiH = 120;
PImage roi;

void setup(){
  size(w, h);
  cam = new Capture(this, w, h);
  cam.start();
}

void draw(){
  image(cam, 0, 0);
  if(roi != null){
    //process ROI
    // roi.filter(GRAY);
    roi.filter(THRESHOLD, (float)mouseX/width);
    //display output
    image(roi, roiX, roiY);
  }
}

void captureEvent(Capture c){
  c.read();
  roi = c.get(roiX, roiY, roiW, roiH);
}
You can get the brightness of a pixel using the brightness() function.
This means you can get the average brightness of your ROI by adding the brightness levels of each pixel, then dividing the result by the total number of pixels:
import processing.video.*;

Capture cam;
int w = 320;
int h = 240;
int np = w*h;

int roiX = 80;
int roiY = 60;
int roiW = 160;
int roiH = 120;
PImage roi;

void setup(){
  size(w, h);
  fill(127);
  cam = new Capture(this, w, h);
  cam.start();
}

void draw(){
  image(cam, 0, 0);
  if(roi != null){
    //process ROI
    // roi.filter(GRAY);
    roi.filter(THRESHOLD, (float)mouseX/width);
    //display output
    image(roi, roiX, roiY);
    text("ROI brightness:" + brightness(roi), 10, 15);
  }
}

void captureEvent(Capture c){
  c.read();
  roi = c.get(roiX, roiY, roiW, roiH);
}

float brightness(PImage in){
  float brightness = 0.0;
  int numPixels = in.pixels.length;
  for(int i = 0; i < numPixels; i++) brightness += brightness(in.pixels[i]);
  return brightness/numPixels;
}
If you've set your ROI to cover the bright area, you should see the average brightness go down as the shadow appears. Simply using a threshold value in a condition should allow you to act on it:
import processing.video.*;

Capture cam;
int w = 320;
int h = 240;
int np = w*h;

int roiX = 80;
int roiY = 60;
int roiW = 160;
int roiH = 120;
PImage roi;

float brightness = 0.0;
float shadowThresh = 127.0;

void setup(){
  size(w, h);
  fill(127);
  cam = new Capture(this, w, h);
  cam.start();
}

void draw(){
  image(cam, 0, 0);
  if(roi != null){
    //process ROI
    // roi.filter(GRAY);
    roi.filter(THRESHOLD, (float)mouseX/width);
    brightness = brightness(roi);
    if(brightness < shadowThresh) println("shadow detected");
    //display output
    image(roi, roiX, roiY);
    text("ROI brightness:" + brightness, 10, 15);
  }
}

void captureEvent(Capture c){
  c.read();
  roi = c.get(roiX, roiY, roiW, roiH);
}

float brightness(PImage in){
  float brightness = 0.0;
  int numPixels = in.pixels.length;
  for(int i = 0; i < numPixels; i++) brightness += brightness(in.pixels[i]);
  return brightness/numPixels;
}
Hopefully these examples are easy to read and understand.
Note that these aren't as fast as they could be.
Be sure to also check out the video examples that come with Processing (Examples > Libraries > video > Capture), especially these: BrightnessThresholding,BrightnessTracking
If you want to learn more about techniques like these you should look into computer vision and the OpenCV library. There is a very nice OpenCV Processing library which you can now easily install via Sketch > Import Library... > Add Library... and select OpenCV for Processing. It also comes with examples on using brightness.
This covers the pixel manipulation side, but another important aspect of doing this sort of development is setup. It's crucial to have a reliable setup: it will make your life easier. What I mean by that is, in your case:
having control over the camera: being able to control auto white balance/brightness/etc. as automatic adjustments may throw off your values.
having control over the scene: making sure you reduce the risk of accidental lights messing with your tracking, or of something bumping the camera or the object you're tracking.
Assuming the camera data you are analysing is a PImage, you can apply filters to the data to get it into a Black/White or grey scale form. The docs on PImage Filter modes: https://processing.org/reference/filter_.html should be useful.
You will probably have to do a pixel analysis - there may be a library to help here, but you can get an array of pixels from the filtered PImage, loop through it and check the values against your baseline values to see if they are brighter or darker. If they are greyscale on the 0-255 scale, you can tell they are lighter if the number is higher than the baseline, or darker if the number is lower.
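As a rough sketch of that idea (assumptions: the default camera, the first frame after startup serves as the baseline, and the tolerance of 20 and the 25% trigger are arbitrary values to tune):
import processing.video.*;

Capture cam;
float[] baseline;   // per-pixel brightness of the reference frame

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);
  cam.start();
}

void captureEvent(Capture c) {
  c.read();
  if (baseline == null) {
    // store the first frame we receive as the grayscale baseline
    PImage first = c.get();
    first.filter(GRAY);
    first.loadPixels();
    baseline = new float[first.pixels.length];
    for (int i = 0; i < baseline.length; i++) baseline[i] = brightness(first.pixels[i]);
  }
}

void draw() {
  image(cam, 0, 0);
  if (baseline == null) return;     // no reference frame yet
  PImage frame = cam.get();
  frame.filter(GRAY);               // grayscale, so each pixel is a single 0-255 value
  frame.loadPixels();
  if (frame.pixels.length != baseline.length) return;  // safety check
  int darker = 0;
  for (int i = 0; i < frame.pixels.length; i++) {
    if (brightness(frame.pixels[i]) < baseline[i] - 20) darker++;  // 20 = per-pixel tolerance
  }
  // report if more than a quarter of the pixels are darker than the baseline
  if (darker > frame.pixels.length / 4) println("shadow detected");
}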
