How to play a movie inside a user's silhouette using Kinect - Processing

I have created a Kinect program in the Processing IDE using the SimpleOpenNI library, in which a simple image is displayed inside the user's silhouette.
This is the code:
depthValues = kinect.depthMap();
userMap = kinect.userMap();
for (int i = 0; i < kinect.depthHeight(); i++) {
  for (int j = 0; j < kinect.depthWidth(); j++) {
    int index = j + i * kinect.depthWidth();
    if (userMap[index] != 0) {
      c = movie.pixels[index];
      // PImage m = movie.get();
      userImage.pixels[index] = color(c);
    }
    else {
      userImage.pixels[index] = color(0);
    }
  }
}
But how can I display the frames of a playing movie inside the user's silhouette?
Can anyone suggest how to do this?
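One approach is to keep the same masking loop but take the pixels from the Movie object in Processing's video library, which refreshes its pixels[] array each time a new frame is read. Below is a minimal sketch of that idea; the file name "movie.mp4", the enableUser() call (older SimpleOpenNI versions take a skeleton-profile argument), and the coordinate mapping are assumptions to adapt to your own setup:

import SimpleOpenNI.*;
import processing.video.*;

SimpleOpenNI kinect;
Movie movie;
PImage userImage;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableUser();                      // needed so userMap() is filled
  movie = new Movie(this, "movie.mp4");     // hypothetical file in the data folder
  movie.loop();
  userImage = createImage(640, 480, RGB);   // matches the default depth resolution
}

void movieEvent(Movie m) {
  m.read();                                 // pull in the newest movie frame
}

void draw() {
  kinect.update();
  if (movie.width == 0) return;             // no movie frame decoded yet

  int[] userMap = kinect.userMap();
  movie.loadPixels();
  userImage.loadPixels();
  for (int i = 0; i < kinect.depthHeight(); i++) {
    for (int j = 0; j < kinect.depthWidth(); j++) {
      int index = j + i * kinect.depthWidth();
      if (userMap[index] != 0) {
        // scale the depth-map coordinate into the movie frame,
        // since the movie is usually not the same size as the depth map
        int mx = (int) map(j, 0, kinect.depthWidth(),  0, movie.width  - 1);
        int my = (int) map(i, 0, kinect.depthHeight(), 0, movie.height - 1);
        userImage.pixels[index] = movie.pixels[mx + my * movie.width];
      } else {
        userImage.pixels[index] = color(0);
      }
    }
  }
  userImage.updatePixels();
  image(userImage, 0, 0);
}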

Related

Randomize image when mouse pressed - Processing

I'm trying to make a grid where each individual cell contains one random image from my data folder.
So far, I've accomplished having a different image in every cell, but it doesn't randomize:
instead of randomly picking from the 600+ images in the folder, it places the images in order, 1 to 27.
Apart from that, I want it to re-randomize every time I click the mouse, instead of only when the sketch is closed and run again. The code:
PImage img[];
int nPics;
int w;
int h;
int rand;

void setup() {
  size(1500, 500);
  nPics = 27;
  img = new PImage[nPics];
  w = width/9;
  h = height/3;
  for (int i = 0; i < nPics; i++) {
    img[i] = loadImage("img_" + nf(i, 3) + ".jpg");
    imageMode(CORNERS);
  }
  //rand = int(random(0,687));
  //img[0]=loadImage("img_" + nf(rand,3)+ ".jpg");
}

void draw() {
  background(0);
  for (int i = 0; i < nPics; i = i+3) {
    int col = i/3;
    for (int row = 0; row < 3; row++)
      image(img[i+row], col*w, row*h, (col+1)*w, (row+1)*h);
  }
}
When you are loading your images, you are using i instead of rand. In order to randomize the images when you click the mouse, you can use the mousePressed() function to reload different images into your array.
This should work:
PImage img[];
int nPics;
int w, h;

void loadImages() {
  for (int i = 0; i < nPics; i++) {
    img[i] = loadImage("img_" + nf(int(random(0, 687)), 3) + ".jpg");
    imageMode(CORNERS);
  }
}

void setup() {
  size(1500, 500);
  nPics = 27;
  img = new PImage[nPics];
  w = width/9;
  h = height/3;
  loadImages();
}

void mousePressed() {
  loadImages();
}

void draw() {
  background(0);
  for (int i = 0; i < nPics; i = i+3) {
    int col = i/3;
    for (int row = 0; row < 3; row++)
      image(img[i+row], col*w, row*h, (col+1)*w, (row+1)*h);
  }
}

Xamarin.Forms iOS flipping from portrait to landscape not working (iPhone only)

I have a problem with resizing a Grid on iPhone.
I subscribed to the SizeChanged event, which is raised when I rotate the device.
Here are the methods I'm using to change the grid size:
private void MainMenu_SizeChanged(object sender, EventArgs e)
{
    if (Width > Height)
    {
        ImageSunLogo.Margin = 0;
        GridMenuItems = ChangeGridOrientation(GridMenuItems, 2, 3);
    }
    if (Height > Width)
    {
        ImageSunLogo.Margin = 20;
        GridMenuItems = ChangeGridOrientation(GridMenuItems, 3, 2);
    }
}

private Grid ChangeGridOrientation(Grid gridParent, int nOfRows, int nOfColumns)
{
    var resultGrid = gridParent;
    var parentChilds = gridParent.Children.ToList();
    ClearGridRowsColsChilds(ref resultGrid);

    for (int i = 0; i < nOfRows; i++)
    {
        resultGrid.RowDefinitions.Add(new RowDefinition());
    }
    for (int i = 0; i < nOfColumns; i++)
    {
        resultGrid.ColumnDefinitions.Add(new ColumnDefinition());
    }

    int currentChildIndex = 0;
    for (int row = 0; row < nOfRows; row++)
    {
        for (int col = 0; col < nOfColumns; col++)
        {
            resultGrid.Children.Add(parentChilds[currentChildIndex], col, row);
            currentChildIndex++;
        }
    }
    return resultGrid;
}

private void ClearGridRowsColsChilds(ref Grid grid)
{
    GridMenuItems.Children.Clear();
    GridMenuItems.RowDefinitions.Clear();
    GridMenuItems.ColumnDefinitions.Clear();
}
This code works perfectly on Android devices and on iPad, but on iPhone, when changing orientation from portrait to landscape by rotating the phone to the left side (rotating to the right works normally), the app doesn't raise this event and the grid does not flip.
Has anyone experienced something like that?
If you want to use landscape, do not forget to check the Device Orientation settings in Info.plist.
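In a standard iOS Info.plist, those Device Orientation checkboxes map to the UISupportedInterfaceOrientations array; an entry allowing portrait and both landscape directions would look roughly like the snippet below (verify it against your project's Info.plist editor):

<key>UISupportedInterfaceOrientations</key>
<array>
    <string>UIInterfaceOrientationPortrait</string>
    <string>UIInterfaceOrientationLandscapeLeft</string>
    <string>UIInterfaceOrientationLandscapeRight</string>
</array>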

Unknown Member Bitmap.SetPixel(x, y, color) in Xamarin

I'm developing a Xamarin application with a Native Shared project. This is my inversion filter method for bitmaps:
using System;
using Android.Graphics;

public static Bitmap Inversion (Bitmap bmp) {
    for (int x = 0; x < bmp.Width; x++)
    {
        for (int y = 0; y < bmp.Height; y++)
        {
            var pixel = new Color(bmp.GetPixel(x, y));
            bmp.SetPixel(x, y, Color.Rgb(255 - pixel.R, 255 - pixel.G, 255 - pixel.B));
        }
    }
    return bmp;
}
I'm getting a Java.Lang.IllegalStateException when applying the filter to a bitmap, and I have no idea how to fix it; it occurs on the SetPixel call.
I gather that this is some Xamarin error about the .SetPixel() method not being usable, but I don't know why it is occurring.
Please help.
Your Bitmap is immutable, and that is why you are getting the IllegalStateException. You can make a mutable copy of it and then use SetPixel on the copy.
public static Bitmap Inversion(Bitmap bmp)
{
    var mutableBitmap = Bitmap.CreateBitmap(bmp.Width, bmp.Height, bmp.GetConfig());
    for (int x = 0; x < bmp.Width; x++)
    {
        for (int y = 0; y < bmp.Height; y++)
        {
            var pixel = new Color(bmp.GetPixel(x, y));
            var color = Color.Rgb(255 - pixel.R, 255 - pixel.G, 255 - pixel.B);
            mutableBitmap.SetPixel(x, y, color);
        }
    }
    return mutableBitmap;
}

Processing - deprecated OpenKinect library

I am trying to replicate a Kinect project for this music video, but the code is seriously outdated.
After weeks of searching, I have not found anything about this.
I would be greatly thankful to anyone who points out to me what is deprecated in the following code
(I'm using Processing 3):
import org.openkinect.*;
import org.openkinect.processing.*;
import java.io.*;

// Kinect Library object
Kinect kinect;

float a = 0;

// Size of kinect image
int w = 640;
int h = 480;

// writing state indicator
boolean write = false;

// treshold filter initial value
int fltValue = 950;

// "recording" object. each vector element holds a coordinate map vector
Vector <Object> recording = new Vector<Object>();

// We'll use a lookup table so that we don't have to repeat the math over and over
float[] depthLookUp = new float[2048];

void setup() {
  size(800, 600, P3D);
  kinect = new Kinect(this);
  kinect.start();
  kinect.enableDepth(true);

  // We don't need the grayscale image in this example
  // so this makes it more efficient
  kinect.processDepthImage(false);

  // Lookup table for all possible depth values (0 - 2047)
  for (int i = 0; i < depthLookUp.length; i++) {
    depthLookUp[i] = rawDepthToMeters(i);
  }
}

void draw() {
  background(0);
  fill(255);
  textMode(SCREEN);
  text("Kinect FR: " + (int)kinect.getDepthFPS() + "\nProcessing FR: " + (int)frameRate, 10, 16);

  // Get the raw depth as array of integers
  int[] depth = kinect.getRawDepth();

  // We're just going to calculate and draw every 4th pixel (equivalent of 160x120)
  int skip = 4;

  // Translate and rotate
  translate(width/2, height/2, -50);
  rotateY(a);

  //noStroke();
  //lights();

  int index = 0;
  PVector[] frame = new PVector[19200];

  for (int x = 0; x < w; x += skip) {
    for (int y = 0; y < h; y += skip) {
      int offset = x + y*w;

      // Convert kinect data to world xyz coordinate
      int rawDepth = depth[offset];
      boolean flt = true;
      PVector v = depthToWorld(x, y, rawDepth);
      if (flt && rawDepth > fltValue)
      {
        v = depthToWorld(x, y, 2047);
      }

      frame[index] = v;
      index++;
      stroke(map(rawDepth, 0, 2048, 0, 256));
      pushMatrix();
      // Scale up by 200
      float factor = 400;
      translate(v.x*factor, v.y*factor, factor-v.z*factor);
      //sphere(1);
      point(0, 0);
      //line (0,0,1,1);
      popMatrix();
    }
  }

  if (write == true) {
    recording.add(frame);
  }

  // Rotate
  //a += 0.015f;
}

// These functions come from: http://graphics.stanford.edu/~mdfisher/Kinect.html
float rawDepthToMeters(int depthValue) {
  if (depthValue < 2047) {
    return (float)(1.0 / ((double)(depthValue) * -0.0030711016 + 3.3309495161));
  }
  return 0.0f;
}

PVector depthToWorld(int x, int y, int depthValue) {
  final double fx_d = 1.0 / 5.9421434211923247e+02;
  final double fy_d = 1.0 / 5.9104053696870778e+02;
  final double cx_d = 3.3930780975300314e+02;
  final double cy_d = 2.4273913761751615e+02;

  PVector result = new PVector();
  double depth = depthLookUp[depthValue]; //rawDepthToMeters(depthValue);
  result.x = (float)((x - cx_d) * depth * fx_d);
  result.y = (float)((y - cy_d) * depth * fy_d);
  result.z = (float)(depth);
  return result;
}

void stop() {
  kinect.quit();
  super.stop();
}

int currentFile = 0;

void saveFile() {
}

void keyPressed() { // Press a key to save the data
  if (key == '1')
  {
    fltValue += 50;
    println("fltValue: " + fltValue);
  }
  else if (key == '2')
  {
    fltValue -= 50;
    println("fltValue: " + fltValue);
  }
  else if (key == '4') {
    if (write == true) {
      write = false;
      println("recorded " + recording.size() + " frames.");
      // saveFile();
      // save
      Enumeration e = recording.elements();
      println("Stopped Recording " + currentFile);
      int i = 0;
      while (e.hasMoreElements()) {
        // Create one directory
        boolean success = (new File("out"+currentFile)).mkdir();
        PrintWriter output = createWriter("out"+currentFile+"/frame" + i++ + ".txt");
        PVector[] frame = (PVector[]) e.nextElement();

        for (int j = 0; j < frame.length; j++) {
          output.println(j + ", " + frame[j].x + ", " + frame[j].y + ", " + frame[j].z);
        }
        output.flush(); // Write the remaining data
        output.close();
      }
      currentFile++;
    }
  }
  else if (key == '3') {
    println("Started Recording " + currentFile);
    recording.clear();
    write = true;
  }
}
If the code works, then I wouldn't worry too much about it. Deprecated can just mean that a newer version is available, not that the older version stopped working.
However, if the code does not work, then updating to a newer library is probably a good idea anyway. Check out the library section of the Processing homepage, which lists several Kinect libraries.
In fact, one of those libraries is the updated version of the old library you're using: Open Kinect for Processing.
Edit: It looks like both of the errors you mentioned are due to missing import statements. You need to import both Vector and Enumeration to use them:
import java.util.Vector;
import java.util.Enumeration;
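If you do switch to the updated Open Kinect for Processing library, the setup calls change as well. Here is a minimal sketch of how the start of the sketch might look under that library; the method names initDepth() and getRawDepth() are my assumption based on the library's examples, so verify them against the examples bundled with the library. Note also that textMode(SCREEN), used in the original draw(), was removed in Processing 2.0 and later.

import org.openkinect.processing.*;

Kinect kinect;

void setup() {
  size(800, 600, P3D);
  kinect = new Kinect(this);
  kinect.initDepth();   // replaces start() / enableDepth(true) / processDepthImage(false)
}

void draw() {
  background(0);
  int[] depth = kinect.getRawDepth();   // raw depth values, as in the old library
  // ...the rest of the point-cloud drawing can stay largely the same
}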

How to make waveform rendering more interesting?

I wrote a waveform renderer that takes an audio file and creates something like this:
The logic is pretty simple. I calculate the number of audio samples required for each pixel, read those samples, average them and draw a column of pixels according to the resulting value.
Typically, I will render a whole song onto around 600-800 pixels, so the wave is pretty compressed. Unfortunately, this usually results in unappealing visuals, as almost the entire song is rendered at almost the same height. There is no variation.
Interestingly, if you look at the waveforms on SoundCloud, almost none of them are as boring as my results. They all have some variation. What could be the trick here? I don't think they just add random noise.
I don't think SoundCloud is doing anything particularly special. There are plenty of songs I see on their front page that are very flat. It has more to do with the way detail is perceived and what the overall dynamics of the song are like. The main difference is that SoundCloud is drawing absolute value. (The negative side of the image is just a mirror.)
For demonstration, here is a basic white noise plot with straight lines:
Now, typically a fill is used to make the overall outline easier to see. This already does a lot for the appearance:
Larger waveforms ("zoomed out" in particular) typically use a mirror effect because the dynamics become more pronounced:
Bars are another way to visualize and can give an illusion of detail:
A pseudo routine for a typical waveform graphic (average of abs and mirror) might look like this:
for (each pixel in width of image) {
    var sum = 0
    for (each sample in subset contained within pixel) {
        sum = sum + abs(sample)
    }
    var avg = sum / length of subset
    draw line(avg to -avg)
}
This effectively compresses the time axis by taking the average of the absolute values within each window. (RMS of the window could be used instead, but the results are almost the same.) Now the waveform shows the overall dynamics.
That is not too different from what you are already doing, just with abs, mirror and fill. For boxes like the ones SoundCloud uses, you would draw rectangles instead of lines.
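For reference, here is a minimal sketch of the same per-column reduction using RMS instead of the average of absolute values; the helper name and the assumption that samples are floats in the range -1..1 are mine:

// Hypothetical helper: one RMS value per pixel column (samples assumed in -1..1)
static float[] rmsColumns(float[] samples, int imageWidth) {
    float[] columns = new float[imageWidth];
    int window = samples.length / imageWidth;            // samples per pixel column
    for (int px = 0; px < imageWidth; px++) {
        double sumSquares = 0;
        for (int k = 0; k < window; k++) {
            float s = samples[px * window + k];
            sumSquares += s * s;                         // accumulate squared samples
        }
        columns[px] = (float) Math.sqrt(sumSquares / window);  // RMS of this window
    }
    return columns;                                      // draw each column mirrored: +value to -value
}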
Just as a bonus, here is an MCVE written in Java to generate a waveform with boxes as described. (Sorry if Java is not your language.) The actual drawing code is near the top. This program also normalizes, i.e., the waveform is "stretched" to the height of the image.
This simple output is the same as the above pseudo routine:
This output with boxes is very similar to SoundCloud:
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
import java.awt.image.*;
import java.io.*;
import javax.sound.sampled.*;

public class BoxWaveform {
    static int boxWidth = 4;
    static Dimension size = new Dimension(boxWidth == 1 ? 512 : 513, 97);

    static BufferedImage img;
    static JPanel view;

    // draw the image
    static void drawImage(float[] samples) {
        Graphics2D g2d = img.createGraphics();

        int numSubsets = size.width / boxWidth;
        int subsetLength = samples.length / numSubsets;

        float[] subsets = new float[numSubsets];

        // find average(abs) of each box subset
        int s = 0;
        for(int i = 0; i < subsets.length; i++) {
            double sum = 0;
            for(int k = 0; k < subsetLength; k++) {
                sum += Math.abs(samples[s++]);
            }
            subsets[i] = (float)(sum / subsetLength);
        }

        // find the peak so the waveform can be normalized
        // to the height of the image
        float normal = 0;
        for(float sample : subsets) {
            if(sample > normal)
                normal = sample;
        }

        // normalize and scale
        normal = 32768.0f / normal;
        for(int i = 0; i < subsets.length; i++) {
            subsets[i] *= normal;
            subsets[i] = (subsets[i] / 32768.0f) * (size.height / 2);
        }

        g2d.setColor(Color.GRAY);

        // convert to image coords and do actual drawing
        for(int i = 0; i < subsets.length; i++) {
            int sample = (int)subsets[i];

            int posY = (size.height / 2) - sample;
            int negY = (size.height / 2) + sample;

            int x = i * boxWidth;

            if(boxWidth == 1) {
                g2d.drawLine(x, posY, x, negY);
            } else {
                g2d.setColor(Color.GRAY);
                g2d.fillRect(x + 1, posY + 1, boxWidth - 1, negY - posY - 1);
                g2d.setColor(Color.DARK_GRAY);
                g2d.drawRect(x, posY, boxWidth, negY - posY);
            }
        }

        g2d.dispose();
        view.repaint();
        view.requestFocus();
    }

    // handle most WAV and AIFF files
    static void loadImage() {
        JFileChooser chooser = new JFileChooser();
        int val = chooser.showOpenDialog(null);
        if(val != JFileChooser.APPROVE_OPTION) {
            return;
        }

        File file = chooser.getSelectedFile();
        float[] samples;

        try {
            AudioInputStream in = AudioSystem.getAudioInputStream(file);
            AudioFormat fmt = in.getFormat();

            if(fmt.getEncoding() != AudioFormat.Encoding.PCM_SIGNED) {
                throw new UnsupportedAudioFileException("unsigned");
            }

            boolean big = fmt.isBigEndian();
            int chans = fmt.getChannels();
            int bits = fmt.getSampleSizeInBits();
            int bytes = bits + 7 >> 3;

            int frameLength = (int)in.getFrameLength();
            int bufferLength = chans * bytes * 1024;

            samples = new float[frameLength];
            byte[] buf = new byte[bufferLength];

            int i = 0;
            int bRead;
            while((bRead = in.read(buf)) > -1) {

                for(int b = 0; b < bRead;) {
                    double sum = 0;

                    // (sums to mono if multiple channels)
                    for(int c = 0; c < chans; c++) {
                        if(bytes == 1) {
                            sum += buf[b++] << 8;

                        } else {
                            int sample = 0;

                            // (quantizes to 16-bit)
                            if(big) {
                                sample |= (buf[b++] & 0xFF) << 8;
                                sample |= (buf[b++] & 0xFF);
                                b += bytes - 2;
                            } else {
                                b += bytes - 2;
                                sample |= (buf[b++] & 0xFF);
                                sample |= (buf[b++] & 0xFF) << 8;
                            }

                            final int sign = 1 << 15;
                            final int mask = -1 << 16;
                            if((sample & sign) == sign) {
                                sample |= mask;
                            }

                            sum += sample;
                        }
                    }

                    samples[i++] = (float)(sum / chans);
                }
            }
        } catch(Exception e) {
            problem(e);
            return;
        }

        if(img == null) {
            img = new BufferedImage(size.width, size.height, BufferedImage.TYPE_INT_ARGB);
        }

        drawImage(samples);
    }

    static void problem(Object msg) {
        JOptionPane.showMessageDialog(null, String.valueOf(msg));
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            @Override
            public void run() {
                JFrame frame = new JFrame("Box Waveform");
                JPanel content = new JPanel(new BorderLayout());
                frame.setContentPane(content);

                JButton load = new JButton("Load");
                load.addActionListener(new ActionListener() {
                    @Override
                    public void actionPerformed(ActionEvent ae) {
                        loadImage();
                    }
                });

                view = new JPanel() {
                    @Override
                    protected void paintComponent(Graphics g) {
                        super.paintComponent(g);

                        if(img != null) {
                            g.drawImage(img, 1, 1, img.getWidth(), img.getHeight(), null);
                        }
                    }
                };

                view.setBackground(Color.WHITE);
                view.setPreferredSize(new Dimension(size.width + 2, size.height + 2));

                content.add(view, BorderLayout.CENTER);
                content.add(load, BorderLayout.SOUTH);

                frame.pack();
                frame.setResizable(false);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setLocationRelativeTo(null);
                frame.setVisible(true);
            }
        });
    }
}
Note: for the sake of simplicity, this program loads the entire audio file into memory. Some JVMs may throw OutOfMemoryError. To correct this, run with an increased heap size (for example, via the -Xmx JVM option).
