I am working with p5.js for class and I am quite lost with it, as I don't understand it very well. How do I animate this image to match the sound? I tried frequency analysis, but I don't know how to apply it to the image. I want to animate the image so that it beats like a heart, in time with the BPM of the sound I put in the sketch.
here is the sketch + image + sound
https://editor.p5js.org/FilipaRita/sketches/cUG6qNhIR
Actually, finding the BPM for an entire piece of music would be a bit complicated (see this sound.stackexchange.com question), but if you just want to detect beats in real time, I think you can hack something together that will work. Here is a visualization that I think will help you understand the data returned by fft.analyze():
const avgWindow = 20;
const threshold = 0.4;
let song;
let fft;
let beat;
let lastPeak;

function preload() {
  song = loadSound("https://www.paulwheeler.us/files/metronome.wav");
}

function setup() {
  createCanvas(400, 400);
  fft = new p5.FFT();
  song.loop();
  beat = millis();
}

function draw() {
  // Pulse white on the beat, then fade out with an inverse cube curve
  background(map(1 / pow((millis() - beat) / 1000 + 1, 3), 1, 0, 255, 100));
  drawSpectrumGraph(0, 0, width, height);
}
let i = 0;

// Graphing code adapted from https://jankozeluh.g6.cz/index.html by Jan Koželuh
function drawSpectrumGraph(left, top, w, h) {
  let spectrum = fft.analyze();
  stroke('limegreen');
  fill('darkgreen');
  strokeWeight(1);

  beginShape();
  vertex(left, top + h);
  let peak = 0;
  // Compute a running average of values to keep very localized
  // energy from triggering a beat.
  let runningAvg = 0;
  for (let i = 0; i < spectrum.length; i++) {
    vertex(
      //left + map(i, 0, spectrum.length, 0, w),
      // Distribute the spectrum values on a logarithmic scale,
      // because the higher you go in the spectrum, the larger the
      // change in frequency needed for the same perceptible
      // difference in tone.
      left + map(log(i), 0, log(spectrum.length), 0, w),
      // Spectrum values range from 0 to 255
      top + map(spectrum[i], 0, 255, h, 0)
    );
    runningAvg += spectrum[i] / avgWindow;
    if (i >= avgWindow) {
      // Subtract the value that just left the averaging window.
      runningAvg -= spectrum[i - avgWindow] / avgWindow;
    }
    if (runningAvg > peak) {
      peak = runningAvg;
    }
  }
  // Any time there is a sudden increase in peak energy, call that a beat.
  if (peak > lastPeak * (1 + threshold)) {
    // print(`tick ${++i}`);
    beat = millis();
  }
  lastPeak = peak;

  vertex(left + w, top + h);
  endShape(CLOSE);

  // This is the range of frequencies covered by the FFT.
  let nyquist = 22050;
  // Get the centroid (value in Hz).
  let centroid = fft.getCentroid();
  // The mean_freq_index calculation is for the display:
  // centroid frequency / Hz per bucket.
  let mean_freq_index = centroid / (nyquist / spectrum.length);
  stroke('red');
  // Convert the index to an x value using a logarithmic x axis.
  let cx = map(log(mean_freq_index), 0, log(spectrum.length), 0, width);
  line(cx, 0, cx, h);
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.3.1/p5.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.3.1/addons/p5.sound.min.js"></script>
Hopefully this code, with its comments, helps you understand the data returned by fft.analyze(), and you can use it as a starting point to achieve the effect you are looking for.
Disclaimer: I have experience with p5.js, but I'm not an audio expert, so there could certainly be better ways to do this. Also, while this approach works for this simple audio file, there's a good chance it would fail horribly for actual music or real-world environments.
If I were you, I would cheat and add some metadata that explicitly includes the timestamps of the beats. Beat detection becomes a much simpler problem if you shift it to a pre-processing step (maybe even doing it by hand) rather than trying to do it at runtime, since the signal processing needed to detect beats in an audio signal is non-trivial. A sketch of that approach follows.
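For example, the pre-tagged approach might look like this. This is a minimal sketch, not code from the original question: the beats array is tagged by hand, the file name is a placeholder, and the pulsing circle stands in for your image.

let song;
// Beat timestamps in seconds, tagged by hand (or by an offline tool).
let beats = [0.0, 0.5, 1.0, 1.5, 2.0];
let nextBeat = 0;
let lastBeat = -10000;

function preload() {
  song = loadSound("my-song.mp3"); // placeholder file name
}

function setup() {
  createCanvas(400, 400);
  song.loop();
}

function draw() {
  let t = song.currentTime();
  // If looping playback wrapped around, rewind the beat index.
  if (nextBeat >= beats.length && t < beats[beats.length - 1]) {
    nextBeat = 0;
  }
  // When playback passes the next tagged timestamp, register a beat.
  if (nextBeat < beats.length && t >= beats[nextBeat]) {
    lastBeat = millis();
    nextBeat++;
  }
  // Pulse: the scale decays exponentially after each beat, like a heartbeat.
  let pulse = 1 + 0.3 * exp(-(millis() - lastBeat) / 150);
  background(220);
  circle(width / 2, height / 2, 100 * pulse);
}

To animate your image instead of the circle, you could draw it with image() between push(), translate(), scale(pulse), and pop() calls.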
class Particle {
  PVector velocity, location; // PVector variables for each particle.

  Particle() { // Constructor - random location and speed for each particle.
    velocity = new PVector(random(-0.5, 0.5), random(-0.5, 0.5));
    location = new PVector(random(0, width), random(0, width));
  }

  void update() { location.add(velocity); } // Motion method.

  void edge() { // Wraparound case for particles.
    if (location.x > width) { location.x = 0; }
    else if (location.x < 0) { location.x = width; }
    if (location.y > height) { location.y = 0; }
    else if (location.y < 0) { location.y = height; }
  }

  void display(ArrayList<Particle> p) { // Display method to show lines and ellipses between particles.
    for (Particle other : p) { // For every particle in the ArrayList.
      float d = PVector.dist(location, other.location); // Get distance between any two particles.
      float a = 255 - d * 2.5; // Map variable 'a' as alpha based on distance. E.g. if distance is high (d = 90), alpha is low (a = 255 - 225 = 30).
      println("Lowest distance of any two particles = " + d); // Debug output.
      if (d < 112) { // If the distance between any two particles falls below 112.
        noStroke(); // No outline.
        fill(0, a); // Particles are coloured black, 'a' to vary alpha.
        ellipse(location.x, location.y, 8, 8); // Draw ellipse based on location of particle.
        stroke(0, a); // Lines are coloured black, 'a' to vary alpha.
        strokeWeight(0.7);
        line(location.x, location.y, other.location.x, other.location.y); // Draw line between two particles.
      }
    }
  }
}

ArrayList<Particle> particles = new ArrayList<Particle>(); // Create a new ArrayList of type Particle.

void setup() {
  size(640, 640, P2D); // Set up frame of sketch.
  particles.add(new Particle()); // Add five Particle elements into the ArrayList.
  particles.add(new Particle());
  particles.add(new Particle());
  particles.add(new Particle());
  particles.add(new Particle());
}

void draw() {
  background(255); // Set white background.
  for (Particle p : particles) { // For every 'p' of type Particle in ArrayList particles.
    p.update(); // Update location based on velocity.
    p.display(particles); // Display each particle in relation to other particles.
    p.edge(); // Wraparound if particle reaches edge of screen.
  }
}
In the above code, there are two shape objects, lines and ellipses, whose transparency is controlled by the variable 'a'.
The variable 'a', or alpha, is derived from 'd', the distance. Hence, when the objects are farther apart, the alpha value of the objects falls.
In this scenario, the alpha values of the lines change as expected, i.e. they fade with distance. However, the ellipses seem to be stuck on alpha '255' despite having very similar code.
If the value of 'a' is hardcoded, e.g.
if (d < 112) { // If the distance between any two particles falls below 112.
  noStroke(); // No outline.
  fill(0, 100); // Particles are coloured black; alpha hardcoded to 100, a grey tint.
  ellipse(location.x, location.y, 8, 8); // Draw ellipse based on location of particle.
then the ellipses change colour as expected to a grey tint.
Edit: I believe I have found the root of the issue. The variable 'a' does not discriminate between the particles that are being iterated. As such, the alpha might be stuck/adding up to 255.
You're going to have to post an MCVE. Note that this should not be your entire sketch, just a few hard-coded lines so we're all working from the same code. We should be able to copy and paste your code into our own machines to see the problem. Also, please try to properly format your code. Your lack of indentation makes your code hard to read.
That being said, I can try to help in a general sense. First of all, you're printing out the value of a, but you haven't told us what its value is. Is its value what you expect? If so, are you clearing out previous frames before drawing the ellipses, or are you drawing them on top of previously drawn ellipses? Are you drawing ellipses elsewhere in your code?
Start over with a blank sketch, and add just enough lines to show the problem. Here's an example MCVE that you can work from:
stroke(0);
fill(0);
ellipse(25, 25, 25, 25);
line(0, 25, width, 25);
stroke(0, 128);
fill(0, 128);
ellipse(75, 75, 25, 25);
line(0, 75, width, 75);
This code draws a black line and ellipse, then draws a transparent line and ellipse. Please hardcode the a value from your code, or add just enough code so we can see exactly what's going on.
Edit: Thanks for the MCVE. Your updated code still has problems. I don't understand this loop:
for (Particle other : p) { // For every particle in the ArrayList.
  float d = PVector.dist(location, other.location); // Get distance between any two particles.
  float a = 255 - d * 2.5; // Map variable 'a' as alpha based on distance. E.g. if distance is high (d = 90), alpha is low (a = 255 - 225 = 30).
  println("Lowest distance of any two particles = " + d); // Debug output.
  if (d < 112) { // If the distance between any two particles falls below 112.
    noStroke(); // No outline.
    fill(0, a); // Particles are coloured black, 'a' to vary alpha.
    ellipse(location.x, location.y, 8, 8); // Draw ellipse based on location of particle.
    stroke(0, a); // Lines are coloured black, 'a' to vary alpha.
    strokeWeight(0.7);
    line(location.x, location.y, other.location.x, other.location.y); // Draw line between two particles.
  }
}
}
You're saying for each Particle, you loop through every Particle and then draw an ellipse at the current Particle's location? That doesn't make any sense. If you have 100 Particles, that means each Particle will be drawn 100 times!
If you want each Particle's color to be based off its distance to the closest other Particle, then you need to modify this loop to simply find the closest Particle, and then base your calculations off of that. It might look something like this:
Particle closestNeighbor = null;
float closestDistance = 100000;
for (Particle other : p) { // For every particle in the ArrayList.
  if (other == this) {
    continue;
  }
  float d = PVector.dist(location, other.location);
  if (d < closestDistance) {
    closestDistance = d;
    closestNeighbor = other;
  }
}
Notice the if (other == this) { section. This is important, because otherwise you'll be comparing each Particle to itself, and the distance will be zero!
Once you have the closestNeighbor and the closestDistance, you can do your calculations.
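For example, the follow-up might look something like this (a sketch that reuses the 112 cutoff and the 2.5 falloff from your code; only closestNeighbor and closestDistance are new):

// After the loop above: draw this particle once, faded by the distance
// to its single closest neighbor.
if (closestNeighbor != null && closestDistance < 112) {
  float a = 255 - closestDistance * 2.5; // alpha falls as distance grows
  noStroke();
  fill(0, a);
  ellipse(location.x, location.y, 8, 8);
  stroke(0, a);
  strokeWeight(0.7);
  line(location.x, location.y, closestNeighbor.location.x, closestNeighbor.location.y);
}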
Note that you're only drawing particles when they have a neighbor that's closer than 112 pixels away. Is that what you want to be doing?
If you have a follow-up question, please post an updated MCVE in a new question. Constantly editing the question and answer gets confusing, so just ask a new question if you get stuck again.
I'm doing augmented reality with Three.js, and recently I tried to combine WebGL and CSS3 rendering to render both 3D content and DOM objects (mostly for video playback) at the same time. I started with the Closing the gap between html and webgl tutorial, but I cannot get a correct visualization using CSS (although WebGL works fine).
Basically, when doing AR, we have two matrices we have to apply to our scene: projection matrix and camera matrix. The projection matrix (row-major) usually looks like this:
var projectionMatrix = [ 1.820090055466, 0, -0.000550820783, 0,
0, 3.227676868439, -0.036605358124, 0,
0, 0, -1.000199913979,-0.200020000339,
0, 0, -1, 0
];
And the camera matrix (row-major) is a rigid 3D transform (an R|t composition) representing the camera's pose in the virtual world:
var cameraMatrix = [ 0.790828585625,  0.296402275562, -0.535477280617, -0.309822082520,
                    -0.612037420273,  0.382129371166, -0.692378044128, -0.447699964046,
                    -0.000600785017,  0.875284433365,  0.483608126640, -0.637073278427,
                     0.000000000000,  0.000000000000,  0.000000000000,  1.000000000000 ];
With WebGL it's pretty easy to apply these matrices to a pipeline:
self.wglCamera.matrixAutoUpdate = false;
self.wglCamera.projectionMatrix.set(
pm[0], pm[1], pm[2], pm[3],
pm[4], pm[5], pm[6], pm[7],
pm[8], pm[9], pm[10], pm[11],
pm[12], pm[13], pm[14], pm[15]);
self.wglCamera.matrix.set(
cm[0], cm[1], cm[2], cm[3],
cm[4], cm[5], cm[6], cm[7],
cm[8], cm[9], cm[10], cm[11],
cm[12], cm[13], cm[14], cm[15]);
When I do the same for the CSS3 camera, I get an incorrect rendering result (VIDEO):
There are two issues:
The red texture (a CSS3DObject) is non-uniformly scaled (it is in fact square).
It always sits in the screen center, although it should be located where the blue grid is.
After analyzing the CSS3DRenderer implementation, I found that only the camera's FOV property is used to set the perspective effect; the projectionMatrix property is totally ignored when rendering with CSS3DRenderer. Is that intended?
// https://github.com/mrdoob/three.js/blob/master/examples/js/renderers/CSS3DRenderer.js#L225
this.render = function ( scene, camera ) {
  var fov = 0.5 / Math.tan( THREE.Math.degToRad( camera.fov * 0.5 ) ) * _height;
  ...
  camera.matrixWorldInverse.getInverse( camera.matrixWorld );
  // Why don't we use camera.projectionMatrix here?
  var style = "translate3d(0,0," + fov + "px)" + getCameraCSSMatrix( camera.matrixWorldInverse ) +
              " translate3d(" + _widthHalf + "px," + _heightHalf + "px, 0)";
  ...
};
And if so, how can I achieve the desired result?
I've tried passing PM * CM as the camera matrix (see the sketch below), but both problems still exist. Mainly I'm worried about the ignored translation, since the rotation looks good.
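In code, that attempt looks roughly like this (a sketch: cssCamera is a placeholder name for the CSS3D scene's camera, and pm/cm are the row-major arrays shown above):

// Matrix4.fromArray expects column-major data, so transpose after
// loading the row-major arrays.
var pmMat = new THREE.Matrix4().fromArray(pm).transpose();
var cmMat = new THREE.Matrix4().fromArray(cm).transpose();
var combined = new THREE.Matrix4().multiplyMatrices(pmMat, cmMat);
cssCamera.matrixAutoUpdate = false;
cssCamera.matrix.copy(combined);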
I'd appreciate any ideas/suggestions! Thanks.
Tested on Processing 2.2.1 & 3.0a2 on OS X.
The code I've tweaked below may look familiar to some of you; it's what Imgur now uses as their loading animation. It was posted on OpenProcessing.org, and I've been able to get it working in Processing, but the arcs constantly wobble around (relative movement within 1 pixel). I'm new to Processing and don't see anything in the sketch that could be causing this; it runs in ProcessingJS without issue (though with very high CPU utilization).
int num = 6;
float step, spacing, theta, angle, startPosition;

void setup() {
  frameRate( 60 );
  size( 60, 60 );
  strokeWeight( 3 );
  noFill();
  stroke( 51, 51, 51 );
  step = 11;
  startPosition = -( PI / 2 );
}

void draw() {
  background( 255, 255, 255, 0 );
  translate( width / 2, height / 2 );
  for ( int i = 0; i < num; i++ ) {
    spacing = i * step;
    angle = ( theta + ( ( PI / 4 / num ) * i ) ) % PI;
    float arcEnd = map( sin( angle ), -1, 1, -TWO_PI, TWO_PI );
    if ( angle <= ( PI / 2 ) ) {
      arc( 0, 0, spacing, spacing, 0 + startPosition, arcEnd + startPosition );
    }
    else {
      arc( 0, 0, spacing, spacing, TWO_PI - arcEnd + startPosition, TWO_PI + startPosition );
    }
  }
  arc( 0, 0, 1, 1, 0, TWO_PI );
  theta += .02;
}
If it helps, I'm trying to export this to an animated GIF. I tried doing this with ProcessingJS and jsgif, but hit some snags. I'm able to get it exported in Processing using gifAnimation just fine.
UPDATE
Looks like I'm going with hint( ENABLE_STROKE_PURE );, cleaned up with strokeCap( SQUARE ); within setup(); see the updated setup() below. It doesn't look the same as the original, but I do like the straight edges. Sometimes when you compromise, the result ends up even better than the "ideal" solution.
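For reference, here is the updated setup() (only the two added lines differ from the code above):

void setup() {
  frameRate( 60 );
  size( 60, 60 );
  hint( ENABLE_STROKE_PURE ); // render arcs with exact geometry
  strokeCap( SQUARE );        // square off the arc ends
  strokeWeight( 3 );
  noFill();
  stroke( 51, 51, 51 );
  step = 11;
  startPosition = -( PI / 2 );
}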
I see the problem on 2.2.1 for OS X, and calling hint(ENABLE_STROKE_PURE) in setup() fixes it for me. I couldn't find good documentation for this call, though; it's just something that gets mentioned here and there.
As for the root cause, if I absolutely had to speculate, I'd guess that Processing's Java renderer approximates a circular arc using a spline with a small number of control points. The control points are spaced out between the endpoints, so as the endpoints move, so do the bumps in the approximation. The approximation might be good enough for a single frame, but the animation makes the bumps obvious. Setting ENABLE_STROKE_PURE might increase the number of control points, or it might force Processing to use a more expensive circular arc primitive in the underlying graphics library it's built upon. Again, though, this is just a guess as to why a drawing environment might have a bug like the one you've seen. I haven't read Processing's source code to verify the guess.
I created two rulers - one vertical and one horizontal:
In the vertical ruler, the text appears visually larger (approx. 5-6 pixels longer).
Why?
Relevant code:
WM_CREATE:
LOGFONT Lf = {0};
Lf.lfHeight = 12;
lstrcpyW(Lf.lfFaceName, L"Arial");
if (!g_pGRI->bHorizontal)
{
    Lf.lfEscapement = 900; // <---- For vertical ruler!
}
g_pGRI->hfRuler = CreateFontIndirectW(&Lf);
SelectFont(g_pGRI->hdRuler, g_pGRI->hfRuler);
WM_PAINT:
SetTextColor(g_pGRI->hdRuler, g_pGRI->cBorder);
SetBkColor(g_pGRI->hdRuler, g_pGRI->cBackground);
SetTextAlign(g_pGRI->hdRuler, TA_CENTER);

#define INCREMENT 10

WCHAR wText[16] = {0};
if (g_pGRI->bHorizontal)
{
    INT ixTicks = RECTWIDTH(g_pGRI->rRuler) / INCREMENT;
    for (INT ix = 0; ix < ixTicks + 1; ix++)
    {
        MoveToEx(g_pGRI->hdRuler, INCREMENT * ix, 0, NULL);
        if (ix % INCREMENT == 0)
        {
            // This is a major tick.
            LineTo(g_pGRI->hdRuler, INCREMENT * ix, g_pGRI->lMajor);
            wsprintfW(wText, L"%d", INCREMENT * ix);
            TextOutW(g_pGRI->hdRuler, INCREMENT * ix + 1, g_pGRI->lMajor + 1, wText, CHARACTERCOUNT(wText));
        }
        else
        {
            // This is a minor tick.
            LineTo(g_pGRI->hdRuler, INCREMENT * ix, g_pGRI->lMinor);
        }
    }
}
else
{
    INT iyTicks = RECTHEIGHT(g_pGRI->rRuler) / INCREMENT;
    for (INT iy = 0; iy < iyTicks + 1; iy++)
    {
        MoveToEx(g_pGRI->hdRuler, 0, INCREMENT * iy, NULL);
        if (iy % INCREMENT == 0)
        {
            // This is a major tick.
            LineTo(g_pGRI->hdRuler, g_pGRI->lMajor, INCREMENT * iy);
            wsprintfW(wText, L"%d", INCREMENT * iy);
            TextOutW(g_pGRI->hdRuler, g_pGRI->lMajor + 1, INCREMENT * iy + 1, wText, CHARACTERCOUNT(wText));
        }
        else
        {
            // This is a minor tick.
            LineTo(g_pGRI->hdRuler, g_pGRI->lMinor, INCREMENT * iy);
        }
    }
}
Background
There are several different schemes for rasterizing text in a legible way when the text is small relative to the size of a pixel. For example, if the stroke width is supposed to be 1.25 pixels wide, you either have to round it off to a whole number of pixels, use antialiasing, or use subpixel rendering (like ClearType). Rounding is usually controlled by "hints" built into the font by the font designer.
Hinting is the main reason why text width doesn't always scale exactly with the text height. For example, if, because of rounding, the left hump of a lowercase m is a pixel wider than the right one, a hint might tell the renderer to round the width up to make the letter symmetric. The result is that the character is a tad wider relative to its height than the ideal character.
This issue
What's likely happening here is that when GDI renders the string horizontally, each subsequent character may start at a fractional position, which is simulated by antialiasing or subpixel (ClearType) rendering. But, when rendering vertically, it appears that each subsequent character's starting position is rounded up to the next whole pixel, which tends to make the vertical text a couple pixels "longer" than its horizontal counterpart. Effectively, the kerning is always rounded up to the next whole pixel.
It's likely that more effort was put into the common case of horizontal text rendering, making it easier to read (and possibly faster to render). The general case of rendering at any other angle may have been implemented in a simpler manner, working glyph-by-glyph instead of with the entire string.
Things to Try
If you want them to look the same, you'll probably have to make a small compromise in the visual quality of the horizontal labels. Here are a few things I can think of to try:
Render the labels with regular antialiasing instead of ClearType subpixel rendering. (You can do this by setting the lfQuality field in the LOGFONT.) You would then draw the horizontal labels in the normal manner. For the vertical labels, draw them to an offscreen buffer horizontally, rotate it, and then blit the buffer to the screen. This gives you labels that look identical. The reason I suggest regular antialiasing is that it's invariant to the rotation. ClearType rendering had an inherent orientation and thus cannot be rotated without creating fringing. I've used this approach for graph labels with good results.
Render the horizontal labels character by character, rounding the starting point up to the next whole pixel. This should make the horizontal labels look like the vertical ones. Typographically, they won't look as good, but for small labels like this, it's probably less distracting than having the horizontal and vertical labels visually mismatched. (A sketch of this approach appears after these suggestions.)
Another answer suggested rendering the horizontal labels with a very small, but non-zero, escapement and orientation, forcing those to go through the same rendering pipeline as the vertical labels. This may be the easiest solution for short labels like yours. If you had to handle longer strings of text, I'd suggest one of the first two methods.
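As a sketch of the second suggestion (character-by-character output): the helper below assumes left-aligned text and a caller-supplied starting position, and GetCharWidth32 reports whole-pixel advance widths, so stepping by them forces every glyph to start on a pixel boundary. The function name is illustrative, not a standard API.

// Draw a label one glyph at a time so each glyph starts on a whole pixel,
// mimicking how the vertically rendered text is positioned.
// Assumes TA_LEFT text alignment; hdc, x, y come from the caller.
void DrawLabelWholePixel(HDC hdc, INT x, INT y, LPCWSTR wText, INT cch)
{
    for (INT i = 0; i < cch; i++)
    {
        TextOutW(hdc, x, y, &wText[i], 1);
        INT cx = 0;
        // GetCharWidth32 returns the advance width of a glyph in pixels.
        if (GetCharWidth32W(hdc, wText[i], wText[i], &cx))
        {
            x += cx; // the advance is already a whole number of pixels
        }
    }
}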
When using lfEscapement, you will often get strange behaviour as it renders text using a fairly different pipeline.
A trick would be to set lfEscapement for both fonts: one with 900, and one with a very low value (such as 1 or even 10). Once both render with escapement, you should be good.
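For example, adapting the WM_CREATE code from the question (a sketch; lfEscapement is measured in tenths of a degree, so 1 is nearly horizontal):

LOGFONT Lf = {0};
Lf.lfHeight = 12;
lstrcpyW(Lf.lfFaceName, L"Arial");
// 900 = 90 degrees for the vertical ruler; 1 = 0.1 degrees for the
// horizontal ruler, so both go through the same rendering path.
Lf.lfEscapement = g_pGRI->bHorizontal ? 1 : 900;
Lf.lfOrientation = Lf.lfEscapement; // keep orientation in sync with escapement
g_pGRI->hfRuler = CreateFontIndirectW(&Lf);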
If you're still having issues with smoothing, try doing something like this:
BOOL bSmooth;
//Get previous smooth value.
SystemParametersInfo(SPI_GETFONTSMOOTHING, 0, &bSmooth, 0);
//Set no smoothing.
SystemParametersInfo(SPI_SETFONTSMOOTHING, 0, NULL, 0);
//Draw text.
//Return smoothing.
SystemParametersInfo(SPI_SETFONTSMOOTHING, bSmooth, NULL, 0);