OSX Mountain Lion OpenGL - glDrawPixels() not working

I am trying to get some basic demos to work on OSX Mountain Lion for OpenGL. Some work, but the basic ones aren't working. Specifically, I am getting garbage out on my X window. Here's the code from a demo (sorry if formatting is messed up):
/* W B Langdon at MUN 10 May 2007
* Program to demonstrate use of OpenGL's glDrawPixels
*/
#ifdef _WIN32
#include <windows.h>
#endif
#include <GL/gl.h>
#include <GL/glext.h>
#include <GL/glut.h>
#include <iostream>
#include <sstream>
#include "math.h"
unsigned int window_width = 512, window_height = 512;
const int size=window_width*window_height;
struct rgbf {float r; float g; float b;};
//WBL 9 May 2007 Based on
//http://www.codeguru.com/cpp/w-d/dislog/commondialogs/article.php/c1861/
//Common.h
void toRGBf(const float h, const float s, const float v,
rgbf* rgb)
{
/*
RGBType rgb;
if(!h && !s)
{
rgb.r = rgb.g = rgb.b = v;
}
*/
//rgbf* rgb = (rgbf*) out;
double min,max,delta,hue;
max = v;
delta = max * s;
min = max - delta;
hue = h;
if(h > 300 || h <= 60)
{
rgb->r = max;
if(h > 300)
{
rgb->g = min;
hue = (hue - 360.0)/60.0;
rgb->b = ((hue * delta - min) * -1);
}
else
{
rgb->b = min;
hue = hue / 60.0;
rgb->g = (hue * delta + min);
}
}
else
if(h > 60 && h < 180)
{
rgb->g = max;
if(h < 120)
{
rgb->b = min;
hue = (hue/60.0 - 2.0 ) * delta;
rgb->r = min - hue;
}
else
{
rgb->r = min;
hue = (hue/60 - 2.0) * delta;
rgb->b = (min + hue);
}
}
else
{
rgb->b = max;
if(h < 240)
{
rgb->r = min;
hue = (hue/60.0 - 4.0 ) * delta;
rgb->g = (min - hue);
}
else
{
rgb->g = min;
hue = (hue/60 - 4.0) * delta;
rgb->r = (min + hue);
}
}
}
//Convert a wide range of data values into nice colours
void colour(const float data, float* out) {
//convert data to angle
const float a = atan2(data,1)/(2*atan2(1,1)); // -1 .. +1
const float angle = (1+a)*180; //red=0 at -1,+1
const float saturation = 1;
const float h = (data<-1||data>1)? 1 : fabs(data);
toRGBf(angle,saturation,h,(rgbf*)out);
}
void display()
{
//Create some nice colours (3 floats per pixel) from data -10..+10
float* pixels = new float[size*3];
for(int i=0;i<size;i++) {
colour(10.0-((i*20.0)/size),&pixels[i*3]);
}
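// (note: `pixels` is allocated every frame and never freed; a matching
// delete [] pixels after the glDrawPixels call below would stop the demo
// leaking ~3 MB per frame)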
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//http://msdn2.microsoft.com/en-us/library/ms537062.aspx
//glDrawPixels writes a block of pixels to the framebuffer.
glDrawPixels(window_width,window_height,GL_RGB,GL_FLOAT,pixels);
glFlush();
glutSwapBuffers();
}
int main(int argc, char** argv) {
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
glutInitWindowSize(window_width, window_height);
glutCreateWindow("OpenGL glDrawPixels demo");
glutDisplayFunc(display);
//glutReshapeFunc(reshape);
//glutMouseFunc(mouse_button);
//glutMotionFunc(mouse_motion);
//glutKeyboardFunc(keyboard);
//glutIdleFunc(idle);
glEnable(GL_DEPTH_TEST);
glClearColor(0.0, 0.0, 0.0, 1.0);
//glPointSize(2);
glutMainLoop();
}
This is what I get:
(screenshot: garbled pixel output instead of the expected colour gradient)

Please consider glDrawPixels as not relevant anymore. It was deprecated (read: do not use in new code) in OpenGL-3.0 and has been removed outright from the OpenGL-3 core profile onwards.
Use textured quads instead.
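For this particular demo, a minimal sketch of that replacement could look like the following (it assumes the same fixed-function GLUT setup and the `pixels` buffer built in display() above; `tex` is a new global):
// Sketch: upload the float RGB buffer to a texture and draw a fullscreen quad.
GLuint tex = 0;

void displayTextured()
{
    if (tex == 0) {
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        // non-mipmapped filtering so the texture is complete without mipmaps
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    }
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, window_width, window_height, 0,
                 GL_RGB, GL_FLOAT, pixels); // same data glDrawPixels consumed
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS); // default matrices: clip space covers [-1, 1]
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
    glDisable(GL_TEXTURE_2D);
    glutSwapBuffers();
}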

Related

Maximum float value in 10-bit image in WIC

I'm trying to convert an HDR image float array I load to a 10-bit DWORD with WIC.
The type of the loading file is GUID_WICPixelFormat128bppPRGBAFloat and I got an array of 4 floats per color.
When I try to convert these to 10 bit as follows:
struct RGBX
{
unsigned int b : 10;
unsigned int g : 10;
unsigned int r : 10;
int a : 2;
} rgbx;
(which is the format requested by the NVIDIA encoding library for 10-bit rgb),
then I assume I have to divide each of the floats by 1024.0f in order to get them inside the 10 bits of a DWORD.
However, I notice that some of the floats are > 1, which means that their range is not [0,1] as it happens when the image is 8 bit.
What would their range be? How do I store a floating-point color into a 10-bit integer?
I'm trying to use NVIDIA's HDR encoder, which requires an ARGB10 structure like the above.
How is the 10 bit information of a color stored as a floating point number?
Btw I tried to convert with WIC but conversion from GUID_WICPixelFormat128bppPRGBAFloat to GUID_WICPixelFormat32bppR10G10B10A2 fails.
HRESULT ConvertFloatTo10(const float* f, int wi, int he, std::vector<DWORD>& out)
{
CComPtr<IWICBitmap> b;
wbfact->CreateBitmapFromMemory(wi, he, GUID_WICPixelFormat128bppPRGBAFloat, wi * 16, wi * he * 16, (BYTE*)f, &b);
CComPtr<IWICFormatConverter> wf;
wbfact->CreateFormatConverter(&wf);
wf->Initialize(b, GUID_WICPixelFormat32bppR10G10B10A2, WICBitmapDitherTypeNone, 0, 0, WICBitmapPaletteTypeCustom);
// This last call fails with 0x88982f50 : The component cannot be found.
}
Edit: I found a paper (https://hal.archives-ouvertes.fr/hal-01704278/document); is this relevant to this question?
Floating-point color content that is greater than the 0..1 range is High Dynamic Range (HDR) content. If you trivially convert it to 10:10:10:2 UNORM then you are using 'clipping' for values over 1. This doesn't give good results.
SDR 10:10:10 or 8:8:8
You should instead use tone-mapping which converts the HDR signal to a SDR (Standard Dynamic Range a.k.a. 0..1) before or as part of doing the conversion to 10:10:10:2.
There are many different approaches to tone-mapping, but a common 'generic' solution is the Reinhard tone-mapping operator. Here's an implementation using DirectXTex.
std::unique_ptr<ScratchImage> timage(new (std::nothrow) ScratchImage);
if (!timage)
{
wprintf(L"\nERROR: Memory allocation failed\n");
return 1;
}
// Compute max luminosity across all images
XMVECTOR maxLum = XMVectorZero();
hr = EvaluateImage(image->GetImages(), image->GetImageCount(), image->GetMetadata(),
[&](const XMVECTOR* pixels, size_t w, size_t y)
{
UNREFERENCED_PARAMETER(y);
for (size_t j = 0; j < w; ++j)
{
static const XMVECTORF32 s_luminance = { { { 0.3f, 0.59f, 0.11f, 0.f } } };
XMVECTOR v = *pixels++;
v = XMVector3Dot(v, s_luminance);
maxLum = XMVectorMax(v, maxLum);
}
});
if (FAILED(hr))
{
wprintf(L" FAILED [tonemap maxlum] (%08X%ls)\n", static_cast<unsigned int>(hr), GetErrorDesc(hr));
return 1;
}
maxLum = XMVectorMultiply(maxLum, maxLum);
hr = TransformImage(image->GetImages(), image->GetImageCount(), image->GetMetadata(),
[&](XMVECTOR* outPixels, const XMVECTOR* inPixels, size_t w, size_t y)
{
UNREFERENCED_PARAMETER(y);
for (size_t j = 0; j < w; ++j)
{
XMVECTOR value = inPixels[j];
const XMVECTOR scale = XMVectorDivide(
XMVectorAdd(g_XMOne, XMVectorDivide(value, maxLum)),
XMVectorAdd(g_XMOne, value));
const XMVECTOR nvalue = XMVectorMultiply(value, scale);
value = XMVectorSelect(value, nvalue, g_XMSelect1110);
outPixels[j] = value;
}
}, *timage);
if (FAILED(hr))
{
wprintf(L" FAILED [tonemap apply] (%08X%ls)\n", static_cast<unsigned int>(hr), GetErrorDesc(hr));
return 1;
}
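In scalar form, the operation the TransformImage lambda applies to each colour channel is the following sketch (maxLumSq stands for the squared maximum luminance computed above):
// Extended Reinhard: out = v * (1 + v / maxLumSq) / (1 + v).
// Values far below the maximum compress gently toward [0,1];
// v equal to the maximum luminance maps exactly to 1.
float reinhardExtended(float v, float maxLumSq)
{
    return v * (1.0f + v / maxLumSq) / (1.0f + v);
}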
HDR10
UPDATE: If you are trying to convert HDR floating-point content to an "HDR10" signal, then you need to do:
Color-space rotate from Rec.709 or P3D65 to Rec.2020.
Normalize for 'paper white' / 10,000 nits.
Apply the ST.2084 gamma curve.
Quantize to 10-bit.
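For reference, the ST.2084 curve in step 3 is the SMPTE perceptual quantizer (PQ); with $Y$ normalized so that 1.0 = 10,000 nits, it is

$$\mathrm{PQ}(Y) = \left(\frac{c_1 + c_2\,Y^{m_1}}{1 + c_3\,Y^{m_1}}\right)^{m_2},\qquad c_1 = 0.8359375,\; c_2 = 18.8515625,\; c_3 = 18.6875,\; m_1 \approx 0.1593,\; m_2 = 78.84375,$$

which is exactly what the LinearToST2084 helper below computes.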
// HDTV to UHDTV (Rec.709 color primaries into Rec.2020)
const XMMATRIX c_from709to2020 =
{
0.6274040f, 0.0690970f, 0.0163916f, 0.f,
0.3292820f, 0.9195400f, 0.0880132f, 0.f,
0.0433136f, 0.0113612f, 0.8955950f, 0.f,
0.f, 0.f, 0.f, 1.f
};
// DCI-P3-D65 https://en.wikipedia.org/wiki/DCI-P3 to UHDTV (DCI-P3-D65 color primaries into Rec.2020)
const XMMATRIX c_fromP3D65to2020 =
{
0.753845f, 0.0457456f, -0.00121055f, 0.f,
0.198593f, 0.941777f, 0.0176041f, 0.f,
0.047562f, 0.0124772f, 0.983607f, 0.f,
0.f, 0.f, 0.f, 1.f
};
// Custom Rec.709 into Rec.2020
const XMMATRIX c_fromExpanded709to2020 =
{
0.6274040f, 0.0457456f, -0.00121055f, 0.f,
0.3292820f, 0.941777f, 0.0176041f, 0.f,
0.0433136f, 0.0124772f, 0.983607f, 0.f,
0.f, 0.f, 0.f, 1.f
};
inline float LinearToST2084(float normalizedLinearValue)
{
const float ST2084 = pow((0.8359375f + 18.8515625f * pow(abs(normalizedLinearValue), 0.1593017578f)) / (1.0f + 18.6875f * pow(abs(normalizedLinearValue), 0.1593017578f)), 78.84375f);
return ST2084; // Don't clamp between [0..1], so we can still perform operations on scene values higher than 10,000 nits
}
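// c_MaxNitsFor2084 is used below but not defined in this excerpt; assuming the
// definition from DirectXTex's texconv, where the ST.2084 peak is 10,000 nits:
const DirectX::XMVECTORF32 c_MaxNitsFor2084 = { { { 10000.0f, 10000.0f, 10000.0f, 1.f } } };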
// You can adjust this up to 10000.f
float paperWhiteNits = 200.f;
hr = TransformImage(image->GetImages(), image->GetImageCount(), image->GetMetadata(),
[&](XMVECTOR* outPixels, const XMVECTOR* inPixels, size_t w, size_t y)
{
UNREFERENCED_PARAMETER(y);
const XMVECTOR paperWhite = XMVectorReplicate(paperWhiteNits);
for (size_t j = 0; j < w; ++j)
{
XMVECTOR value = inPixels[j];
XMVECTOR nvalue = XMVector3Transform(value, c_from709to2020);
// Some people prefer the look of using c_fromP3D65to2020
// or c_fromExpanded709to2020 instead.
// Convert to ST.2084
nvalue = XMVectorDivide(XMVectorMultiply(nvalue, paperWhite), c_MaxNitsFor2084);
XMFLOAT4A tmp;
XMStoreFloat4A(&tmp, nvalue);
tmp.x = LinearToST2084(tmp.x);
tmp.y = LinearToST2084(tmp.y);
tmp.z = LinearToST2084(tmp.z);
nvalue = XMLoadFloat4A(&tmp);
value = XMVectorSelect(value, nvalue, g_XMSelect1110);
outPixels[j] = value;
}
}, *timage);
You should really take a look at texconv.
Reference
Reinhard et al., "Photographic tone reproduction for digital images", ACM Transactions on Graphics, Volume 21, Issue 3 (July 2002). ACM DL.
@ChuckWalbourn's answer is helpful, however I don't want to tonemap to [0,1], as there is no point in tonemapping to SDR and then going to 10-bit HDR.
What I think is correct is to scale to [0,4] instead, by first using g_XMFour:
const XMVECTOR scale = XMVectorDivide(
XMVectorAdd(g_XMFour, XMVectorDivide(v, maxLum)),
XMVectorAdd(g_XMFour, v));
then using a specialized 10-bit store which scales by 255 instead of 1023:
void XMStoreUDecN4a(DirectX::PackedVector::XMUDECN4* pDestination,DirectX::FXMVECTOR V)
{
using namespace DirectX;
XMVECTOR N;
static const XMVECTOR Scale = { 255.0f, 255.0f, 255.0f, 3.0f };
assert(pDestination);
N = XMVectorClamp(V, XMVectorZero(), g_XMFour);
N = XMVectorMultiply(N, Scale);
pDestination->v = ((uint32_t)DirectX::XMVectorGetW(N) << 30) |
(((uint32_t)DirectX::XMVectorGetZ(N) & 0x3FF) << 20) |
(((uint32_t)DirectX::XMVectorGetY(N) & 0x3FF) << 10) |
(((uint32_t)DirectX::XMVectorGetX(N) & 0x3FF));
}
And then a specialized 10-bit load which divides by 255 instead of 1023:
DirectX::XMVECTOR XMLoadUDecN4a(DirectX::PackedVector::XMUDECN4* pSource)
{
using namespace DirectX;
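// fourx: assumed to be a simple four-float helper struct,
// e.g. struct fourx { float r, g, b, a; }; (its definition is not shown here)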
fourx vectorOut;
uint32_t Element;
Element = pSource->v & 0x3FF;
vectorOut.r = (float)Element / 255.f;
Element = (pSource->v >> 10) & 0x3FF;
vectorOut.g = (float)Element / 255.f;
Element = (pSource->v >> 20) & 0x3FF;
vectorOut.b = (float)Element / 255.f;
vectorOut.a = (float)(pSource->v >> 30) / 3.f;
const DirectX::XMVECTORF32 j = { vectorOut.r,vectorOut.g,vectorOut.b,vectorOut.a };
return j;
}
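A quick round-trip check of this pair might look like the following sketch (XMUDECN4 comes from DirectX::PackedVector; alpha is kept within [0,1] here, since the store's [0,4] clamp combined with the 3.0 alpha scale would otherwise overflow the 2-bit field):
DirectX::PackedVector::XMUDECN4 packed;
const DirectX::XMVECTORF32 hdr = { { { 2.5f, 0.75f, 4.0f, 1.0f } } };
XMStoreUDecN4a(&packed, hdr);                    // quantize [0,4] floats into 10:10:10:2
DirectX::XMVECTOR back = XMLoadUDecN4a(&packed); // expand back to [0,4]
// back is approximately (2.498, 0.749, 4.0, 1.0): quantized in 1/255 steps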

What is this called and how to achieve it? Visuals in Processing

Hey, does anyone know how to achieve this effect using Processing, or what it is called?
I have been trying to use the wave gradient example in the Processing library and implementing Perlin noise, but I cannot get close to the GIF's quality.
I know the artist used Processing but cannot figure out how!
Link to gif:
https://giphy.com/gifs/processing-jodeus-QInYLzY33wMwM
The effect is reminiscent of Op Art (optical illusion art): I recommend reading/learning more about this fascinating genre and artists like:
Bridget Riley
(Bridget Riley, Intake, 1964)
(Bridget Riley, Hesitate, 1964, Copyright: (c) Bridget Riley 2018. All rights reserved. / Photo (c) Tate)
Victor Vasarely
(Victor Vasarely, Zebra Couple)
(Victor Vasarely, VegaII)
Frank Stella
(Frank Stella, Untitled 1965, Image courtesy of Art Gallery NSW)
and more
You'll notice these waves are reminiscent of/heavily inspired by Bridget Riley's work.
I also recommend checking out San Charoenchai's album visualiser for Beach House - 7
As mentioned in my comment: you should post your attempt.
Waves and perlin noise could work for sure.
There are many ways to achieve a similar look.
Here's a tweaked version of Daniel Shiffman's Noise Wave example:
int numWaves = 24;
float[] yoff = new float[numWaves]; // 2nd dimension of perlin noise
float[] yoffIncrements = new float[numWaves];
void setup() {
size(640, 360);
noStroke();
for(int i = 0 ; i < numWaves; i++){
yoffIncrements[i] = map(i, 0, numWaves - 1, 0.01, 0.03);
}
}
void draw() {
background(0);
float waveHeight = height / numWaves;
for(int i = 0 ; i < numWaves; i++){
float waveY = i * waveHeight;
fill(i % 2 == 0 ? color(255) : color(0));
// We are going to draw a polygon out of the wave points
beginShape();
float xoff = 0; // Option #1: 2D Noise
// float xoff = yoff[i]; // Option #2: 1D Noise
// Iterate over horizontal pixels
for (float x = 0; x <= width + 30; x += 20) {
// Calculate a y value according to noise, map it to this wave's band
float y = map(noise(xoff, yoff[i]), 0, 1, waveY , waveY + (waveHeight * 3)); // Option #1: 2D Noise
// float y = map(noise(xoff), 0, 1, 200,300); // Option #2: 1D Noise
// Set the vertex
vertex(x, y);
// Increment x dimension for noise
xoff += 0.05;
}
// increment y dimension for noise
yoff[i] += yoffIncrements[i];
vertex(width, height);
vertex(0, height);
endShape(CLOSE);
}
}
Notice the quality of the noise wave in comparison to the image you're trying to emulate: there is a constant rhythm to it. To me that is a hint that it's using cycling sine waves changing phase and amplitude (potentially even adding waves together).
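In equation form, that kind of banded motion comes from driving each band's vertical offset with a product of sines (a sketch of the idea, not the artist's exact formula):

$$y(x,t) = A\,\sin(\omega_1 t + k_1 x)\,\sin(\omega_2 t + k_2 x)$$

with different rates per band; this is exactly the shape of the phase1/phase2 product in the two-wave example further below.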
I've written an extensive answer on animating sine waves here
(Reuben Margolin's kinetic sculpture system demo)
From your question it sounds like you would be comfortable implementing a sine wave animation. If it helps, here's an example of adding two waves together:
void setup(){
size(600,600);
noStroke();
}
void draw(){
background(0);
// how many waves per sketch height
int heightDivisions = 30;
// split the sketch height into equal height sections
float heightDivisionSize = (float)height / heightDivisions;
// for each height division
for(int j = 0 ; j < heightDivisions; j++){
// use % 2 to alternate between black and white
// see https://processing.org/reference/modulo.html and
// https://processing.org/reference/conditional.html for more
fill(j % 2 == 0 ? color(255) : color(0));
// offset drawing on Y axis
translate(0,(j * heightDivisionSize));
// start a wave shape
beginShape();
// first vertex is at the top left corner
vertex(0,height);
// how many horizontal (per wave) divisions ?
int widthDivisions = 12;
// equally space the points on the wave horizontally
float widthDivisionSize = (float)width / widthDivisions;
// for each point on the wave
for(int i = 0; i <= widthDivisions; i++){
// calculate different phases
// play with arithmetic operators to make interesting wave additions
float phase1 = (frameCount * 0.01) + ((i * j) * 0.025);
float phase2 = (frameCount * 0.05) + ((i + j) * 0.25);
// calculate vertex x position
float x = widthDivisionSize * i;
// multiple sine waves
// (can use cos() and other ratios too)
// 150 in this case is the wave amplitude (e.g. from -150 to + 150)
float y = ((sin(phase1) * sin(phase2) * 150));
// draw calculated vertex
vertex(x,y);
}
// last vertex is at bottom right corner
vertex(width,height);
// finish the shape
endShape();
}
}
The result: (image: alternating black and white undulating bands)
Minor note on performance: this could be implemented more efficiently using PShape, however I recommend playing with the maths/geometry to find the form you're after, then as a last step think of optimizing it.
My intention is not to show you how to create an exact replica, but to show there's more to Op Art than a single effect, and hopefully to inspire you to explore other methods of achieving something similar, in the hope that you will discover your own methods and outcomes: something new and of your own through fun happy accidents.
In terms of other techniques/avenues to explore:
displacement maps:
Using an alternating black/white straight bars texture on wavy 3D geometry
using shaders:
Shaders are a huge topic on their own, but it's worth noting:
There's a very good Processing Shader Tutorial
You might be able to explore fragment shaders on Shadertoy, tweak the code in the browser, then make slight changes so you can run them in Processing.
Here are a few quick examples:
https://www.shadertoy.com/view/Wts3DB
tweaked for black/white waves in Processing as shader-Wts3DB.frag
// https://www.shadertoy.com/view/Wts3DB
uniform vec2 iResolution;
uniform float iTime;
#define COUNT 6.
#define COL_BLACK vec3(23,32,38) / 255.0
#define SF 1./min(iResolution.x,iResolution.y)
#define SS(l,s) smoothstep(SF,-SF,l-s)
#define hue(h) clamp( abs( fract(h + vec4(3,2,1,0)/3.) * 6. - 3.) -1. , 0., 1.)
// Original noise code from https://www.shadertoy.com/view/4sc3z2
#define MOD3 vec3(.1031,.11369,.13787)
vec3 hash33(vec3 p3)
{
p3 = fract(p3 * MOD3);
p3 += dot(p3, p3.yxz+19.19);
return -1.0 + 2.0 * fract(vec3((p3.x + p3.y)*p3.z, (p3.x+p3.z)*p3.y, (p3.y+p3.z)*p3.x));
}
float simplex_noise(vec3 p)
{
const float K1 = 0.333333333;
const float K2 = 0.166666667;
vec3 i = floor(p + (p.x + p.y + p.z) * K1);
vec3 d0 = p - (i - (i.x + i.y + i.z) * K2);
vec3 e = step(vec3(0.0), d0 - d0.yzx);
vec3 i1 = e * (1.0 - e.zxy);
vec3 i2 = 1.0 - e.zxy * (1.0 - e);
vec3 d1 = d0 - (i1 - 1.0 * K2);
vec3 d2 = d0 - (i2 - 2.0 * K2);
vec3 d3 = d0 - (1.0 - 3.0 * K2);
vec4 h = max(0.6 - vec4(dot(d0, d0), dot(d1, d1), dot(d2, d2), dot(d3, d3)), 0.0);
vec4 n = h * h * h * h * vec4(dot(d0, hash33(i)), dot(d1, hash33(i + i1)), dot(d2, hash33(i + i2)), dot(d3, hash33(i + 1.0)));
return dot(vec4(31.316), n);
}
void mainImage( vec4 fragColor, vec2 fragCoord )
{
}
void main(void) {
//vec2 uv = vec2(gl_FragColor.x / iResolution.y, gl_FragColor.y / iResolution.y);
vec2 uv = gl_FragCoord.xy / iResolution.y;
float m = 0.;
float t = iTime *.5;
vec3 col;
for(float i=COUNT; i>=0.; i-=1.){
float edge = simplex_noise(vec3(uv * vec2(2., 0.) + vec2(0, t + i*.15), 3.))*.2 + (.95/COUNT)*i;
float mi = SS(edge, uv.y) - SS(edge + .095, uv.y);
m += mi;
if(mi > 0.){
col = vec3(1.0);
}
}
col = mix(COL_BLACK, col, m);
gl_FragColor = vec4(col,1.0);
// mainImage(gl_FragColor,gl_FragCoord);
}
loaded in Processing as:
PShader shader;
void setup(){
size(300,300,P2D);
noStroke();
shader = loadShader("shader-Wts3DB.frag");
shader.set("iResolution",(float)width, float(height));
}
void draw(){
background(0);
shader.set("iTime",frameCount * 0.05);
shader(shader);
rect(0,0,width,height);
}
https://www.shadertoy.com/view/MtsXzl
tweaked as shader-MtsXzl.frag
//https://www.shadertoy.com/view/MtsXzl
#define SHOW_GRID 1
const float c_scale = 0.5;
const float c_rate = 2.0;
#define FLT_MAX 3.402823466e+38
uniform vec3 iMouse;
uniform vec2 iResolution;
uniform float iTime;
//=======================================================================================
float CubicHermite (float A, float B, float C, float D, float t)
{
float t2 = t*t;
float t3 = t*t*t;
float a = -A/2.0 + (3.0*B)/2.0 - (3.0*C)/2.0 + D/2.0;
float b = A - (5.0*B)/2.0 + 2.0*C - D / 2.0;
float c = -A/2.0 + C/2.0;
float d = B;
return a*t3 + b*t2 + c*t + d;
}
//=======================================================================================
float hash(float n) {
return fract(sin(n) * 43758.5453123);
}
//=======================================================================================
float GetHeightAtTile(vec2 T)
{
float rate = hash(hash(T.x) * hash(T.y))*0.5+0.5;
return (sin(iTime*rate*c_rate) * 0.5 + 0.5) * c_scale;
}
//=======================================================================================
float HeightAtPos(vec2 P)
{
vec2 tile = floor(P);
P = fract(P);
float CP0X = CubicHermite(
GetHeightAtTile(tile + vec2(-1.0,-1.0)),
GetHeightAtTile(tile + vec2(-1.0, 0.0)),
GetHeightAtTile(tile + vec2(-1.0, 1.0)),
GetHeightAtTile(tile + vec2(-1.0, 2.0)),
P.y
);
float CP1X = CubicHermite(
GetHeightAtTile(tile + vec2( 0.0,-1.0)),
GetHeightAtTile(tile + vec2( 0.0, 0.0)),
GetHeightAtTile(tile + vec2( 0.0, 1.0)),
GetHeightAtTile(tile + vec2( 0.0, 2.0)),
P.y
);
float CP2X = CubicHermite(
GetHeightAtTile(tile + vec2( 1.0,-1.0)),
GetHeightAtTile(tile + vec2( 1.0, 0.0)),
GetHeightAtTile(tile + vec2( 1.0, 1.0)),
GetHeightAtTile(tile + vec2( 1.0, 2.0)),
P.y
);
float CP3X = CubicHermite(
GetHeightAtTile(tile + vec2( 2.0,-1.0)),
GetHeightAtTile(tile + vec2( 2.0, 0.0)),
GetHeightAtTile(tile + vec2( 2.0, 1.0)),
GetHeightAtTile(tile + vec2( 2.0, 2.0)),
P.y
);
return CubicHermite(CP0X, CP1X, CP2X, CP3X, P.x);
}
//=======================================================================================
vec3 NormalAtPos( vec2 p )
{
float eps = 0.01;
vec3 n = vec3( HeightAtPos(vec2(p.x-eps,p.y)) - HeightAtPos(vec2(p.x+eps,p.y)),
2.0*eps,
HeightAtPos(vec2(p.x,p.y-eps)) - HeightAtPos(vec2(p.x,p.y+eps)));
return normalize( n );
}
//=======================================================================================
float RayIntersectSphere (vec4 sphere, in vec3 rayPos, in vec3 rayDir)
{
//get the vector from the center of this circle to where the ray begins.
vec3 m = rayPos - sphere.xyz;
//get the dot product of the above vector and the ray's vector
float b = dot(m, rayDir);
float c = dot(m, m) - sphere.w * sphere.w;
//exit if r's origin outside s (c > 0) and r pointing away from s (b > 0)
if(c > 0.0 && b > 0.0)
return -1.0;
//calculate discriminant
float discr = b * b - c;
//a negative discriminant corresponds to ray missing sphere
if(discr < 0.0)
return -1.0;
//ray now found to intersect sphere, compute smallest t value of intersection
float collisionTime = -b - sqrt(discr);
//if t is negative, ray started inside sphere so clamp t to zero and remember that we hit from the inside
if(collisionTime < 0.0)
collisionTime = -b + sqrt(discr);
return collisionTime;
}
//=======================================================================================
vec3 DiffuseColor (in vec3 pos)
{
#if SHOW_GRID
pos = mod(floor(pos),2.0);
return vec3(mod(pos.x, 2.0) < 1.0 ? 1.0 : 0.0);
#else
return vec3(0.1, 0.8, 0.9);
#endif
}
//=======================================================================================
vec3 ShadePoint (in vec3 pos, in vec3 rayDir, float time, bool fromUnderneath)
{
vec3 diffuseColor = DiffuseColor(pos);
vec3 reverseLightDir = normalize(vec3(1.0,1.0,-1.0));
vec3 lightColor = vec3(1.0);
vec3 ambientColor = vec3(0.05);
vec3 normal = NormalAtPos(pos.xz);
normal *= fromUnderneath ? -1.0 : 1.0;
// diffuse
vec3 color = diffuseColor;
float dp = dot(normal, reverseLightDir);
if(dp > 0.0)
color += (diffuseColor * lightColor);
return color;
}
//=======================================================================================
vec3 HandleRay (in vec3 rayPos, in vec3 rayDir, in vec3 pixelColor, out float hitTime)
{
float time = 0.0;
float lastHeight = 0.0;
float lastY = 0.0;
float height;
bool hitFound = false;
hitTime = FLT_MAX;
bool fromUnderneath = false;
vec2 timeMinMax = vec2(0.0, 20.0);
time = timeMinMax.x;
const int c_numIters = 100;
float deltaT = (timeMinMax.y - timeMinMax.x) / float(c_numIters);
vec3 pos = rayPos + rayDir * time;
float firstSign = sign(pos.y - HeightAtPos(pos.xz));
for (int index = 0; index < c_numIters; ++index)
{
pos = rayPos + rayDir * time;
height = HeightAtPos(pos.xz);
if (sign(pos.y - height) * firstSign < 0.0)
{
fromUnderneath = firstSign < 0.0;
hitFound = true;
break;
}
time += deltaT;
lastHeight = height;
lastY = pos.y;
}
if (hitFound) {
time = time - deltaT + deltaT*(lastHeight-lastY)/(pos.y-lastY-height+lastHeight);
pos = rayPos + rayDir * time;
pixelColor = ShadePoint(pos, rayDir, time, fromUnderneath);
hitTime = time;
}
return pixelColor;
}
//=======================================================================================
void main()
{
// scrolling camera
vec3 cameraOffset = vec3(iTime, 0.5, iTime);
//----- camera
vec2 mouse = iMouse.xy / iResolution.xy;
vec3 cameraAt = vec3(0.5,0.5,0.5) + cameraOffset;
float angleX = iMouse.z > 0.0 ? 6.28 * mouse.x : 3.14 + iTime * 0.25;
float angleY = iMouse.z > 0.0 ? (mouse.y * 6.28) - 0.4 : 0.5;
vec3 cameraPos = (vec3(sin(angleX)*cos(angleY), sin(angleY), cos(angleX)*cos(angleY))) * 5.0;
// float angleX = 0.8;
// float angleY = 0.8;
// vec3 cameraPos = vec3(0.0,0.0,0.0);
cameraPos += vec3(0.5,0.5,0.5) + cameraOffset;
vec3 cameraFwd = normalize(cameraAt - cameraPos);
vec3 cameraLeft = normalize(cross(normalize(cameraAt - cameraPos), vec3(0.0,sign(cos(angleY)),0.0)));
vec3 cameraUp = normalize(cross(cameraLeft, cameraFwd));
float cameraViewWidth = 6.0;
float cameraViewHeight = cameraViewWidth * iResolution.y / iResolution.x;
float cameraDistance = 6.0; // intuitively backwards!
// Objects
vec2 rawPercent = (gl_FragCoord.xy / iResolution.xy);
vec2 percent = rawPercent - vec2(0.5,0.5);
vec3 rayTarget = (cameraFwd * vec3(cameraDistance,cameraDistance,cameraDistance))
- (cameraLeft * percent.x * cameraViewWidth)
+ (cameraUp * percent.y * cameraViewHeight);
vec3 rayDir = normalize(rayTarget);
float hitTime = FLT_MAX;
vec3 pixelColor = vec3(1.0, 1.0, 1.0);
pixelColor = HandleRay(cameraPos, rayDir, pixelColor, hitTime);
gl_FragColor = vec4(clamp(pixelColor,0.0,1.0), 1.0);
}
and the mouse interactive Processing sketch:
PShader shader;
void setup(){
size(300,300,P2D);
noStroke();
shader = loadShader("shader-MtsXzl.frag");
shader.set("iResolution",(float)width, float(height));
}
void draw(){
background(0);
shader.set("iTime",frameCount * 0.05);
shader.set("iMouse",(float)mouseX , (float)mouseY, mousePressed ? 1.0 : 0.0);
shader(shader);
rect(0,0,width,height);
}
Shadertoy is a great way to play/learn: have fun!
Update
Here's a quick test tweaking Daniel Shiffman's 3D Terrain Generation example to add a striped texture and basic sine waves instead of Perlin noise:
// Daniel Shiffman
// http://codingtra.in
// http://patreon.com/codingtrain
// Code for: https://youtu.be/IKB1hWWedMk
int cols, rows;
int scl = 20;
int w = 2000;
int h = 1600;
float flying = 0;
float[][] terrain;
PImage texture;
void setup() {
size(600, 600, P3D);
textureMode(NORMAL);
noStroke();
cols = w / scl;
rows = h/ scl;
terrain = new float[cols][rows];
texture = getBarsTexture(512,512,96);
}
void draw() {
flying -= 0.1;
float yoff = flying;
for (int y = 0; y < rows; y++) {
float xoff = 0;
for (int x = 0; x < cols; x++) {
//terrain[x][y] = map(noise(xoff, yoff), 0, 1, -100, 100);
terrain[x][y] = map(sin(xoff) * sin(yoff), 0, 1, -60, 60);
xoff += 0.2;
}
yoff += 0.2;
}
background(0);
translate(width/2, height/2+50);
rotateX(PI/9);
translate(-w/2, -h/2);
for (int y = 0; y < rows-1; y++) {
beginShape(TRIANGLE_STRIP);
texture(texture);
for (int x = 0; x < cols; x++) {
float u0 = map(x,0,cols-1,0.0,1.0);
float u1 = map(x+1,0,cols-1,0.0,1.0);
float v0 = map(y,0,rows-1,0.0,1.0);
float v1 = map(y+1,0,rows-1,0.0,1.0);
vertex(x*scl, y*scl, terrain[x][y], u0, v0);
vertex(x*scl, (y+1)*scl, terrain[x][y+1], u0, v1);
}
endShape();
}
}
PGraphics getBarsTexture(int textureWidth, int textureHeight, int numBars){
PGraphics texture = createGraphics(textureWidth, textureHeight);
int moduleSide = textureWidth / numBars;
texture.beginDraw();
texture.background(0);
texture.noStroke();
for(int i = 0; i < numBars; i+= 2){
texture.rect(0, i * moduleSide, textureWidth, moduleSide);
}
texture.endDraw();
return texture;
}

Why is wrapping coordinates not making my simplex noise tile seamlessly?

I've been trying to create a fake 3D texture that repeats in shadertoy (see here, use wasd to move, arrow keys to rotate) But as you can see, it doesn't tile.
I generate the noise myself, and I've isolated the noise generation in this minimal example, but it does not produce seamlessly tileable noise, seemingly no matter what I do.
Here is the code:
//Common, you probably won't have to look here.
vec2 modv(vec2 value, float modvalue){
return vec2(mod(value.x, modvalue),
mod(value.y, modvalue));
}
vec3 modv(vec3 value, float modvalue){
return vec3(mod(value.x, modvalue),
mod(value.y, modvalue),
mod(value.z, modvalue));
}
vec4 modv(vec4 value, float modvalue){
return vec4(mod(value.x, modvalue),
mod(value.y, modvalue),
mod(value.z, modvalue),
mod(value.w, modvalue));
}
//MATH CONSTANTS
const float pi = 3.1415926535897932384626433832795;
const float tau = 6.2831853071795864769252867665590;
const float eta = 1.5707963267948966192313216916397;
const float SQRT3 = 1.7320508075688772935274463415059;
const float SQRT2 = 1.4142135623730950488016887242096;
const float LTE1 = 0.9999999999999999999999999999999;
const float inf = uintBitsToFloat(0x7F800000u);
#define saturate(x) clamp(x,0.0,1.0)
#define norm01(x) ((x + 1.0) / 2.0)
vec2 pos3DTo2D(in vec3 pos,
const in int size_dim,
const in ivec2 z_size){
float size_dimf = float(size_dim);
pos = vec3(mod(pos.x, size_dimf), mod(pos.y, size_dimf), mod(pos.z, size_dimf));
int z_dim_x = int(pos.z) % z_size.x;
int z_dim_y = int(pos.z) / z_size.x;
float x = pos.x + float(z_dim_x * size_dim);
float y = pos.y + float(z_dim_y * size_dim);
return vec2(x,y);
}
vec4 textureAs3D(const in sampler2D iChannel,
in vec3 pos,
const in int size_dim,
const in ivec2 z_size,
const in vec3 iResolution){
//only need whole, will do another texture read to make sure interpolated?
vec2 tex_pos = pos3DTo2D(pos, size_dim, z_size)/iResolution.xy;
vec4 base_vec4 = texture(iChannel, tex_pos);
vec2 tex_pos_z1 = pos3DTo2D(pos+vec3(0.0,0.0,1.0), size_dim, z_size.xy)/iResolution.xy;
vec4 base_vec4_z1 = texture(iChannel, tex_pos_z1);
//return base_vec4;
return mix(base_vec4, base_vec4_z1, fract(pos.z));
}
vec4 textureZ3D(const in sampler2D iChannel,
in int y,
in int z,
in int offsetX,
const in int size_dim,
const in ivec2 z_size,
const in vec3 iResolution){
int tx = (z%z_size.x);
int ty = z/z_size.x;
int sx = offsetX + size_dim * tx;
int sy = y + (ty *size_dim);
if(ty < z_size.y){
return texelFetch(iChannel, ivec2(sx, sy),0);
}else{
return vec4(0.0);
}
//return texelFetch(iChannel, ivec2(x, y - (ty *32)),0);
}
//Buffer B this is what you are going to have to look at.
//noise
//NOISE CONSTANTS
// captured from https://en.wikipedia.org/wiki/SHA-2#Pseudocode
const uint CONST_A = 0xcc9e2d51u;
const uint CONST_B = 0x1b873593u;
const uint CONST_C = 0x85ebca6bu;
const uint CONST_D = 0xc2b2ae35u;
const uint CONST_E = 0xe6546b64u;
const uint CONST_F = 0x510e527fu;
const uint CONST_G = 0x923f82a4u;
const uint CONST_H = 0x14292967u;
const uint CONST_0 = 4294967291u;
const uint CONST_1 = 604807628u;
const uint CONST_2 = 2146583651u;
const uint CONST_3 = 1072842857u;
const uint CONST_4 = 1396182291u;
const uint CONST_5 = 2227730452u;
const uint CONST_6 = 3329325298u;
const uint CONST_7 = 3624381080u;
uvec3 singleHash(uvec3 uval){
uval ^= uval >> 16;
uval.x *= CONST_A;
uval.y *= CONST_B;
uval.z *= CONST_C;
return uval;
}
uint combineHash(uint seed, uvec3 uval){
// can move this out to compile time if need be.
// with out multiplying by one of the randomizing constants
// will result in not very different results from seed to seed.
uint un = seed * CONST_5;
un ^= (uval.x^uval.y)* CONST_0;
un ^= (un >> 16);
un = (un^uval.z)*CONST_1;
un ^= (un >> 16);
return un;
}
/*
//what the above hashes are based upon: the separated-out
//murmurhash-based coherent noise hash
uint fullHash(uint seed, uvec3 uval){
uval ^= uval >> 16;
uval.x *= CONST_A;
uval.y *= CONST_B;
uval.z *= CONST_D;
uint un = seed * CONST_6;
un ^= (uval.x ^ uval.y) * CONST_0;
un ^= un >> 16;
un = (un^uval.z) * CONST_2;
un ^= un >> 16;
return un;
}
*/
const vec3 gradArray3d[8] = vec3[8](
vec3(1, 1, 1), vec3(1,-1, 1), vec3(-1, 1, 1), vec3(-1,-1, 1),
vec3(1, 1,-1), vec3(1,-1,-1), vec3(-1, 1,-1), vec3(-1,-1,-1)
);
vec3 getGradient3Old(uint uval){
vec3 grad = gradArray3d[uval & 7u];
return grad;
}
//source of some constants
//https://github.com/Auburns/FastNoise/blob/master/FastNoise.cpp
const float SKEW3D = 1.0 / 3.0;
const float UNSKEW3D = 1.0 / 6.0;
const float FAR_CORNER_UNSKEW3D = -1.0 + 3.0*UNSKEW3D;
const float NORMALIZE_SCALE3D = 30.0;// * SQRT3;
const float DISTCONST_3D = 0.6;
float simplexNoiseV(uint seed, in vec3 pos, in uint wrap){
pos = modv(pos, float(wrap));
float skew_factor = (pos.x + pos.y + pos.z)*SKEW3D;
vec3 fsimplex_corner0 = floor(pos + skew_factor);
ivec3 simplex_corner0 = ivec3(fsimplex_corner0);
float unskew_factor = (fsimplex_corner0.x + fsimplex_corner0.y + fsimplex_corner0.z) * UNSKEW3D;
vec3 pos0 = fsimplex_corner0 - unskew_factor;
//subpos's are positions with in grid cell.
vec3 subpos0 = pos - pos0;
//precomputed values used in determining hash, reduces redundant hash computation
//shows 10% -> 20% speed boost.
uvec3 wrapped_corner0 = uvec3(simplex_corner0);
uvec3 wrapped_corner1 = uvec3(simplex_corner0+1);
wrapped_corner0 = wrapped_corner0 % wrap;
wrapped_corner1 = wrapped_corner1 % wrap;
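//NOTE: simplex_corner0 comes from floor(pos + skew_factor), so these are the
//*skewed* lattice coordinates; wrapping them by `wrap` makes the hashes repeat
//on the skewed grid, which is not the same grid as a `wrap`-sized tile in input space.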
//uvec3 hashes_offset0 = singleHash(uvec3(simplex_corner0));
//uvec3 hashes_offset1 = singleHash(uvec3(simplex_corner0+1));
uvec3 hashes_offset0 = singleHash(wrapped_corner0);
uvec3 hashes_offset1 = singleHash(wrapped_corner1);
//near corner hash value
uint hashval0 = combineHash(seed, hashes_offset0);
//mid corner hash value
uint hashval1;
uint hashval2;
//far corner hash value
uint hashval3 = combineHash(seed, hashes_offset1);
ivec3 simplex_corner1;
ivec3 simplex_corner2;
if (subpos0.x >= subpos0.y)
{
if (subpos0.y >= subpos0.z)
{
hashval1 = combineHash(seed, uvec3(hashes_offset1.x, hashes_offset0.yz));
hashval2 = combineHash(seed, uvec3(hashes_offset1.xy, hashes_offset0.z));
simplex_corner1 = ivec3(1,0,0);
simplex_corner2 = ivec3(1,1,0);
}
else if (subpos0.x >= subpos0.z)
{
hashval1 = combineHash(seed, uvec3(hashes_offset1.x, hashes_offset0.yz));
hashval2 = combineHash(seed, uvec3(hashes_offset1.x, hashes_offset0.y, hashes_offset1.z));
simplex_corner1 = ivec3(1,0,0);
simplex_corner2 = ivec3(1,0,1);
}
else // subpos0.x < subpos0.z
{
hashval1 = combineHash(seed, uvec3(hashes_offset0.xy, hashes_offset1.z));
hashval2 = combineHash(seed, uvec3(hashes_offset1.x, hashes_offset0.y, hashes_offset1.z));
simplex_corner1 = ivec3(0,0,1);
simplex_corner2 = ivec3(1,0,1);
}
}
else // subpos0.x < subpos0.y
{
if (subpos0.y < subpos0.z)
{
hashval1 = combineHash(seed, uvec3(hashes_offset0.xy, hashes_offset1.z));
hashval2 = combineHash(seed, uvec3(hashes_offset0.x, hashes_offset1.yz));
simplex_corner1 = ivec3(0,0,1);
simplex_corner2 = ivec3(0,1,1);
}
else if (subpos0.x < subpos0.z)
{
hashval1 = combineHash(seed, uvec3(hashes_offset0.x, hashes_offset1.y, hashes_offset0.z));
hashval2 = combineHash(seed, uvec3(hashes_offset0.x, hashes_offset1.yz));
simplex_corner1 = ivec3(0,1,0);
simplex_corner2 = ivec3(0,1,1);
}
else // subpos0.x >= subpos0.z
{
hashval1 = combineHash(seed, uvec3(hashes_offset0.x, hashes_offset1.y, hashes_offset0.z));
hashval2 = combineHash(seed, uvec3(hashes_offset1.xy, hashes_offset0.z));
simplex_corner1 = ivec3(0,1,0);
simplex_corner2 = ivec3(1,1,0);
}
}
//we would do this if we didn't want to seperate the hash values.
//hashval0 = fullHash(seed, uvec3(simplex_corner0));
//hashval1 = fullHash(seed, uvec3(simplex_corner0+simplex_corner1));
//hashval2 = fullHash(seed, uvec3(simplex_corner0+simplex_corner2));
//hashval3 = fullHash(seed, uvec3(simplex_corner0+1));
vec3 subpos1 = subpos0 - vec3(simplex_corner1) + UNSKEW3D;
vec3 subpos2 = subpos0 - vec3(simplex_corner2) + 2.0*UNSKEW3D;
vec3 subpos3 = subpos0 + FAR_CORNER_UNSKEW3D;
float n0, n1, n2, n3;
//http://catlikecoding.com/unity/tutorials/simplex-noise/
//circle distance factor to make sure second derivative is continuous
// t variables represent (0.6 - x^2 - y^2 - z^2), later raised to the 4th power:
// a falloff with continuous first and second derivatives that reach zero at the
// edge of each corner's radius of influence
float t0 = DISTCONST_3D - subpos0.x*subpos0.x - subpos0.y*subpos0.y - subpos0.z*subpos0.z;
//if t < 0, we get odd dips in continuity at the ends, so we just force it to zero
// to prevent it
if(t0 < 0.0){
n0 = 0.0;
}else{
float t0_pow2 = t0 * t0;
float t0_pow4 = t0_pow2 * t0_pow2;
vec3 grad = getGradient3Old(hashval0);
float product = dot(subpos0, grad);
n0 = t0_pow4 * product;
}
float t1 = DISTCONST_3D - subpos1.x*subpos1.x - subpos1.y*subpos1.y - subpos1.z*subpos1.z;
if(t1 < 0.0){
n1 = 0.0;
}else{
float t1_pow2 = t1 * t1;
float t1_pow4 = t1_pow2 * t1_pow2;
vec3 grad = getGradient3Old(hashval1);
float product = dot(subpos1, grad);
n1 = t1_pow4 * product;
}
float t2 = DISTCONST_3D - subpos2.x*subpos2.x - subpos2.y*subpos2.y - subpos2.z*subpos2.z;
if(t2 < 0.0){
n2 = 0.0;
}else{
float t2_pow2 = t2 * t2;
float t2_pow4 = t2_pow2*t2_pow2;
vec3 grad = getGradient3Old(hashval2);
float product = dot(subpos2, grad);
n2 = t2_pow4 * product;
}
float t3 = DISTCONST_3D - subpos3.x*subpos3.x - subpos3.y*subpos3.y - subpos3.z*subpos3.z;
if(t3 < 0.0){
n3 = 0.0;
}else{
float t3_pow2 = t3 * t3;
float t3_pow4 = t3_pow2*t3_pow2;
vec3 grad = getGradient3Old(hashval3);
float product = dot(subpos3, grad);
n3 = t3_pow4 * product;
}
return (n0 + n1 + n2 + n3);
}
//settings for fractal brownian motion noise
struct BrownianFractalSettings{
uint seed;
int octave_count;
float frequency;
float lacunarity;
float persistence;
float amplitude;
};
float accumulateSimplexNoiseV(in BrownianFractalSettings settings, vec3 pos, float wrap){
float accumulated_noise = 0.0;
wrap *= settings.frequency;
vec3 octave_pos = pos * settings.frequency;
for (int octave = 0; octave < settings.octave_count; octave++) {
octave_pos = modv(octave_pos, wrap);
float noise = simplexNoiseV(settings.seed, octave_pos, uint(wrap));
noise *= pow(settings.persistence, float(octave));
accumulated_noise += noise;
octave_pos *= settings.lacunarity;
wrap *= settings.lacunarity;
}
float scale = 2.0 - pow(settings.persistence, float(settings.octave_count - 1));
return (accumulated_noise/scale) * NORMALIZE_SCALE3D * settings.amplitude;
}
const float FREQUENCY = 1.0/8.0;
const float WRAP = 32.0;
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
//set use_sin_debug to zero in order to stop scrolling; scrolling reveals the
//lack of tileability across wrap boundaries.
const float use_sin_debug = 1.0;
vec3 origin = vec3(norm01(sin(iTime))*64.0*use_sin_debug,0.0,0.0);
vec3 color = vec3(0.0,0.0,0.0);
BrownianFractalSettings brn_settings =
BrownianFractalSettings(203u, 1, FREQUENCY, 2.0, 0.4, 1.0);
const int size_dim = 32;
ivec2 z_size = ivec2(8, 4);
ivec2 iFragCoord = ivec2(fragCoord.x, fragCoord.y);
int z_dim_x = iFragCoord.x / size_dim;
int z_dim_y = iFragCoord.y / size_dim;
if(z_dim_x < z_size.x && z_dim_y < z_size.y){
int ix = iFragCoord.x % size_dim;
int iy = iFragCoord.y % size_dim;
int iz = (z_dim_x) + ((z_dim_y)*z_size.x);
vec3 pos = vec3(ix,iy,iz) + origin;
float value = accumulateSimplexNoiseV(brn_settings, pos, WRAP);
color = vec3(norm01(value));
}else{
color = vec3(1.0,0.0,0.0);
}
fragColor = vec4(color,1.0);
}
//Image, used to finally display
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
const float fcm = 4.0;
//grabs a single 32x32 tile in order to test tileability, currently generates
//a whole array of images however.
vec2 fragCoordMod = vec2(mod(fragCoord.x, 32.0 * fcm), mod(fragCoord.y, 32.0 * fcm));
vec3 color = texture(iChannel2, fragCoordMod/(fcm*iResolution.xy)).xyz;
fragColor = vec4(color, 1.0);
}
What I've tried: taking position % wrap, scaling the wrap value by the lacunarity each octave, and wrapping the corner coordinates after the skew; all of these are currently in use (look in simplexNoiseV for the core algorithm and accumulateSimplexNoiseV for the octave summation).
According to these answers it should be that simple (mod the position used for hashing), however this clearly just doesn't work. I'm not sure if it's partially because my hashing function is not Ken Perlin's, but it doesn't seem like that should make a difference. It does seem like the skewing of coordinates should make this method not work at all, yet apparently others have had success with it.
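For reference, with SKEW3D = 1/3 the skew used above maps an input point to

$$\mathbf{p}' = \mathbf{p} + \tfrac{1}{3}\,(p_x + p_y + p_z)\,(1,1,1)$$

so translating the input by wrap along one axis moves the skewed coordinates by wrap + wrap/3 on that axis and wrap/3 on the others, which in general is not an integer translation of the simplex lattice; this may be worth checking when reasoning about where the wrapped hashes actually repeat.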
Here's an example of it not tiling:
(image: the generated noise tiles, showing visible seams at the wrap boundaries)
UPDATE:
I've still not fixed the issue, but it appears that tiling works appropriately along the simplices, and not along the grid:
Do I have to modify my modulus to account for the skewing?

How to remove edge of house using opencv

How can I remove the circled content of the image above? The original image is shown below:
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <cmath>
#include <iostream>
using namespace cv;
using namespace std;
/**
* Helper function to find a cosine of angle between vectors
* from pt0->pt1 and pt0->pt2
*/
static double angle(cv::Point pt1, cv::Point pt2, cv::Point pt0)
{
double dx1 = pt1.x - pt0.x;
double dy1 = pt1.y - pt0.y;
double dx2 = pt2.x - pt0.x;
double dy2 = pt2.y - pt0.y;
return (dx1*dx2 + dy1*dy2) / sqrt((dx1*dx1 + dy1*dy1)*(dx2*dx2 + dy2*dy2) + 1e-10);
}
/**
* Helper function to display text in the center of a contour
*/
void setLabel(cv::Mat& im, const std::string label, std::vector<cv::Point>& contour)
{
int fontface = cv::FONT_HERSHEY_SIMPLEX;
double scale = 0.4;
int thickness = 1;
int baseline = 0;
cv::Size text = cv::getTextSize(label, fontface, scale, thickness, &baseline);
cv::Rect r = cv::boundingRect(contour);
cv::Point pt(r.x + ((r.width - text.width) / 2), r.y + ((r.height + text.height) / 2));
cv::rectangle(im, pt + cv::Point(0, baseline), pt + cv::Point(text.width, -text.height), CV_RGB(255, 0, 255), CV_FILLED);
cv::putText(im, label, pt, fontface, scale, CV_RGB(0, 0, 0), thickness, 8);
}
int main()
{
//cv::Mat src = cv::imread("polygon.png");
Mat src = imread("3.bmp");//2.png 3.jpg
if (src.empty()) // check the load succeeded before using src
return -1;
Mat src_copy;
src.copyTo(src_copy);
resize(src, src, cv::Size(0, 0), 1, 1);
cout << "src type " << src.type() << endl;
Mat mask(src.size(), src.type(), Scalar(0));
// Convert to grayscale
cv::Mat gray;
if (src.type() != CV_8UC1)
cv::cvtColor(src, gray, CV_BGR2GRAY);
else
gray = src;
// Use Canny instead of threshold to catch squares with gradient shading
cv::Mat bw;
cv::Canny(gray, bw, 0, 50, 5);
// Find contours
std::vector<std::vector<cv::Point> > contours;
cv::findContours(bw.clone(), contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
drawContours(src, contours, -1, Scalar(0, 255, 0), 2);
std::vector<cv::Point> approx;
cv::Mat dst = src.clone();
for (int i = 0; i < contours.size(); i++)
{
// Approximate contour with accuracy proportional
// to the contour perimeter
cv::approxPolyDP(cv::Mat(contours[i]), approx, cv::arcLength(cv::Mat(contours[i]), true)*0.1, true);
// Skip small or non-convex objects
if (std::fabs(cv::contourArea(contours[i])) < 500 || !cv::isContourConvex(approx))
continue;
if (approx.size() == 3)
{
//setLabel(dst, "TRI", contours[i]); // Triangles
}
else if (approx.size() >= 4 && approx.size() <= 6)
{
// Number of vertices of polygonal curve
int vtc = approx.size();
// Get the cosines of all corners
std::vector<double> cos;
for (int j = 2; j < vtc + 1; j++)
cos.push_back(angle(approx[j%vtc], approx[j - 2], approx[j - 1]));
// Sort ascending the cosine values
std::sort(cos.begin(), cos.end());
// Get the lowest and the highest cosine
double mincos = cos.front();
double maxcos = cos.back();
// Use the degrees obtained above and the number of vertices
// to determine the shape of the contour
if (vtc == 4 && mincos >= -0.2 && maxcos <= 0.5)
{
Mat element = getStructuringElement(MORPH_RECT, Size(13, 13));
setLabel(dst, "RECT", contours[i]);
drawContours(mask, contours, i, Scalar(255, 255, 255), CV_FILLED);
dilate(mask, mask, element);
src_copy = src_copy - mask;
}
else if (vtc == 5 && mincos >= -0.34 && maxcos <= -0.27)
setLabel(dst, "PENTA", contours[i]);
else if (vtc == 6 && mincos >= -0.55 && maxcos <= -0.45)
setLabel(dst, "HEXA", contours[i]);
}
else
{
// Detect and label circles
double area = cv::contourArea(contours[i]);
cv::Rect r = cv::boundingRect(contours[i]);
int radius = r.width / 2;
if (std::abs(1 - ((double)r.width / r.height)) <= 0.2 &&
std::abs(1 - (area / (CV_PI * std::pow(radius, 2)))) <= 0.2)
setLabel(dst, "CIR", contours[i]);
}
}
//cv::imshow("src", src);
imwrite("mask.jpg", mask);
imwrite("src.jpg", src_copy);
imwrite("dst.jpg", dst);
cv::imshow("dst", dst);
cv::waitKey(0);
return 0;
}

Greyscale Image from YUV420p data

From what I have read on the internet the Y value is the luminance value and can be used to create a grey scale image. The following link: https://web.archive.org/web/20141230145627/http://bobpowell.net/grayscale.aspx, has some C# code on working out the luminance of a bitmap image :
public Bitmap ConvertToGrayscale(Bitmap source)
{
Bitmap bm = new Bitmap(source.Width,source.Height);
for(int y=0;y<bm.Height;y++)
{
for(int x=0;x<bm.Width;x++)
{
Color c=source.GetPixel(x,y);
int luma = (int)(c.R*0.3 + c.G*0.59 + c.B*0.11);
bm.SetPixel(x,y,Color.FromArgb(luma,luma,luma));
}
}
return bm;
}
I have a method that returns the YUV values and have the Y data in a byte array. I have the following piece of code, and it is failing on Marshal.Copy with "attempted to read or write protected memory".
public Bitmap ConvertToGrayscale2(byte[] yuvData, int width, int height)
{
Bitmap bmp;
IntPtr blue = IntPtr.Zero;
int inputOffSet = 0;
long[] pixels = new long[width * height];
try
{
for (int y = 0; y < height; y++)
{
int outputOffSet = y * width;
for (int x = 0; x < width; x++)
{
int grey = yuvData[inputOffSet + x] & 0xff;
unchecked
{
pixels[outputOffSet + x] = UINT_Constant | (grey * INT_Constant);
}
}
inputOffSet += width;
}
blue = Marshal.AllocCoTaskMem(pixels.Length);
Marshal.Copy(pixels, 0, blue, pixels.Length); // fails here : Attempted to read or write protected memory
bmp = new Bitmap(width, height, width, PixelFormat.Format24bppRgb, blue);
}
catch (Exception)
{
throw;
}
finally
{
if (blue != IntPtr.Zero)
{
Marshal.FreeCoTaskMem(blue); // match the AllocCoTaskMem above
blue = IntPtr.Zero;
}
}
return bmp;
}
Any help would be appreciated.
I think you have allocated pixels.Length bytes, but are copying pixels.Length longs, which is 8 times as much memory (a long is 64 bits or 8 bytes in size).
You could try:
blue = Marshal.AllocCoTaskMem(Marshal.SizeOf(pixels[0]) * pixels.Length);
You might also need to use int[] for pixels and PixelFormat.Format32bppRgb in the Bitmap constructor (as they are both 32 bits). Using long[] gives you 64 bits per pixel which isn't what a 24 bit pixel format is expecting.
You might end up with shades of blue instead of grey though - depends on what your values of UINT_Constant and INT_Constant are.
There is no need to do "& 0xff", as yuvData[] already contains a byte.
Here are another couple of approaches you could try.
public Bitmap ConvertToGrayScale(byte[] yData, int width, int height)
{
// 3 * width bytes per scanline, rounded up to a multiple of 4 bytes
int stride = 4 * (int)Math.Ceiling(3 * width / 4.0);
byte[] pixels = new byte[stride * height];
for (int y = 0; y < height; y++)
{
for (int x = 0; x < width; x++)
{
byte grey = yData[y * width + x];
pixels[y * stride + 3 * x] = grey;
pixels[y * stride + 3 * x + 1] = grey;
pixels[y * stride + 3 * x + 2] = grey;
}
}
IntPtr pixelsPtr = Marshal.AllocCoTaskMem(pixels.Length);
try
{
Marshal.Copy(pixels, 0, pixelsPtr, pixels.Length);
Bitmap bitmap = new Bitmap(
width,
height,
stride,
PixelFormat.Format24bppRgb,
pixelsPtr);
return bitmap;
}
finally
{
Marshal.FreeCoTaskMem(pixelsPtr); // match the AllocCoTaskMem above
}
}
public Bitmap ConvertToGrayScale(byte[] yData, int width, int height)
{
// 3 * width bytes per scanline, rounded up to a multiple of 4 bytes
int stride = 4 * (int)Math.Ceiling(3 * width / 4.0);
IntPtr pixelsPtr = Marshal.AllocCoTaskMem(stride * height);
try
{
for (int y = 0; y < height; y++)
{
for (int x = 0; x < width; x++)
{
byte grey = yData[y * width + x];
Marshal.WriteByte(pixelsPtr, y * stride + 3 * x, grey);
Marshal.WriteByte(pixelsPtr, y * stride + 3 * x + 1, grey);
Marshal.WriteByte(pixelsPtr, y * stride + 3 * x + 2, grey);
}
}
Bitmap bitmap = new Bitmap(
width,
height,
stride,
PixelFormat.Format24bppRgb,
pixelsPtr);
return bitmap;
}
finally
{
Marshal.FreeCoTaskMem(pixelsPtr); // match the AllocCoTaskMem above
}
}
I get a black image with a few pixels in the top left corner if I use this code, and it runs stably:
public static Bitmap ToGrayscale(byte[] yData, int width, int height)
{
Bitmap bm = new Bitmap(width, height, PixelFormat.Format32bppRgb);
Rectangle dimension = new Rectangle(0, 0, bm.Width, bm.Height);
BitmapData picData = bm.LockBits(dimension, ImageLockMode.ReadWrite, bm.PixelFormat);
IntPtr pixelStateAddress = picData.Scan0;
int stride = 4 * (int)Math.Ceiling(3 * width / 4.0);
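// NOTE: this stride assumes 3 bytes per pixel, but the Bitmap above was created
// as Format32bppRgb, so picData.Stride is 4 * width here; copying 3-byte pixels
// into a 4-byte-per-pixel buffer is a likely cause of the mostly-black result.
// Creating the Bitmap as Format24bppRgb (and using picData.Stride) should line up.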
byte[] pixels = new byte[stride * height];
try
{
for (int y = 0; y < height; y++)
{
for (int x = 0; x < width; x++)
{
byte grey = yData[y * width + x];
pixels[y * stride + 3 * x] = grey;
pixels[y * stride + 3 * x + 1] = grey;
pixels[y * stride + 3 * x + 2] = grey;
}
}
Marshal.Copy(pixels, 0, pixelStateAddress, pixels.Length);
bm.UnlockBits(picData);
}
catch (Exception)
{
throw;
}
return bm;
}
