OpenGL ES 2.0 on SGX540: offscreen PIXMAP support

On the DM370 (TI OMAP 3) with the Imagination Technologies PowerVR SGX530, I was able to use the following code to initialize my EGL surface using CMEM and PIXMAP offscreen surfaces:
// Index to bind the attributes to vertex shaders
#define VERTEX_ARRAY 0
#define TEXCOORD_ARRAY 1
// Bit types
#define SGXPERF_RGB565 0
#define SGXPERF_ARGB8888 2
// SurfaceTypes
#define SGXPERF_SURFACE_TYPE_WINDOW 0
#define SGXPERF_SURFACE_TYPE_PIXMAP_16 1
#define SGXPERF_SURFACE_TYPE_PIXMAP_32 2
typedef struct _NATIVE_PIXMAP_STRUCT
{
    long pixelFormat;
    long rotation;
    long width;
    long height;
    long stride;
    long sizeInBytes;
    long pvAddress;
    long lAddress;
} NATIVE_PIXMAP_STRUCT;
// Init EGL with offscreen PIXMAP support
void* GLWidget::commonEglInit(int surfaceType, NATIVE_PIXMAP_STRUCT** pNativePixmapPtr)
{
    int windowWidthTi, windowHeightTi;
    EGLint iMajorVersion, iMinorVersion;
    EGLint ai32ContextAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };

    eglDisplay = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    if (!eglInitialize(eglDisplay, &iMajorVersion, &iMinorVersion))
        return NULL;
    if (!eglBindAPI(EGL_OPENGL_ES_API))
        return NULL;

    EGLint pi32ConfigAttribs[5];
    pi32ConfigAttribs[0] = EGL_SURFACE_TYPE;
    pi32ConfigAttribs[1] = EGL_WINDOW_BIT | EGL_PIXMAP_BIT;
    pi32ConfigAttribs[2] = EGL_RENDERABLE_TYPE;
    pi32ConfigAttribs[3] = EGL_OPENGL_ES2_BIT;
    pi32ConfigAttribs[4] = EGL_NONE;

    EGLint iConfigs;
    if (!eglChooseConfig(eglDisplay, pi32ConfigAttribs, &eglConfig, 1, &iConfigs) || (iConfigs != 1))
    {
        fprintf(stderr, "Error: eglChooseConfig() failed.\n");
        return NULL;
    }

    commonCreateNativePixmap(SGXPERF_ARGB8888, WIDTH, HEIGHT, pNativePixmapPtr);

    eglSurface = eglCreatePixmapSurface(eglDisplay, eglConfig,
                                        (EGLNativePixmapType)*pNativePixmapPtr, NULL);
    if (eglSurface == EGL_NO_SURFACE)
    {
        fprintf(stderr, "eglCreatePixmapSurface failed\n");
        return NULL;
    }

    eglContext = eglCreateContext(eglDisplay, eglConfig, EGL_NO_CONTEXT, ai32ContextAttribs);
    if (eglContext == EGL_NO_CONTEXT)
    {
        fprintf(stderr, "eglCreateContext failed\n");
        return NULL;
    }

    if (!eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext))
    {
        fprintf(stderr, "eglMakeCurrent failed\n");
        return NULL;
    }

    if (!eglSwapInterval(eglDisplay, 1))
    {
        fprintf(stderr, "eglSwapInterval failed\n");
        return NULL;
    }

    eglQuerySurface(eglDisplay, eglSurface, EGL_WIDTH, &windowWidthTi);
    eglQuerySurface(eglDisplay, eglSurface, EGL_HEIGHT, &windowHeightTi);
    fprintf(stderr, "Window width=%d, Height=%d\n", windowWidthTi, windowHeightTi);

    return (void*)(*pNativePixmapPtr)->lAddress;
}
On the OMAP 5 / Sitara AM57xx EVM with the SGX540 GPU, I've built and deployed the Processor SDK with the OpenGL libraries, cmemk.ko, and pvrsrvctl. I can successfully run the PVR OpenGL demos and they show up on the display. When I try to run my application on this new EVM, it always fails with:
Error: eglChooseConfig() failed.
Error creating EGL surface!
If I remove EGL_PIXMAP_BIT from pi32ConfigAttribs, it gets further.
Do the AM57xx OpenGL libraries not support PIXMAP surfaces? If they do, how can I get them to work? Thanks!

You should not be using EGL_PIXMAP_BIT. It requires the EGL implementation to provide surfaces in a format directly compatible with the OS's windowing system for off-screen image transfers. Use FBOs (framebuffer objects) for this instead.
Note that pixmaps are not the same thing as pixel buffers (pbuffers).
It looks like you are using TI's embedded Linux distribution, so pixmaps would have to be compatible with something like Qt, DirectFB, or X11. TI has never provided EGL drivers for OMAP that were well integrated with any specific windowing system's off-screen images. EGL_PIXMAP_BIT may have worked in the past with some specific windowing system, but not necessarily the one you are using. This article explains in more detail the differences between the various types of off-screen images for OpenGL ES:
Render to Texture with OpenGL ES

Related

PrintWindow prints with empty space

The width and height of the application "Melon" is 438 x 615 pixels, and the ::GetWindowRect() function reports it correctly.
However, the ::PrintWindow() function draws a smaller size, 348 x 489 pixels, with the rest filled with blank black (it may draw nothing there).
...one picture may be better than hundreds of words of description.
Here is the result of the code:
bool result = true;
HWND appHWnd = ::FindWindow(nullptr, TEXT("Melon"));
RECT appWindowRect;
::GetWindowRect(appHWnd, &appWindowRect);
HDC appDC = ::GetWindowDC(appHWnd);
// HDC appDC = ::GetDC(appHWnd); // same issue occurred either way
// HDC appDC = ::GetDC(nullptr);
HDC memoryDC = ::CreateCompatibleDC(appDC);
HBITMAP capturedScreenBitmap = ::CreateCompatibleBitmap(
    appDC,
    appWindowRect.right - appWindowRect.left,
    appWindowRect.bottom - appWindowRect.top
);
HBITMAP memoryBitmap = static_cast<HBITMAP>(::SelectObject(memoryDC, capturedScreenBitmap));
result = ::PrintWindow(appHWnd, memoryDC, 0);

// deselect the bitmap before handing it to the clipboard
::SelectObject(memoryDC, memoryBitmap);

// copy to clipboard; the clipboard takes ownership of the bitmap,
// so it must not be deleted afterwards
OpenClipboard(nullptr);
EmptyClipboard();
SetClipboardData(CF_BITMAP, capturedScreenBitmap);
CloseClipboard();

::DeleteDC(memoryDC);
::ReleaseDC(appHWnd, appDC);
Strangely, the C# version of the code works correctly. It imports the same user32 library and uses it the same way, yet produces a different result. Why?
It will be down to DPI awareness – David Heffernan
::GetWindowRect, as used in the C# project and the C++ console project in Visual Studio, isn't affected by DPI-awareness scaling, but as used in the Qt project it is.
Here is my solution:
RECT appWindowRect; {
    ::GetWindowRect(hwnd, &appWindowRect);
}
POINT appWindowSize; {
    qreal dotsPerInch = QApplication::screens().at(0)->logicalDotsPerInch();
    appWindowSize.x = static_cast<LONG>((appWindowRect.right - appWindowRect.left) * 96 / dotsPerInch);
    appWindowSize.y = static_cast<LONG>((appWindowRect.bottom - appWindowRect.top) * 96 / dotsPerInch);
}

SDL2: How to draw rectangles as quickly as possible?

Background
I am working on a rendering client that draws graphical information it receives from a server. The server sends packets containing non-overlapping rectangles with different solid colors at a frame rate variably defined on the server. I currently have it configured so that the size of the screen being transmitted by the server is different than the size of the window onto which the client is drawing, so scaling is done. I need the client to draw these rectangles as quickly as possible to not fall behind the server's stream.
Currently, I am using SDL 2.0. I am using the streaming-texture technique described in the SDL 2 Migration Guide to draw the rectangles onto an SDL_Surface. When the time to display a frame arrives, I call SDL_UpdateTexture() to overwrite the pixel data of an SDL_Texture, and then I use SDL_RenderCopyEx() to copy the texture to the renderer. I need this function instead of SDL_RenderCopy() so I can specify SDL_FLIP_VERTICAL to account for the fact that the coordinates passed are bitmap-style.
Question
My current approach does not render the rectangles quickly enough. To let the client keep up with the server, I currently have to reduce the server's upload rate from over 30 FPS to under 15 FPS. Even then, I have to make the socket's buffer dangerously large, and I end up watching the client's rendering slowly fall behind, eventually resulting in packet loss.
What is the fastest way to get SDL to render these rectangles? If I am currently using the fastest method, what other APIs would others recommend to make a client that can keep up?
I have included a stripped-down version of my source code so others can look for improvements/mistakes.
Technical Details
I am using C++11, MinGW32, and SDL2 with Eclipse Kepler CDT and GCC 4.8.2 on Windows 7 64-bit.
Stripped Code
int main(int argc, char** args) {
    // omitted initialization code
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* window = SDL_CreateWindow(
        "RTSC",
        SDL_WINDOWPOS_CENTERED,
        SDL_WINDOWPOS_CENTERED,
        windowWidth,
        windowHeight,
        SDL_WINDOW_SHOWN | SDL_WINDOW_RESIZABLE
    );
    SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, 0);
    SDL_Surface* surface = SDL_CreateRGBSurface(
        0,
        sourceWidth,
        sourceHeight,
        24,
        0xFF << 16,
        0xFF << 8,
        0xFF,
        0
    );
    SDL_FillRect(surface, nullptr, 0);
    SDL_Texture* texture = SDL_CreateTexture(
        renderer,
        surface->format->format,
        SDL_TEXTUREACCESS_STREAMING,
        sourceWidth,
        sourceHeight
    );
    bool running {true};
    while (running) {
        SDL_Event event;
        while (SDL_PollEvent(&event)) {
            switch (event.type) {
            case SDL_QUIT:
                running = false;
                break;
            case SDL_WINDOWEVENT:
                switch (event.window.event) { // note: the field is "window", not "windowevent"
                case SDL_WINDOWEVENT_CLOSE:
                    running = false;
                    break;
                default:
                    break;
                }
                break;
            default:
                break;
            }
        }
        // omitted packet reception and interpretation code
        for (uint32_t i {0}; i < receivedRegions; ++i) {
            Region& region = regions[i];
            SDL_Rect rect {
                (int) region.x,
                (int) region.y,
                (int) region.width,
                (int) region.height
            };
            uint32_t color =
                (region.red << 16) +
                (region.green << 8) +
                region.blue;
            SDL_FillRect(surface, &rect, color);
        }
        // omitted logic for determining whether to present the frame
        SDL_RenderClear(renderer);
        SDL_UpdateTexture(texture, nullptr, surface->pixels, surface->pitch);
        SDL_RenderCopyEx(
            renderer,
            texture,
            nullptr,
            nullptr,
            0,
            nullptr,
            SDL_FLIP_VERTICAL
        );
        SDL_RenderPresent(renderer);
        SDL_FillRect(surface, nullptr, 0);
    }
    // omitted clean-up and return code
}
This is embarrassing. Because of earlier instrumentation I had done on my server, I assumed the problem was with the SDL rendering client. However, it turns out the client slows down only when the server does. It has nothing to do with SDL at all. Sorry.

How to create a window and fill it with color using OpenGL ES 2.0 + X11?

I googled as hard as I could, but found nothing.
What I want to do:
create a window with X11 (Xlib) and show it
fill the window with color using OpenGL ES 2.0
For OpenGL ES 2.0 support on my Arch Linux system I use Mesa. I know how to create a simple X window using Xlib, and I have a basic knowledge of EGL and OpenGL ES, but I can't understand how to use all of them (X11 + EGL + OpenGL ES 2.0) in conjunction.
I would be very thankful if someone wrote at least a short code example showing how to prepare an X window, connect it with OpenGL ES 2.0, and start rendering.
Create Window:
Window root;
XSetWindowAttributes swa;
XSetWindowAttributes xattr;
Atom wm_state;
XWMHints hints;
XEvent xev;
EGLConfig ecfg;
EGLint num_config;
Window win;

/*
 * X11 native display initialization
 */
x_display = XOpenDisplay(NULL);
if (x_display == NULL)
{
    return EGL_FALSE;
}
root = DefaultRootWindow(x_display);

swa.event_mask = ExposureMask | PointerMotionMask | KeyPressMask;
win = XCreateWindow(
    x_display, root,
    0, 0, esContext->width, esContext->height, 0,
    CopyFromParent, InputOutput,
    CopyFromParent, CWEventMask,
    &swa);

xattr.override_redirect = False; // Xlib uses False/True, not FALSE/TRUE
XChangeWindowAttributes(x_display, win, CWOverrideRedirect, &xattr);

hints.input = True;
hints.flags = InputHint;
XSetWMHints(x_display, win, &hints);

// make the window visible on the screen
XMapWindow(x_display, win);
XStoreName(x_display, win, title);

// get identifiers for the provided atom name strings
wm_state = XInternAtom(x_display, "_NET_WM_STATE", False);

memset(&xev, 0, sizeof(xev));
xev.type = ClientMessage;
xev.xclient.window = win;
xev.xclient.message_type = wm_state;
xev.xclient.format = 32;
xev.xclient.data.l[0] = 1;
xev.xclient.data.l[1] = 0;
XSendEvent(
    x_display,
    DefaultRootWindow(x_display),
    False,
    SubstructureNotifyMask,
    &xev);
Set color:
glClearColor ( 0.0f, 0.0f, 0.0f, 0.0f );
// Set the viewport
glViewport ( 0, 0, esContext->width, esContext->height );
// Clear the color buffer
glClear ( GL_COLOR_BUFFER_BIT );
Sources:
https://github.com/danginsburg/opengles-book-samples/blob/master/LinuxX11/Chapter_2/Hello_Triangle/Hello_Triangle.c
https://github.com/danginsburg/opengles-book-samples/blob/master/LinuxX11/Common/esUtil.c
https://github.com/danginsburg/opengles-book-samples/blob/master/LinuxX11/Common/esUtil.h
The opengles-book-samples code actually fails for me. If you see the same init failures, a call to eglBindAPI(EGL_OPENGL_ES_API) seemed to fix it.

Grabbing the backbuffer with DirectX 7

I'm trying to write a small chunk of code to grab the backbuffer into an array of pixels. I've barely used DirectX before, as I'm more of an OpenGL fan.
My goal is to replace some code in a project that grabs the backbuffer using BitBlt and a DC, which is very slow.
This is supposed to work on all computers, which is why I chose DirectX 7.
My question is: how would I do that?
Thank you.
What I do is use a helper class to do the lock/unlock, as below. You then use it like so:
mBackBuffer->Flip(DDFLIP_WAIT);
{
    DDSURFACEDESC2 ddsd;
    ZeroMemory(&ddsd, sizeof(ddsd));
    ddsd.dwSize = sizeof(ddsd);
    ReadLock r(mBackBuffer, ddsd, NULL /* for whole surface */);
    if (r)
    {
        // ddsd.lpSurface contains the void* pointer to the bytes
        // ddsd.lPitch contains the byte count of each horizontal line
    }
} // ReadLock unlocks when it goes out of scope
class ReadLock
{
public:
    ReadLock(IDirectDrawSurface7* surface, DDSURFACEDESC2& ddsd, LPRECT pRect = 0)
        : hr(S_OK), mpRect(pRect), surface_(surface)
    {
        hr = surface_->Lock(mpRect, &ddsd,
            DDLOCK_SURFACEMEMORYPTR | DDLOCK_NOSYSLOCK | DDLOCK_WAIT | DDLOCK_READONLY, 0);
    }
    HRESULT getResult() const { return hr; }
    bool operator!() const { return FAILED(hr); }
    operator bool() const { return SUCCEEDED(hr); }
    ~ReadLock()
    {
        if (surface_ && SUCCEEDED(hr))
            surface_->Unlock(mpRect);
    }
private:
    HRESULT hr;
    RECT* mpRect;
    IDirectDrawSurface7* surface_;
};
TBH, DirectX 9 will work even with ancient cards. You don't have all the features available, but you have a shed-load more usable information out there. Although I think you might be a bit knackered on Win 95/98/Me and Win 2K support. Bear in mind NT4 never had a decent version of DirectX.
Alas, I don't have the DX7 docs anywhere handy, but I'm pretty sure you could just get the back-buffer surface and then lock it to get at the data. Bear in mind, though, just how slow grabbing the back buffer can be, especially on old cards: copying the back buffer from local video memory to system memory across the PCI or AGP bus is incredibly slow.
What exactly are you trying to achieve? There must be better ways of doing what you are after...

What is wrong with this code? (OpenAL in VC++)

Hi all, I need your help. I have this code:
#include <conio.h>
#include <stdlib.h>
#include <stdio.h>
#include <al.h>
#include <alc.h>
#include <alut.h>
#pragma comment(lib, "openal32.lib")
#pragma comment(lib, "alut.lib")
/*
* These are OpenAL "names" (or "objects"). They store and id of a buffer
* or a source object. Generally you would expect to see the implementation
* use values that scale up from '1', but don't count on it. The spec does
* not make this mandatory (as it is OpenGL). The id's can easily be memory
* pointers as well. It will depend on the implementation.
*/
// Buffers to hold sound data.
ALuint Buffer;
// Sources are points of emitting sound.
ALuint Source;
/*
* These are 3D cartesian vector coordinates. A structure or class would be
* a more flexible of handling these, but for the sake of simplicity we will
* just leave it as is.
*/
// Position of the source sound.
ALfloat SourcePos[] = { 0.0, 0.0, 0.0 };
// Velocity of the source sound.
ALfloat SourceVel[] = { 0.0, 0.0, 0.0 };
// Position of the Listener.
ALfloat ListenerPos[] = { 0.0, 0.0, 0.0 };
// Velocity of the Listener.
ALfloat ListenerVel[] = { 0.0, 0.0, 0.0 };
// Orientation of the Listener. (first 3 elements are "at", second 3 are "up")
// Also note that these should be units of '1'.
ALfloat ListenerOri[] = { 0.0, 0.0, -1.0, 0.0, 1.0, 0.0 };
/*
* ALboolean LoadALData()
*
* This function will load our sample data from the disk using the Alut
* utility and send the data into OpenAL as a buffer. A source is then
* also created to play that buffer.
*/
ALboolean LoadALData()
{
    // Variables to load into.
    ALenum format;
    ALsizei size;
    ALvoid* data;
    ALsizei freq;
    ALboolean loop;

    // Load wav data into a buffer.
    alGenBuffers(1, &Buffer);
    if (alGetError() != AL_NO_ERROR)
        return AL_FALSE;

    // Note: backslashes in string literals must be escaped.
    alutLoadWAVFile((ALbyte *)"C:\\Users\\Toshiba\\Desktop\\Graduation Project\\OpenAL\\open AL test\\wavdata\\FancyPants.wav",
                    &format, &data, &size, &freq, &loop);
    alBufferData(Buffer, format, data, size, freq);
    alutUnloadWAV(format, data, size, freq);

    // Bind the buffer with the source.
    alGenSources(1, &Source);
    if (alGetError() != AL_NO_ERROR)
        return AL_FALSE;

    alSourcei (Source, AL_BUFFER,   Buffer);
    alSourcef (Source, AL_PITCH,    1.0);
    alSourcef (Source, AL_GAIN,     1.0);
    alSourcefv(Source, AL_POSITION, SourcePos);
    alSourcefv(Source, AL_VELOCITY, SourceVel);
    alSourcei (Source, AL_LOOPING,  loop);

    // Do another error check and return.
    if (alGetError() == AL_NO_ERROR)
        return AL_TRUE;
    return AL_FALSE;
}
/*
* void SetListenerValues()
*
* We already defined certain values for the Listener, but we need
* to tell OpenAL to use that data. This function does just that.
*/
void SetListenerValues()
{
    alListenerfv(AL_POSITION, ListenerPos);
    alListenerfv(AL_VELOCITY, ListenerVel);
    alListenerfv(AL_ORIENTATION, ListenerOri);
}
/*
* void KillALData()
*
* We have allocated memory for our buffers and sources which needs
* to be returned to the system. This function frees that memory.
*/
void KillALData()
{
    alDeleteBuffers(1, &Buffer);
    alDeleteSources(1, &Source);
    alutExit();
}
int main(int argc, char *argv[])
{
    printf("MindCode's OpenAL Lesson 1: Single Static Source\n\n");
    printf("Controls:\n");
    printf("p) Play\n");
    printf("s) Stop\n");
    printf("h) Hold (pause)\n");
    printf("q) Quit\n\n");

    // Initialize OpenAL and clear the error bit.
    alutInit(NULL, 0);
    alGetError();

    // Load the wav data.
    if (LoadALData() == AL_FALSE)
    {
        printf("Error loading data.");
        return 0;
    }
    SetListenerValues();

    // Setup an exit procedure.
    atexit(KillALData);

    // Loop.
    ALubyte c = ' ';
    while (c != 'q')
    {
        c = getche();
        switch (c)
        {
        // Pressing 'p' will begin playing the sample.
        case 'p': alSourcePlay(Source); break;
        // Pressing 's' will stop the sample from playing.
        case 's': alSourceStop(Source); break;
        // Pressing 'h' will pause the sample.
        case 'h': alSourcePause(Source); break;
        }
    }
    return 0;
}
It runs, but I can't hear anything.
I am also new to programming and want to build virtual-reality sound for my graduation project, so I have started to learn OpenAL and VC++, but I don't know how or where to begin.
I also want to ask: do I need to learn the Win32 API, and if so, how can I learn it?
Thank you a lot, and sorry for my English.
I recently tried running this very same sample source code from devmaster.net as well. Make sure to change the #includes to reflect the appropriate paths to your OpenAL header files. If you are using C++ and not C, you will need to change #include <conio.h> to be #include <iostream> and use getchar() instead of getche().
Also, I've discovered that the version of alut.h that I have declares the alutLoadWAVFile function to only accept 5 parameters, not 6. The 6th parameter in this example (the loop variable) makes for too many arguments to the function (at least in my version of OpenAL).
{Edited note: Okay, I just discovered that apparently, the Windows version of OpenAL requires the 6th boolean loop parameter, while the Mac version of OpenAL does not.}
Hope this helps.
