This "Pitch Transposing" test code is not working with AudioUnit samplerUnit.
Sending the same MIDI messages thru MIDI Out to an external sound module, it works perfect. What do I wrong. I can't use "Pitch Bend" for this case. Is the controller "0x06" eventually not implemented in the API?
- (void) demoTranspose {
[at0 setInstrument :0x11 :0x00];
[at0 midiEvent :0xB0 :0x65 :0x00]; // RPN MSB = 0x00
[at0 midiEvent :0xB0 :0x64 :0x02]; // RPN LSB = 0x02 -> RPN 0x0002 (coarse tuning)
[at0 midiEvent :0x90 :0x3C :0x7F]; // Note ON
usleep(1000000);
[at0 midiEvent :0xB0 :0x06 :0x40+6]; // data entry: 0x40 = center, +6 semitones higher
usleep(1000000);
[at0 midiEvent :0xB0 :0x06 :0x40+12]; // data entry: 0x40 = center, +12 semitones higher
usleep(1000000);
[at0 midiEvent :0x90 :0x3C :0x00]; // Note OFF
[at0 midiEvent :0xB0 :0x06 :0x40]; // reset to center
}
- (void) midiEvent :(Byte)statusInfoA :(Byte)param1A :(Byte)param2A {
MusicDeviceMIDIEvent (self.samplerUnit, statusInfoA, param1A, param2A, 0);
}
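In case the AUSampler simply ignores the RPN/data-entry route, one alternative worth trying is the sampler's global coarse-tuning parameter. This is an untested sketch; it assumes the unit behind samplerUnit is Apple's AUSampler, which exposes kAUSamplerParam_CoarseTuning (in semitones), and setCoarseTuning is just a helper name:
#include <AudioToolbox/AudioToolbox.h>

// Untested: transpose the whole sampler via the global AUSampler parameter
// instead of the MIDI RPN coarse-tuning messages.
static OSStatus setCoarseTuning(AudioUnit samplerUnit, Float32 semitones) {
    return AudioUnitSetParameter(samplerUnit,
                                 kAUSamplerParam_CoarseTuning,
                                 kAudioUnitScope_Global,
                                 0,          // element
                                 semitones,  // e.g. +6.0f
                                 0);         // buffer offset in frames
}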
I'm an experienced programmer specializing in computer graphics, mainly using Direct3D 9.0c, OpenGL and general algorithms. Currently I am evaluating Direct2D as the rendering technology for a professional application dealing with medical image data. The target is an x64 desktop application running in windowed mode (not fullscreen).
Already in my very first steps I'm struggling with a task I thought would be a no-brainer: rendering a single-channel bitmap on screen.
Running on a Windows 8.1 machine, I create an ID2D1DeviceContext with a Direct3D swap chain buffer surface as render target. The swap chain is created from a HWND and buffer format DXGI_FORMAT_B8G8R8A8_UNORM. Note: See also the code snippets at the end.
Afterwards, I create a bitmap with pixel format DXGI_FORMAT_R8_UNORM and alpha mode D2D1_ALPHA_MODE_IGNORE. When calling DrawBitmap(...) on the device context, a debug breakpoint is triggered with the debug message "D2D DEBUG ERROR - This operation is not compatible with the pixel format of the bitmap".
I know that this output is quite clear. And indeed, when I change the pixel format to DXGI_FORMAT_R8G8B8A8_UNORM with D2D1_ALPHA_MODE_IGNORE, everything works well and I see the bitmap rendered. However, I simply cannot believe that! Graphics cards have supported single-channel textures for ages; every 3D graphics application can use them without thinking twice. This should go without saying.
I tried to find anything here and on Google, without success. The only hint I could find was the MSDN Direct2D page listing the supported pixel formats. The documentation suggests, by not mentioning it, that DXGI_FORMAT_R8_UNORM is indeed not supported as a bitmap format. I also found posts talking about alpha masks (using DXGI_FORMAT_A8_UNORM), but that's not what I'm after.
What am I missing that I can't convince Direct2D to create and draw a grayscale bitmap? Or is it really true that Direct2D doesn't support drawing R8 or R16 bitmaps?
Any help is really appreciated, as I don't know how to solve this otherwise. If I can't get these trivial basics to work, I think I'll have to stop digging deeper into Direct2D :-(.
And here are the relevant code snippets. Please note that they might not compile, since I ported them on the fly from my C++/CLI code to plain C++. I also stripped all error checking and other noise:
Device, Device Context and Swap Chain Creation (D3D and Direct2D):
// Direct2D factory creation
D2D1_FACTORY_OPTIONS options = {};
options.debugLevel = D2D1_DEBUG_LEVEL_INFORMATION;
ID2D1Factory1* d2dFactory;
D2D1CreateFactory(D2D1_FACTORY_TYPE_MULTI_THREADED, options, &d2dFactory);
// Direct3D device creation
const auto type = D3D_DRIVER_TYPE_HARDWARE;
const auto flags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;
ID3D11Device* d3dDevice;
D3D11CreateDevice(nullptr, type, nullptr, flags, nullptr, 0, D3D11_SDK_VERSION, &d3dDevice, nullptr, nullptr);
// Direct2D device creation
IDXGIDevice* dxgiDevice;
d3dDevice->QueryInterface(__uuidof(IDXGIDevice), reinterpret_cast<void**>(&dxgiDevice));
ID2D1Device* d2dDevice;
d2dFactory->CreateDevice(dxgiDevice, &d2dDevice);
// Swap chain creation
DXGI_SWAP_CHAIN_DESC1 desc = {};
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.BufferCount = 2;
IDXGIAdapter* dxgiAdapter;
dxgiDevice->GetAdapter(&dxgiAdapter);
IDXGIFactory2* dxgiFactory;
dxgiAdapter->GetParent(__uuidof(IDXGIFactory2), reinterpret_cast<void **>(&dxgiFactory));
IDXGISwapChain1* swapChain;
dxgiFactory->CreateSwapChainForHwnd(d3dDevice, hwnd, &desc, nullptr, nullptr, &swapChain);
// Direct2D device context creation
const auto deviceContextOptions = D2D1_DEVICE_CONTEXT_OPTIONS_NONE;
ID2D1DeviceContext* deviceContext;
d2dDevice->CreateDeviceContext(deviceContextOptions, &deviceContext);
// create render target bitmap from swap chain
IDXGISurface* swapChainSurface;
swapChain->GetBuffer(0, __uuidof(swapChainSurface), reinterpret_cast<void **>(&swapChainSurface));
D2D1_BITMAP_PROPERTIES1 bitmapProperties;
bitmapProperties.dpiX = 0.0f;
bitmapProperties.dpiY = 0.0f;
bitmapProperties.bitmapOptions = D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW;
bitmapProperties.pixelFormat.format = DXGI_FORMAT_B8G8R8A8_UNORM;
bitmapProperties.pixelFormat.alphaMode = D2D1_ALPHA_MODE_IGNORE;
bitmapProperties.colorContext = nullptr;
ID2D1Bitmap1* swapChainBitmap = nullptr;
deviceContext->CreateBitmapFromDxgiSurface(swapChainSurface, &bitmapProperties, &swapChainBitmap);
// set swap chain bitmap as render target of D2D device context
deviceContext->SetTarget(swapChainBitmap);
D2D single-channel Bitmap Creation:
const D2D1_SIZE_U size = { 512, 512 };
const UINT32 pitch = 512;
D2D1_BITMAP_PROPERTIES1 d2dProperties;
ZeroMemory(&d2dProperties, sizeof(D2D1_BITMAP_PROPERTIES1));
d2dProperties.pixelFormat.alphaMode = D2D1_ALPHA_MODE_IGNORE;
d2dProperties.pixelFormat.format = DXGI_FORMAT_R8_UNORM;
char* sourceData = new char[512*512];
ID2D1Bitmap1* d2dBitmap;
deviceContext->CreateBitmap(size, sourceData, pitch, &d2dProperties, &d2dBitmap);
Bitmap drawing (FAILING):
deviceContext->BeginDraw();
D2D1_COLOR_F d2dColor = {};
deviceContext->Clear(d2dColor);
// THIS LINE FAILS WITH THE DEBUG BREAKPOINT IF SINGLE CHANNELED
deviceContext->DrawBitmap(d2dBitmap, nullptr, 1.0f, D2D1_INTERPOLATION_MODE_LINEAR, nullptr);
deviceContext->EndDraw();
swapChain->Present(1, 0);
From my limited experience, Direct2D does indeed seem very restricted.
Have you tried Direct2D effects (ID2D1Effect)? You can write your own (which seems comparatively complicated) or use one of the built-in effects (which is rather simple).
There is one called the Color matrix effect (CLSID_D2D1ColorMatrix). It might work to feed your DXGI_FORMAT_R8_UNORM bitmap (or DXGI_FORMAT_A8_UNORM; any single-channel format would do) as input: inputs to effects are ID2D1Image, and ID2D1Bitmap inherits from ID2D1Image. Then set D2D1_COLORMATRIX_PROP_COLOR_MATRIX to copy the input channel to all output channels. I have not tried it, though.
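To make that suggestion concrete, here is an untested sketch of what the effect setup might look like, reusing deviceContext and the R8 bitmap d2dBitmap from the question. The rows of the 5x4 matrix correspond to the R, G, B and A input channels plus a constant offset; the red (single) channel is broadcast to R, G and B, and alpha is forced to 1:
#include <d2d1_1.h>
#include <d2d1_1helper.h>
#include <d2d1effects.h>

ID2D1Effect* colorMatrixEffect = nullptr;
deviceContext->CreateEffect(CLSID_D2D1ColorMatrix, &colorMatrixEffect);

const D2D1_MATRIX_5X4_F matrix = D2D1::Matrix5x4F(
    1.0f, 1.0f, 1.0f, 0.0f,   // red input   -> R, G, B outputs
    0.0f, 0.0f, 0.0f, 0.0f,   // green input (unused for an R8 bitmap)
    0.0f, 0.0f, 0.0f, 0.0f,   // blue input  (unused)
    0.0f, 0.0f, 0.0f, 0.0f,   // alpha input (unused)
    0.0f, 0.0f, 0.0f, 1.0f);  // constant offset (alpha = 1)
colorMatrixEffect->SetValue(D2D1_COLORMATRIX_PROP_COLOR_MATRIX, matrix);
colorMatrixEffect->SetInput(0, d2dBitmap);

deviceContext->BeginDraw();
deviceContext->DrawImage(colorMatrixEffect);
deviceContext->EndDraw();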
I'm trying to build a Qt 5.6 project in MSVS 2013 Express (I wrote all the code under Qt Creator on Linux). In Visual Studio I could only build it in Release mode, and there it works fine. Then I used the windeployqt.exe utility to create a deployment pack. I also added assimp32.dll (I use it for model loading).
And everything works fine, except the PIXEL_BUFFER functionality (I draw some stuff to a texture in an additional framebuffer, analyse the drawing result, prepare another texture and push it for drawing).
I get some errors in Dependency Walker (msvcr90.dll, Dcomp.dll, API-MS-WIN-CORE-*.dll) even though I've installed every MS redistributable in existence.
Here is the code I'm trying to use:
void AUVGBO::PrepareGBO(QOpenGLShaderProgram *shader) {
if (mEnabled) {
this->PrepareRender(shader, AUVCamera::PR_PROJECTION | AUVCamera::PR_VIEW | AUVCamera::PR_VIEW_POS | AUVCamera::PR_LIGHT | AUVCamera::PR_LIGHT_POS );
// Attach framebuffer for intermediate rendering
m_GL->glBindFramebuffer(GL_FRAMEBUFFER, mFBO);
m_GL->glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
m_GL->glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
m_GL->glViewport(0, 0, mGBO_Width, mGBO_Height);
}
}
void AUVGBO::FinishGBO(QOpenGLShaderProgram *shader) {
// Read pixels from FBO texture
GLuint indexAsync = mPBO_inIndex;
GLuint indexSync = (mPBO_inIndex + 1) % PBO_NUM;
// Bind pixel buffer for asynchronous reading from framebuffer
m_GL->glReadBuffer(GL_COLOR_ATTACHMENT0);
m_GL->glBindBuffer(GL_PIXEL_PACK_BUFFER, mPBO_in[indexAsync]);
m_GL->glReadPixels(0, 0, mGBO_Width, mGBO_Height, GL_RGB, GL_FLOAT, 0); // This call will be async
if (firstAsyncCalls && ( indexSync != 0 )) {
mPBO_inIndex = indexSync;
return;
} else {
firstAsyncCalls = false;
}
// Bind pixel buffer which already has data fetched one step ago
m_GL->glBindBuffer(GL_PIXEL_PACK_BUFFER, mPBO_in[indexSync]);
void* mapped = m_GL->glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY); // read-only mapping is sufficient here
if (mapped) { // glMapBuffer returns NULL on failure
    memcpy(mGBO_Pixels, mapped, mGBO_Size * 3 * sizeof(GLfloat));
}
m_GL->glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
m_GL->glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
// Swap buffer for sync/async readback
mPBO_inIndex = indexSync;
m_GL->glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
So now every pixel in the mGBO_Size * 3 * sizeof(GLfloat) bytes starting at mGBO_Pixels is black, but it shouldn't be. For testing I put the value 0.12345f into the third channel of every pixel:
color = vec4(cos, length(r), 0.12345f, 1.0f);
but *(mGBO_Pixels + 2) is 0.0 when I run the deployed exe. In Visual Studio everything is OK, as I already said (*(mGBO_Pixels + 2) == 0.12345f).
I found some SO answers where people said it could be a Qt OpenGL bug (their applications crash in the initializeGLContext() stuff), but in my case pushing the created texture to OpenGL works fine, so I guess I've made a mistake somewhere. It drives me crazy. I wish Sarah and John Connor had sent the T-800 to Microsoft instead of Cyberdyne.
P.S. I create the release build using CMake on my Arch Linux machine and everything works like an Avtomat Kalashnikova, so at least on Linux the Qt OpenGL stuff works. If I find enough time I will try CMake with MinGW on Windows.
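For reference, the allocation of the two pack buffers is not shown above. Below is a minimal sketch of what I assume it looks like; the names mPBO_in, PBO_NUM, mGBO_Size and the m_GL wrapper are taken from the snippet, and the method name CreatePBOs is made up:
void AUVGBO::CreatePBOs() {
    m_GL->glGenBuffers(PBO_NUM, mPBO_in);
    for (int i = 0; i < PBO_NUM; ++i) {
        m_GL->glBindBuffer(GL_PIXEL_PACK_BUFFER, mPBO_in[i]);
        // GL_STREAM_READ: written by the GL (glReadPixels), read back by the CPU
        m_GL->glBufferData(GL_PIXEL_PACK_BUFFER,
                           mGBO_Size * 3 * sizeof(GLfloat),
                           nullptr, GL_STREAM_READ);
    }
    m_GL->glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}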
I am trying to use the Tango device to capture HDR images, but no matter how I set the Tango config ISO and exposure settings, there is no apparent change in the image.
I am disabling the auto-exposure and auto-white-balance and setting manual values for the ISO and exposure time. Regardless of my settings, the colour camera images returned from onFrameAvailable always seem to be in auto mode. The measured average RGB of a given scene is the same whether I set the ISO to 100, 200, 400 or 800, and the exposure to 11.1 ms or to 2, 8 or 1/2 times that amount. It seems to still be in auto mode, because when I point the device towards a bright window, the window appears pure white for a second, then the brightness drops and I can see what is outside the window.
My Yellowstone tablet is up to date (KOT49H.150731) and I have the Turing release of the client API. I am using the C API with an app that is basically a combination of the example programs for motion tracking, depth, and augmented reality. Is the following code supposed to work?
const bool autoExposure = false;
const int32_t iso = 800;
const double exposure = 11.1*2.0; // milliseconds
if ( TangoConfig_setBool( config_, "config_color_mode_auto", autoExposure) != TANGO_SUCCESS) {
LOGE("config_color_mode_auto Failed");
return false;
}
if ( TangoConfig_setInt32(config_ , "config_color_iso", iso) != TANGO_SUCCESS) {
LOGE("config_color_iso Failed");
return false;
}
if ( TangoConfig_setInt32(config_ , "config_color_exp", (int32_t)::floor(exposure*1e6)) != TANGO_SUCCESS) {
LOGE("config_color_exp Failed");
return false;
}
bool verifyAutoExposureState;
int32_t verifyIso, verifyExp;
TangoConfig_getBool( config_, "config_color_mode_auto", &verifyAutoExposureState );
TangoConfig_getInt32( config_, "config_color_iso", &verifyIso );
TangoConfig_getInt32( config_, "config_color_exp", &verifyExp );
LOGE( "config_colour autoExposure=%s %d %d", verifyAutoExposureState?"On" : "Off", verifyIso, verifyExp );
The reason to use the Tango API for capturing HDR on Android instead of going through the Android API is to get pose estimates along with the images.
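One thing worth double-checking (hedged, since I have not verified this on a device): the keys have to be set on the TangoConfig instance that is actually passed to TangoService_connect, before connecting. A minimal ordering sketch using the same keys as above:
// Sketch only: set the manual-exposure keys on the config that is later
// handed to TangoService_connect (values mirror the question).
TangoConfig config = TangoService_getConfig(TANGO_CONFIG_DEFAULT);
TangoConfig_setBool(config, "config_color_mode_auto", false);
TangoConfig_setInt32(config, "config_color_iso", 800);
TangoConfig_setInt32(config, "config_color_exp", 22200000); // 22.2 ms in ns, as computed in the question
// ... register onFrameAvailable and other callbacks here ...
TangoService_connect(NULL /* context */, config);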
Hi everyone. For the past two days I have been trying to solve the problem I'm having, without success.
I want to make a game where the hero jumps on platforms and goes up; it is a Doodle Jump type of game.
Here is the code.
I make the world node:
SKNode *world;
Then the hero
SKSpriteNode *hero;
// setting the physicsBody
// add categoryBitMask = heroCat;
// add contactTestBitMask = platformCat;
// add collisionBitMask = 0;
The platform
SKSpriteNode *platform;
// setting the physicsBody
// add categoryBitMask = platformCat;
// add collisionBitMask = 0;
In didBeginContact
// Only bounce the hero if he's falling
if (hero.physicsBody.velocity.dy < 0){
hero.physicsBody.velocity = CGVectorMake(hero.physicsBody.velocity.dx, 500.0f);
}
In didSimulatePhysics
// if the game is started and the hero position is > 0 and hero position > the new max hero y
if(self.isStarted && hero.position.y > 0 && hero.position.y > _maxHeroY){
CGPoint positionInScene = [self convertPoint:hero.position fromNode:hero.parent];
world.position = CGPointMake(world.position.x, world.position.y -positionInScene.y);
}
So far so good. (In didSimulatePhysics I also generate a new platform at a random Y, with a fixed X offset from the previous platform.) But I notice that sometimes the hero can bounce when:
the hero goes up and the world position changes
the hero is falling
he is not above the platform but his body still makes contact; this is wrong, because he should only bounce when his feet are above the platform
So I changed didBeginContact:
// hero position in the world
CGPoint positionInScene = [self convertPoint:hero.position fromNode:hero.parent];
// platform position in the world
CGPoint positionInSceneP = [self convertPoint:node.position fromNode:node.parent];
// get the position of the bottom edge of the hero (the feet)
CGFloat heroBottomEdge = positionInScene.y - hero.frame.size.height/2;
// get the position of the top edge of the platform
CGFloat platformTopEdge = positionInSceneP.y + node.frame.size.height/2;
// Only bounce the hero if he's falling and the bottom edge of the hero node (the feet) is above or level with the platform's top edge
if (hero.physicsBody.velocity.dy < 0 && heroBottomEdge >= platformTopEdge){
hero.physicsBody.velocity = CGVectorMake(hero.physicsBody.velocity.dx, 500.0f);
}
The problem is that sometimes platformTopEdge is bigger than heroBottomEdge and the hero passes through the platform without bouncing.
I also tried with just hero.position.y and node.position.y, but the effect is the same.
When I add NSOpenGLProfileVersion3_2Core to the attributes, the pixelformat variable is nil, but when I remove it the pixel format gets allocated. I don't get what the problem is :(
GLuint attributes[] = {
NSOpenGLProfileVersionLegacy,
NSOpenGLProfileVersion3_2Core,
NSOpenGLPFAWindow,
NSOpenGLPFAColorSize, 24,
NSOpenGLPFAAlphaSize, 8,
NSOpenGLPFAAccelerated,
NSOpenGLPFADoubleBuffer,
0
};
_pixelformat = [[NSOpenGLPixelFormat alloc]
initWithAttributes:
(NSOpenGLPixelFormatAttribute *) attributes];
if (_pixelformat == nil){
NSLog(#"No valid OpenGL pixel format");
exit(0);
}
NSLog(#"Have a valid pixel format");
The result is "No valid OpenGL pixel format."
genpfault has the right idea, but neither of those attributes is valid on its own. That is to say, they are not boolean attributes/flags.
You need to match the constant with an appropriate attribute name.
Replace this code:
GLuint attributes[] = {
NSOpenGLProfileVersionLegacy,
NSOpenGLProfileVersion3_2Core,
[...]
With this:
NSOpenGLPixelFormatAttribute attributes[] = {
NSOpenGLPFAOpenGLProfile, (NSOpenGLPixelFormatAttribute)NSOpenGLProfileVersion3_2Core,
[...]
I also took the liberty of correcting your use of typedefs. NSOpenGLPixelFormatAttribute is defined as uint32_t, while OpenGL only requires that GLuint be an unsigned integer type at least 32 bits wide. OpenGL does not forbid GLuint from being implemented using something like uint64_t in the future.
Use the correct API-defined typedef whenever possible.
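For reference, a complete attribute list along these lines might look like the untested sketch below (NSOpenGLPFAWindow is deprecated, so it is omitted here):
// Sketch: request the Core profile via the NSOpenGLPFAOpenGLProfile key/value pair
NSOpenGLPixelFormatAttribute attributes[] = {
    NSOpenGLPFAOpenGLProfile, (NSOpenGLPixelFormatAttribute)NSOpenGLProfileVersion3_2Core,
    NSOpenGLPFAColorSize,     24,
    NSOpenGLPFAAlphaSize,     8,
    NSOpenGLPFAAccelerated,
    NSOpenGLPFADoubleBuffer,
    0
};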
Solved: I also needed to remove the NSOpenGLPFAWindow attribute.
NSOpenGLProfileVersionLegacy,
NSOpenGLProfileVersion3_2Core,
Pick one. You can't have a context that is somehow both Core and not-Core at the same time.