When I add NSOpenGLProfileVersion3_2Core to the attributes, the pixel format variable is nil, but when I remove it the pixel format gets allocated. I don't understand what the problem is. :(
GLuint attributes[] = {
    NSOpenGLProfileVersionLegacy,
    NSOpenGLProfileVersion3_2Core,
    NSOpenGLPFAWindow,
    NSOpenGLPFAColorSize, 24,
    NSOpenGLPFAAlphaSize, 8,
    NSOpenGLPFAAccelerated,
    NSOpenGLPFADoubleBuffer,
    0
};

_pixelformat = [[NSOpenGLPixelFormat alloc]
                initWithAttributes:(NSOpenGLPixelFormatAttribute *) attributes];
if (_pixelformat == nil) {
    NSLog(@"No valid OpenGL pixel format");
    exit(0);
}
NSLog(@"Have a valid pixel format");
The result is "No valid OpenGL pixel format."
genpfault has the right idea, but neither of those attributes is valid on its own. That is to say, they are not boolean attributes/flags.
You need to match the constant with an appropriate attribute name.
Replace this code:

GLuint attributes[] = {
    NSOpenGLProfileVersionLegacy,
    NSOpenGLProfileVersion3_2Core,
    [...]

With this:

NSOpenGLPixelFormatAttribute attributes[] = {
    NSOpenGLPFAOpenGLProfile, (NSOpenGLPixelFormatAttribute)NSOpenGLProfileVersion3_2Core,
    [...]
I also took the liberty of correcting your use of typedefs. NSOpenGLPixelFormatAttribute is defined as uint32_t, while OpenGL requires GLuint be an unsigned integer type at least 32-bits wide. OpenGL does not forbid GLuint from being implemented using something like uint64_t in the future.
Use the correct API-defined typedef whenever possible.
Solved: I needed to remove the NSOpenGLPFAWindow attribute.
NSOpenGLProfileVersionLegacy,
NSOpenGLProfileVersion3_2Core,
Pick one. You can't have a context that is somehow both Core and not-Core at the same time.
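Putting both corrections together, the attribute list that ends up working should look roughly like this (a sketch: a single profile constant paired with NSOpenGLPFAOpenGLProfile, and no NSOpenGLPFAWindow):

NSOpenGLPixelFormatAttribute attributes[] = {
    NSOpenGLPFAOpenGLProfile, (NSOpenGLPixelFormatAttribute) NSOpenGLProfileVersion3_2Core,
    NSOpenGLPFAColorSize, 24,
    NSOpenGLPFAAlphaSize, 8,
    NSOpenGLPFAAccelerated,
    NSOpenGLPFADoubleBuffer,
    0
};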
I'm using Xcode 6.3.1 and Swift.
When a function with multiple parameters gets an error on a parameter type, it's hard to know which argument is wrong.
For example, with CGBitmapContextCreate(), this code:
let colorSpace:CGColorSpace = CGColorSpaceCreateDeviceRGB()
let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.PremultipliedLast.rawValue)
let context = CGBitmapContextCreate(nil, UInt(size.width), UInt(size.height), 8, 0, colorSpace, bitmapInfo)
will produce an error like this:
MyFile.swift:23:19: Cannot invoke 'CGBitmapContextCreate' with an argument list of type '(nil, UInt, UInt, Int, Int, CGColorSpace, CGBitmapInfo)'
By comparing the documentation and my argument list carefully, I can find that it is the 2nd and 3rd arguments that are wrong; they should be Int.
Is there any way to make the compiler smarter about this?
The problem is probably that the online documentation you are looking at for CGBitmapContextCreate doesn't match the definition you are actually compiling against.
The last parameter needs to be of type UInt32, but the CGBitmapInfo initializer returns a CGBitmapInfo value. That is why the compiler is erring: you are passing in the wrong type of parameter. You can even right-click the function and choose "See definition"; this will verify what I am saying.
Try instead passing in CGImageAlphaInfo.PremultipliedLast.rawValue directly, as it is the UInt32 that is being looked for.
Example Solution:
let colorSpace:CGColorSpace = CGColorSpaceCreateDeviceRGB()
let bitmapInfo = CGImageAlphaInfo.PremultipliedLast.rawValue
let context = CGBitmapContextCreate(nil, UInt(size.width), UInt(size.height), 8, 0, colorSpace, bitmapInfo)
You'll find that you will be able to compile the source and get the expected result. Note that you can still apply any bitwise operations you want to this value.
PS: I had the same issue you were having, and was amply frustrated when I couldn't find a solution.
It works!
let width = CGImageGetWidth(image)
let height = CGImageGetHeight(image)
let colorSpace = CGColorSpaceCreateDeviceRGB()
let bytesPerRow = 4 * width
let bitsPerComponent: Int = 8
let pixels = UnsafeMutablePointer<UInt8>(malloc(width*height*4))
var context = CGBitmapContextCreate(pixels, width, height, bitsPerComponent, bytesPerRow, colorSpace, CGBitmapInfo())
Pango's markup syntax supports some text-only styling. As far as I can see this does not extend to embedding images as well.
Looking around I cannot find much in the way of an existing implementation, but I haven't done Pango+Cairo work before, so I might be missing the obvious community for it.
As far as I can tell, a reasonable approach would be to just analyse a string, pull out any tags, create cairo images, and then modify the pango layout around them accordingly.
It also seems like something someone might have done before.
I'm specifically looking for answers to these questions:
Does Pango+Cairo already solve this and I have just misread the docs?
Has something like this been done before, and where is a reference?
Is this a reasonable approach, or should I try something else, and what?
(Also note I am using Ruby, so that may affect my options.)
I've been through the source of the markup parser and it does not allow for "shape" attributes (the way Pango almost incorporates graphics) but it is possible to do it "by hand".
Since there is absolutely no example code on the Web, here's Pango/Cairo/Images 101.
For a simple demo, I created an 800x400 window, added a GtkDrawingArea and connected up the "draw" signal. Before entering the main program loop, I initialized it with the following code:
PangoLayout *Pango;

void init_drawingArea (GtkWidget *pWidget)
{
    /* Load the image and describe its size in Pango units */
    cairo_surface_t *pImg = cairo_image_surface_create_from_png ("linux.png");
    PangoRectangle r = {0, 0, PANGO_SCALE * cairo_image_surface_get_width (pImg),
                              PANGO_SCALE * cairo_image_surface_get_height(pImg)};

    PangoContext *ctxt = gtk_widget_get_pango_context (pWidget);
    PangoAttrList *attList = pango_attr_list_new();
    PangoAttribute *attr;

    Pango = pango_layout_new (ctxt);
    pango_cairo_context_set_shape_renderer (ctxt, render, NULL, NULL);
    pango_layout_set_text (Pango, pszLorem, -1);   /* pszLorem: some lorem ipsum text */
    pango_layout_set_width(Pango, PANGO_SCALE * 800);

    /* Attach the image as a shape attribute covering a single character */
    attr = pango_attr_shape_new_with_data(&r, &r, pImg, NULL, NULL);
    attr->start_index = 0; attr->end_index = 1;
    pango_attr_list_insert (attList, attr);

    attr = pango_attr_shape_new_with_data(&r, &r, pImg, NULL, NULL);
    attr->start_index = 152; attr->end_index = 153;
    pango_attr_list_insert (attList, attr);

    pango_layout_set_attributes (Pango, attList);
}
The context's shape renderer is set to render () and a PangoLayout is created and initialized. It then creates 2 shape attributes, sets the user data to a cairo surface which we populate from a png file and applies the attributes to characters 0 and 152 of the text.
The "draw" signal processing is straightforward.
gboolean onDraw (GtkWidget *pWidget, cairo_t *cr, gpointer user_data)
{
    pango_cairo_show_layout (cr, Pango);
    return 1;
}
and the render () PangoCairoShapeRendererFunc function is called as needed:
void render (cairo_t *cr, PangoAttrShape *pShape, gboolean do_path, gpointer data)
{
    cairo_surface_t *img = (cairo_surface_t *)pShape->data;
    double dx, dy;

    cairo_get_current_point(cr, &dx, &dy);
    cairo_set_source_surface(cr, img, dx, dy);
    cairo_rectangle (cr, dx, dy, pShape->ink_rect.width/PANGO_SCALE,
                                 pShape->ink_rect.height/PANGO_SCALE);
    cairo_fill(cr);
}
Taking the current point from cairo, it draws a rectangle and fills it with the image.
And that's pretty much all it does. Images were added as an afterthought and it shows. They are subject to the same rules as any other glyph so they are limited to the equivalent of CSS's display: inline.
I've put the code up at http://immortalsofar.com/PangoDemo/ if anyone wants to play with it. Me, I arrived here trying to get around GtkTextBuffer's limitations. Guess I'll just have to go deeper.
I have a question regarding using VTCompressionSession in VideoToolbox on iOS 8.
My VTCompressionSession was set up without returning an error code. However, when I try to encode a frame using VTCompressionSessionEncodeFrame, I get an OSStatus error code of -12780. I cannot find any definition or explanation of that error code. Does anyone have any ideas?
Here is a snippet of my code:
CVPixelBufferRef input_frame_buffer_;

// The three planes of a YUV 4:2:0 planar frame: Y, Cb, Cr.
void* plane_ptrs[3] = {frame->channel[0], frame->channel[1], frame->channel[2]};
size_t plane_widths[3] = {frame->header.width,
                          frame->header.width / 2, frame->header.width / 2};
size_t plane_heights[3] = {frame->header.height,
                           frame->header.height / 2, frame->header.height / 2};
size_t plane_bytes_per_row[3] = {frame->stride[0], frame->stride[1], frame->stride[2]};

CVReturn pixel_buffer_status =
    CVPixelBufferCreateWithPlanarBytes(kCFAllocatorDefault,
                                       frame->header.width,
                                       frame->header.height,
                                       (OSType) kCVPixelFormatType_420YpCbCr8Planar,
                                       frame->channel[0],
                                       0,
                                       3,
                                       plane_ptrs,
                                       plane_widths,
                                       plane_heights,
                                       plane_bytes_per_row,
                                       &MyPixelBufferReleaseCallback,
                                       (void*)this,
                                       nullptr,
                                       &input_frame_buffer_);
if (pixel_buffer_status != kCVReturnSuccess) {
    logger_->Error("CVPixelBufferCreateWithPlanarBytes failed.");
    return;
}

VTEncodeInfoFlags info_flags;
OSStatus status = VTCompressionSessionEncodeFrame(session_,
                                                  input_frame_buffer_, frame->header.pts,
                                                  kCMTimeInvalid, nullptr,
                                                  mySourceFrameRefCon,
                                                  &info_flags);
I had the same problem before. You can follow the steps below:
// 1. Get the SPS and PPS from the stream data
// 2. Create a CMFormatDescription from them
// 3. Create a VTDecompressionSession
// 4. Get the NAL unit payload into a CMBlockBuffer,
// 5.    making sure to replace the separator code with a 4-byte length code (the length of the NAL unit including the unit code)
// 6. Create a CMSampleBuffer
// 7. Call VTDecompressionSessionDecodeFrame
// 8. Use the VideoToolbox decompress-frame callback to get the CVImageBufferRef
You can find a reference implementation in my git repo: https://github.com/htaiwan/HWDecoder
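As a rough, untested sketch of steps 4 through 7 (assuming `session` and `formatDesc` were created in steps 1-3, and that `nalu`/`naluSize` describe one NAL unit with its start code already stripped):

#include <VideoToolbox/VideoToolbox.h>
#include <stdlib.h>
#include <string.h>

static void DecodeOneNalu(VTDecompressionSessionRef session,
                          CMVideoFormatDescriptionRef formatDesc,
                          const uint8_t *nalu, size_t naluSize)
{
    // Step 5: AVCC framing replaces the Annex-B start code with a
    // 4-byte big-endian length prefix.
    size_t avccSize = 4 + naluSize;
    uint8_t *avcc = (uint8_t *)malloc(avccSize);
    uint32_t len = CFSwapInt32HostToBig((uint32_t)naluSize);
    memcpy(avcc, &len, 4);
    memcpy(avcc + 4, nalu, naluSize);

    // Step 4: wrap the bytes in a CMBlockBuffer (kCFAllocatorNull: we keep ownership).
    CMBlockBufferRef block = NULL;
    CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault, avcc, avccSize,
                                       kCFAllocatorNull, NULL, 0, avccSize, 0, &block);

    // Step 6: wrap the block buffer in a CMSampleBuffer tagged with the format description.
    CMSampleBufferRef sample = NULL;
    CMSampleBufferCreate(kCFAllocatorDefault, block, true, NULL, NULL, formatDesc,
                         1, 0, NULL, 1, &avccSize, &sample);

    // Step 7: decode synchronously (flags = 0), so freeing avcc afterwards is safe;
    // the output callback installed at session creation receives the CVImageBufferRef (step 8).
    VTDecodeInfoFlags infoFlags;
    VTDecompressionSessionDecodeFrame(session, sample, 0, NULL, &infoFlags);

    CFRelease(sample);
    CFRelease(block);
    free(avcc);
}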
It looks like you need to pass a value for the dataSize parameter. See my answer on the Apple Developer Forums and this example code: https://gist.github.com/roxlu/79e50d2ac869763d712d
The solution is to use the pixel buffer pool from the VTCompressionSession.
https://devforums.apple.com/message/1063536#1063536
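A minimal sketch of that approach (error handling trimmed; `session_` is the already-configured compression session from the question):

// Allocate the input frame from the session's own pool instead of
// CVPixelBufferCreateWithPlanarBytes.
CVPixelBufferPoolRef pool = VTCompressionSessionGetPixelBufferPool(session_);
CVPixelBufferRef buffer = NULL;
if (pool == NULL ||
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &buffer) != kCVReturnSuccess) {
    return;  // the pool only becomes available once the session is fully configured
}
CVPixelBufferLockBaseAddress(buffer, 0);
// ... copy each plane into CVPixelBufferGetBaseAddressOfPlane(buffer, i),
//     respecting CVPixelBufferGetBytesPerRowOfPlane(buffer, i) ...
CVPixelBufferUnlockBaseAddress(buffer, 0);
// then pass `buffer` to VTCompressionSessionEncodeFrame in place of input_frame_buffer_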
I'm trying to use the member variables from my client rect, but they are holding extremely negative values, like -858993460. Even when I call rect.Width(), it returns an extremely negative number. I need the values to determine the corresponding section of a wave file to play when I select the wave that I have drawn on the screen. Would anyone happen to know why it could be doing this?
Note: I threw the ints right, left, and width in just to see what values they were holding. I really only need rect.Width() to scale the selection so I can access the data array of my wave file.
void CWaveEditView::OnToolsPlay32775()
{
    // TODO: Add your command handler code here
    CWaveEditDoc* pDoc = GetDocument();
    ASSERT_VALID(pDoc);
    if (!pDoc) {
        return;
    }
    if (pDoc->wave.hdr == NULL) {
        return;
    }
    if (selectionStart != selectionEnd) {
        CRect rect;
        GetClientRect(&rect);
        rect.NormalizeRect();
        int right = rect.right;
        int left = rect.left;
        int width = rect.Width();
        int startms = (1000.0 * pDoc->wave.lastSample / pDoc->wave.sampleRate) * selectionStart / rect.Width();
        int endms = (1000.0 * pDoc->wave.lastSample / pDoc->wave.sampleRate) * selectionEnd / rect.Width();
        WaveFile* selection = new WaveFile(pDoc->wave.numChannels, pDoc->wave.sampleRate, pDoc->wave.bitsPerSample);
        while (startms <= endms) {
            selection->add_sample(pDoc->wave.get_sample(startms));
            startms++;
        }
        selection->updateHeader();
        selection->play();
        delete selection;
    }
}
The default constructor of CRect does not initialise its members (because it is a thin wrapper around the RECT structure); values like -858993460 (0xCCCCCCCC) are the pattern MSVC debug builds use to fill uninitialized stack memory. You could initialise the rect to (0, 0, 0, 0) and check whether or not it is empty after your call to GetClientRect.
Since GetClientRect appears to be failing, you may also want to check that your window handle is valid using GetSafeHwnd().
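A defensive version of that sequence might look like this (just a sketch of the two checks suggested above):

if (GetSafeHwnd() == NULL)
    return;                    // no valid HWND, so GetClientRect cannot succeed
CRect rect(0, 0, 0, 0);        // CRect's default constructor leaves the members uninitialized
GetClientRect(&rect);
if (rect.IsRectEmpty() || rect.Width() <= 0)
    return;                    // nothing sensible to scale the selection against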
I'm trying to learn to use the different ffmpeg libs with Cocoa, and I'm trying to get frames to display with the help of Core Video. It seems I have gotten the CV callbacks to work, and it gets frames which I try to put in a CVImageBufferRef that I later draw with Core Image.
The problem is I'm trying to get PIX_FMT_YUYV422 to work with libswscale, but as soon as I change the pixel format to anything other than PIX_FMT_YUV420P it crashes with EXC_BAD_ACCESS.
As long as I use YUV420P the program runs, although it doesn't display properly. I suspected that the pixel format isn't supported, so I wanted to try PIX_FMT_YUYV422.
I have had it running before and successfully wrote PPM files with PIX_FMT_RGB24. For some reason it just crashes on me now, and I don't see what might be wrong.
I'm a bit in over my head here, but that is how I prefer to learn. :)
Here's how I allocate the AVFrames:
inFrame = avcodec_alloc_frame();
outFrame = avcodec_alloc_frame();
int frameBytes = avpicture_get_size(PIX_FMT_YUYV422, cdcCtx->width, cdcCtx->height);
uint8_t *frameBuffer = malloc(frameBytes);
avpicture_fill((AVPicture *)outFrame, frameBuffer, PIX_FMT_YUYV422, cdcCtx->width, cdcCtx->height);
Then I try to run it through swscale like so:
static struct SwsContext *convertContext;
if (convertContext == NULL) {
    int w = cdcCtx->width;
    int h = cdcCtx->height;
    convertContext = sws_getContext(w, h, cdcCtx->pix_fmt, outWidth, outHeight, PIX_FMT_YUYV422, SWS_BICUBIC, NULL, NULL, NULL);
    if (convertContext == NULL) {
        NSLog(@"Cannot initialize the conversion context!");
        return NO;
    }
}
sws_scale(convertContext, inFrame->data, inFrame->linesize, 0, outHeight, outFrame->data, outFrame->linesize);
And finally I try to write it to a pixel buffer for use with Core Image:
int ret = CVPixelBufferCreateWithBytes(0, outWidth, outHeight, kYUVSPixelFormat, outFrame->data[0], outFrame->linesize[0], 0, 0, 0, &currentFrame);
With 420P it runs, but it doesn't match up with kYUVSPixelFormat for the pixel buffer, and as I understand it, it doesn't accept YUV420.
I would really appreciate any help, no matter how small, as it might help me struggle on. :)
This certainly isn't a complete code sample, since you never decode anything into the input frame. If you were to do that, it looks correct.
You also don't need to fill the output picture, or even allocate an AVFrame for it, really.
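For completeness, with the ffmpeg API of that era the missing decode step would look roughly like this (a sketch; `packet` stands for an AVPacket obtained from av_read_frame):

int gotPicture = 0;
avcodec_decode_video2(cdcCtx, inFrame, &gotPicture, &packet);
if (gotPicture) {
    // note: sws_scale's slice height refers to the *source* image
    sws_scale(convertContext, inFrame->data, inFrame->linesize, 0, cdcCtx->height,
              outFrame->data, outFrame->linesize);
}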
YUV420P is a planar format, so AVFrame.data[0] is not the whole story. I see a mistake in
int ret = CVPixelBufferCreateWithBytes(0, outWidth, outHeight, kYUVSPixelFormat, outFrame->data[0], outFrame->linesize[0], 0, 0, 0, &currentFrame);
For planar formats, you have to read the data blocks from AVFrame.data[0] up to AVFrame.data[3].
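For illustration, walking the planes of a YUV420P AVFrame might look like this (a sketch assuming the standard 4:2:0 layout, where the chroma planes are half the luma resolution; `cdcCtx` is the codec context from the question):

// data[0] = Y at full resolution; data[1] = Cb and data[2] = Cr at half resolution.
for (int plane = 0; plane < 3; plane++) {
    int planeWidth  = (plane == 0) ? cdcCtx->width  : cdcCtx->width  / 2;
    int planeHeight = (plane == 0) ? cdcCtx->height : cdcCtx->height / 2;
    for (int y = 0; y < planeHeight; y++) {
        uint8_t *row = inFrame->data[plane] + y * inFrame->linesize[plane];
        // ... consume planeWidth bytes of this row ...
    }
}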