I am creating a very big buffer (called buffer2 in the code) using CGDataProviderRef with the following code:
-(UIImage *) glToUIImage {
    NSInteger myDataLength = 768 * 1024 * 4;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, 768, 1024, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for (int y = 0; y < 1024; y++)
    {
        for (int x = 0; x < 768 * 4; x++)
        {
            buffer2[(1023 - y) * 768 * 4 + x] = buffer[y * 4 * 768 + x];
        }
    }
    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, &releaseBufferData);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 768;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(768, 1024, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    free(buffer);
    //[provider autorelease];
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    CGImageRelease(imageRef);
    return myImage;
}
I expect CGDataProvider to call the releaseBufferData function when it is done with buffer2, so that I can free the memory it occupies. The code for this function is:
static void releaseBufferData (void *info, const void *data, size_t size) {
    free((void *)data);   // cast away const so free() accepts the pointer
}
However, even though my callback is called, the memory that data (buffer2) occupies is never freed, so it results in massive memory leaks. What am I doing wrong?
Did you ever call CGDataProviderRelease on your provider? The callback will not be invoked if you don't release the data provider.
For some peculiar reason this is not an issue anymore.
Just in case this helps someone else. I was having the same problem. It started working once I called
CGImageRelease(imageRef);
right before the
CGDataProviderRelease(provider);
A malloc'd buffer may not be freed by a "release" callback when the allocation happens on one thread but the callback that deallocates it runs on another. Wrap both your allocation and deallocation in this:
dispatch_async(dispatch_get_main_queue(), ^{
// *malloc* and *free* go here; don't call &releaseCallBack or some such anywhere
});
A second thing to try is a completion block. Instead of returning the image in the traditional way (as the method's return value), hand it back via a completion block. The UIImage will be freed as soon as the completion block finishes.
For example, if you're trying to save multiple images to the Photos library, but the malloc'd data isn't being freed after each image is created, then pass the image back via a completion block, making sure you create no new instance of the image that is passed back, and it will be gone as soon as it hits the };
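Here's a minimal sketch of that pattern (the wrapper method name and block signature are hypothetical; glToUIImage is the method from the question):

// Hedged sketch: hand the UIImage to the caller through a completion block
// instead of returning it, so no extra strong reference outlives the block.
- (void)glToUIImageWithCompletion:(void (^)(UIImage *image))completion {
    UIImage *image = [self glToUIImage];   // build the image exactly as before
    if (completion) {
        completion(image);                 // caller saves/uses the image inside the block
    }
    // nothing else retains the image, so it can go away once the block returns
}

It would be called along the lines of [self glToUIImageWithCompletion:^(UIImage *image) { /* save the image here */ }];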
A third thing to try is calloc instead of malloc:
GLubyte *buffer = (GLubyte *)calloc(myDataLength, sizeof(GLubyte));
That's what I use now where I once had malloc; it obviates the need for the prior two suggestions. I use OpenGL to populate a collection view consisting of a single row of cells, each with one frame from a video. To skim the video you slide the collection view; if you see a frame you want to save as an image, you long-press it, and if you want to advance to that frame in the video, you tap it. As you know, even short videos have a lot of frames; the calloc solution knocks about 256 MB off total memory usage on every call to the release callback, which is the level it builds to when you scroll blurry fast.
Related
I'm currently doing some tests to see if my app runs correctly on Retina Macs. I have installed Quartz Debug for this purpose and I'm currently running a scaled mode. My screen mode is now 960x540 but of course the physical size of the monitor is still Full HD, i.e. 1920x1080 pixels.
When querying the monitor database using CGGetActiveDisplayList() and then using CGDisplayBounds() on the individual monitors in the list, the returned monitor size is 960x540. This is what I expected because CGDisplayBounds() is said to use the global display coordinate space, not pixels.
To my surprise, however, CGDisplayPixelsWide() and CGDisplayPixelsHigh() also return 960x540, although they're explicitly said to return pixels so I'd expect them to return 1920x1080 instead. But they don't.
This leaves me wondering: how can I retrieve the real physical resolution of the monitor instead of the scaled mode using the CGDisplay APIs, i.e. 1920x1080 instead of 960x540 in my case? Is there any way to get the scaling coefficient for a CGDisplay so that I can compute the real physical resolution on my own?
I know I can get this scaling coefficient using the backingScaleFactor method, but that is only possible for an NSScreen; how can I get the scaling coefficient for a CGDisplay?
You need to examine the mode of the display, not just the display itself. Use CGDisplayCopyDisplayMode() and then CGDisplayModeGetPixelWidth() and CGDisplayModeGetPixelHeight(). These last two are relatively new functions, and their documentation exists primarily in the headers.
And, of course, don't forget to CGDisplayModeRelease() the mode object.
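A minimal sketch of that approach (error handling omitted; displayID is assumed to be the CGDirectDisplayID you already got from CGGetActiveDisplayList()):

CGDisplayModeRef mode = CGDisplayCopyDisplayMode(displayID);
// Pixel dimensions of the framebuffer backing the current mode (e.g. 1920x1080),
// unlike CGDisplayPixelsWide/High, which report the scaled size in your setup.
size_t pixelWidth  = CGDisplayModeGetPixelWidth(mode);
size_t pixelHeight = CGDisplayModeGetPixelHeight(mode);
// Point dimensions of the same mode (e.g. 960x540 in your scaled setup).
size_t pointWidth  = CGDisplayModeGetWidth(mode);
size_t pointHeight = CGDisplayModeGetHeight(mode);
CGFloat scale = (CGFloat)pixelWidth / (CGFloat)pointWidth;   // scaling coefficient for the CGDisplay
CGDisplayModeRelease(mode);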
From Ken's answer it is not obvious how you find the native mode(s). To do this, call CGDisplayModeGetIOFlags and choose from the modes that have the kDisplayModeNativeFlag set (see IOKit/IOGraphicsTypes.h, the value is 0x02000000).
const int kFlagNativeMode = 0x2000000; // see IOGraphicsTypes.h
const CGFloat kNoSize = 100000.0;

NSScreen *screen = NSScreen.mainScreen;
NSDictionary *desc = screen.deviceDescription;
unsigned int displayID = [[desc objectForKey:@"NSScreenNumber"] unsignedIntValue];
CGSize displaySizeMM = CGDisplayScreenSize(displayID);
CGSize nativeSize = CGSizeMake(kNoSize, kNoSize);

CFStringRef keys[1] = { kCGDisplayShowDuplicateLowResolutionModes };
CFBooleanRef values[1] = { kCFBooleanTrue };
CFDictionaryRef options = CFDictionaryCreate(kCFAllocatorDefault, (const void **)keys, (const void **)values, 1, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFArrayRef modes = CGDisplayCopyAllDisplayModes(displayID, options);
int n = CFArrayGetCount(modes);
for (int i = 0; i < n; i++) {
    CGDisplayModeRef mode = (CGDisplayModeRef) CFArrayGetValueAtIndex(modes, i);
    if (CGDisplayModeGetIOFlags(mode) & kFlagNativeMode) {
        int w = CGDisplayModeGetWidth(mode);
        // We get both high resolution (screen.backingScaleFactor > 1)
        // and the "low" resolution, in CGFloat units. Since screen.frame
        // is CGFloat units, we want the lowest native resolution.
        if (w < nativeSize.width) {
            nativeSize.width = w;
            nativeSize.height = CGDisplayModeGetHeight(mode);
        }
    }
    // printf("mode: %dx%d %f dpi 0x%x\n", (int)CGDisplayModeGetWidth(mode), (int)CGDisplayModeGetHeight(mode), CGDisplayModeGetWidth(mode) / displaySizeMM.width * 25.4, CGDisplayModeGetIOFlags(mode));
}
if (nativeSize.width == kNoSize) {
    nativeSize = screen.frame.size;
}
CFRelease(modes);
CFRelease(options);
float scaleFactor = screen.frame.size.width / nativeSize.width;
I would like to analyze chunks of audio data of one second. For this purpose I implemented an audio unit that fills a ring buffer (TPCircularBuffer by Michael Tyson). In another file I try to read chunks of one second using an NSTimer. Unfortunately, I get an error when consuming this data.
The buffer is filled in a kAudioOutputUnitProperty_SetInputCallback callback, and that part works fine:
Device * THIS = (__bridge Device *)inRefCon;
// Render audio into buffer
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = 2;
bufferList.mBuffers[0].mData = NULL;
bufferList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16) * 2;
CheckError(AudioUnitRender(THIS -> rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, &bufferList), "AudioUnitRender");
// Put audio into circular buffer
TPCircularBufferProduceBytes(&circBuffer, bufferList.mBuffers[0].mData, inNumberFrames * 2 * sizeof(SInt16));
To read one second of samples I implemented the following code:
- (void)initializeTimer {
    timer = [NSTimer scheduledTimerWithTimeInterval:1
                                             target:self
                                           selector:@selector(timerFired:)
                                           userInfo:nil
                                            repeats:YES];
}

- (void)timerFired:(NSTimer *)theTimer {
    NSLog(@"Reading %i second(s) from ring", 1);
    int32_t availableBytes;
    SInt16 *tail = TPCircularBufferTail(&circBuffer, &availableBytes);
    int availableSamples = availableBytes / sizeof(SInt16);
    NSLog(@"Available samples %i", availableSamples);
    for (int i = 0; i < availableSamples; i++) {
        printf("%i\n", tail[i]);
    }
    TPCircularBufferConsume(&circBuffer, sizeof(SInt16) * availableBytes);
}
However, when I run this code the number of samples is printed but then I receive the following error:
Assertion failed: (buffer->fillCount >= 0), function TPCircularBufferConsume, file …/TPCircularBuffer.h, line 142.
Unfortunately, I don't know what is going wrong with consuming the data. The buffer length is set to samplerate * 2, which should be long enough.
I would be very happy if someone knows what is going wrong here.
Your circular buffer isn't long enough. You don't check that the available size is positive before emptying, and the time all your print statements take lets the buffer get over-filled.
Make the buffer at least 2X larger than one timer period's worth of data, check the available size before emptying, empty the buffer before printing, and use far fewer print statements.
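A minimal sketch of that order of operations (the variable names follow the question's code; the explicit check and the consume amount are my reading of it, not quoted from the answer):

- (void)timerFired:(NSTimer *)theTimer {
    int32_t availableBytes = 0;
    SInt16 *tail = TPCircularBufferTail(&circBuffer, &availableBytes);

    // Check that there is actually something to read before touching the ring.
    if (tail == NULL || availableBytes <= 0) {
        return;
    }

    // Copy the audio out of the ring, then consume exactly the number of BYTES
    // reported by TPCircularBufferTail (not availableBytes * sizeof(SInt16)).
    int availableSamples = availableBytes / (int)sizeof(SInt16);
    SInt16 *chunk = (SInt16 *)malloc(availableBytes);
    memcpy(chunk, tail, availableBytes);
    TPCircularBufferConsume(&circBuffer, availableBytes);

    // Do any slow work (analysis, logging) on the copy, outside the ring.
    NSLog(@"Consumed %i samples (%i bytes)", availableSamples, availableBytes);
    free(chunk);
}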
I am trying to implement a simple audio unit graph that goes:
buffer of samples->low pass filter->generic output
Where the generic output would be copied into a new buffer that could then be processed further, saved to disk, etc.
All of the examples I can find online having to do with setting up an audio unit graph involve using a generator with kAudioUnitSubType_AudioFilePlayer as the input source... I am dealing with a buffer of samples already acquired, so those examples do not help... Based on looking around in the AudioUnitProperties.h file, it looks like what I should be using is kAudioUnitSubType_ScheduledSoundPlayer?
I can't seem to find much documentation on how to hook this up, so I am quite stuck and am hoping someone here can help me out.
To simplify things, I just started out by trying to get my buffer of samples to go straight to the system output, but am unable to make this work...
#import "EffectMachine.h"
#import <AudioToolbox/AudioToolbox.h>
#import "AudioHelpers.h"
#import "Buffer.h"
@interface EffectMachine ()
@property (nonatomic, strong) Buffer *buffer;
@end

typedef struct EffectMachineGraph {
    AUGraph graph;
    AudioUnit input;
    AudioUnit lowpass;
    AudioUnit output;
} EffectMachineGraph;

@implementation EffectMachine {
    EffectMachineGraph machine;
}
- (instancetype)initWithBuffer:(Buffer *)buffer {
    if (self = [super init]) {
        self.buffer = buffer;
        // buffer is a simple wrapper object that holds two properties:
        // a pointer to the array of samples (as doubles) and the size (number of samples)
    }
    return self;
}
-(void)process {
struct EffectMachineGraph initialized = {0};
machine = initialized;
CheckError(NewAUGraph(&machine.graph),
"NewAUGraph failed");
AudioComponentDescription outputCD = {0};
outputCD.componentType = kAudioUnitType_Output;
outputCD.componentSubType = kAudioUnitSubType_DefaultOutput;
outputCD.componentManufacturer = kAudioUnitManufacturer_Apple;
AUNode outputNode;
CheckError(AUGraphAddNode(machine.graph,
&outputCD,
&outputNode),
"AUGraphAddNode[kAudioUnitSubType_GenericOutput] failed");
AudioComponentDescription inputCD = {0};
inputCD.componentType = kAudioUnitType_Generator;
inputCD.componentSubType = kAudioUnitSubType_ScheduledSoundPlayer;
inputCD.componentManufacturer = kAudioUnitManufacturer_Apple;
AUNode inputNode;
CheckError(AUGraphAddNode(machine.graph,
&inputCD,
&inputNode),
"AUGraphAddNode[kAudioUnitSubType_ScheduledSoundPlayer] failed");
CheckError(AUGraphOpen(machine.graph),
"AUGraphOpen failed");
CheckError(AUGraphNodeInfo(machine.graph,
inputNode,
NULL,
&machine.input),
"AUGraphNodeInfo failed");
CheckError(AUGraphConnectNodeInput(machine.graph,
inputNode,
0,
outputNode,
0),
"AUGraphConnectNodeInput");
CheckError(AUGraphInitialize(machine.graph),
"AUGraphInitialize failed");
// prepare input
AudioBufferList ioData = {0};
ioData.mNumberBuffers = 1;
ioData.mBuffers[0].mNumberChannels = 1;
ioData.mBuffers[0].mDataByteSize = (UInt32)(2 * self.buffer.size);
ioData.mBuffers[0].mData = self.buffer.samples;
ScheduledAudioSlice slice = {0};
AudioTimeStamp timeStamp = {0};
slice.mTimeStamp = timeStamp;
slice.mNumberFrames = (UInt32)self.buffer.size;
slice.mBufferList = &ioData;
CheckError(AudioUnitSetProperty(machine.input,
kAudioUnitProperty_ScheduleAudioSlice,
kAudioUnitScope_Global,
0,
&slice,
sizeof(slice)),
"AudioUnitSetProperty[kAudioUnitProperty_ScheduleStartTimeStamp] failed");
AudioTimeStamp startTimeStamp = {0};
startTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
startTimeStamp.mSampleTime = -1;
CheckError(AudioUnitSetProperty(machine.input,
kAudioUnitProperty_ScheduleStartTimeStamp,
kAudioUnitScope_Global,
0,
&startTimeStamp,
sizeof(startTimeStamp)),
"AudioUnitSetProperty[kAudioUnitProperty_ScheduleStartTimeStamp] failed");
CheckError(AUGraphStart(machine.graph),
"AUGraphStart failed");
// AUGraphStop(machine.graph); <-- commented out to make sure it wasn't stopping before actually finishing playing.
// AUGraphUninitialize(machine.graph);
// AUGraphClose(machine.graph);
}
Does anyone know what I am doing wrong here?
I think this is the documentation you're looking for.
To summarize: set up your AUGraph, set up your audio units and add them to the graph, then write and attach a render callback function to the first node in your graph. Run the graph. Note that the render callback is where your app will be asked to provide buffers of samples to the AUGraph. This is where you'll need to read from your own buffers and fill the buffers the callback hands you. I think this is what you're missing.
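A minimal sketch of that wiring, feeding the output node directly from a sample buffer (PlayerState, the mono Float32 format, and the names are illustrative assumptions, not from the linked documentation; CheckError is the helper from the question):

// The struct the callback reads from; filled in from self.buffer before starting the graph.
typedef struct PlayerState {
    Float32 *samples;     // the already-acquired samples
    UInt32   totalFrames;
    UInt32   nextFrame;   // read position, advanced on every callback
} PlayerState;

static OSStatus renderSamples(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData) {
    PlayerState *state = (PlayerState *)inRefCon;
    Float32 *out = (Float32 *)ioData->mBuffers[0].mData;
    for (UInt32 i = 0; i < inNumberFrames; i++) {
        // Copy from your buffer; pad with silence once you run out of samples.
        out[i] = (state->nextFrame < state->totalFrames) ? state->samples[state->nextFrame++] : 0.0f;
    }
    return noErr;
}

// Attach the callback to bus 0 of the node you want to feed (here, straight into the output node).
// The node's input stream format (kAudioUnitProperty_StreamFormat) is assumed to be set to mono
// Float32 elsewhere so it matches what the callback writes.
AURenderCallbackStruct callback = { .inputProc = renderSamples, .inputProcRefCon = &playerState };
CheckError(AUGraphSetNodeInputCallback(machine.graph, outputNode, 0, &callback),
           "AUGraphSetNodeInputCallback failed");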
If you're on iOS 8, I recommend AVAudioEngine, which hides some of the grungier, boilerplate-heavy details of graphs and effects.
Extras:
Complete pre-iOS 8 example code on GitHub
iOS music player app that reads audio from your MP3 library into a circular buffer and then processes it via an AUGraph (using a mixer & EQ AU). You can see how a render callback is set up to read from a buffer, etc.
Amazing Audio Engine
Novocaine Audio library
I want to copy a double-pointer object from the host to the device and compute over it on the GPU. When doing cudaMemcpy of the object to the device, it throws a SEGFAULT.
BMP Input;
Input.ReadFromFile( fileName );
WIDTH = Input.TellWidth();
HEIGHT = Input.TellHeight();

RGBApixel** imageData = new RGBApixel* [HEIGHT];
for (int i = 0; i < HEIGHT; i++)
    imageData[i] = new RGBApixel [WIDTH];

for (int j = 0; j < Input.TellHeight(); j++) {
    for (int i = 0; i < Input.TellWidth(); i++) {
        imageData[j][i] = Input.GetPixel(i,j);
    }
}

long long imageSize = WIDTH*HEIGHT*sizeof(RGBApixel *);
RGBApixel **dev_imgdata, **dev_imgdata_out;

// Allocating device memory
cudaMalloc( (void **) &dev_imgdata, imageSize );
cudaMalloc( (void **) &dev_imgdata_out, imageSize );
Now the line below throws a segfault:
cudaMemcpy(dev_imgdata,imageData,imageSize,cudaMemcpyHostToDevice);
When declaring RGBApixel** imageData = new RGBApixel* [HEIGHT]; you have absolutely no guarantee that the rows of imageData will occupy a contiguous block of memory.
cudaMemcpy copies a contiguous block of memory into device RAM. Your statement tries to copy the start addresses of each matrix row, but not the actual data. Also, when using cudaMalloc, you would need to allocate each row separately, exactly as you did for the host buffer.
What you need to do is declare imageData as just an RGBApixel* - basically put the matrix in a single vector, use proper indexing, and it will work.
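For example, a minimal sketch of the flattened approach (variable names mirror the question; the row-major index math is the only new part):

// Flatten the image into one contiguous host buffer so a single cudaMemcpy works.
size_t imageBytes = (size_t)WIDTH * HEIGHT * sizeof(RGBApixel);
RGBApixel *hostImage = new RGBApixel[(size_t)WIDTH * HEIGHT];
for (int j = 0; j < HEIGHT; j++)
    for (int i = 0; i < WIDTH; i++)
        hostImage[j * WIDTH + i] = Input.GetPixel(i, j);   // row-major: row j, column i

RGBApixel *dev_imgdata = NULL;
RGBApixel *dev_imgdata_out = NULL;
cudaMalloc((void **)&dev_imgdata, imageBytes);
cudaMalloc((void **)&dev_imgdata_out, imageBytes);

// One contiguous block, so this no longer faults.
cudaMemcpy(dev_imgdata, hostImage, imageBytes, cudaMemcpyHostToDevice);

// Inside a kernel, pixel (i, j) is dev_imgdata[j * WIDTH + i].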
You can also copy one row at a time, but that's not very good practice, since every memory access will require an extra indirection and you will hurt caching efficiency.
Also, make sure that when you compile your program you use -arch sm_20 to enable the extra capabilities of your graphics card (if it has compute capability 2.0). Without it, I believe you can't use double and the result is unpredictable (or the double is demoted to float).
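For example, the compile command might look like this (the file and output names are placeholders):

nvcc -arch=sm_20 -o image_filter main.cu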
I'm wondering if anyone can provide some insight as to how to handle varying device sizes when designing in storyboard.
Do you have to check for device frame size before drawing views then?
Thanks.
There are two ways you can go about it. If you insist on using frames, you'll want to check the device before drawing your views. One approach is to write a method in your utils file that checks the hardware model, something like this:
// Note: sysctlbyname() needs #import <sys/sysctl.h> at the top of the file.
+ (NSString *)getHardwareModel {
    AppDelegate_iPhone *appDelegate_iPhone = (AppDelegate_iPhone *)[[UIApplication sharedApplication] delegate];
    size_t size;
    // get the size of the returned device name
    sysctlbyname("hw.machine", NULL, &size, NULL, 0);
    // allocate the space to store the name
    char *machine = (char *)malloc(size);
    // get the device name
    sysctlbyname("hw.machine", machine, &size, NULL, 0);
    // place the name into an NSString
    NSString *platform = [NSString stringWithCString:machine encoding:NSUTF8StringEncoding];
    free(machine);
    appDelegate_iPhone.hardwareModel = platform;
    return platform;
}
Once we know which device we're on, we can set the frame accordingly. So if I wanted to check for, say, the iPhone 6, I would do something like this in my setFrameSize method:
NSString *hardwareVersion = [Utils getHardwareModel];
NSString *target = @"x86_64";
NSString *deviceTarget = @"iPhone7,1";
NSRange range = [hardwareVersion rangeOfString:target];
NSRange deviceRange = [hardwareVersion rangeOfString:deviceTarget];
NSLog(@"device=%@", hardwareVersion);
// Setting the frame size for the progress bar
if ([[[UIDevice currentDevice] systemVersion] floatValue] >= 8.0) {
    if (range.location != NSNotFound || deviceRange.location != NSNotFound) {
        float frameSize = self.view.frame.size.width;
        NSLog(@"Frame width===%f", frameSize);
        self.view.frame = CGRectMake(0, 0, 480, 85);
    }
}
Another way, which avoids all of this, is to use Auto Layout and set constraints that adapt to the different screen sizes: https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/AutolayoutPG/Introduction/Introduction.html
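If you build the constraints in code rather than in the storyboard, a minimal sketch looks like this (the progressBar view is a hypothetical example; activateConstraints: requires iOS 8):

// Pin a progress bar to the leading/trailing/top edges with a fixed height,
// so it adapts to any screen width without checking the hardware model.
UIView *progressBar = [[UIView alloc] init];
progressBar.translatesAutoresizingMaskIntoConstraints = NO;
[self.view addSubview:progressBar];

[NSLayoutConstraint activateConstraints:@[
    [NSLayoutConstraint constraintWithItem:progressBar attribute:NSLayoutAttributeLeading
                                 relatedBy:NSLayoutRelationEqual toItem:self.view
                                 attribute:NSLayoutAttributeLeading multiplier:1 constant:0],
    [NSLayoutConstraint constraintWithItem:progressBar attribute:NSLayoutAttributeTrailing
                                 relatedBy:NSLayoutRelationEqual toItem:self.view
                                 attribute:NSLayoutAttributeTrailing multiplier:1 constant:0],
    [NSLayoutConstraint constraintWithItem:progressBar attribute:NSLayoutAttributeTop
                                 relatedBy:NSLayoutRelationEqual toItem:self.view
                                 attribute:NSLayoutAttributeTop multiplier:1 constant:0],
    [NSLayoutConstraint constraintWithItem:progressBar attribute:NSLayoutAttributeHeight
                                 relatedBy:NSLayoutRelationEqual toItem:nil
                                 attribute:NSLayoutAttributeNotAnAttribute multiplier:1 constant:85]
]];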