CodeIgniter - cannot upload .pdf file in CodeIgniter 3

I want to upload a PDF file, but the file cannot be uploaded if its size is over 3 MB.
$config['upload_path'] = './uploads/';
$config['allowed_types'] = 'gif|jpg|png';
$config['max_size'] = '100';
$config['max_width'] = '1024';
$config['max_height'] = '768';

The value you pass in max_size is in KB, so you should specify it in KB. If you want to restrict the file size to no more than 3 MB, set max_size to 3000. Please try this.

You need to make two changes here:
add pdf to the allowed upload types
either remove or increase the max allowed upload size
$config['upload_path'] = './uploads/';
$config['allowed_types'] = 'gif|jpg|png|pdf';
$config['max_size'] = '10000';
$config['max_width'] = '1024';
$config['max_height'] = '768';
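For reference, a rough sketch of how this config might be wired up in a CodeIgniter 3 controller (the 'userfile' field name and the surrounding error handling are my assumptions, not part of the original question):
$this->load->library('upload', $config);
if (!$this->upload->do_upload('userfile'))
{
    // Upload failed: disallowed type, file too large, etc.
    echo $this->upload->display_errors();
}
else
{
    // Upload succeeded: inspect the stored file's metadata.
    print_r($this->upload->data());
}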

Related

AVMutableComposition issue: black frame at the end

I am capturing a video using AVCaptureConnection in my iOS app. After that, I add some images into the video as CALayers. Everything works fine, but I get a black frame at the very end of the resulting video after adding the images. No frame of the actual audio/video is affected. For the audio, I extract it, change its pitch, and then add it using AVMutableComposition. Here is the code I am using. Please help me with what I am doing wrong, or tell me if I need to add something else.
cmp = [AVMutableComposition composition];
AVMutableCompositionTrack *videoComposition = [cmp addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioComposition = [cmp addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *sourceVideoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *sourceAudioTrack = [[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
[videoComposition insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:sourceVideoTrack atTime:kCMTimeZero error:nil] ;
[audioComposition insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:sourceAudioTrack atTime:kCMTimeZero error:nil];
animComp = [AVMutableVideoComposition videoComposition];
animComp.renderSize = CGSizeMake(320, 320);
animComp.frameDuration = CMTimeMake(1,30);
animComp.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
// to gather the audio part of the video
NSArray *tracksToDuck = [cmp tracksWithMediaType:AVMediaTypeAudio];
NSMutableArray *trackMixArray = [NSMutableArray array];
for (NSInteger i = 0; i < [tracksToDuck count]; i++) {
AVMutableAudioMixInputParameters *trackMix = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:[tracksToDuck objectAtIndex:i]];
[trackMix setVolume:5 atTime:kCMTimeZero];
[trackMixArray addObject:trackMix];
}
audioMix = [AVMutableAudioMix audioMix];
audioMix.inputParameters = trackMixArray;
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [asset duration]);
AVMutableVideoCompositionLayerInstruction *layerVideoInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoComposition];
[layerVideoInstruction setOpacity:1.0 atTime:kCMTimeZero];
instruction.layerInstructions = [NSArray arrayWithObject:layerVideoInstruction] ;
animComp.instructions = [NSArray arrayWithObject:instruction];
[self exportMovie:self];
This is my method for exporting the video
-(IBAction) exportMovie:(id)sender{
//successCheck = NO;
NSArray *docPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *tempPath = [docPaths objectAtIndex:0];
//NSLog(#"Temp Path: %#",tempPath);
NSString *fileName = [NSString stringWithFormat:#"%#/Final.MP4",tempPath];
NSFileManager *fileManager = [NSFileManager defaultManager] ;
if([fileManager fileExistsAtPath:fileName ]){
NSError *ferror = nil ;
[fileManager removeItemAtPath:fileName error:&ferror];
}
NSURL *exportURL = [NSURL fileURLWithPath:fileName];
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:cmp presetName:AVAssetExportPresetMediumQuality] ;
exporter.outputURL = exportURL;
exporter.videoComposition = animComp;
//exporter.audioMix = audioMix;
exporter.outputFileType = AVFileTypeQuickTimeMovie;
[exporter exportAsynchronouslyWithCompletionHandler:^(void){
switch (exporter.status) {
case AVAssetExportSessionStatusFailed:{
NSLog(#"Fail");
break;
}
case AVAssetExportSessionStatusCompleted:{
NSLog(#"Success video");
});
break;
}
default:
break;
}
}];
NSLog(#"outside");
}
There is a timeRange property on the export session that lets you give the time range.
Try giving it a time range slightly shorter than the actual duration (a few nanoseconds less).
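A rough sketch of what that could look like, reusing the cmp and exporter variables from the question (the 1/600-second margin is an arbitrary choice of mine):
// Sketch: export slightly less than the full composition so the trailing
// black frame is cut off. The margin of 1/600 s is an assumption; adjust as needed.
CMTime margin = CMTimeMake(1, 600);
exporter.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeSubtract(cmp.duration, margin));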
You can get the true video duration from the AVAssetTrack. The duration of the AVAsset is sometimes longer than the AVAssetTrack's.
Check the durations like this:
print(asset.duration.seconds.description)
print(videoTrack.timeRange.duration.description)
So you can change this line:
[videoComposition insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:sourceVideoTrack atTime:kCMTimeZero error:nil];
to this:
[videoComposition insertTimeRange:sourceVideoTrack.timeRange ofTrack:sourceVideoTrack atTime:kCMTimeZero error:nil];
For Swift 5:
try videoComposition.insertTimeRange(sourceVideoTrack.timeRange, of: sourceVideoTrack, at: CMTime.zero)
Then you will avoid the last black frame :)
Hope this helps someone who is still struggling.
Just wanted to write this for people with my specific issue.
I was taking a video and trying to speed it up / slow it down by taking an AVMutableComposition and scaling the time range of the audio and video components via scaleTimeRange.
Scaling the time range to 2x or 3x speed sometimes caused the last few frames of the video to be black. Fortunately, @Khushboo's answer fixed my problem as well.
However, instead of decreasing the exporter's timeRange by a few nanoseconds, I just made it the same as the composition's duration, which ended up working perfectly.
exporter?.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: composition.duration)
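For context, the speed change I was applying looks roughly like this (a Swift sketch; the 2x factor and variable names are assumptions, not my exact code):
// Sketch: speed the whole composition up 2x by scaling its full time range.
// After this call, composition.duration is the value the exporter's timeRange should match.
let fullRange = CMTimeRange(start: .zero, duration: composition.duration)
let newDuration = CMTimeMultiplyByFloat64(composition.duration, multiplier: 0.5)
composition.scaleTimeRange(fullRange, toDuration: newDuration)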
Hope this helps!

NSImage + NSBitmapImageRep = Converting RAW image file from one format to another

I am trying to write a prototype to prove that RAW conversion from one format to another is possible. I have to convert a Nikon raw file, which is in .NEF format, to Canon's .CR2 format. With the help of various posts I create an NSBitmapImageRep from the original image's TIFF representation and use it to write the output file, which has a .CR2 extension.
It does work, but the only problem is that the input file is 21.5 MB while the output I am getting is 144.4 MB. Using NSTIFFCompressionPackBits gives me 142.1 MB.
I want to understand what is happening; I have tried the various compression enums available, but with no success.
Please help me understand it. This is the source code:
@interface NSImage(RawConversion)
- (void) saveAsCR2WithName:(NSString*) fileName;
@end
@implementation NSImage(RawConversion)
- (void) saveAsCR2WithName:(NSString*) fileName
{
// Cache the reduced image
NSData *imageData = [self TIFFRepresentation];
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:imageData];
// http://www.cocoabuilder.com/archive/cocoa/151789-nsbitmapimagerep-compressed-tiff-large-files.html
NSDictionary *imageProps = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:NSTIFFCompressionJPEG],NSImageCompressionMethod,
[NSNumber numberWithFloat: 1.0], NSImageCompressionFactor,
nil];
imageData = [imageRep representationUsingType:NSTIFFFileType properties:imageProps];
[imageData writeToFile:fileName atomically:NO];
}
@end
How can I get an output file in CR2 format that is roughly the size of the input file, with only the small variation expected for a CR2 file?
Edit 1:
I made changes based on Peter's suggestion of using the CGImageDestinationAddImageFromSource method, but I am still getting the same result: the source NEF file is 21.5 MB, but the destination file after conversion is 144.4 MB.
Please review the code:
-(void)saveAsCR2WithCGImageMethodUsingName:(NSString*)inDestinationfileName withSourceFile:(NSString*)inSourceFileName
{
CGImageSourceRef sourceFile = MyCreateCGImageSourceRefFromFile(inSourceFileName);
CGImageDestinationRef destinationFile = createCGImageDestinationRefFromFile(inDestinationfileName);
CGImageDestinationAddImageFromSource(destinationFile, sourceFile, 0, NULL);
//https://developer.apple.com/library/mac/#documentation/graphicsimaging/Conceptual/ImageIOGuide/ikpg_dest/ikpg_dest.html
CGImageDestinationFinalize(destinationFile);
}
CGImageSourceRef MyCreateCGImageSourceRefFromFile (NSString* path)
{
// Get the URL for the pathname passed to the function.
NSURL *url = [NSURL fileURLWithPath:path];
CGImageSourceRef myImageSource;
CFDictionaryRef myOptions = NULL;
CFStringRef myKeys[2];
CFTypeRef myValues[2];
// Set up options if you want them. The options here are for
// caching the image in a decoded form and for using floating-point
// values if the image format supports them.
myKeys[0] = kCGImageSourceShouldCache;
myValues[0] = (CFTypeRef)kCFBooleanTrue;
myKeys[1] = kCGImageSourceShouldAllowFloat;
myValues[1] = (CFTypeRef)kCFBooleanTrue;
// Create the dictionary
myOptions = CFDictionaryCreate(NULL, (const void **) myKeys,
(const void **) myValues, 2,
&kCFTypeDictionaryKeyCallBacks,
& kCFTypeDictionaryValueCallBacks);
// Create an image source from the URL.
myImageSource = CGImageSourceCreateWithURL((CFURLRef)url, myOptions);
CFRelease(myOptions);
// Make sure the image source exists before continuing
if (myImageSource == NULL){
fprintf(stderr, "Image source is NULL.");
return NULL;
}
return myImageSource;
}
CGImageDestinationRef createCGImageDestinationRefFromFile (NSString *path)
{
NSURL *url = [NSURL fileURLWithPath:path];
CGImageDestinationRef myImageDestination;
//https://developer.apple.com/library/mac/#documentation/graphicsimaging/Conceptual/ImageIOGuide/ikpg_dest/ikpg_dest.html
float compression = 1.0; // Lossless compression if available.
int orientation = 4; // Origin is at bottom, left.
CFStringRef myKeys[3];
CFTypeRef myValues[3];
CFDictionaryRef myOptions = NULL;
myKeys[0] = kCGImagePropertyOrientation;
myValues[0] = CFNumberCreate(NULL, kCFNumberIntType, &orientation);
myKeys[1] = kCGImagePropertyHasAlpha;
myValues[1] = kCFBooleanTrue;
myKeys[2] = kCGImageDestinationLossyCompressionQuality;
myValues[2] = CFNumberCreate(NULL, kCFNumberFloatType, &compression);
myOptions = CFDictionaryCreate( NULL, (const void **)myKeys, (const void **)myValues, 3,
&kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
//https://developer.apple.com/library/mac/#documentation/graphicsimaging/Conceptual/ImageIOGuide/imageio_basics/ikpg_basics.html#//apple_ref/doc/uid/TP40005462-CH216-SW3
CFStringRef destFileType = CFSTR("public.tiff");
// CFStringRef destFileType = kUTTypeJPEG;
CFArrayRef types = CGImageDestinationCopyTypeIdentifiers(); CFShow(types);
myImageDestination = CGImageDestinationCreateWithURL((CFURLRef)url, destFileType, 1, myOptions);
return myImageDestination;
}
Edit 2: I used the second approach suggested by @Peter. This gives an interesting result. Its effect is the same as renaming the file in Finder from something like "example_image.NEF" to "example_image.CR2". Surprisingly, whether converting programmatically or in Finder, the source file, which is 21.5 MB, turns out as a 59 KB output. This is without any compression set in the code. Please see the code and suggest:
-(void)convertNEFWithTiffIntermediate:(NSString*)inNEFFile toCR2:(NSString*)inCR2File
{
NSData *fileData = [[NSData alloc] initWithContentsOfFile:inNEFFile];
if (fileData)
{
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:fileData];
// [imageRep setCompression:NSTIFFCompressionNone
// factor:1.0];
NSDictionary *imageProps = nil;
NSData *destinationImageData = [imageRep representationUsingType:NSTIFFFileType properties:imageProps];
[destinationImageData writeToFile:inCR2File atomically:NO];
}
}
The first thing I would try doesn't involve NSImage or NSBitmapImageRep at all. Instead, I would create a CGImageSource for the source file and a CGImageDestination for the destination file, and use CGImageDestinationAddImageFromSource to transfer all of the images from A to B.
You're converting to TIFF twice in this code:
You create an NSImage, I assume from the source file.
You ask the NSImage for its TIFFRepresentation (TIFF conversion #1).
You create an NSBitmapImageRep from the first TIFF data.
You ask the NSBitmapImageRep to generate a second TIFF representation (TIFF conversion #2).
Consider creating an NSBitmapImageRep directly from the source data, and not using NSImage at all. You would then skip directly to step 4 to generate the output data.
(But I still would try CGImageDestinationAddImageFromSource first.)
Raw image files have their own (proprietary) representation.
For example, they may use 14 bits per component and mosaic patterns, which are not supported by your code.
I think you should use a lower-level API and really reverse engineer the RAW format you are trying to save to.
I would start with DNG, which is relatively easy, as Adobe provides an SDK to write it.

Using CGImageProperties to get EXIF properties

I want to be able to add a text comment to the metadata of a JPEG and be able to read it back from within an iphone app.
I thought this would be fairly simple, as iOS 4 contains support for EXIF info. So I added metadata using a Windows tool called AnalogExif and read it back from my app using:
NSData *jpeg = UIImageJPEGRepresentation(myUIImage,1.0);
CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)jpeg, NULL);
NSDictionary *metadata = (NSDictionary *) CGImageSourceCopyPropertiesAtIndex(source,0,NULL);
NSMutableDictionary *metadataAsMutable = [[metadata mutableCopy]autorelease];
[metadata release];
NSMutableDictionary *EXIFDictionary = [[[metadataAsMutable objectForKey:(NSString *)kCGImagePropertyExifDictionary] mutableCopy] autorelease];
And that works...to a point :)
What I get back in the metadata dictionary is something like:
(gdb) po metadata
{
ColorModel = RGB;
Depth = 8;
Orientation = 1;
PixelHeight = 390;
PixelWidth = 380;
"{Exif}" = {
ColorSpace = 1;
PixelXDimension = 380;
PixelYDimension = 390;
};
"{JFIF}" = {
DensityUnit = 0;
JFIFVersion = (
1,
1
);
XDensity = 1;
YDensity = 1;
};
"{TIFF}" = {
Orientation = 1;
};
}
But that's all I can get! I've edited the JPEG file with every EXIF editor I can find (mostly PC ones, I should say), and although they all say I have added JPEG comments, EXIF captions and keywords, none of that info seems to be available from the Apple SDK in my app.
Has anyone managed to set a text field in the metadata of a JPEG and then read it back from an iPhone app?
I don't want to use a third-party library if at all possible.
many thanks in advance
You're correct in thinking that iOS does support more metadata than what you're seeing. You probably lost the data when you created a UIImage and then converted it back to JPEG. Try NSData *jpeg = [NSData dataWithContentsOfFile:@"foo.jpg"] and you should see the EXIF.
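For example, something like this (a sketch assuming the JPEG is bundled with the app; the property keys come from ImageIO):
// Sketch: read properties from the original file data instead of a re-encoded UIImage.
NSString *path = [[NSBundle mainBundle] pathForResource:@"foo" ofType:@"jpg"];
NSData *jpeg = [NSData dataWithContentsOfFile:path];
CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)jpeg, NULL);
NSDictionary *metadata = (NSDictionary *)CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
NSDictionary *exif = [metadata objectForKey:(NSString *)kCGImagePropertyExifDictionary];
NSLog(@"User comment: %@", [exif objectForKey:(NSString *)kCGImagePropertyExifUserComment]);
CFRelease(source);
[metadata release]; // Copy rule: we own the copied dictionary (pre-ARC, matching the question's code).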

route-me image overlay in iPhone

I'm using route-me in my project. The library is quite good. Adding markers and drawing polygons work fine. How about placing a single image as an overlay at a given location (latitude, longitude)? This functionality is missing, I think. Has anyone placed an image overlay without overloading the tile source?
I've found the solution...
CLLocationCoordinate2D lcA = CLLocationCoordinate2DMake(oSector.f_overlay_lat_min,oSector.f_overlay_long_min);
CLLocationCoordinate2D lcB = CLLocationCoordinate2DMake(oSector.f_overlay_lat_max,oSector.f_overlay_long_max);
CGPoint cgA = [mvMap latLongToPixel:lcA];
CGPoint cgB = [mvMap latLongToPixel:lcB];
float fLatMin = MIN(cgA.x,cgB.x);
float fLongMin = MIN(cgA.y,cgB.y);
float fWidth = sqrt((cgA.x - cgB.x)*(cgA.x - cgB.x));
float fHeight = sqrt((cgA.y - cgB.y)*(cgA.y - cgB.y));
RMMapLayer *mlLayer = [[RMMapLayer alloc] init];
mlLayer.contents = (id) oSector.im_overlay.CGImage;
mlLayer.frame = CGRectMake(fLatMin,fLongMin,fWidth,fHeight);
[[mvMap.contents overlay] addSublayer:mlLayer];
mvMap is an IBOutlet RMMapView *mvMap declared somewhere in your .h file.
oSector.im_overlay.CGImage can be:
UIImage *i = [UIImage imageNamed:<something>];
mlLayer.contents = i.CGImage;
Why not just use an RMMarker? You can apply any image you want to it and place it as needed. Even make it draggable if you want to:
UIImage *imgLocation = [UIImage imageWithContentsOfFile:
[[NSBundle mainBundle] pathForResource:@"location_ind"
ofType:@"png"]];
markerCurrentLocation = [[RMMarker alloc] initWithUIImage:imgLocation];
// make sure it is always above everything else.
markerCurrentLocation.zPosition = -1.0;
[mapView.markerManager addMarker:markerCurrentLocation
AtLatLong:startingPoint];
Ciao!
-- Randy

Why is my QTKit based image encoding application so slow?

In a Cocoa application I'm currently coding, I'm getting snapshot images from a Quartz Composer renderer (NSImage objects) and I would like to encode them into a QTMovie at 720*480 size, 25 fps, with the H264 codec, using the addImage: method. Here is the corresponding piece of code:
qRenderer = [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(720,480) colorSpace:CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB) composition:[QCComposition compositionWithFile:qcPatchPath]]; // define an "offscreen" Quartz composition renderer with the right image size
imageAttrs = [NSDictionary dictionaryWithObjectsAndKeys: @"avc1", // use the H264 codec
QTAddImageCodecType, nil];
qtMovie = [[QTMovie alloc] initToWritableFile: outputVideoFile error:NULL]; // initialize the output QT movie object
long fps = 25;
frameNum = 0;
NSTimeInterval renderingTime = 0;
NSTimeInterval frameInc = (1./fps);
NSTimeInterval myMovieDuration = 70;
NSImage * myImage;
while (renderingTime <= myMovieDuration){
if(![qRenderer renderAtTime: renderingTime arguments:NULL])
NSLog(#"Rendering failed at time %.3fs", renderingTime);
myImage = [qRenderer snapshotImage];
[qtMovie addImage:myImage forDuration: QTMakeTimeWithTimeInterval(frameInc) withAttributes:imageAttrs];
[myImage release];
frameNum ++;
renderingTime = frameNum * frameInc;
}
[qtMovie updateMovieFile];
[qRenderer release];
[qtMovie release];
It works; however, my application is not able to do that in real time on my new MacBook Pro, while I know that QuickTime Broadcaster can encode images in real time in H264, with even higher quality than the one I use, on the same computer.
So why? What's the issue here? Is this a hardware management issue (multi-core threading, GPU, ...) or am I missing something? Let me preface this by saying that I'm new (two weeks of practice) to the Apple development world: Objective-C, Cocoa, Xcode, the QuickTime and Quartz Composer libraries, etc.
Thanks for any help.
AVFoundation is a more efficient way to render a QuartzComposer animation to an H.264 video stream.
size_t width = 640;
size_t height = 480;
const char *outputFile = "/tmp/Arabesque.mp4";
QCComposition *composition = [QCComposition compositionWithFile:@"/System/Library/Screen Savers/Arabesque.qtz"];
QCRenderer *renderer = [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(width, height)
colorSpace:CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB) composition:composition];
unlink(outputFile);
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:@(outputFile)] fileType:AVFileTypeMPEG4 error:NULL];
NSDictionary *videoSettings = @{ AVVideoCodecKey : AVVideoCodecH264, AVVideoWidthKey : @(width), AVVideoHeightKey : @(height) };
AVAssetWriterInput* writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
[videoWriter addInput:writerInput];
[writerInput release];
AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput sourcePixelBufferAttributes:NULL];
int framesPerSecond = 30;
int totalDuration = 30;
int totalFrameCount = framesPerSecond * totalDuration;
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
__block long frameNumber = 0;
dispatch_queue_t workQueue = dispatch_queue_create("com.example.work-queue", DISPATCH_QUEUE_SERIAL);
NSLog(#"Starting.");
[writerInput requestMediaDataWhenReadyOnQueue:workQueue usingBlock:^{
while ([writerInput isReadyForMoreMediaData]) {
NSTimeInterval frameTime = (float)frameNumber / framesPerSecond;
if (![renderer renderAtTime:frameTime arguments:NULL]) {
NSLog(#"Rendering failed at time %.3fs", frameTime);
break;
}
CVPixelBufferRef frame = (CVPixelBufferRef)[renderer createSnapshotImageOfType:@"CVPixelBuffer"];
[pixelBufferAdaptor appendPixelBuffer:frame withPresentationTime:CMTimeMake(frameNumber, framesPerSecond)];
CFRelease(frame);
frameNumber++;
if (frameNumber >= totalFrameCount) {
[writerInput markAsFinished];
[videoWriter finishWriting];
[videoWriter release];
[renderer release];
NSLog(#"Rendered %ld frames.", frameNumber);
break;
}
}
}];
In my testing this is around twice as fast as your posted code that uses QTKit. The biggest improvement appears to come from the H.264 encoding being handed off to the GPU rather than being performed in software. From a quick glance at a profile it appears that the remaining bottlenecks are the rendering of the composition itself and reading the rendered data back from the GPU into a pixel buffer. Obviously the complexity of your composition will have some impact on this.
It may be possible to optimize this further by using QCRenderer's ability to provide snapshots as CVOpenGLBufferRefs, which may keep the frame's data on the GPU rather than reading it back to hand it off to the encoder. I didn't look too far into that, though.
