This question already has answers here:
How to resize NSImage?
(12 answers)
Closed 6 years ago.
I want to resize an NSImage from 512px to 60px. I found only code for iOS, but nothing for OS X.
I found a function on GitHub, and it works fine for me:
func resize(image: NSImage, w: Int, h: Int) -> NSImage {
    let destSize = NSMakeSize(CGFloat(w), CGFloat(h))
    let newImage = NSImage(size: destSize)

    // Draw the source image into the smaller canvas.
    newImage.lockFocus()
    image.drawInRect(NSMakeRect(0, 0, destSize.width, destSize.height),
                     fromRect: NSMakeRect(0, 0, image.size.width, image.size.height),
                     operation: NSCompositingOperation.CompositeSourceOver,
                     fraction: 1.0)
    newImage.unlockFocus()

    newImage.size = destSize
    // Re-wrap the bitmap data so the resized pixels are baked in.
    return NSImage(data: newImage.TIFFRepresentation!)!
}
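For reference, a quick usage sketch; the asset name here is only an illustrative placeholder:

// Hypothetical usage: scale a 512px source image down to 60x60.
let original = NSImage(named: "icon512")!   // placeholder asset name
let thumbnail = resize(original, w: 60, h: 60)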
This question already has answers here:
Xcode 8 Beta 4 CGColor.components unavailable
(5 answers)
Closed 6 years ago.
How can I refactor this code for Swift 3?
extension UIColor {
    var hexString: String {
        // Produces "Use of unresolved identifier 'CGColorGetComponents'"
        let components = CGColorGetComponents(self.cgColor)
        ....
    }
}
var r: CGFloat = 0, g: CGFloat = 0, b: CGFloat = 0, a: CGFloat = 0
self.getRed(&r, green: &g, blue: &b, alpha: &a)
...
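Putting it together, a minimal Swift 3 sketch; the "#000000" fallback for colors that cannot be converted to RGBA is my own assumption, not part of the original answer. (Note that in Swift 3 the removed free function also has a direct replacement: the cgColor.components property.)

extension UIColor {
    var hexString: String {
        var r: CGFloat = 0, g: CGFloat = 0, b: CGFloat = 0, a: CGFloat = 0
        // getRed returns false for colors that cannot be expressed as RGBA.
        guard getRed(&r, green: &g, blue: &b, alpha: &a) else { return "#000000" }
        let rgb = (Int(r * 255) << 16) | (Int(g * 255) << 8) | Int(b * 255)
        return String(format: "#%06X", rgb)
    }
}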
Swift - How to set the UIWebView's background color to be the same as the HTML page loaded in this UIWebView?
Note: the HTML page's background is set in style.css, like:
body {
    background: -webkit-linear-gradient(left, #fff, #F7F6FF);
}
Note 2: many HTML pages, each with a different style.css, are loaded into the same UIWebView. That is why, when I scroll past the edge of the page, I want to see the same color behind the HTML page. Thanks :)
Go plain: use UIColor and set the background yourself.
Here are some utilities ported to Swift:
func colorFromHTML(HEXColor: String?) -> UIColor? {
    // Fall back to gray when no color string is supplied.
    if HEXColor == nil {
        return UIColor.grayColor()
    }
    var rgbValue: UInt32 = 0
    let scanner = NSScanner(string: HEXColor!)
    scanner.scanLocation = 1 // skip the leading '#'
    scanner.scanHexInt(&rgbValue)
    return UIColorFromRGB(rgbValue)
}

func UIColorFromRGB(rgbValue: UInt32) -> UIColor {
    // Unpack the 0xRRGGBB value into its three channels.
    let red   = CGFloat((rgbValue & 0xFF0000) >> 16)
    let green = CGFloat((rgbValue & 0xFF00) >> 8)
    let blue  = CGFloat(rgbValue & 0xFF)
    return UIColor(red: red / 255.0, green: green / 255.0, blue: blue / 255.0, alpha: 1)
}
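If it helps, a small usage sketch; it assumes a webView outlet and that you pick one of the page's gradient stops by hand, e.g. the #F7F6FF from the question's CSS:

// Illustrative only: let the web view's own background show through,
// colored to match the page so over-scroll areas blend in.
webView.opaque = false
webView.backgroundColor = colorFromHTML("#F7F6FF")
webView.scrollView.backgroundColor = webView.backgroundColor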
I'm wondering if anyone can provide some insight into how to handle varying device sizes when designing in a storyboard.
Do you have to check for device frame size before drawing views then?
Thanks.
There are two ways to go about it. If you insist on using frames, you would want to check the frame size before drawing your views. One approach is to write a method in your utils file that checks which device you are on, something like this:
// Requires #import <sys/types.h> and #import <sys/sysctl.h>
+ (NSString *)getHardwareModel {
    AppDelegate_iPhone *appDelegate_iPhone = (AppDelegate_iPhone *)[[UIApplication sharedApplication] delegate];
    size_t size;
    // Get the size of the returned device name.
    sysctlbyname("hw.machine", NULL, &size, NULL, 0);
    // Allocate the space to store the name.
    char *machine = (char *)malloc(size);
    // Get the device name.
    sysctlbyname("hw.machine", machine, &size, NULL, 0);
    // Place the name into an NSString.
    NSString *platform = [NSString stringWithCString:machine encoding:NSUTF8StringEncoding];
    free(machine);
    appDelegate_iPhone.hardwareModel = platform;
    return platform;
}
Once we get back which device we are using, we can set the frame accordingly. So if I wanted to check for, say, the iPhone 6 device, I would do something like this in my setFrameSize method:
NSString *hardwareVersion = [Utils getHardwareModel];
NSString *target = @"x86_64";
NSString *deviceTarget = @"iPhone7,1";
NSRange range = [hardwareVersion rangeOfString:target];
NSRange deviceRange = [hardwareVersion rangeOfString:deviceTarget];
NSLog(@"device=%@", hardwareVersion);
// Setting the frame size for the progress bar
if ([[[UIDevice currentDevice] systemVersion] floatValue] >= 8.0) {
    if (range.location != NSNotFound || deviceRange.location != NSNotFound) {
        float frameSize = self.view.frame.size.width;
        NSLog(@"Frame width===%f", frameSize);
        self.view.frame = CGRectMake(0, 0, 480, 85);
    }
}
Another way that avoids all of this is to use Auto Layout and set constraints that adapt to the different screen sizes; see the sketch below. https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/AutolayoutPG/Introduction/Introduction.html
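For illustration only, here is roughly what that looks like in Swift with layout anchors (iOS 9+); progressBar is a hypothetical view, and in a storyboard you would add the equivalent constraints in Interface Builder instead:

// Pin a hypothetical progress bar to the container's edges so its width
// follows the device instead of a hard-coded 480pt frame.
progressBar.translatesAutoresizingMaskIntoConstraints = false
NSLayoutConstraint.activateConstraints([
    progressBar.leadingAnchor.constraintEqualToAnchor(view.leadingAnchor),
    progressBar.trailingAnchor.constraintEqualToAnchor(view.trailingAnchor),
    progressBar.topAnchor.constraintEqualToAnchor(view.topAnchor),
    progressBar.heightAnchor.constraintEqualToConstant(85)
])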
I really need your help. I run my program in Xcode and it builds successfully, but later
it shows me this error: Thread 1: Program received signal "EXC_BAD_ACCESS" on the program line that I have marked below:
- (NSString *)ocrImage:(UIImage *)uiImage
{
    CGSize imageSize = [uiImage size];
    double bytes_per_line = CGImageGetBytesPerRow([uiImage CGImage]);
    double bytes_per_pixel = CGImageGetBitsPerPixel([uiImage CGImage]) / 8.0;
    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider([uiImage CGImage]));
    const UInt8 *imageData = CFDataGetBytePtr(data);
    // This could take a while. Maybe needs to happen asynchronously.
    // The EXC_BAD_ACCESS is raised on this call:
    char *text = tess->TesseractRect(imageData, (int)bytes_per_pixel, (int)bytes_per_line,
                                     0, 0, (int)imageSize.height, (int)imageSize.width);
    // Do something useful with the text!
    NSLog(@"Converted text: %@", [NSString stringWithCString:text encoding:NSUTF8StringEncoding]);
    return [NSString stringWithCString:text encoding:NSUTF8StringEncoding];
}
Thank you.
Make sure that imageData is not NULL here. That's the most common cause of what you're seeing. You should reconsider your title to be something more related to your problem, and focus on the stack trace and all the variables you are passing to TesseractRect().
The other major likelihood is that tess (whatever that is) is a bad pointer, or that it is not part of the correct C++ class (I assume this is Objective-C++; you're not clear on any of that).
- (NSString *)readAndProcessImage:(UIImage *)uiImage
{
    CGSize imageSize = [uiImage size];
    int bytes_per_line = (int)CGImageGetBytesPerRow([uiImage CGImage]);
    int bytes_per_pixel = (int)(CGImageGetBitsPerPixel([uiImage CGImage]) / 8);
    CFDataRef data =
        CGDataProviderCopyData(CGImageGetDataProvider([uiImage CGImage]));
    // Per the note above, a NULL pixel buffer is the most common cause of the crash.
    if (data == NULL) {
        return nil;
    }
    const UInt8 *imageData = CFDataGetBytePtr(data);
    // This could take a while. Maybe it needs to happen asynchronously?
    char *text = tess.TesseractRect(imageData, bytes_per_pixel, bytes_per_line,
                                    0, 0, imageSize.width, imageSize.height);
    NSString *textStr = [NSString stringWithUTF8String:text];
    delete[] text;
    CFRelease(data);
    return textStr;
}
I am trying to limit the number of colors of an animated GIF (created from an array of CGImageRef).
However, I am having difficulty actually setting the custom color table. Does anyone know how to do this with Core Graphics?
I know of kCGImagePropertyGIFImageColorMap. Below is some test code (borrowing heavily from this github gist -- since it's the only instance of kCGImagePropertyGIFImageColorMap Google could find).
NSString *path = [@"~/Desktop/test.png" stringByExpandingTildeInPath];
CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData((CFDataRef)[NSData dataWithContentsOfFile:path]);
CGImageRef image = CGImageCreateWithPNGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault);

const uint8_t colorTable[8] = { 0, 0, 0, 0, 255, 255, 255, 255 };
NSData *colorTableData = [NSData dataWithBytes:colorTable length:8];

NSMutableDictionary *gifProps = [NSMutableDictionary dictionary];
[gifProps setObject:colorTableData forKey:(NSString *)kCGImagePropertyGIFImageColorMap];
[gifProps setObject:[NSNumber numberWithBool:NO] forKey:(NSString *)kCGImagePropertyGIFHasGlobalColorMap];

NSDictionary *imgProps = [NSDictionary dictionaryWithObject:gifProps
                                                     forKey:(NSString *)kCGImagePropertyGIFDictionary];

NSURL *destUrl = [NSURL fileURLWithPath:[@"~/Desktop/test.gif" stringByExpandingTildeInPath]];
CGImageDestinationRef dst = CGImageDestinationCreateWithURL((CFURLRef)destUrl, kUTTypeGIF, 1, NULL);
CGImageDestinationAddImage(dst, image, imgProps);
CGImageDestinationSetProperties(dst, (CFDictionaryRef)imgProps);
CGImageDestinationFinalize(dst);
CFRelease(dst);
This, however, does not produce a black & white image.
Furthermore, I've tried opening a GIF to find the color table information, but that's providing little help.
CGImageSourceRef imageSourceRef = CGImageSourceCreateWithURL((CFURLRef)destUrl, NULL);
NSDictionary *dict = (NSDictionary *)CGImageSourceCopyPropertiesAtIndex(imageSourceRef, 0, NULL);
CGImageRef img = CGImageSourceCreateImageAtIndex(imageSourceRef, 0, NULL);
printf("Color space model: %d, indexed=%d, rgb=%d\n",
       CGColorSpaceGetModel(CGImageGetColorSpace(img)), kCGColorSpaceModelIndexed, kCGColorSpaceModelRGB);
NSLog(@"%@", dict);
It says the color space is RGB for GIFs. Yet, if I try that code with an indexed PNG, it says the color space is indexed.
Furthermore, all the GIFs I've tried have an image dictionary that looks roughly like the following:
{
    ColorModel = RGB;
    Depth = 8;
    HasAlpha = 1;
    PixelHeight = 176;
    PixelWidth = 314;
    "{GIF}" = {
        DelayTime = "0.1";
        UnclampedDelayTime = "0.04";
    };
}
(If I use CGImageSourceCopyProperties(...), it mentions a global color map, but again no color table is provided.)
I didn't run your code, but from reading it I saw one mistake: the color space in GIF is RGB, so your color map should have numcolors*3 entries (instead of *4). iOS doesn't deal gracefully with color maps whose size is not a multiple of 3.
In other words:
const uint8_t colorTable[6] = { 0, 0, 0, 255, 255, 255 };
NSData *colorTableData = [NSData dataWithBytes:colorTable length:6];
should do the job.