I must just be blind - can anyone point to documentation or source code for how to write a software component that allows OS X to support a custom image format system-wide? (I'd like applications like Preview to be able to display my custom images.) It seems like it must be possible, but I just can't find the documentation or an example. Thanks.
Edit: I have gotten some information that leads me to believe this isn't possible. I did see the Apple sample code about making a service to convert a file, but that isn't what I'm after. I will leave this open in case someone has faced this before and has found something I can't find.
I have a web application where users upload images of their locations. I want to write a program to detect the type of location and the list of objects in each image. I wrote a program in C# using Alturos YOLO to detect objects in an image. The results are fine for me, but the problem is that I also want to detect the place type from the image. For example, if you upload an image that has snow in it, it should detect the "Snow" keyword; if you upload a "Lake" image, it should show keywords like "Lake", "water", "river", etc. I am a web developer and have never done any machine learning or image processing, but I am keen to learn. Is there any way to do this, or can anyone point me down the right path?
I found "https://www.clarifai.com/", but I want to write my own code because I have a large number of images.
All in all, I'm pretty sure that there's no single correct answer to this. You could implement image recognition in a hundred different equally correct ways using different tools. So here's my opinionated perspective. Anyone and everyone is free to agree/disagree with what I'm saying.
I've worked a bit with OpenCV (Python) in the past. There are a great number of libraries available based on it, so you can probably find a working base to build off of. I think it should be capable of doing the task you describe, although I'm not quite sure exactly how it would be done.
The other framework for machine learning and object recognition that I have seen is Apple's Create ML / Core ML system (Swift or Objective-C). My experience with it is limited to cloning a git repo and poking around inside, but it looks pretty powerful.
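For what it's worth, here is a minimal sketch of the "keywords from an image" idea in Python, using a pretrained ImageNet classifier from Keras/TensorFlow rather than the tools named above; the file name and the confidence threshold are placeholders. ImageNet's labels include some scene-like classes (lakeside, seashore, alp), but a model trained on a scene dataset such as Places365 would likely fit the "place type" requirement better.

```python
# Minimal sketch: run a pretrained ImageNet classifier over one image and
# keep the top labels as candidate keywords.
# Requires: pip install tensorflow pillow ; "photo.jpg" is a placeholder path.
import numpy as np
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = ResNet50(weights="imagenet")          # downloads the weights on first use

img = image.load_img("photo.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=5)[0]:
    if score > 0.10:                          # crude cut-off for "keywords"
        print(label, round(float(score), 2))  # e.g. lakeside, seashore, alp
```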
I'm not even sure if what I'm asking for is possible.
I want to create a really lightweight interface for the RPi. It doesn't need to show much in terms of graphics, but it would help.
I want to display data on the Unix console (so I don't have to start up a GUI desktop like GNOME).
But I don't even know what to google for. Basically, when installing something like Ubuntu, you get a console screen that is slightly formatted (unlike plain text logged to the console).
I want to create an interface similar to what you might see when you load the BIOS menu. How do I do this?
It would also be really useful if I could get some touch functionality, so that if I touch certain parts of the screen it registers and I can get the interface to behave as I need it to.
You did not specify a programming language, so maybe dialog will do.
ncurses is well known for C.
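If Python happens to be an option on the Pi, its standard-library curses module is a thin binding to ncurses, so you can draw a BIOS-menu-style screen on the bare console without starting X. A minimal sketch (the displayed values are placeholders):

```python
# Minimal full-screen text UI using Python's standard-library curses module
# (a binding to ncurses); runs directly on the Linux console.
import curses

def main(stdscr):
    curses.curs_set(0)          # hide the cursor
    stdscr.clear()
    stdscr.border()             # BIOS-menu-style frame around the screen
    stdscr.addstr(1, 2, "RPi status", curses.A_BOLD)
    stdscr.addstr(3, 2, "CPU temp : 47.2 C   (placeholder)")
    stdscr.addstr(4, 2, "Uptime   : 3 days   (placeholder)")
    stdscr.addstr(6, 2, "Press q to quit")
    stdscr.refresh()
    while stdscr.getch() != ord("q"):
        pass

curses.wrapper(main)            # restores the terminal even on errors
```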
When you look at the information for a font in Font Book on OS X, it lists all kinds of useful information, including Language, Version, Unique name, etc. Is there a nice way to get any/all of this information from Objective-C? In particular, I want to get the Version of a font.
I know how to make a CTFontDescriptorRef, but I don't see any attributes on it that would give me the Version. I've looked at NSFontDescriptor in the same way but haven't found anything, and googling hasn't helped.
I need to do this because the app I work on runs in Chinese, and I know that one font looks better than another as long as I have a "late enough" version of the font installed. So I'd like to use a particular font if the later version is installed, and otherwise fall back to another font.
Well, I stumbled upon the answer five minutes after posting. I was basically looking in the wrong place, hoping to find it on CTFontDescriptorRef. It looks like the right place to look is CTFontRef, which you can create from a CTFontDescriptor via CTFontCreateWithFontDescriptor.
Then you can use CTFontCopyName, which accepts a bunch of different name keys, notably kCTFontVersionNameKey.
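For reference, here is the same call sequence expressed through the PyObjC CoreText bindings (the Core Text calls are identical from Objective-C; the PostScript name below is just a placeholder):

```python
# Sketch using the PyObjC CoreText bindings (pip install pyobjc-framework-CoreText);
# the same Core Text functions are called verbatim from Objective-C.
from CoreText import (
    CTFontDescriptorCreateWithNameAndSize,
    CTFontCreateWithFontDescriptor,
    CTFontCopyName,
    kCTFontVersionNameKey,
)

# "PingFangSC-Regular" is a placeholder PostScript name.
descriptor = CTFontDescriptorCreateWithNameAndSize("PingFangSC-Regular", 12.0)
font = CTFontCreateWithFontDescriptor(descriptor, 12.0, None)

# kCTFontVersionNameKey is one of the name keys accepted by CTFontCopyName.
version = CTFontCopyName(font, kCTFontVersionNameKey)
print(version)  # e.g. "11.0d11", depending on which version of the font is installed
```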
I saw this video (YouTube) and I want to make use of the ability shown at 3:00. Can anyone tell me what is being used here? Ideally, suggestions would work on Windows 7 at least.
I've done a few Google searches for "Active Windows Desktop", which was mentioned in the video, in an attempt to find something that has this feature, but I failed to find anything.
I'd recommend you look into WPF -- it has built-in features for arbitrary transformations of window content, so it would probably be a good place to start looking.
I have not been able to track down an answer on this. I'd like to be able to manipulate or create images and then compile them into a video. I'm starting to think this is just not a good fit for GAE. I wanted to do this in Python, but it doesn't look like that is possible without C support. Even with Java I'm seeing conflicting information about what is possible.
Does anyone know for sure if there are any fully supported image libraries for Python or Java?
You're right - anything that involves heavy image manipulation isn't a good fit for App Engine - especially video encoding. Consider writing a service that does this on something such as EC2, and calling it from App Engine when needed.
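As a rough illustration of that split, here is a hypothetical sketch of the App Engine (Python) side using the urlfetch API from the GAE SDK: the frontend just hands the job to an encoding service you run elsewhere. The endpoint URL and payload format are made up for the example.

```python
# Hypothetical sketch: the App Engine app delegates the heavy encoding work to
# a service hosted elsewhere (e.g. EC2). URL and payload format are invented.
import json
from google.appengine.api import urlfetch

def request_encoding(image_urls):
    payload = json.dumps({"images": image_urls, "format": "mp4"})
    result = urlfetch.fetch(
        url="https://encoder.example.com/jobs",   # your externally hosted service
        payload=payload,
        method=urlfetch.POST,
        headers={"Content-Type": "application/json"},
        deadline=30,                               # seconds
    )
    if result.status_code == 200:
        return json.loads(result.content)["job_id"]
    raise RuntimeError("encoding service returned %d" % result.status_code)
```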