Open source library for reading and writing multi-layer .psd images - Cocoa

I am working on Mac OS X with Cocoa.
I want to read and write multi-layered .psd images.
With the native Cocoa APIs I can only read or write the images as flat (i.e., single-layered) images.
So is there any 3rd-party library available to perform these operations?
This is my earlier question:
psd Image creation with layer properties using CGImageRef
Thanks,
Dhana

PSD is a lousy format for anything but use in Photoshop, and most third-party libraries will miss some of the finer points of layer composition, in part because Adobe keeps extending the format, and in part because it isn't particularly well documented.
If you need to keep altered images in PSD, then presumably you have Photoshop in-house. Your best bet is to use Photoshop batch processing, which can be as easy as keyboard macros or as complex as you want to script.

Related

How to read and write to a file in OpenGL ES

How does one read or write files (png, txt, jpg...) in OpenGL ES? The target is Android through Visual Studio.
Unfortunately it's not as simple as placing the assets in the same directory as the main program and then referencing them using fstream.h or stdio.h, as with the desktop OpenGL equivalent. I've tried creating folders like res/raw and using android/asset_manager.h and similar libraries. Is it even possible through this IDE? I'll be done in Unity by the time this gets resolved...
You don't. OpenGL is an API concerned with transforming vertices, and drawing pixels on a screen. File formats are outside the definition of OpenGL. In other words, if you want to use a *.png as an input/output format, you'll need to find a 3rd party library that supports that file format (e.g. libPNG), and use that to transfer the pixel data to OpenGL.
The raw file stream classes (e.g. ifstream) have zero concept of a file format. Again, that's another reason to use a 3rd-party library.
Unity is a full-fledged game engine, and as such has spent time building support for various file formats (e.g. PNG, obj, etc.). OpenGL is far lower level than that. A good place to start for image data is a lib such as DevIL (which itself includes other 3rd-party libraries such as libPNG, libJPEG, etc.).
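To make that division of labour concrete, here is a minimal C++ sketch that decodes a PNG with the single-header stb_image library (an assumption; libPNG or DevIL would play the same role) and hands the raw pixels to OpenGL ES. The path is made up, and on Android you would normally read the bytes through AAssetManager rather than a plain filesystem path:

    // Sketch: a 3rd-party decoder does the file-format work; OpenGL only sees raw pixels.
    #include <cstdio>
    #define STB_IMAGE_IMPLEMENTATION
    #include "stb_image.h"     // single-header decoder, not part of OpenGL
    #include <GLES2/gl2.h>     // OpenGL ES 2.0

    GLuint loadTexture(const char* path)
    {
        int w = 0, h = 0, channels = 0;
        // Decode the .png into a plain RGBA byte array.
        unsigned char* pixels = stbi_load(path, &w, &h, &channels, 4);
        if (!pixels) {
            std::fprintf(stderr, "could not decode %s\n", path);
            return 0;
        }

        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // The only hand-off to OpenGL: width, height, and a block of RGBA bytes.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        stbi_image_free(pixels);           // OpenGL keeps its own copy
        return tex;
    }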

Is there an easy-to-use image processing/editing library for Cocoa?

Something like OpenCV.
I hope the library can do several simple image editing operations, like DrawLine(UIImage, startPoint, endPoint) or ConvertToGray(UIImage).
CoreImage is the built-in image manipulation library in Cocoa.
For example: What is the best Core Image filter to produce black and white effects?
I'd suggest using OpenCV, which is a great algorithms and image processing library.
Choosing OpenCV would also give you more options in the future.
Try this
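For a sense of what that looks like, here is a minimal C++ sketch of exactly the two operations asked about, using OpenCV's standard API (the file names are made up; on iOS you would also need glue code to convert between UIImage and cv::Mat):

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat img = cv::imread("input.png");   // load an image into a matrix
        if (img.empty()) return 1;

        // DrawLine(image, startPoint, endPoint): a red line, 3 pixels thick
        cv::line(img, cv::Point(10, 10), cv::Point(200, 150), cv::Scalar(0, 0, 255), 3);

        // ConvertToGray(image)
        cv::Mat gray;
        cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY); // CV_BGR2GRAY in older versions

        cv::imwrite("output.png", gray);          // write the edited result back out
        return 0;
    }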
OpenCV is not meant for image editing. You can do that, but it's like buying a big truck to carry your groceries from the market.
The best way to do it is to look into libraries that are already integrated for image editing. As far as I know, there are several of them in Cocoa; CoreImage, mentioned by Dor, is one of them.
There are also some specialized image editing / UI toolkits that may help you better than OpenCV. You may check whether ImageMagick or Qt are available for Mac/iOS.

Can FDT deal with .fla files or not?

I'm trying to find an all-in-one IDE for flash, one that can deal with various flash related files.
I just read this answer and it recommends FDT, but it seems FDT can only deal with script files, not .fla ones.
Which IDE should I use to develop the various file types involved in Flash development?
I am fairly certain it cannot. Is there any particular reason you need this? Most developers code in external .as files. This way the code is in one location and not buried in the timeline. Also, the code can be placed in source control.
For an all-in-one solution, Adobe Flash CS5 is probably your best bet. They have somewhat improved the IDE and added things like autocomplete.
Flash Builder 4 and Adobe Flash CS5 have finally solved this problem - you can now create an FLA in Flash and then use the wizard to easily create a Flash Builder project around the .fla. All of your classes have access to library exports etc, and you can set it up so that when you click to edit a Class file in Flash it automatically opens the file in Flash Builder.
I really like it.

How do I read a video camera in a win32 C program

I have this garden variety USB video camera, and it came with two mini-apps, one that just lets you see what the camera sees, and one that records to an .avi file.
But what's the API if I want to grab images from the camera in my own C program? I am making the assumptions that it's (1) possible and (2) desirable to make some call and have a 2D array of pixel information filled in.
What I really want to do is tinker with image processing algorithms, and for that I'd really like to get my code around some live data.
EDIT -
Having had a healthy exposure to Linux, I can grasp how (ideally/in theory) you could open() the device, use ioctl() to configure it, and read() the data. And I'm virtually certain that that's not how Windows is going to present the API. Not knowing what function names Windows might use for a video device API, or even if it has one, makes it difficult to look up, at least with the win32 api search capabilities that I have at my disposal.
You'll probably need the DirectShow API, provided that's how the camera operates. If the manufacturer created their own code path, you'll need their API.
Your first step, as pointed out by ChrisBD, is to check if Windows supports your device.
If that is the case you have three possible Windows APIs for capture:
DirectShow
VFW (Video for Windows). This has more or less been replaced by DirectShow.
Media Foundation. The newest API, intended to replace DirectShow. AFAIK it is not fully implemented yet and is only available in Vista.
Of the three, DirectShow is the best choice. However, learning and using DirectShow is not a trivial task. An excellent example can be found here.
Another possibility is to use OpenCV. OpenCV is an image processing library, which you can also use to process the captured frames. It has an image capture API that provides a simpler abstraction and is easier to use than the Windows APIs.
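To show how much simpler that abstraction is, here is a minimal C++ capture loop with OpenCV (device index 0 is an assumption; it is usually the first camera the OS exposes). Under the hood OpenCV talks to one of the Windows backends for you, so each frame arrives as a plain 2D array of BGR pixels:

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::VideoCapture cap(0);             // open the first camera on the system
        if (!cap.isOpened()) return 1;

        cv::Mat frame;
        while (cap.read(frame)) {            // frame.data points at rows of BGR pixels
            // ... run your own image processing algorithms on the frame here ...
            cv::imshow("live", frame);
            if (cv::waitKey(1) == 27) break; // press Esc to quit
        }
        return 0;
    }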
The API is the way to go.
A good indication of whether the camera requires a bespoke API or not is to see if it is recognised by a PC without the manufacturer's applications installed. If Windows has the drivers built in, then you should be able to use the Windows APIs to capture the images.
Alternatively, if you know what compression codec has been used for the AVI file, you could unpack it.
Ideally you would capture the video in a native format (YUV, RGB15 or similar), as then you can work on compression as well as manipulation.

How do I create a container file?

I would like to create a file format for my app, like Quake, OpenOffice, and MS Office 07 have.
Basically an uncompressed zip archive or a tar file.
I need this to be cross platform (mac and windows).
Can I do something via command prompt and bash?
If you want a single file that is portable to all platforms and which contains structured data, consider using SQLite. You'll get a full-featured, ACID-compliant database that exists on disk as a single file.
There are libraries you can link against to directly access the file, and there is a command line tool you can use as well. No matter what language you are using, most likely there is support for it.
http://www.sqlite.org
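As a rough sketch of the "database file as container" idea (the file and table names are made up, error handling is minimal, and real code would use prepared statements plus BEGIN/COMMIT transactions when storing binary blobs):

    #include <sqlite3.h>
    #include <cstdio>

    int main()
    {
        sqlite3* db = nullptr;
        if (sqlite3_open("mydoc.container", &db) != SQLITE_OK) return 1;

        // One table acts as a named-entry store, much like members of a zip archive.
        const char* sql =
            "CREATE TABLE IF NOT EXISTS entries(name TEXT PRIMARY KEY, data BLOB);"
            "INSERT OR REPLACE INTO entries VALUES('manifest.xml', '<doc/>');";

        char* err = nullptr;
        if (sqlite3_exec(db, sql, nullptr, nullptr, &err) != SQLITE_OK) {
            std::fprintf(stderr, "sqlite error: %s\n", err);
            sqlite3_free(err);
        }
        sqlite3_close(db);
        return 0;
    }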
Have a look at the open source 7-Zip compression format. For your specific needs, you can use it in an "archive" mode with zero compression, which is very fast.
It provides a powerful SDK, LZMA. From the site:
"LZMA is the default and general compression method of 7z format in the 7-Zip program. LZMA provides a high compression ratio and very fast decompression, so it is very suitable for embedded applications. For example, it can be used for ROM (firmware) compressing.
The LZMA SDK provides the documentation, samples, header files, libraries, and tools you need to develop applications that use LZMA compression."
Zip is supported everywhere. If a container is all you need, then those are surely good options.
SQLite is great.
A single file, cross-platform, a tiny library, SQL access to data, transactions, the whole enchilada.
You can use transactions to guarantee consistent return points in case of a crash. Check the well-known uses of SQLite; they specifically advocate using it as a data model layer for desktop applications.
Also, there's a command-line tool to manually access the data.
First thing you should ask yourself is, "Do I really need to make my own?"
Depending on what you want to use it for, you are probably better off using a common format and some pre-made libraries which already handle one of those formats very well.
Good places to start:
http://www.destructor.de/libtar/index.htm (libtar -- the 'container' format)
http://www.zlib.net/ (zlib -- a method of compressing data before or after you put it in the container)
If you still really think you need to make your own, I would suggest studying something very simple first, like tar's format:
http://en.wikipedia.org/wiki/Tar_(file_format)
or
http://schmidt.devlib.org/file-formats/tar-archive-file-format.html
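If you do end up rolling your own, tar really is that simple: each archive member is a 512-byte header followed by the file data padded to a multiple of 512 bytes. Here is a rough C++ sketch of the POSIX ustar header layout (double-check exact field offsets against the references above before relying on them):

    // Numeric fields are NUL/space-terminated octal ASCII, not binary integers.
    struct TarHeader {
        char name[100];     // file name
        char mode[8];       // permissions, e.g. "0000644"
        char uid[8];        // owner user id (octal)
        char gid[8];        // owner group id (octal)
        char size[12];      // file size in bytes (octal)
        char mtime[12];     // modification time, seconds since epoch (octal)
        char chksum[8];     // simple byte sum over the header
        char typeflag;      // '0' regular file, '5' directory, ...
        char linkname[100]; // link target, if any
        char magic[6];      // "ustar"
        char version[2];    // "00"
        char uname[32];     // owner user name
        char gname[32];     // owner group name
        char devmajor[8];
        char devminor[8];
        char prefix[155];   // path prefix for long names
        char pad[12];       // padding to reach 512 bytes
    };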
Instead of making a format, I'd just decide on a convention. One or more named files within the container hold the metadata you need to access the rest of the files and know what to do with them. The container itself, though, should just be some ubiquitous format, such as zip. No need to reinvent the wheel here.
