I'd like to dynamically either create from scratch or update an image, using some input parameters to tweak/seed the images. Let's say I have this image - http://cdn.smashingapps.com/wp-content/uploads/2009/07/patterns-mix.jpg - and I want to tweak it by making the flowers bigger or changing their color. I guess updating an existing image will be harder than creating one from the ground up.
I am not looking for a final solution but any pointers to the right resources will be greatly appreciated.
Consider using Processing to generate such images programmatically.
It is a simple programming language and environment that is easy to learn and get started with, and a collection of free tutorials is available. The Generative Art book by Matt Pearson is a more artistically inclined introduction.
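To make the parameter-driven idea concrete, here is a rough sketch, in Python with Pillow rather than Processing itself, of generating a tiled "flower" pattern where the petal size and colour are plain inputs; the function and parameter names are made up for illustration, so tweaking the image is just a matter of calling the function with different values.

from PIL import Image, ImageDraw

def draw_pattern(width=400, height=400, flower_radius=30,
                 petal_color=(220, 80, 120), background=(250, 240, 220)):
    # Tile the canvas with simple "flowers" whose size and colour are parameters.
    img = Image.new("RGB", (width, height), background)
    draw = ImageDraw.Draw(img)
    step = flower_radius * 3
    for cx in range(step // 2, width, step):
        for cy in range(step // 2, height, step):
            # Four petals around a centre disc.
            for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                px, py = cx + dx * flower_radius, cy + dy * flower_radius
                draw.ellipse([px - flower_radius, py - flower_radius,
                              px + flower_radius, py + flower_radius],
                             fill=petal_color)
            draw.ellipse([cx - flower_radius // 2, cy - flower_radius // 2,
                          cx + flower_radius // 2, cy + flower_radius // 2],
                         fill=(250, 210, 80))
    return img

draw_pattern(flower_radius=40).save("pattern.png")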
I am searching for a performant way to generate a PNG based on a layout. These layouts will mostly consist of text and one or two icons. The data source for this information is JSON. However, the JSON won't be normalized to fit the layout/screen size. Let me clarify: the JSON will contain an attribute "Title". The title may be too long, so the font size has to be decreased. Or the description has too many attributes and only some of them need to be displayed, and so on.
We currently have a system in place for creating these layouts and generating a PNG, but creating new layouts is very time consuming and, frankly speaking, a pain. However, the current solution is extremely performant, as it can generate a PNG in around 1-2 ms. For my PoC to be deemed successful, I need to reach 10 ms or lower. If there is a solution that takes slightly longer to generate but can be scaled horizontally, that's fine as well.
TL;DR:
I'm searching for a way to generate a PNG based on a layout I create. The PNG generation needs to be performant (< 10 ms) and the implementation of new layouts should be as hassle-free as possible.
What technologies are suited for this use case?
Here is an example of what a layout might look like:
Edit: I can't post images yet, but please search for "electronic shelf labeling" on Google Images.
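For illustration, the kind of fitting logic described above looks roughly like this as a Python/Pillow sketch; the font path, label size, and margins are made up, and this is not how our current system works:

from PIL import Image, ImageDraw, ImageFont

def render_title(title, width=296, height=128, font_path="arial.ttf", max_size=40):
    # Start with the largest font size and shrink until the title fits the width.
    img = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(img)
    size = max_size
    while size > 8:
        font = ImageFont.truetype(font_path, size)
        if draw.textlength(title, font=font) <= width - 10:
            break
        size -= 2
    draw.text((5, 5), title, font=font, fill="black")
    return img

render_title("A title that may be too long for the label").save("label.png")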
Also:
I already asked a similar question yesterday, but it was pointed out that my way of trying to achieve this probably won't lead to success. Original Post
I work at a printer where we generate thumbnails of artwork for orders and store them in a folder before printing.
I'm looking for a code library that will allow us to take a photo of a printed item and look through the library of thumbnails for the design.
Just wondered if anyone knows of a library or api that could do this?
Thanks
David
pHash is one solution.
There are others, but which one to use mainly depends on your requirements: do you only want to identify identical images, and if not, what types of transformations do you want to be able to capture, etc.?
In general you should look for near-duplicate image search.
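For a feel of how that looks in code, here is a minimal sketch using the Python imagehash package (which implements pHash) together with Pillow; the file paths are placeholders:

import imagehash
from PIL import Image

# Pre-compute a perceptual hash for every stored thumbnail (paths are placeholders).
thumbnail_hashes = {
    path: imagehash.phash(Image.open(path))
    for path in ["thumbs/order-001.png", "thumbs/order-002.png"]
}

# Hash the photo of the printed item and pick the thumbnail with the smallest
# Hamming distance (the '-' operator on two hashes returns that distance).
query = imagehash.phash(Image.open("photo_of_print.jpg"))
best_path = min(thumbnail_hashes, key=lambda path: query - thumbnail_hashes[path])
print(best_path, query - thumbnail_hashes[best_path])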
@david-jennings There are numerous methods to look for similar images in libraries. Remember that Google already does this in Google Images.
Your problem falls under the scope of Content-Based Image Retrieval (CBIR), which aims at looking for images with similarities in their content. MPEG-7 is a standard established many years ago to address these issues, and the research field is very active, with new techniques being developed constantly.
The main idea in CBIR is to extract some kind of signature from an image and try to match it against the previously extracted signatures of all images in your database. Which method to use depends upon the specifics of your problem... According to your initial post, I suppose that SIFT is probably going to do the work for you...
You may implement such a system using OpenCV with C/C++/Java/etc., or something more "scientific" using MATLAB.
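As a rough sketch of the SIFT route with OpenCV in Python (assuming an OpenCV build that includes SIFT; file names are placeholders), you would compare the photo against each stored thumbnail and keep the one with the most good matches:

import cv2

# Detect SIFT keypoints/descriptors in the photo and one candidate thumbnail.
sift = cv2.SIFT_create()
photo = cv2.imread("photo_of_print.jpg", cv2.IMREAD_GRAYSCALE)
thumb = cv2.imread("thumbs/order-001.png", cv2.IMREAD_GRAYSCALE)
kp1, des1 = sift.detectAndCompute(photo, None)
kp2, des2 = sift.detectAndCompute(thumb, None)

# Match descriptors and keep only the clearly better ones (Lowe's ratio test).
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Rank thumbnails by the number of good matches; the highest is the likely design.
print(len(good), "good matches")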
Hey guys, I was looking for different approaches/algorithms for placing textual/non-textual content in a book layout with two sides. Essentially it should look like the user is reading a book, with content placed in a two-page layout.
I'd appreciate any directives or suggestions on how to go about doing this, in particular a way to decide how many content items can fit into two pages with no overflow. Suppose a page is 425 px by 600 px and we have two such pages side by side (dimensions are flexible).
Any pointers are appreciated.
P.S. I know this is not a pure programming question per se but more of an algorithmic question. If so, please direct me to where this question would be best asked.
EDIT 1
I want to use this algorithm in a web application, not in a standalone app, so please consider that.
EDIT 2
I would like to mention that the order of the content items is pre-decided.
If your goal is to display data in a book-like format, then the easiest method would be to reuse an existing toolkit for doing text layout. I think the best tool for this purpose would be LaTeX, which is built on top of TeX, the original digital typesetting program.
In order to use it you will have to convert your data into the LaTeX format, which is relatively painless (I have done it several times with several types of data). In that document you can specify that you want a book format, how large the pages are, and much more. You can then render the text to PDF/PS and display the two pages of a "book" side by side.
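As a rough sketch of that conversion step (in Python, with a made-up item structure, and treating the question's pixel dimensions as points only as an approximation), you can emit a minimal LaTeX book document and compile it with pdflatex:

# Hypothetical sketch: turn a list of content items into a minimal LaTeX book
# document. The item structure and file name are made up for illustration.
items = [
    {"title": "First item", "body": "Some text for the first item."},
    {"title": "Second item", "body": "More text for the second item."},
]

lines = [
    r"\documentclass[twoside]{book}",
    # 425 x 600 "pixels" approximated as points; adjust to your real page size.
    r"\usepackage[paperwidth=425pt, paperheight=600pt, margin=20pt]{geometry}",
    r"\begin{document}",
]
for item in items:
    lines.append(r"\section*{%s}" % item["title"])
    lines.append(item["body"])
lines.append(r"\end{document}")

with open("book.tex", "w") as f:
    f.write("\n".join(lines))

# Compile with e.g. `pdflatex book.tex` and show the resulting pages side by side.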
If what you are looking for is the actual algorithms to do it yourself, you might search around the TeX/LaTeX community for information.
Any ideas on how to do a simple image registration? (I have IMAGE1 and IMAGE2 taken of the same subject, but with the camera having moved a little, and I want to match IMAGE2 with IMAGE1.)
I checked MANY software packages that do this, but they're all focused on medical images, so I couldn't input a simple JPEG (one even allowed PGM, but it didn't work).
thanks
There is an excellent package called "ANTS", which you should refer to:
http://www.picsl.upenn.edu/ANTS/
You may also like to look into a popular package called "ITK":
http://itk.org/
To solve this problem you need to break it up into manageable steps (a rough code sketch of these steps follows below).
1. You have to have a set of similar points; these are typically found by feature detection or user selection.
2. Once you have the points you need, find the transformation matrix between the two images (based on the points you received).
3. Use the transformation matrix to map one image onto the other.
Things That Should Help:
Feature Detection Algorithms: SIFT
Topic that this is under in computer vision: Photo stitching, Homographies, Image Registration
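Here is a minimal sketch of those three steps with OpenCV in Python (ORB is used instead of SIFT because it ships with core OpenCV; file names are placeholders):

import cv2
import numpy as np

# 1. Detect matching feature points in both images.
img1 = cv2.imread("IMAGE1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("IMAGE2.jpg", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:50]

# 2. Estimate the transformation (homography) from the matched points.
src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# 3. Warp IMAGE2 into IMAGE1's coordinate frame.
aligned = cv2.warpPerspective(img2, H, (img1.shape[1], img1.shape[0]))
cv2.imwrite("IMAGE2_registered.jpg", aligned)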
There is a very easy way to perform it in Slicer; look at the General Registration module.
You can simply load your images, define your registration type and your transformation file, and then run it.
SimpleITK, although primarily aimed at medical imaging, will read .jpg files and has the full suite of registration tools.

import SimpleITK as sitk

# Read a JPEG explicitly through the JPEG image IO.
reader = sitk.ImageFileReader()
reader.SetImageIO("JPEGImageIO")
reader.SetFileName(inputImageFileName)
image = reader.Execute()
I've been admiring StackOverflow's default quilt-like profile pictures (which I notice are also on the Fail Blog) and am curious what program both are using to generate them.
But what I really want to know is: If you were to design the system to create default profile pictures, how would you do it?
I'm looking for ideas on what algorithm you'd use, as well as things like how you would relate the image to the user, be it via their username or some portrayal of their progress (i.e. the image gets more complex, or larger, as they gain reputation).
FWIW, the default pictures are generated by Gravatar, which is why you'll see them on more than this site.
It's called an Identicon. On Stack Overflow, Gravatar uses your IP address to generate the image.
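This is not Gravatar's actual algorithm, but the basic identicon idea can be sketched in a few lines of Python with Pillow: hash some user identifier, use the hash bits to switch cells of a small grid on or off, and mirror the grid for symmetry.

import hashlib
from PIL import Image

def identicon(user_id, cells=5, cell_px=40):
    digest = hashlib.md5(user_id.encode()).digest()
    # First three bytes pick the foreground colour; the rest decide which cells are on.
    color = tuple(digest[:3])
    img = Image.new("RGB", (cells * cell_px, cells * cell_px), (240, 240, 240))
    half = (cells + 1) // 2
    for y in range(cells):
        for x in range(half):
            if digest[3 + (y * half + x) % (len(digest) - 3)] % 2:
                for xx in (x, cells - 1 - x):  # mirror horizontally for symmetry
                    box = (xx * cell_px, y * cell_px,
                           (xx + 1) * cell_px, (y + 1) * cell_px)
                    img.paste(color, box)
    return img

identicon("user@example.com").save("identicon.png")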
This is an editorial, not necessarily an answer.
Those auto-generated avatars on this site come from a service (Gravatar) that focuses exclusively on providing avatars and is therefore the core of their business. For apps that aren't specifically intended to generate and display avatars, I would just go with an empty placeholder (like Facebook). It's a neat feature, but is it worth your development time when a simple placeholder would be just as effective?
A very good source of images would be flame fractals. They are rather computationally expensive, so sourcing them from a project like Electric Sheep, or having them rendered on the user's computer, should be considered to offload the work.
Who wouldn't want default profile pictures like these?
Example icons: http://sheepserver.net/v2d6/gen/202/124809/icon.jpg http://sheepserver.net/v2d6/gen/202/124805/icon.jpg http://sheepserver.net/v2d6/gen/202/125373/i77.jpg http://sheepserver.net/v2d6/gen/202/125431/i116.jpg
Use a Julia set or something like that and set the initial conditions to a hash of the user's email address.
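A minimal sketch of that idea in Python with NumPy and Pillow; the way the hash seeds the Julia constant here is just one arbitrary choice:

import hashlib
import numpy as np
from PIL import Image

def julia_avatar(email, size=256, iterations=60):
    # Derive the Julia constant c from a hash of the user's email address.
    h = hashlib.md5(email.encode()).digest()
    c = complex((h[0] / 255 - 0.5) * 1.5, (h[1] / 255 - 0.5) * 1.5)

    # Iterate z -> z**2 + c over a grid and record how fast each point escapes.
    y, x = np.mgrid[-1.5:1.5:size * 1j, -1.5:1.5:size * 1j]
    z = x + 1j * y
    counts = np.zeros(z.shape, dtype=np.uint8)
    for i in range(iterations):
        mask = np.abs(z) <= 2
        z[mask] = z[mask] ** 2 + c
        counts[mask] = i

    # Scale the escape counts to 0-255 and return a grayscale image.
    return Image.fromarray((counts * (255.0 / iterations)).astype(np.uint8))

julia_avatar("user@example.com").save("avatar.png")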
I'd use a jpeg server tool (aspjpg or similar) to manipulate the image on load so it displays their badges within their profile pic.
In fact, using any tool to dynamically generate images is pretty cool. Applying some sort of 3D or Flash technology to dynamically create images using random variables for eye spacing or facial structure would be pretty wicked as well.
But yeah, this is a weird question. Hah!
I did something similar years back: I used POV-Ray to generate little 3D scenes with toruses (tori?) and spheres. There were lots of parameters to tweak, such as the position, size and colour of each object.
POV-Ray is a scriptable 3D render engine; you can find it here.
Unfortunately my images all looked too similar to each other. I love Gravatar's identicons as used on this site. I think the symmetry helps, and the shapes are unique enough that you can identify users fairly clearly.
In Ruby there is a library, http://github.com/swdyh/quilt, to generate them!