Larger iTunes Search API Images

The iTunes Search API provides album artwork for each artist's albums. The image sizes provided by the API are fairly small (100x100 is the largest). Playing with the URLs, I was able to write a script to access larger images; the largest I have found so far is 225x225. Does anyone know of a larger size available, preferably around 500x500 or bigger?
Thanks.
PS: the URLs for the images are in this format, and the numbers at the end represent the image size.
http://a4.mzstatic.com/us/r1000/038/Music/db/09/6a/mzi.fivmbmtu.225x225-75.jpg

You should be able to get 600x600.
Check this out :
small : http://a4.mzstatic.com/us/r1000/038/Music/db/09/6a/mzi.fivmbmtu.225x225-75.jpg
large : http://a4.mzstatic.com/us/r1000/038/Music/db/09/6a/mzi.fivmbmtu.600x600-75.jpg
A simple string replace should do the trick.
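For example, a minimal Python sketch of that replacement (the URL is the one quoted above; the 600x600 target is just the size shown in the "large" example):

    # Rewrite the size suffix in the artwork URL (sizes assumed from the examples above).
    small = "http://a4.mzstatic.com/us/r1000/038/Music/db/09/6a/mzi.fivmbmtu.225x225-75.jpg"
    large = small.replace("225x225", "600x600")
    print(large)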
Cheers,

The artworkUrl512 or artworkUrl100 fields are the highest ones, at 512*512 and 1024*1024.
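As a rough sketch of pulling those fields out of a Search API response (the search term, entity and the 600x600 rewrite are assumptions for illustration; artworkUrl512 only appears on some result types):

    import json
    import urllib.request

    # Query the iTunes Search API (term/entity/limit are example values).
    url = "https://itunes.apple.com/search?term=radiohead&entity=album&limit=1"
    with urllib.request.urlopen(url) as resp:
        result = json.load(resp)["results"][0]

    # Prefer artworkUrl512 when present; otherwise rewrite artworkUrl100.
    artwork = result.get("artworkUrl512") or \
        result["artworkUrl100"].replace("100x100", "600x600")
    print(artwork)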

Related

Best image file format for book pages

I wanted to scan book pages and combine the images into a PDF "ebook" (just for me), but the file sizes get really huge. Even .jpg resulted in a PDF of 60 MB+.
Do you have any idea how I can compress it further, i.e. which file format I could choose for this specific purpose? (The book contains pictures and written text.)
Thank you for your help.
I tried saving as .jpg and other file formats like .png, but didn't get the files small enough to be easily handled without losing too much resolution.
Images are expensive things.
Ignoring compression, you're looking at 3 bytes per pixel of data.
If you want to keep the images, you could reduce this by converting them to greyscale, which brings it down to 1 byte per pixel (again ignoring compression).
Or you could convert to black and white, which would be 1 bit per pixel.
Alternatively, you could use OCR to translate the images into actual text, which is a far more efficient way of storing a book.
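For a sense of scale, a back-of-the-envelope calculation, assuming an A4 page scanned at 300 DPI and ignoring compression:

    # Raw size of an A4 page (8.27 x 11.69 inches) scanned at 300 DPI.
    width_px, height_px = int(8.27 * 300), int(11.69 * 300)   # ~2481 x 3507
    pixels = width_px * height_px                             # ~8.7 million
    print(pixels * 3 / 1e6)    # RGB, 3 bytes/pixel        -> ~26 MB per page
    print(pixels * 1 / 1e6)    # greyscale, 1 byte/pixel   -> ~8.7 MB per page
    print(pixels / 8 / 1e6)    # black & white, 1 bit/pixel -> ~1.1 MB per page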

Is it possible to get the original HQ images from the CIFAR10 dataset?

I'm currently working on my thesis on neural networks, and I'm using CIFAR-10 as a reference dataset. Now I would like to show some example results in my paper. The problem is that the images in the dataset are 32x32 pixels, so it's really hard to recognize anything in them when printed on paper.
Is there any way to get hold of the original images with higher resolution?
UPDATE: I'm not asking for an image processing algorithm, but for the original images behind CIFAR-10. I need some higher-resolution samples to put in my paper.
I now have the same problem and I just found your question.
It seems that CIFAR was built by labeling the tinyimages dataset, and the authors are kind enough to share the indexing from CIFAR to tinyimages. The tinyimages dataset, in turn, contains a metadata file with the URLs of the original images and a toolbox for fetching any image you wish (e.g. those included in the CIFAR index).
So one could write a .mat file that does this and share the results...
They're just small:
The CIFAR-10 and CIFAR-100 are labeled subsets of the 80 million tiny images dataset.
You could use Google reverse image search if you're curious.
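If you do want to print or reverse-search a sample, here is a minimal sketch for extracting one image from the Python version of the CIFAR-10 batches (the batch path and the index are placeholders):

    import pickle
    from PIL import Image

    # Load one training batch from the Python version of CIFAR-10.
    with open("cifar-10-batches-py/data_batch_1", "rb") as f:  # path is a placeholder
        batch = pickle.load(f, encoding="bytes")

    # Each row is 3072 bytes: 1024 red, 1024 green, 1024 blue, row-major 32x32.
    row = batch[b"data"][0]                                    # index 0 chosen arbitrarily
    img = row.reshape(3, 32, 32).transpose(1, 2, 0)
    Image.fromarray(img).save("cifar_sample.png")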

one image is a subset of another image

The scenario is,
There are two images and we are required to say whether one image is a subset of another image. In other words, image A is present within or part of image B.
We tried traditional bit-by-bit comparison, but it is too time-consuming.
Is there any other image comparison algorithm in place that can help us?
Thanks in advance for your responses.
Check out SIFT and SURF descriptors to find keypoints in the images, and then match them across the two images.
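A minimal OpenCV sketch along those lines (file names are placeholders; ORB stands in for SIFT/SURF here because it ships with stock OpenCV, and the distance threshold is arbitrary):

    import cv2

    # Load the two images in greyscale (paths are placeholders).
    img_a = cv2.imread("small.png", cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread("large.png", cv2.IMREAD_GRAYSCALE)

    # Detect keypoints and compute descriptors (SIFT/SURF work the same way).
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Match descriptors between the two images.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

    # Many low-distance matches suggest image A appears inside image B
    # (the threshold is arbitrary; tune it for your data).
    print("good matches:", sum(1 for m in matches if m.distance < 40))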

Is there a pattern or ratio for jpg image filesize in relation to image size?

I'm trying to optimize a page which loads a lot of images from S3 and which needs to run on mobile devices, too.
Images are available in S,M,L,XL resolutions, so on a smartphone I'm usually pulling Size M for the grid thumbnail images. These pictures measure: 194x230px and usually "weigh" around 20k, which I think is far too much.
Question:
If I use IrfanView and the RIOT plugin, I can easily shave 10k off the file size with the image still looking OK. I'm wondering if there are any guidelines regarding optimal image file size in relation to image dimensions, or is this purely a trial-and-error process? As a side question, is there any server-side tool that also uses the RIOT plugin?
Thanks!
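As a rough, hedged yardstick rather than a rule: photographic JPEGs at web quality often land around 1-2 bits per pixel, so you can sanity-check a file size against the pixel count:

    # Back-of-the-envelope check for the 194x230 thumbnails mentioned above.
    pixels = 194 * 230                      # 44,620 pixels
    print(20 * 1024 * 8 / pixels)           # current ~20 KB -> ~3.7 bits/pixel
    print(pixels * 1.5 / 8 / 1024)          # ~1.5 bits/pixel -> ~8 KB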

Commands for ImageMagick to create thumbnails

Given a photograph uploaded by a user, what is the best approach to creating a number of various-sized thumbnails using ImageMagick (or GraphicsMagick)? My guess at the steps:
Create a super sample of the image, maintaining original aspect ratio
Apply watermark to super sample
Create the various sized thumbnails using the watermarked super sampled image
Additional requirements:
Best quality possible (does this mean PNG over JPG?)
Smallest file size possible (does this mean JPG over PNG?)
Use a density of 72x72 ppi
Since I am not that familiar with the intricacies of IM (or GM), some guidance on the best commands to meet my objectives would be highly appreciated. Thanks.
Check out the ImageMagick documentation:
For a specific size http://www.imagemagick.org/Usage/thumbnails/#fit
For watermarks http://www.imagemagick.org/Usage/annotating/#watermarking
The best quality possible is complicated, since different images compress differently. I'm partial to PNG since it has a variety of compression techniques available to allow for experimentation.
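A hedged sketch of the pipeline as shell-outs to ImageMagick from Python (file names, sizes, quality and the dissolve percentage are placeholders, not recommendations; the options used are the documented -resize, -thumbnail and composite -dissolve ones):

    import subprocess

    # 1. Create a large working copy, preserving the original aspect ratio.
    subprocess.run(["convert", "original.jpg", "-resize", "1024x1024",
                    "supersample.png"], check=True)

    # 2. Overlay a watermark on the working copy.
    subprocess.run(["composite", "-dissolve", "30%", "-gravity", "southeast",
                    "watermark.png", "supersample.png", "watermarked.png"], check=True)

    # 3. Cut the final thumbnails from the watermarked copy.
    for size in ("400x400", "200x200", "100x100"):
        subprocess.run(["convert", "watermarked.png", "-thumbnail", size,
                        "-quality", "85", f"thumb_{size}.jpg"], check=True)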
