Join images using Alibaba's OSS image processing - alibaba-cloud

I am using Alibaba's Object Storage Service's image processing to process my images. I need a way to join (stitch) a few images together and create a larger image.
Background: I want to scale an image up to 7680 × 4320 (8K) resolution using OSS image processing. But every time I try, it fails, as the service only allows scaling images to a maximum of 4096 × 4096.
A solution I came up with for this problem was this:
Crop my image into 4 quarters, resulting in 4 smaller images.
This can be done with the Crop operation to extract parts of the initial image and the Saveas operation to save those parts.
Independently scale up those 4 images to 3840 × 2160 (four such tiles make up 7680 × 4320, and each stays within the 4096 × 4096 limit).
This can be done with the Resize operation to scale up those image parts.
Join those scaled images together to obtain the larger image.
The documentation does not describe any direct way to join images, so I'm looking for a way or a workaround to do this.
How do I accomplish this third step and join those 4 images together to form the final 7680 × 4320 output?
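For reference, here is roughly how steps 1 and 2 map onto x-oss-process parameters. This is only a sketch: the bucket name, object name, and coordinates below are placeholders for a hypothetical 1920 × 1080 source.

```python
# Sketch: build OSS image-processing URLs for steps 1 and 2.
base = "https://my-bucket.oss-cn-hangzhou.aliyuncs.com/source.jpg"

# Step 1: crop the top-left quarter (960x540 of a 1920x1080 source).
crop_tl = f"{base}?x-oss-process=image/crop,x_0,y_0,w_960,h_540"

# Step 2: chain a resize after the crop. limit_0 is needed because
# resize does not enlarge images by default.
quarter_8k = (f"{base}?x-oss-process="
              "image/crop,x_0,y_0,w_960,h_540"
              "/resize,w_3840,h_2160,limit_0")
```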

Looking at the official documentation for processing images with Alibaba Cloud's OSS, it is evident that it contains no information on stitching images together.
If they have imposed a limit on the dimensions, it is wise to assume that is the highest you can go.
The documentation states:
File size cannot exceed 20 MB
That limit applies to the original image, and an image at 8K resolution will most likely be larger than 20 MB; assuming the service doesn't take input that big, we can conclude it won't produce output that big either.
To me it looks like you can only manipulate one image at a time, and in that case you may not be able to stitch images with Alibaba Cloud OSS.
It may be worth contacting support about this: the dimension limit is one they have set, and the service lacks image stitching as well, so letting them know may help improve it in the future.

In which region do you need to process the images? Alibaba Cloud Function Compute can definitely help achieve what you described, but as of 5/31/2018 the service is not available in all regions.
https://www.alibabacloud.com/help/doc-detail/53097.htm
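For what it's worth, the stitching step inside such a function can be as simple as pasting the four scaled quarters onto a blank canvas. A minimal sketch using Pillow, assuming the tiles have already been downloaded locally (file names are placeholders):

```python
from PIL import Image

TILE_W, TILE_H = 3840, 2160  # each scaled quarter; 2x2 -> 7680x4320

def stitch_quarters(paths):
    """paths: [top_left, top_right, bottom_left, bottom_right]"""
    out = Image.new("RGB", (TILE_W * 2, TILE_H * 2))
    offsets = [(0, 0), (TILE_W, 0), (0, TILE_H), (TILE_W, TILE_H)]
    for path, offset in zip(paths, offsets):
        out.paste(Image.open(path), offset)  # place tile at its corner
    return out

stitch_quarters(["tl.jpg", "tr.jpg", "bl.jpg", "br.jpg"]).save("8k.jpg")
```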

Related

Change training dataset aspect ratio and size

I have a training dataset of 640x512 images that I would like to use with a 320x240 camera.
Is it ok to change the aspect ratio and the size of the training images to that of the camera?
Would it be better to upscale the camera frames?
It is better to keep the aspect ratio of the images, because changing it artificially modifies the composition of the objects in the image. What you can do is downscale the image by a factor of 2, so it's 320 x 256, then crop from the center so you have a 320 x 240 image. You can do this by simply removing the first 8 and last 8 rows of the image (the height must drop from 256 to 240; the width already matches). Removing the top 8 and bottom 8 rows should be safe, because it is very unlikely you will see meaningful information within an 8-pixel band at either edge of the image.
If you are using a deep learning framework such as TensorFlow or PyTorch, there are preprocessing methods that automatically crop from the center as well as downscale the image by a factor of 2 for you. You just need to set up a preprocessing pipeline with these two steps in place. You haven't posted any code, so I can't help with implementation details, but hopefully what I've said is enough to get you started.
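For instance, with torchvision (if you happen to use PyTorch), the two steps could look like the sketch below; sizes are in (height, width) order and assume the 640 x 512 dataset and 320 x 240 camera from the question:

```python
import torchvision.transforms as T

preprocess = T.Compose([
    T.Resize((256, 320)),      # downscale by 2: 640x512 -> 320x256
    T.CenterCrop((240, 320)),  # trim 8 rows top and bottom -> 320x240
    T.ToTensor(),              # convert to a float tensor in [0, 1]
])
```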
Finally, do not upsample the images. There will be no benefit, because you would be using existing information to interpolate into a larger space, which is inaccurate. You can scale down, but never scale up. The only situation where upscaling could be useful is super-resolution, but that is for specific cases and depends highly on what images you use. In general, I do not recommend it. Take your training set and downscale it to the camera's resolution, since images at that resolution are what the camera will produce at inference time.

What are the best practices for Product Images in CEF?

I know that CEF has an image resizer that can automatically fix image sizes, but are there best practices when it comes to image sizes, resolution, etc.?
The size of the images is less important than their shape. Images in the catalog are displayed as squares, so rectangular images may not render as well.
The higher the resolution, the better an image will look; the lower the resolution, the faster it will render on the page. There is a balance between the two. Obviously a 3 GB image file will not work, but you don't want pixelated images either. Typically this is a data issue that can be addressed by the client as the project progresses.
If that answer isn't sufficient for the client, tell them to aim for close-to-square images between 1 and 5 MB, and then decide later whether they'd like to trade toward better performance or higher-resolution images.

Using image sets or dynamic image resize?

I couldn't find a suitable question here, so I've decided to ask my own. In your knowledge and experience, which is the better solution for preparing a website for all screen resolutions?
+ Media Queries + multiple image sets
+ Media Queries + a jQuery script to resize images from one set (e.g. images prepared for a screen width of 1600px)
Which solution is better? On one hand I think multiple image sets are better for image quality, but they will consume a lot of bandwidth...
Thank you!
Having multiple image resolutions already processed doesn't mean you will use a lot of bandwidth; you will use the optimum amount for the resolution the user actually needs, without sacrificing image quality.
You should do this:
1) Figure out which image sizes you will need, given the kind of devices your users are using
2) Create a script that will convert your images to the needed sizes when you upload them (see the sketch after this list)
3) Create a proper folder structure to store each image size, so choosing the proper image will take as little time as possible
4) Avoid inline image resize, as this is what really wastes bandwidth
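A rough sketch of step 2 in Python with Pillow; the target widths and folder layout below are assumptions, not a prescription:

```python
import os
from PIL import Image

TARGET_WIDTHS = [480, 800, 1200, 1600]  # example breakpoints

def generate_sizes(src_path, out_root="images"):
    img = Image.open(src_path)
    name = os.path.basename(src_path)
    for width in TARGET_WIDTHS:
        height = round(img.height * width / img.width)  # keep aspect ratio
        out_dir = os.path.join(out_root, str(width))    # one folder per size
        os.makedirs(out_dir, exist_ok=True)
        img.resize((width, height), Image.LANCZOS).save(
            os.path.join(out_dir, name))

generate_sizes("uploads/hero.jpg")
```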

Is there a pattern or ratio for jpg image filesize in relation to image size?

I'm trying to optimize a page which loads a lot of images from S3 and which needs to run on mobile devices, too.
Images are available in S, M, L, and XL resolutions, so on a smartphone I'm usually pulling size M for the grid thumbnail images. These pictures measure 194x230px and usually "weigh" around 20k, which I think is far too much.
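As a back-of-envelope check on that feeling (not a hard rule):

```python
pixels = 194 * 230          # 44,620 pixels in the thumbnail
size_bits = 20 * 1024 * 8   # ~20 KB expressed in bits
print(size_bits / pixels)   # ~3.7 bits per pixel
```

Well-compressed photographic JPEGs at web quality are often cited in the region of 1-2 bits per pixel, so ~3.7 does look heavy for a thumbnail.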
Question:
If I use IrfanView and the RIOT plugin, I can easily shave 10k off the filesize with the image still looking OK. I'm wondering: are there any guidelines regarding optimal image filesize in relation to image dimensions, or is this purely a trial-and-error process? As a side question, is there any server-side tool that also uses the RIOT plugin?
Thanks!

What is the best way to dynamically resize images?

The problem:
We have large product images that we want thumbnails of at various sizes, but we don't want to be stuck batch-processing the images in Photoshop. We want a dynamic way of resizing images that won't add extra load time while an image is being processed on the backend.
Amazon does this somehow with their ecommerce solution. When you upload an image, it resizes the image in square format and then gives you every size imaginable, e.g. 150x150, 149x149, etc., starting at the largest size of the image. So if you upload a 1024x900 image, it will resize it to 1023x899, 1023x1203 (adding white space where needed), and so on down to 1x1px. It then somehow stores all the images on the server (if it even does that).
"there's got to be a better way"
Any suggestions on the best way to handle image resize on the fly?
Dynamic image processing can be incredibly fast, and it's a much better solution than generating every possible combination of sizes for an uploaded image.
The open-source ImageResizing.Net library allows dynamic crop/zoom and resizing, and with the WIC plugin can often have round-trip times of less than 20ms. That's hard to beat. It also offers disk caching and Amazon CloudFront & S3 support if you want to scale with the big folks. It's used by 20K-60K websites, and some servers host upwards of 20TB of images.
I'm pretty sure Amazon uses cached dynamic image resizing for their eCommerce solution. Pre-generating image versions is a very 2001-era solution.
[full disclosure: I'm the author.]
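The library above is .NET, but the underlying pattern (resize on first request, serve from a disk cache afterwards) is easy to illustrate. A rough Python/Pillow sketch of that pattern, not of the library's actual API; paths are placeholders:

```python
import os
from PIL import Image

CACHE_DIR = "cache"
os.makedirs(CACHE_DIR, exist_ok=True)

def resized(src_path, width):
    """Generate the resized file on a cache miss; reuse it afterwards."""
    cache_path = os.path.join(
        CACHE_DIR, f"{os.path.basename(src_path)}.w{width}.jpg")
    if not os.path.exists(cache_path):  # miss: do the work exactly once
        img = Image.open(src_path).convert("RGB")
        height = round(img.height * width / img.width)
        img.resize((width, height), Image.LANCZOS).save(cache_path)
    return cache_path  # hit: serve the cached file
```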
