I have inherited a legacy project containing some 20+ APNG files that are considered instrumental to the design of the site. However, for whatever reason, Safari plays these APNG files twice, whereas every other browser I have tested plays them only once, as intended: they were created with the loop count (num_plays) set to 1, which is standard (0 means loop forever).
I understand APNG may be considered a thing of the past at this point, but assuming I cannot change this part of the project, do I have any means of getting Safari to play the animations only once, as designed? Or am I looking at some rancid Safari-only fallback?
Here is an example of one of the images:
Thanks!
If all of the images contain as few colors as the example, you could convert them to animated GIFs without loss of quality to avoid a Safari-only fallback (at which you are, indeed, looking without the conversion).
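If you do go the GIF route, here is a rough sketch of one way to do the conversion with Pillow (version 8.0 or later can read APNG frames). The file names, the 100 ms fallback delay, and the disposal mode are assumptions you may need to adjust, and you should verify the play-once behavior in the browsers you care about:

# Rough sketch: convert an APNG to an animated GIF with Pillow.
# "example.png" / "example.gif" are placeholders for the real file names.
from PIL import Image, ImageSequence

apng = Image.open("example.png")          # Pillow >= 8.0 can decode APNG
frames, durations = [], []
for frame in ImageSequence.Iterator(apng):
    frames.append(frame.convert("RGBA"))
    durations.append(frame.info.get("duration", 100))  # per-frame delay in ms

frames[0].save(
    "example.gif",
    save_all=True,
    append_images=frames[1:],
    duration=durations,
    disposal=2,   # restore to background between frames (assumption)
    # no `loop` argument: the idea is to omit the GIF loop extension so the
    # animation plays once, but test this across your target browsers
)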
You could modify the frame control chunk (fcTL) of the last frame, setting delay_num to 65535 and delay_den to 1, so that the first frame does not display again for 65535 s ≈ 18.2 h!
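If you want to script that edit, here is a minimal sketch in Python that finds the last fcTL chunk, rewrites its delay fields, and fixes up the chunk CRC. The file names are placeholders, and it assumes well-formed APNG input:

# Sketch: patch the last fcTL (frame control) chunk of an APNG so the final
# frame's delay becomes 65535/1 seconds. Input/output names are placeholders.
import struct
import zlib

def stretch_last_frame_delay(path_in, path_out):
    with open(path_in, "rb") as f:
        data = bytearray(f.read())
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")

    pos, last_fctl = 8, None
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        if ctype == b"fcTL":
            last_fctl = pos                  # remember the most recent fcTL
        pos += 8 + length + 4                # length/type header + data + CRC

    if last_fctl is None:
        raise ValueError("no fcTL chunk found; not an APNG?")

    length = struct.unpack(">I", data[last_fctl:last_fctl + 4])[0]
    body = last_fctl + 8
    # delay_num sits at offset 20 of the fcTL data, delay_den at offset 22
    struct.pack_into(">HH", data, body + 20, 65535, 1)
    # the CRC covers the chunk type plus the chunk data
    crc = zlib.crc32(data[last_fctl + 4:body + length]) & 0xFFFFFFFF
    struct.pack_into(">I", data, body + length, crc)

    with open(path_out, "wb") as f:
        f.write(data)

stretch_last_frame_delay("example.png", "example-patched.png")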
So I have a programming project that I have to do for my school. What I have to do is set up a 2-player dice game. I could have gone the easy way and just displayed the numbers of the two dice, but I was thinking of using images that I made in Photoshop instead. However, the problem is that I do not know how to change images in an efficient way.
My first option is using the Visible property on several Image controls laid on top of each other and toggling it accordingly, like so:
Image1.Visible = False
Image2.Visible = True
However, I do not think that is very efficient. From my research, Image controls also do not support changing their picture from code.
Secondly, I could use a PictureBox instead, which does support changing the image while the program is running. However, it does not support transparency, and the die images are transparent. Plus, it gives me an "invalid image file" error, I guess due to the transparency in the GIF files.
There is also the cheap workaround of me making the background of the images the same as the form background.
So is there a more efficient way that I am missing? I know that the cheap workaround would be the best option for this case, but I would like to have this knowledge for future use, such as semi-transparent pixels that blend in and so on.
And before you ask, no, I cannot use another programming language, as Visual Basic 6 is what my school teaches. Thankfully they are changing it soon, but I am stuck with this for now.
Turns out you CAN change the picture of an Image control while keeping transparency and Stretch. Here is how to do it properly:
Image1.Picture = LoadPicture("YOURPATHHERE.gif")
This is what I get for believing what I've seen on some forum.
Also, the "invalid image file" error was due to the images being corrupted for some reason.
Very large images will not render in Google Chrome (although the scrollbars will still behave as if the image is present). The same images will often render just fine in other browsers.
Here are two sample images. If you're using Google Chrome, you won't see the long red bar:
Short Blue: http://i.stack.imgur.com/ApGfg.png
Long Red: http://i.stack.imgur.com/J2eRf.png
As you can see, the browser thinks the longer image is there, but it simply doesn't render. The image format doesn't seem to matter either: I've tried both PNGs and JPEGs. I've also tested this on two different machines running different operating systems (Windows and OSX). This is obviously a bug, but can anyone think of a workaround that would force Chrome to render large images?
Not that anyone cares or is even looking at this post, but I did find an odd workaround. The problem seems to be with the way Chrome handles zooming. If you set the zoom property to 98.6% or lower, or to 102.6% or higher, the image will render (any value between 98.6% and 102.6% causes the rendering to fail). Note that the zoom property is not officially defined in CSS, so some browsers may ignore it, which is a good thing in this case since this is a browser-specific hack. As long as you don't mind the image being resized slightly, I suppose this may be the best fix.
In short, the following code produces the desired result, as shown here:
<img style="zoom:98.6%" src="http://i.stack.imgur.com/J2eRf.png">
Update:
Actually, this is a good opportunity to kill two birds with one stone. As screens move to higher resolutions (e.g. the Apple Retina display), web developers will want to start serving up images that are twice as large and then scaling them down by 50%, as suggested here. So, instead of using the zoom property as suggested above, you could simply double the size of the image and render it at half the size:
<img style="width:50%;height:50%;" src="http://i.stack.imgur.com/J2eRf.png">
Not only will this solve your rendering problem in Chrome, but it will make the image look nice and crisp on the next generation of high-resolution displays.
I rely on ImageCR3 for a number of my sites. However, I've come up against a variety of limitations with it over the past couple of years (single-threaded, no crop anchor, etc.), and all of my emails to the support addresses have been ignored. So I'm looking for an alternative.
My first thought was CFImage, but it seems to produce far lower quality at the same file size, and seems excessively slow. Is there any other tool out there that can do what ImageCR does, as efficiently as ImageCR does it, that I could use instead? Or am I best off loading the JPG in CFImage, cropping and saving as PNG, then loading the PNG in ImageCR for the remainder of the editing?
I'm using ColdFusion MX 7 and ColdFusion MX 9 (soon all to be migrated to the latter).
ImageMagick? Seems to work like a charm.
We would like to display very large (50 MB plus) images in Internet Explorer. We would like to avoid compression, as compression algorithms are not what CSI would have us believe they are, and the resulting files are too lossy.
As a result, we have come up with two options: Silverlight Deep Zoom or a Flash based solution (such as Zoomify). The issue is that both of these require conversion to a tiled output and/or conversion to a specific file type (Zoomify supports a single proprietary file type, PFF).
What we are wondering is whether a solution exists that will allow us to view the image without a conversion beforehand.
PS: I know that you can write an application to tile the images (as needed or after the load process) and output them; however, we would like to do this without chopping up the file.
The tiled approach really is the right way to do it.
Your users don't want to download a 50mb file before they can start viewing the image. You don't want to spend the bandwidth to serve 50 megs to every user who might only view a fraction of your image.
If you serve the whole file, users will eventually be able to load and view it, but it won't run smoothly for most of them.
There is no simple non-tiled way to serve just a portion of an image unless you want to use a server-side library like ImageMagick or PIL to extract a specific subset of the image for each user. You probably don't want to do that because it would place a significant load on your server.
Alternatively, you might use something like google's map tool to provide zooming and scaling. Some comments on doing that are available here:
http://webtide.wordpress.com/2008/08/27/custom-google-maps/
Take a look at OpenSeadragon. To make an image work with OpenSeadragon, you need to generate one of the zoomable image formats mentioned here, then follow the getting started guide here.
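For instance, the DZI (Deep Zoom) pyramid that OpenSeadragon can load directly can be generated with libvips; here is a minimal sketch via the pyvips binding, with the input file name and output prefix as placeholders:

# Sketch: build a Deep Zoom (.dzi + tile folder) pyramid for OpenSeadragon.
import pyvips

image = pyvips.Image.new_from_file("huge.tif", access="sequential")
image.dzsave("huge")   # writes huge.dzi and a huge_files/ directory of tiles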
The browser isn't going to smoothly load a 50 meg file; if you don't chop it up, there's no reasonable way to make it not lag.
If you don't want to tile, you could have the server open the file and render a screen-sized view of the image for display in the browser at the particular zoom resolution requested. This way you aren't sending 50 MB files across the line when someone only wants an overview of the image. That is, the browser requests a set of coordinates and an output size in pixels, the server opens the larger image and creates a smaller image that fits the desired view, and sends that back to the web browser.
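A minimal sketch of that idea with Pillow; the source file name, parameter names, and JPEG quality are placeholders for illustration, not a real API:

# Sketch: crop the requested region out of the big image and scale it to the
# requested output size, so only a screen-sized JPEG goes over the wire.
from io import BytesIO
from PIL import Image

def render_view(left, top, right, bottom, out_width, out_height):
    with Image.open("big_source.png") as src:
        view = src.crop((left, top, right, bottom))
        view = view.resize((out_width, out_height), Image.LANCZOS)
        buf = BytesIO()
        view.convert("RGB").save(buf, format="JPEG", quality=85)
    return buf.getvalue()          # bytes to return as the HTTP response body

# e.g. an overview: a 10000x4000 region squeezed into a 1000x400 preview
payload = render_view(0, 0, 10000, 4000, 1000, 400)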
As far as compression, you say it's too lossy, but if that's what you are seeing, you are probably using the wrong compression algorithm or settings for the type of image you have. The JPEG format has quality settings to control lossiness, and PNG compression is lossless (the pixels you get after decompressing are the exact values you had prior to compression). So consider changing the compression you use, and don't just rely on the default settings in an image editor.
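For example, with Pillow (file names are just placeholders), the JPEG quality knob and lossless PNG output look like this:

from PIL import Image

img = Image.open("source.png")
img.convert("RGB").save("out_q95.jpg", quality=95)  # higher quality, less lossy, larger file
img.save("out.png")                                  # PNG is lossless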
I was wondering how software like GoToMeeting captures the desktop. I can do a full-screen (or block-by-block) capture using GDI, but that just seems too wasteful to me. I have also looked into mirror drivers, but I was wondering if there's a simpler technique or a library out there that does this.
I need fast and efficient desktop screen capture (10-15 fps) which I am eventually going to convert into a video file and integrate with my application to send the captured feed over the network or something.
Thanks!
Yes, taking a screen capture and finding the diff from the previous capture would be a good way to reduce bandwidth on transmission, by sending only the changes across; of course, this is similar to video encoding techniques, which do this block by block.
It still means you need to do a capture plus extra processing to get the difference, i.e., to encode it.
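A rough illustration of the capture-then-diff idea in Python with Pillow (ImageGrab works on Windows and macOS); the frame rate and the single bounding box are simplifications, and the send step is left as a comment:

# Sketch: grab the screen, diff against the previous grab, and keep only the
# bounding box of what changed. A real encoder would work block by block.
import time
from PIL import ImageGrab, ImageChops

previous = ImageGrab.grab()
while True:
    time.sleep(1 / 15)                               # aim for roughly 15 fps
    current = ImageGrab.grab()
    bbox = ImageChops.difference(previous, current).getbbox()
    if bbox is not None:                             # something on screen changed
        changed_region = current.crop(bbox)
        # ...compress changed_region and send (bbox, data) over the network...
    previous = current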
By using a mirror driver you can get both the updated rectangles and a pointer to the screen. The updated-rectangle pointer points to all of the rectangles that have changed; these are the regions that change frequently. You will want to filter out some of the rectangles, because in one second you can get thousands of them.
I would either:
Do full-screen captures, and then perform image processing to isolate the parts of the screen that have changed, to save bandwidth.
-OR-
Use a program like CamStudio.
I get 20 to 30 frames per second using a mirror driver and display them in my picture box, but when I get a full-screen update the frames are buffered, since the picture box is slow. I have changed to my own component, which is somewhat faster but still not good at full screen; on average I display 10 fps at full screen. My problem is rendering: I can capture 20 to 30 frames per second, but I can only render 8 to 10 full-screen frames per second. If anyone has achieved full-screen frame rendering, please reply.
What language?
.NET provides Graphics.CopyFromScreen.