I am using iText PDF 5.4 along with the Java2D interface (java.awt.Graphics canvas), and I have a severe problem with gradients.
I am painting many rectangular shapes whose paint is a LinearGradientPaint. This produces large files (around 10 MB), and trying to open the results in, e.g., Preview.app brings the computer to a total halt. The problem seems to be memory usage: the first few dozen boxes paint rather quickly, and then performance degrades roughly linearly with each additional box, which means that a typical page takes more than 10 minutes to open.
Adobe Acrobat is also slow, but at least it takes some 4 or 5 seconds instead of several minutes.
Is this a bug in iText? Is there a setting or tweak in iText that controls the representation of gradients? My guess is that it decomposes each gradient into hundreds of separate paint commands instead of using a native gradient primitive, if PDF even has one (I know SVG does, but I have no clue about PDF).
The constraint is that I stay within awt.Graphics; I cannot rewrite my rendering code to avoid Java2D.
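One workaround I am experimenting with (based on an assumption, not on anything in the iText docs): PdfGraphics2D reportedly special-cases the plain two-stop java.awt.GradientPaint and maps it to a native PDF axial shading, while paints it does not recognize, such as LinearGradientPaint, get rasterized. If the gradients only need two stops, swapping the paint class is a cheap test that stays entirely inside Java2D:

```java
import com.itextpdf.text.Document;
import com.itextpdf.text.PageSize;
import com.itextpdf.text.pdf.PdfWriter;

import java.awt.Color;
import java.awt.GradientPaint;
import java.awt.Graphics2D;
import java.awt.geom.Rectangle2D;
import java.io.FileOutputStream;

public class GradientPaintTest {
    public static void main(String[] args) throws Exception {
        Document doc = new Document(PageSize.A4);
        PdfWriter writer = PdfWriter.getInstance(doc, new FileOutputStream("gradient-test.pdf"));
        doc.open();
        Graphics2D g2 = writer.getDirectContent().createGraphics(
                PageSize.A4.getWidth(), PageSize.A4.getHeight());
        // Two-stop GradientPaint instead of LinearGradientPaint: if iText
        // emits an axial shading for it, the file should stay small.
        g2.setPaint(new GradientPaint(36f, 36f, Color.WHITE, 36f, 236f, Color.BLUE));
        g2.fill(new Rectangle2D.Float(36, 36, 200, 200));
        g2.dispose(); // flushes the Java2D content into the PDF
        doc.close();
    }
}
```

If the test file stays small and opens instantly, the rasterization of LinearGradientPaint is the culprit.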
An alternative idea would be to use Apache Batik and output to SVG instead. There is an example that shows how to enable the correct transcoding of LinearGradientPaint to the SVG equivalent.
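For reference, the Batik boilerplate for that route looks roughly like this. GradientExtensionHandler stands for the custom ExtensionHandler from the example mentioned above; Batik does not ship one, so you have to supply it yourself:

```java
import java.io.FileOutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;

import org.apache.batik.dom.GenericDOMImplementation;
import org.apache.batik.svggen.SVGGeneratorContext;
import org.apache.batik.svggen.SVGGraphics2D;
import org.w3c.dom.DOMImplementation;
import org.w3c.dom.Document;

public class SvgExport {
    public static void main(String[] args) throws Exception {
        DOMImplementation impl = GenericDOMImplementation.getDOMImplementation();
        Document doc = impl.createDocument("http://www.w3.org/2000/svg", "svg", null);

        SVGGeneratorContext ctx = SVGGeneratorContext.createDefault(doc);
        // Hypothetical handler (from the example referred to above) that
        // turns LinearGradientPaint into an SVG <linearGradient> element.
        ctx.setExtensionHandler(new GradientExtensionHandler());

        SVGGraphics2D g2 = new SVGGraphics2D(ctx, false /* textAsShapes */);
        // ... hand g2 to the unchanged Java2D rendering code here ...

        try (Writer out = new OutputStreamWriter(new FileOutputStream("out.svg"), "UTF-8")) {
            g2.stream(out, true /* useCss */);
        }
    }
}
```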
EDIT: There seems to be a new Java2D-to-SVG library, JFreeSVG. Its recent changes indicate that gradients are implemented.
Related
I am building a web application which will display a large number of image thumbnails as a 3D cloud and provide the ability to click on individual images to launch a large view. I have successfully done this in CSS3D using three.js by creating a THREE.CSS3DObject for each thumbnail and then appending the thumbnail as an svg:image.
It works great for up to ~1200 thumbnails, and then performance starts to drop off (very low FPS and long load times). By the time you hit 2500 thumbnails it is unusable. Ideally I want to work with over 10k thumbnails.
From what I can tell I would be able to achieve the same result by creating each thumbnail as a WebGL mesh with texture. I am a beginner with three.js though, so before I put in the effort I was hoping for guidance on whether I can expect performance to be better or am I just asking too much of 3D in the browser?
As far as rendering goes, CSS3D should be relatively okay for rendering quite a large number of "sprites". But 10k would probably be too much.
WebGL would probably be a better option, though. You could also apply further optimizations, such as storing the thumbnails in an atlas texture.
But rendering is just one part. Event handling can be a serious bottleneck if not handled carefully.
I don't know how you're handling the mouse click event and the transition towards the full-size image, but attaching an event listener to each of 2.5k+ objects probably isn't a good choice anyway. With pure WebGL you could do the hit test in image space: render each tile with a unique ID encoded as a color, then read back the pixel under the cursor to determine which tile was clicked. I imagine that a WebGL/CSS3D combo could use this approach as well.
To answer the question: WebGL should handle 10k fine. You may need to think about some performance optimization if your rectangles are big and cover a significant portion of the screen, but there are ways around it if that problem appears.
I'm trying to set up GeoServer to display 2 data stores. Both are full-Earth tile sets, 1 for day and 1 for night. The imagery is 200 m resolution, which roughly translates to 2 sets of 50 GeoTIFFs of 1.2 GB each. For context, the application is a museum exhibit that simulates the view from the Space Station. Tiles need to load quickly, and often for large areas if we're going to provide an oblique view (looking over the horizon). We're using CesiumJS for the renderer, which has support for most of the imagery provider standards out there.
Steps I've already tried:
ImageMosaic: I can't load zoom levels 0-4 without the server running out of memory. At the more zoomed-out levels I get a stupendously blurry image, and it takes minutes to return the actual high-resolution tiles. I have caching on, and I've even run the seeding process.
ImagePyramid: Using GDAL I built an ImagePyramid with 11 layers for each of the tile sets. This seemed to help a little, but it seems to have severely capped the resolution.
At this point I can only assume I need some fancy hybrid configuration of the 2, but I'm at a loss for where to actually start, or whether there is a de facto way these sorts of configurations are handled.
For anyone who is interested: the solution that worked best was to merge the tiles together and use gdal2tiles to create a TMS server.
[Skip to the bottom for the question only]
While developing my XNA game I ran into another horrible XNA limitation: Texture2Ds (at least on my PC) can't have dimensions larger than 2048*2048. No problem; I quickly wrote my custom texture class, which uses a [System.Drawing.] Bitmap by default, splits the texture into smaller Texture2Ds when necessary, and displays them as appropriate.
When I made this change I also had to update the method that loads the textures. When loading the Texture2Ds in the old version I used Texture2D.FromStream(), which worked pretty well, but XNA can't even seem to store/load textures larger than the limit: if I tried to load, say, a 4092*2048 PNG file, I ended up with a 2048*2048 Texture2D in my app. Therefore I switched to loading the images using [System.Drawing.] Image.FromFile and then casting to a Bitmap, which doesn't seem to have any such limitation. (Later I convert this Bitmap to a list of Texture2Ds.)
The problem is that loading the textures this way is noticeably slower, because now even those images that are under the 2048*2048 limit are loaded as a Bitmap and then converted to a Texture2D. So I am actually looking for a way to check an image file's dimensions (width and height) before loading it into my application: if it is under the texture limit, I can load it straight into a Texture2D without first loading it into a Bitmap and then converting it into a single-element Texture2D list.
Is there any (clean and possibly very quick) way to get the dimensions of an image file without loading the whole file into the application? And if there is, is it even worth using? I would guess that the slowest operation here is the file opening/seeking (probably hardware-bound, when it comes to HDDs) and not streaming the contents into the application.
Do you need to support arbitrarily large textures? If not, switching to the HiDef profile will get you support for textures as large as 4096x4096.
If you do need to stick with your current technique, you might want to check out this answer regarding how to read image sizes without loading the entire file.
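For reference, the header-only idea looks like this, sketched in Java here for consistency with the earlier threads (the linked answer does the equivalent in C# by parsing the format headers by hand). The standard ImageIO readers resolve getWidth/getHeight from the file header without decoding the pixel data:

```java
import java.awt.Dimension;
import java.io.File;
import java.io.IOException;
import java.util.Iterator;

import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

public final class ImageSizePeek {
    // Returns the image dimensions by reading only the file header.
    static Dimension peekDimensions(File file) throws IOException {
        try (ImageInputStream in = ImageIO.createImageInputStream(file)) {
            Iterator<ImageReader> readers = ImageIO.getImageReaders(in);
            if (!readers.hasNext()) {
                throw new IOException("No image reader for " + file);
            }
            ImageReader reader = readers.next();
            try {
                reader.setInput(in);
                return new Dimension(reader.getWidth(0), reader.getHeight(0));
            } finally {
                reader.dispose();
            }
        }
    }
}
```

With that check in place, files within the 2048*2048 limit can go straight down the fast Texture2D path, and only oversized ones need the Bitmap-splitting fallback.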
Say I have this old manuscript. What I am trying to do is process the manuscript so that all the characters present in it can be reliably recognized. What are the things I should keep in mind, and what methods are there for approaching such a problem?
Please help, thank you.
Some graphics applications have macro recorders (e.g. Paint Shop Pro). They can record a sequence of operations applied to an image and store them as a macro script. You can then run the macro in a batch process, in order to process all the images contained in a folder automatically. This might be a better option than re-inventing the wheel.
I would start by playing around with the different functions manually, in order to see what they do to your image. There are an awful lot of things you can try: sharpening, smoothing, and noise removal, each with a lot of different methods and options. You can work on the contrast in many different ways (stretching, gamma correction, expansion, and so on).
In addition, if your image has a yellowish background, then working on the red or green channel alone would probably lead to better results, because the blue channel will have poor contrast against such a background.
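As a tiny sketch of that idea (plain Java2D, no libraries; redChannel is just an illustrative name), copy a single channel into a grayscale image and continue processing from there:

```java
import java.awt.image.BufferedImage;

public final class ChannelSplit {
    // Copy the red channel of an RGB scan into a single-band grayscale
    // image. Shift by 8 instead of 16 to take the green channel instead.
    static BufferedImage redChannel(BufferedImage src) {
        BufferedImage out = new BufferedImage(
                src.getWidth(), src.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                int red = (src.getRGB(x, y) >> 16) & 0xFF;
                out.getRaster().setSample(x, y, 0, red);
            }
        }
        return out;
    }
}
```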
Do you mean that you want to make it easier for people to read the characters, or are you trying to improve image quality so that optical character recognition (OCR) software can read them?
I'd recommend that you select a specific goal for readability. For example, you might want readers to be able to read the text 20% faster if the image has been processed. If you're using OCR software to read the text, set a read rate you'd like to achieve. Having a concrete goal makes it easier to keep track of your progress.
The image processing book Digital Image Processing by Gonzalez and Woods (3rd edition) has a nice example showing how to convert an image like this to a black-on-white representation. Once you have black text on a white background, you can perform a few additional image processing steps to "clean up" the image and make it a little more readable.
Sample steps:
Convert the image to black and white (grayscale)
Apply a moving average threshold to the image. If the characters are usually about the same size in an image, then you shouldn't have much trouble selecting values for the two parameters of the moving average threshold algorithm (see the sketch after this list).
Once the image has been converted to just black characters on a white background, try simple operations such as a morphological "close" to fill in small gaps.
Present the original image and the cleaned image to adult readers, and time how long it takes for them to read each sample. This will give you some indication of the improvement in image quality.
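Here is a minimal sketch of step 2, assuming an 8-bit grayscale input. This is my own rendering of the running-average idea rather than the exact algorithm from the book; window and bias are the two parameters mentioned in step 2 and need tuning per manuscript:

```java
import java.awt.image.BufferedImage;

public final class AdaptiveThreshold {
    // Moving-average threshold: keep an exponential running mean of the
    // recent pixels and mark a pixel as ink when it is clearly darker
    // than that local mean. Rows are scanned in alternating directions
    // so the average follows the scan like a pen (boustrophedon order).
    static BufferedImage movingAverageThreshold(BufferedImage gray,
                                                int window, double bias) {
        int w = gray.getWidth(), h = gray.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_BINARY);
        double mean = 128; // running estimate of the local background
        for (int y = 0; y < h; y++) {
            for (int i = 0; i < w; i++) {
                int x = (y % 2 == 0) ? i : w - 1 - i;
                int v = gray.getRaster().getSample(x, y, 0);
                mean += (v - mean) / window;
                out.setRGB(x, y, v < mean * bias ? 0x000000 : 0xFFFFFF);
            }
        }
        return out;
    }
}
```

For step 3, a morphological close on the resulting binary image (a dilation followed by an erosion) fills small gaps in the strokes; most image processing libraries offer it as a single call.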
A technique called the Stroke Width Transform (SWT) has been discussed on SO previously. It can be used to extract character strokes from even very complex backgrounds. The SWT would be harder to implement, but it could work for quite a wide variety of images:
Stroke Width Transform (SWT) implementation (Java, C#...)
The texture in the paper could present a problem for many algorithms. However, there are techniques for denoising images based on the Fast Fourier Transform (FFT), an algorithm you can use to find 1D or 2D sinusoidal patterns in an image (e.g. grid patterns). About halfway down the following page you can see examples of FFT-based techniques for removing periodic noise:
http://www.fmwconcepts.com/misc_tests/FFT_tests/index.html
If you find a technique that works for the images you're testing, I'm sure a number of people would be interested to see the unprocessed and processed images.
I'm planning on writing a game using JavaScript / canvas, and I just had one question: what performance considerations should I think about regarding loading images versus just drawing using the canvas methods? Because my game will use very simple geometry for the art (circles, squares, lines), either method will be easy to use. I also plan to implement a simple particle engine in the game, so I want to be able to draw lots of small objects without much of a performance hit.
Thoughts?
If you're drawing simple shapes with solid fills then drawing them procedurally is the best method for you.
If you're drawing more detailed entities with strokes, gradient fills and other performance-sensitive make-up, you'd be better off using image sprites. Generating graphics procedurally is not always efficient.
It is possible to get away with a mix of both. Draw graphical entities procedurally on the canvas once as your application starts up. After that you can reuse the same sprites by painting copies of them instead of generating the same drop-shadow, gradient and strokes repeatedly.
If you do choose to draw sprites you should read some of the tips and optimization techniques on this thread.
My personal suggestion is to just draw shapes. I've learned that if you're going to use images instead, then the more you use the slower things get, and the more likely you'll end up needing to do off-screen rendering.
This article discusses the subject and has several tests to benchmark the differences.
Conclusions
In brief: Canvas likes a small canvas size, and the DOM likes working with few elements (although the DOM in Firefox is so slow that this isn't always true).
And if you are planning to use particles, I thought you might want to take a look at Doodle-js.
Loading an image from the cache is faster than generating it or loading it from the original resource. But then you have to preload the images so that they get into the cache.
It really depends on the type of graphics you'll use, so I suggest you implement the easiest solution and solve the performance problems as they appear.
Generally I would expect copying a bitmap (drawing an image) to become faster than recreating it from primitives as the complexity of the image gets higher.
That is, drawing a couple of squares per scene should take about the same time using either method, but a complex image will be faster to copy from a bitmap.
As with most gaming considerations, you may want to look at what you need to do, and use a mixture of both.
For example, if you are using a background image, then loading the bitmap makes sense, especially if you will crop it to fit the canvas; but if you are making something dynamic, then you will need to use the drawing API.
If you target IE9 and FF4, for example, then on Windows you should get good performance from drawing, as those browsers take advantage of the graphics card. But for more general browser support you will perhaps want to look at using sprites, which will either be images you draw as part of initialization and then move around, or bitmapped images you load.
It would help to know what type of game you are looking at, how dynamic the graphics need to be, how large the bitmapped images would be, and what kind of frame rate you are hoping for.
The landscape is changing with each browser release. I suggest following the HTML5 Games initiative that Facebook has started, and the jsGameBench test suite. They cover a wide range of approaches from Canvas to DOM to CSS transforms, and their performance pros and cons.
http://developers.facebook.com/blog/post/454
http://developers.facebook.com/blog/archive
https://github.com/facebook/jsgamebench
If you are just drawing simple geometric objects, you can also use divs. They can be circles, squares, and lines with a few lines of CSS; you can position them wherever you want; and almost all browsers support the styles (you may have some problems with mobile devices using Opera Mini or old Android Browser versions and, of course, with IE7 and below), yet there would be hardly any performance hit.