Fabricjs SBOX_FATAL_MEMORY_EXCEEDED - fabricjs2

I am working on a project that requires me to load huge SVG drawings and draw annotations on top of them with fabric.js. After storing the background, I can't load it back again using loadFromJSON: it gives me SBOX_FATAL_MEMORY_EXCEEDED. Is there any way I can bypass this?
Update:
I realized that my issue is fabric.loadSVGFromURL. Whenever I run this function it crashes my browser. For context, I am loading CAD drawings that have been converted to SVG, and the end product has to be displayed at exactly real-life size. For example, 11 cm in the diagram should be 11 cm on the screen, so I stick with SVG for the measurements and scaling.
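For reference, this is roughly the kind of call that triggers the crash, together with one workaround that is often suggested for huge CAD-derived SVGs: let the browser rasterize the SVG once and keep it as a single background image, so only the annotations remain individual fabric objects. This is a minimal TypeScript sketch against fabric's pre-v6 API with a hypothetical file path, not a confirmed fix:

import { fabric } from 'fabric';

const canvas = new fabric.Canvas('c');

// The problematic call: every <path> in the CAD export becomes a separate
// fabric object, which is what exhausts memory on very large drawings.
fabric.loadSVGFromURL('/drawings/floorplan.svg', (objects, options) => {
  const drawing = fabric.util.groupSVGElements(objects, options);
  canvas.add(drawing);
  canvas.requestRenderAll();
});

// Possible workaround (hypothetical): load the same SVG as one image element
// and use it as a non-editable background, keeping annotations as objects.
fabric.Image.fromURL('/drawings/floorplan.svg', (img) => {
  canvas.setBackgroundImage(img, () => canvas.requestRenderAll());
});

Whether the rasterized background still satisfies the real-life-size requirement depends on keeping the same pixel-per-centimetre scale when the image is created, so treat this only as a direction to test.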

Related

D3 JS Exported Maps Image Quality Issue

We are using d3.js to create the maps in our application by reading a .csv file that contains latitude and longitude values to plot the locations, but due to the huge amount of data (~200k latitude/longitude pairs in the .csv file) the map creation takes around 1 minute.
To avoid that much delay we are exporting the map to an image using the SaveSvgAsPng.js library, and once the image is created we display the image in the app instead of recreating the map every time the user logs into the application (below is the code snippet from the SaveSvgAsPng.js library).
Now, with this approach, we are seeing that the image exports differently each time and there is a drop in the image quality of the map (image attached).
I'm raising this here to get some help in fixing this issue, or to hear from anyone who has faced it before, as it is a show-stopper for our application. Happy to add more details if these are not sufficient.
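Purely for orientation (and not the snippet the poster refers to), this is roughly how the export call is usually made and where the quality-related options live. The element id and storage key below are hypothetical, and the promise-based svgAsPngUri form assumes a recent version of the library:

import { saveSvgAsPng, svgAsPngUri } from 'save-svg-as-png';

const svgElement = document.getElementById('mapSvg')!; // hypothetical id of the d3 map's <svg>

// Export at twice the on-screen resolution so the PNG is not softer than the
// live SVG map, and force a white background instead of transparency.
saveSvgAsPng(svgElement, 'map.png', {
  scale: 2,
  backgroundColor: '#ffffff',
});

// Alternatively, keep a data URI around and reuse it on later logins instead
// of re-rendering the 200k-point map.
svgAsPngUri(svgElement, { scale: 2 }).then((uri) => {
  localStorage.setItem('cachedMap', uri); // hypothetical cache key
});

One thing worth checking for the "exports differently every time" symptom is whether the export only runs after all 200k points have finished rendering; exporting while d3 is still drawing would capture whatever happens to be on screen at that moment.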

What is the best way to display and apply filters to RAW images on macOS?

I am creating a simple photo catalogue application for macOS to see whether the latest APIs can significantly improve performance of loading directories with large numbers of images.
So far it looks pretty promising: loading around 600 45 MB RAW image thumbnails using QLThumbnailGenerator and CGImageSourceCreateWithURL is super fast, allowing thumbnail images and image metadata to be displayed almost instantly.
Displaying these images in a NSCollectionView using a CALayer in the NSCollectionViewItem's view also appears to be extremely fast and scrolling is very smooth.
I did find that QLThumbnailGenerator seems to start failing after a few hundred images and starts returning error code 108 if I call the API in a continuous loop - I fixed that by calling CGImageSourceCopyPropertiesAtIndex immediately after the thumbnail generator API call - so maybe there is a timing issue, or not enough file handles, or something if the API is called too quickly and for too long.
However I am still having trouble rendering a full-sized image to the display - here I am using an NSScrollView with a layer-backed NSView as the documentView. Everything is super fast until the following call:
view.layer.contents = cgImage
And at this point the entire main thread hangs until the image has loaded - and this may take a few seconds.
Once it has loaded it's fine and zooming in and out by changing the documentView frame size is very fast - scrolling around the full size image is also super smooth without any of the typical hiccups.
Is there a way of loading these images without causing the UI to freeze?
I've seen the recent WWDC 2020 session where they demonstrate similar scrolling of large numbers of images, but I haven't been able to find anything useful on loading large images other than CATiledLayer - and it's not really clear if that is the right answer for this problem.
The old Apple sample RawExpose seemed to be an option, but most of that code is deprecated and it seems one has to use MetalKit instead of GLKit - unfortunately there is no example of using MetalKit with Core Image that I can find.
FYI - I tried using some of the new SwiftUI CollectionView and List, but they seem to be significantly slower than AppKit and I found some of the collection view items never render - of course these could just be bugs in the macOS 11 beta.
OK - well, I finally figured it out, and it's complicated but simple. It's complicated because there are so many options to choose from and so many outdated sample apps to look at. In any event I think I have solved most, if not all, of the issues related to using Metal-backed CALayers and rendering real-time updates of the images as CIFilter adjustments are applied. There are many pieces to the puzzle and I'm happy to share if anyone is looking for help.
Some key pointers:
I am using CAMetalLayer and NSView
I override the CAMetalLayer.display(layer:) method and call layer.setNeedsDisplay() when the user slides an adjustment slider.
I chain together all the CIFilters, including the RAW filter created with CIFilter(imageURL:).
Most importantly, I use the RAW filter's scaleFactor parameter to size the image - I encountered major performance issues using any other method to resize the image for the view's size.
Don't expect high performance if the image is zoomed right in - 50% seems to be the limit for 45-megapixel RAW images from a Nikon D850.
A short video of the result is here https://youtu.be/5wp0CIWAoIM

Silverlight canvas freedraw underperforming

I'm making a Silverlight website which includes paint-like features, including freehand drawing. To achieve this I used the technique described on the following website: http://codeding.com/articles/freehand-drawing-in-silverlight .
The problem is, when I run the demo project it starts to lag extremely after just a few seconds of drawing. I realise that this is probably caused by the number of shapes this technique requires; however, and this is my main question:
How on earth does the demo on the website not lag no matter how much I draw, while my local project, which should have the EXACT same code, lags right away?
I tried finding something about improving canvas performance overall, but the only thing I found was turning the drawing into a static image, which is not really ideal since I use undo/redo functionality.
The number of shapes added to the Canvas shouldn't be the reason for the lagging; there must be something else, like converting the drawing into an image for the undo/redo functionality. For undo/redo, you can save the stroke information instead of images. Creating and storing images during each undo/redo operation will consume too much memory.
A stroke is nothing but a set of points from the start (mousedown event) to the end (mouseup event), and a set of strokes forms a complete drawing. You can always recreate the drawing using the saved stroke information (just like you can recreate it using images). You can use a simple data structure like List<List<Point>> to store a complete drawing; this is very memory-efficient compared to creating and storing the image itself.
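The same idea, sketched in TypeScript purely to illustrate the data structure (the original context is Silverlight/C#, where it would be a List<List<Point>>): keep the strokes, not bitmaps, and replay them for undo/redo.

// Hypothetical sketch: store each stroke as the list of points between
// mousedown and mouseup, and rebuild the drawing from those lists.
interface Point { x: number; y: number; }

type Stroke = Point[];   // one mousedown-to-mouseup gesture
type Drawing = Stroke[]; // the whole picture

class StrokeHistory {
  private strokes: Drawing = [];
  private redoStack: Drawing = [];

  beginStroke(start: Point): void {
    this.strokes.push([start]);
    this.redoStack = []; // a new stroke invalidates any redo history
  }

  extendStroke(p: Point): void {
    this.strokes[this.strokes.length - 1]?.push(p);
  }

  undo(): void {
    const s = this.strokes.pop();
    if (s) this.redoStack.push(s);
  }

  redo(): void {
    const s = this.redoStack.pop();
    if (s) this.strokes.push(s);
  }

  // Redraw everything from the stored strokes, e.g. as polylines.
  replay(draw: (stroke: Stroke) => void): void {
    this.strokes.forEach(draw);
  }
}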

Windows Phone 7 Image Looping

I would like to loop through a sequence of images. I have tried using a Pivot control, but I don't like the blank space in between image transitions. I would prefer to use something that will animate between images smoothly. I also looked at the LoopingSelector control, but I can't seem to set the orientation to horizontal.
I'm assuming you're interested in the kind of image viewer iOS offers, swiping right or left to navigate through the photos. If that's the case, I hate to say it, but I think you're looking at building your own control.
I think to implement it properly these are the essential things you need to think about and address:
For performance's sake, load all the images you have into MemoryStream objects and store the binary data (you can get creative with this and only store the first 10-15 images, depending on how large the images are; doing this would enable your control to support thousands of images and still perform like a champ).
Once an image is about to be on-screen, set the source of the image to the saved MemoryStream object that has the bytes loaded into it (this will minimize the work that the UI thread does, keeping the control performant and responsive).
Use Manipulation events to track the delta x of the motion someone uses when swiping left to right in order to actually perform the moving of the items
Move the images by changing their Canvas.Left property (you can go negative I think, otherwise just make your canvas the width of all the images you have combined)
Look up some of the available libraries to support momentum so you can have a natural smooth transition between images

How to Automatically Create ImageMaps of Grey Maps from Wikipedia?

I have a project using various members of Wikipedia's grey maps: http://en.wikipedia.org/wiki/Wikipedia:Blank_maps. I fill them in with colors depending on which countries, states or provinces a user selects by clicking on the shape or by checking a checkbox.
I would like to write a script that creates imagemaps automatically for each country, state or province by somehow getting the X and Y pixel locations of the borders of a country, state or province, albeit without the names of these entities, which I will fill in later. I have already done the world map by hand and found an open-source US image-map demo. I would now like to create my maps more rapidly.
I use PHP and GD to floodfill the shapes, so I guess I could use one central pixel location of the shapes as well. Any suggestions? This script is a possibility but is still somewhat manual: http://abhinavsingh.com/blog/2009/03/using-image-maps-in-javascript-a-demo-application/. Also Mapedit, http://www.boutell.com/mapedit/, has a magic wand feature that works pretty well, but again I have a feeling this can be done automatically.
An almost perfect solution to this issue is to use SVG images and this translator of the SVG code to imagemap area tags: http://www.electricfairground.com/polygonator/. The result is an appropriate image map, although the SVG image may need to be resized, and the countries or provinces all seem to be offset and occasionally jumbled up. So this requires opening a page generated with the SVG image, or an exported PNG copy of the SVG file, in a WYSIWYG editor that allows you to move imagemap elements.
I'm trying to figure out what the pattern of the offset is, and if I do I'll post it here: http://wherehaveibeen.info/images/polye.html. The author of the "Polygonator" clued me in to his service and to using SVG map images in his article here: http://www.electricfairground.com/2009/08/08/image-map-rollover-effects-using-jquerys-maphilight-plugin/. There he advocates tracing PNG images into SVG images via Inkscape. But since Wikipedia already has maps in SVG format, why not go straight to the code? It turns out that SVG files basically already have the polygons separated and the border regions specified, at least in the Wikipedia grey maps, http://en.wikipedia.org/wiki/Wikipedia:Blank_maps; they just need some cleaning up with the Polygonator.
I found that if I opened the SVG code in Notepad++ I could copy and paste the entire contents of the SVG file, and the Polygonator will remove the unneeded code. A little cleanup of the imagemap area tags is required afterwards, but not much. The biggest problem is the aforementioned offsets in the generated area-tag regions and the occasional jumbled-up, overlapping locations of the imagemap areas in the generated code.
Well the real answer here appears to be that SVG files are almost imagemaps already and can be mildly processed to turn them into imagemaps, and Wikipedia certainly has plenty of SVG maps.
There are at least three projects that attempt to do this, with only some success at the moment. I'm now more interested in making an online image-mapper service that processes SVG files, so I might work on that project instead of just the map-coloring one:
Polygonator - described here: http://www.electricfairground.com/2009/08/08/image-map-rollover-effects-using-jquerys-maphilight-plugin/ but the actual service is here: http://www.electricfairground.com/polygonator/index.html - is the simplest and best service or software so far, I think. You have to manually dump the SVG XML text into the input field, but despite what the author says, I think you can dump the entire SVG file into the field, not just the "M-z tag". The resulting area tags need editing to remove empty ones without coordinates and polygons with only two points.
Inkscapemap - http://sourceforge.net/projects/inkscapemap/ - chokes on complex SVG files, such as those with shading. Also, I couldn't get it to run as an HTML service even though I followed advice about using the main class of the jar file, which I found described in the manifest file, and referring to the main jar file and the support file in an "archive" attribute.
http://davidlynch.org/blog/2008/03/creating-an-image-map-from-svg/ - very interesting project with many blog comments. The image maps again are not quite perfect and need editing.
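To make the "SVG files are almost imagemaps already" point concrete, here is a rough TypeScript sketch of the simplest case these tools handle: reading the points of plain <polygon> elements and emitting <area> tags. Real Wikipedia blank maps mostly use <path> data with curves and transforms, which needs proper path parsing and coordinate scaling, so this is only a toy illustration, not a replacement for the tools above:

// Toy sketch: convert simple SVG <polygon> elements into imagemap <area> tags.
function svgPolygonsToAreas(svgText: string): string[] {
  const doc = new DOMParser().parseFromString(svgText, 'image/svg+xml');
  const areas: string[] = [];

  doc.querySelectorAll('polygon').forEach((poly) => {
    const coords = (poly.getAttribute('points') ?? '')
      .trim()
      .split(/[\s,]+/)
      .map((n) => Math.round(Number(n)));

    // Skip degenerate shapes (fewer than 3 points, i.e. 6 numbers).
    if (coords.length < 6) return;

    const name = poly.getAttribute('id') ?? 'region';
    areas.push(
      `<area shape="poly" coords="${coords.join(',')}" href="#" alt="${name}" title="${name}">`
    );
  });

  return areas;
}

// Usage, e.g. in the browser console after fetching the raw SVG text:
// fetch('india.svg').then(r => r.text()).then(t => console.log(svgPolygonsToAreas(t).join('\n')));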
I see I can use PHP GD's imagecolorat and cycle through all the pixels to find those that are black. This works:
<?php
// Scan a black-and-white map and echo the coordinates of every black
// (border) pixel, row by row.
$im     = imagecreatefrompng("india.png");
$width  = imagesx($im);
$height = imagesy($im);

for ($cy = 0; $cy < $height; $cy++) {
    echo '<p>';
    for ($cx = 0; $cx < $width; $cx++) {
        $rgb = imagecolorat($im, $cx, $cy);
        $col = imagecolorsforindex($im, $rgb);
        // Border pixels are pure black (0, 0, 0).
        if ($col["red"] == 0 && $col["green"] == 0 && $col["blue"] == 0) {
            echo $cx . ", " . $cy . " ";
        }
    }
    echo '</p>';
}
?>
Can anybody suggest how to find the polygons in the huge multipolygon complex that results from running the above code on, say, a black-and-white two-color map of India, where all the borders are black and the interiors of the states and the Indian Ocean are white?
Here is the image of India: http://wherehaveibeen.info/images/india.png and the mess of coordinates for the imagemap that now needs to be split up into separate polygons: http://wherehaveibeen.info/images/black.php
