Simplify topojson via command line tool

I'm trying to simplify "world-110m.json" as mentioned in this thread...
Topojson: quantization VS simplification
Which also references the documentation...
https://github.com/mbostock/topojson/wiki/Command-Line-Reference
I've got the tool installed but am really struggling to find an example input that works for me (even with the mentioned documentation). For example, I'm trying to do something like...
"topojson -s 1e5 -o output.json --world-110m.json"
But it's just hanging.
The reasons I want to try simplifying world-110m.json are...
1) Sometimes I display a rotating d3 globe that's so tiny that it doesn't need detailed coordinates mapped (just a basic outline of the continents, really) - so the full world-110m.json file I'm using is an unnecessary drain.
2) Sometimes the globe is larger and works nicely on a desktop but not on a mobile device, so I want to see how much I can simplify/quantise the data to help with performance.
Hopefully I'm looking in the right place with the topojson command line tool, but either way I appreciate any thoughts!

You can use the toposimplify command-line tool, also by mbostock. Here's an example:
toposimplify -o output_file.json -P 0.5 original_file.json
The number passed to -P specifies the simplification threshold as a minimum quantile of planar triangle areas. The closer this value is to 1, the more points are removed and the smaller the output file; values near 0 leave the geometry almost untouched.
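For example (file names are illustrative, and how far you can push it depends on the input), "toposimplify -P 0.1 -o world-light.json world-110m.json" keeps most of the detail, while "toposimplify -P 0.8 -o world-tiny.json world-110m.json" strips the majority of points - often plenty for a tiny rotating globe.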

Related

Matching photographed image with screenshot (or generated image based on data model)

first of all, I have to say I'm new to the field of computer vision and I'm currently facing a problem I tried to solve with OpenCV (Java wrapper) without success.
Basically I have a picture of a part from a model taken by a camera (different angles, resolutions, rotations...) and I need to find the position of that part in the model.
Example Picture:
Model Picture:
So one question is: Where should I start/which algorithm should I use?
My first try was to use keypoint matching with SURF as detector and descriptor and BF as matcher.
It worked for about 2 pictures out of 10. I used the default parameters and tried other detectors, without any improvement. (Maybe it's a question of the right parameters. But how do I find the right parameters combined with the right algorithm?...)
Two examples:
My second try was to use the colors to differentiate certain elements in the model and to compare the structure with the model itself (in addition to the picture of the model I also have an XML representation of the model...).
Right now I extracted the color red out of the image and adjusted the h, s, v values manually to get the best detection for about 4 pictures, which fails for other pictures.
Two examples:
I also tried to use edge detection (Canny, grayscale, with histogram equalization) to detect geometric structures. For some results I could imagine that it will work, but using the same Canny parameters for other pictures "fails". Two examples:
As I said, I'm not familiar with computer vision and just tried out some algorithms. My problem is that I don't know which combination of algorithms and techniques is best, and in addition to that, which parameters I should use. Testing it manually seems to be impossible.
Thanks in advance
gemorra
Your initial idea of using SURF features was actually very good, just try to understand how the parameters for this algorithm work and you should be able to register your images. A good starting point for your parameters would be varying only the Hessian threshold, and being fearless while doing so: your features are quite well defined, so try to use thresholds around 2000 and above (increasing in steps of 500-1000 till you get good results is totally ok).
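A minimal sketch of that tuning loop, in Python with an opencv-contrib build (which provides SURF in cv2.xfeatures2d; the question uses the Java wrapper, where the equivalent classes exist, but Python keeps the sketch short). File names and the 0.75 ratio are illustrative:

import cv2

img1 = cv2.imread("part_photo.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("model.png", cv2.IMREAD_GRAYSCALE)

# Start high, as suggested above, and lower the Hessian threshold
# only if too few keypoints survive.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=2000)
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to drop ambiguous matches.
bf = cv2.BFMatcher(cv2.NORM_L2)
matches = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(len(kp1), "and", len(kp2), "keypoints,", len(good), "good matches")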
Alternatively you can try to detect your ellipses and calculate an affine warp that normalizes them, then run a cross-correlation to register them. This alternative does imply much more work, but is quite fascinating. Some ideas on that normalization using the covariance matrix and its Cholesky decomposition here.
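A rough sketch of that normalization, assuming an ellipse has already been fitted (e.g. with cv2.fitEllipse, which returns (cx, cy), (w, h), angle); the function name and output size are made up for illustration:

import numpy as np

def ellipse_to_circle_warp(cx, cy, w, h, angle_deg, out=256):
    # Affine map sending the fitted ellipse onto a circle of diameter
    # `out`, centered in an out x out image: translate the ellipse
    # center to the origin, rotate its axes upright, scale each axis
    # to the same length, then translate to the new center.
    a = np.deg2rad(angle_deg)
    T1 = np.array([[1, 0, -cx], [0, 1, -cy], [0, 0, 1]], float)
    R = np.array([[np.cos(a), np.sin(a), 0],
                  [-np.sin(a), np.cos(a), 0],
                  [0, 0, 1]], float)
    S = np.diag([out / w, out / h, 1.0])
    T2 = np.array([[1, 0, out / 2], [0, 1, out / 2], [0, 0, 1]], float)
    return (T2 @ S @ R @ T1)[:2]

The resulting 2x3 matrix can be passed to cv2.warpAffine, after which the normalized patches can be compared with cv2.matchTemplate using cv2.TM_CCOEFF_NORMED.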

Correlation between two images (binary images)

I have two binary images like this. I have a data set with lots of pictures like the one at the bottom, but with different signs.
and
I would like to compare them in order to know if it's the same figure or not (especially inside the triangle). I took a look at SIFT and SURF features, but they don't work well on this type of picture (they find matching points even though the two pictures are different, especially inside).
I also heard about SVM, but I don't know if I should implement it for this type of problem.
Do you have an idea?
Thank you
I think you should not use SURF features on the binary image, as you have already discarded a lot of information at that stage with your edge detector.
You could also use the linear or circular Hough transform, which in this case could tell you a lot about the differences between the images.
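A minimal sketch of that idea in Python with OpenCV (all thresholds and file names are illustrative and would need tuning): count the line and circle detections in each binary image and compare the counts as a crude signature.

import cv2
import numpy as np

def hough_signature(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Probabilistic Hough transform for line segments...
    lines = cv2.HoughLinesP(img, 1, np.pi / 180, threshold=50,
                            minLineLength=30, maxLineGap=5)
    # ...and the gradient Hough transform for circles.
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=20, param1=50, param2=30)
    return (0 if lines is None else len(lines),
            0 if circles is None else circles.shape[1])

print(hough_signature("sign_a.png"), hough_signature("sign_b.png"))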
If you want to find 2 exactly identical images, simply use hash functions like MD5.
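The exact-duplicate check is a few lines with Python's standard library (file names are placeholders; note that changing a single pixel, or even just re-encoding the file, changes the hash):

import hashlib

def file_md5(path):
    # Hash the raw file bytes; identical files yield identical digests.
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

print(file_md5("sign_a.png") == file_md5("sign_b.png"))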
But if you want to find related (not exactly identical) images, you are running into trouble ;). Look for artificial neural network libs...

Tools for 3D rendering

It's known that 3D rendering is computationally expensive.
I want to use Apache Hadoop for distributed 3D rendering (rendering images or videos) to reduce rendering time. After learning about Hadoop, I understand that I need 2 things:
1) Data which will be visualized (probably some kind of file containing instructions like draw rectangle, set coordinates, set color, etc.)
2) Some tool/program/utility to render the file described above. I want to invoke it from my program, pointing it at the file with data. (It's good if this program has a command-line API.)
But I don't know anything about 3D rendering, so I need your help in suggesting open-source tools for rendering 3D images/videos. I also don't know anything about the input data, so it would be nice if you could suggest a rendering tool plus a file format to render.
I heard about using Hadoop with the .rib file format as the data to visualize, and the rndr program to render this data. So I need some analogue.
Please note, my goal is to learn more deeply about Hadoop and distributed computation, not about 3D rendering, so please suggest the simplest solution.
Thanks.
KISS: Gnuplot
If you really only care about using Hadoop, and your only requirement of the rendering is that it takes an input file and makes an image so you can show a completed animation, then I would suggest gnuplot. It is actually a graph-drawing program, but it takes scripts and produces image files, and, most usefully for you, you can enter mathematical formulae to draw rather than constructing 3d worlds to render.
You would simply prepare n files which are all the same except for an offset value for the time since start, and gnuplot would produce the appropriate frame.
This is the simplest option, and lets you concentrate on the Hadoop side. To show you how simple it is, this would produce a frame for an animation of a 3-blade fan spinning:
set xrange[-1:1]
set yrange[-1:1]
set polar
unset key
unset border
unset tics
set terminal png size 1000,1000
set output "frame_$FRAME.png"
plot cos(3*t+$FRAME/5)
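One way to prepare those n files is to substitute the frame number for $FRAME in the script above; a minimal Python sketch (the frame count and file names are arbitrary, and 5.0 is used so gnuplot does floating-point division):

TEMPLATE = """set xrange[-1:1]
set yrange[-1:1]
set polar
unset key
unset border
unset tics
set terminal png size 1000,1000
set output "frame_{n}.png"
plot cos(3*t+{n}/5.0)
"""

for n in range(120):  # 120 frames, for example
    with open("frame_{}.gpt".format(n), "w") as f:
        f.write(TEMPLATE.format(n=n))
# Each file can then be rendered with: gnuplot frame_<n>.gpt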
A great thing about Gnuplot is that you type in commands in an interactive prompt to manipulate the graph, and these are the same commands you put in the script. So once you have something you're happy with, you can either do save 'newscript.gpt' or copy out the commands you used. You can recreate the graph by just running gnuplot newscript.gpt at a prompt.
Incidentally, it is easier to simulate hard-to-render scenes than to actually construct them, so just put a delay in the gnuplot script (e.g. pause 15) to make it take 15 seconds or however long.
The whole banana: Blender
Blender is a 3d rendering system. I believe it is used as a rendering system for some mainstream animations on server farms, in exactly the manner you describe. It has quite a learning curve, but I think you should be able to pick up a tutorial or use pre-existing animation project files. From there you would need to work out how to invoke the rendering engine as a command for a specific frame. I've only ever done static rendering in Blender, so I can't take you further there. This would be more impressive visually, but no more impressive academically, and you'll waste more time on that side of things.
My choice
Personally I would go with gnuplot. You can make 3d plots with the splot command rather than the 2d plot, and although it's not 3d scene rendering as you might be imagining, it achieves the purpose of making a picture using a script. You can begin with something totally basic like the above until you have your setup going, then introduce more complexity; from an implementation perspective, running a gnuplot script is easier than running a script that also requires a data file, but it's still easy to pre-generate the data and have gnuplot read that when you want to do command + script + data instead of command + script. The key is incrementally increasing the difficulty, not running before you can walk.
If you ultimately find you have spare time at the end and switch to using Blender, then that's all a free win, and you haven't jeopardised your Hadoop project making it pretty.

CLI : Is There A Script To Skew Image With A Curve

Looking for a script that will skew an image for me through the command line.
The skew needs to be a curve, instead of the angled skews offered by ImageMagick or LunaPic.
I want it to look like it is placed up in a celestial sphere.
Basically able to fit inside of these boxes.
Also, can CSS do it?
Thanks.
Got it using a program called POVRay and a little bit of script hacking... if interested, check out the full explanation at linuxquestions.org: http://www.linuxquestions.org/questions/showthread.php?p=4764944

Object detection + segmentation

I'm trying to find an efficient way, of acceptable complexity, to:
1) detect an object in an image so I can isolate it from its surroundings
2) segment that object into its sub-parts and label them so I can then fetch them at will
It's been 3 weeks since I entered the image processing world and I've read about so many algorithms (SIFT, snakes, more snakes, Fourier-related, etc.) and heuristics that I don't know where to start and which one is "best" for what I'm trying to achieve. Having in mind that the image dataset of interest is a pretty large one, I don't even know if I should use some algorithm implemented in OpenCV or if I should implement one of my own.
To summarize:
Which methodology should I focus on? Why?
Should I use OpenCV for that kind of stuff or is there some other 'better' alternative?
Thank you in advance.
EDIT -- More info regarding the datasets
Each dataset consists of 80K images of products sharing the same:
1) concept, e.g. t-shirts, watches, shoes
2) size
3) orientation (90% of them)
4) background (95% of them)
All pictures in each dataset look almost identical apart from the product itself, apparently. To make things a little clearer, let's consider only the 'watch dataset':
All the pictures in the set look almost exactly like this:
(again, apart from the watch itself). I want to extract the strap and the dial. The thing is that there are lots of different watch styles and therefore shapes. From what I've read so far, I think I need a template algorithm that allows bending and stretching, so as to be able to match straps and dials of different styles.
Instead of creating three distinct templates (upper part of strap, lower part of strap, dial), it would be reasonable to create only one and segment it into 3 parts. That way, I would be confident enough that each part was detected with respect to the others as intended, e.g. the dial would not be detected below the lower part of the strap.
Of all the algorithms/methodologies I've encountered, active shape/appearance models seem to be the most promising ones. Unfortunately, I haven't managed to find a decent implementation and I'm not confident enough that that's the best approach, so as to go ahead and write one myself.
If anyone could point out what I should really be looking for (algorithm/heuristic/library/etc.), I would be more than grateful. If again you think my description was a bit vague, feel free to ask for a more detailed one.
From what you've said, here are a few things that pop up at first glance:
The simplest thing to do is binarize the image and do connected components using OpenCV or the cvBlob library. For simple images with a non-complex background this usually yields the objects.
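A minimal sketch of that first suggestion in Python with OpenCV (the file name and the area cut-off are placeholders):

import cv2

gray = cv2.imread("watch.jpg", cv2.IMREAD_GRAYSCALE)
# Otsu picks the binarization threshold automatically.
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
n, labels, stats, centroids = cv2.connectedComponentsWithStats(bw)

# stats[i] = [x, y, width, height, area]; label 0 is the background.
for i in range(1, n):
    x, y, w, h, area = stats[i]
    if area > 500:  # skip specks; the cut-off is arbitrary
        print("component", i, "bbox:", (x, y, w, h), "area:", area)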
However, looking at your sample image, texture-based segmentation techniques may work better - the watch dial, the straps and the background vary widely in texture/roughness, and this could be an ideal way to separate them.
The roughness of a region can be easily found by the Eigen transform (explained a bit on SO; check the link to the research paper provided there), and then the Mean Shift filter can be applied to the output of the Eigen transform. This will give regions clearly separated according to texture. Both the pyramidal Mean Shift and finding eigenvalues by SVD are implemented in OpenCV, so unless you can optimize your own code it's better (and easier) to use the built-in functions (if present) as far as speed and efficiency are concerned.
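A rough sketch of that pipeline in Python, under loud assumptions: the Eigen transform is approximated here as the mean of the smallest singular values of each local patch (following the general idea of the paper the answer refers to), and the patch size and cut-off are arbitrary:

import cv2
import numpy as np

gray = cv2.imread("watch.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float64)
w, l = 16, 8  # patch size and number of leading singular values to discard

rough = np.zeros_like(gray)
for y in range(0, gray.shape[0] - w + 1, w):
    for x in range(0, gray.shape[1] - w + 1, w):
        s = np.linalg.svd(gray[y:y+w, x:x+w], compute_uv=False)
        rough[y:y+w, x:x+w] = s[l:].mean()  # small values track roughness

rough = cv2.normalize(rough, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
# pyrMeanShiftFiltering expects an 8-bit, 3-channel image.
seg = cv2.pyrMeanShiftFiltering(cv2.cvtColor(rough, cv2.COLOR_GRAY2BGR), 21, 30)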
I think I would turn the problem around. Instead of hunting for the dial, I would use a set of robust features from the watch to 'stitch' the target image onto a template. The first watch has a set of white squares in the dial, the second watch has a number of white circles. Per type of watch, I would (a rough sketch of steps 3-5 follows the list):
1) Segment out the squares or circles in the dial. Segmentation steps can be tricky as they are usually both scale and light dependent
2) Estimate the centers or corners of the feature areas found above. These are the new feature points.
3) Use the Hungarian algorithm to match features between the template watch and the target watch. Alternatively, one can take the surroundings of each feature point in the original image and match these using cross correlation
4) Use the matching features between the template and the target to estimate scaling, rotation and translation
5) Stitch the image
6) As the image is now in a known form, one can extract the regions simply via preset coordinates
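The sketch below covers steps 3-5 in Python, assuming the feature points have already been extracted from both images as Nx2 NumPy arrays and are roughly in the same coordinate frame (SciPy's linear_sum_assignment is an implementation of the Hungarian algorithm; all names are illustrative):

import cv2
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def register(template_pts, target_pts, target_img, out_shape):
    # Step 3: Hungarian matching on the pairwise distance matrix.
    cost = cdist(template_pts, target_pts)
    rows, cols = linear_sum_assignment(cost)

    # Step 4: estimate scale, rotation and translation from the pairs.
    M, inliers = cv2.estimateAffinePartial2D(
        target_pts[cols].astype(np.float32),
        template_pts[rows].astype(np.float32))

    # Step 5: stitch - warp the target into the template's frame, after
    # which regions can be read off at preset coordinates (step 6).
    return cv2.warpAffine(target_img, M, (out_shape[1], out_shape[0]))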
