Dragging a globe in D3

By far the most elegant way to rotate a globe in d3 that I've seen is Jason Davies' version: https://www.jasondavies.com/maps/rotate/
Unfortunately his code is minified, and even after un-minifying it I can't make heads or tails of it, since all the variables and functions have single-letter names. (It doesn't help that the page also includes code for the "naive" rotation and for zooming, neither of which I need.)
Anybody know of a demonstration of this technique that doesn't obfuscate the code?

Try looking here: http://bl.ocks.org/KoGor/5994804. The only catch is that you need to start dragging on land, not on water.
You can disregard the Russian text (the code comments are in English), but you may also want to translate it and the linked article.

Removing skew/distortion based on known dimensions of a shape

I have an idea for an app that takes a printed page with four squares in each corner and allows you to measure objects on the paper given at least two squares are visible. I want to be able to have a user take a picture from less than perfect angles and still have the objects be measured accurately.
I'm unable to figure out exactly how to find information on this subject due to my lack of knowledge in the area. I've found examples of OpenCV code that does some interesting transforms and the like, but I haven't yet figured out how to phrase what I'm asking in simpler terms.
Does anyone know of papers or mathematical concepts I can look up to get further into this project?
I'm not quite sure how or who to ask other than people on this forum, sorry for the somewhat vague question.
What you describe is very reminiscent of augmented reality marker tracking. Maybe you can start by searching these words on a search engine of your choice.
A single marker, if done correctly, can be used both to identify it without confusing it with other markers AND to determine how the surface is positioned in 3D space in front of the camera.
But that's all very difficult and advanced stuff. I'd strongly advise NOT trying to implement something like this yourself; it would take years of research. Your only real option is a ready-made open-source library that outputs the data your app needs.
Such a library may not even exist, in which case you'll have to buy one. Given how niche your problem is, that would be perfectly plausible.
This answer covers only the programming aspect; if you want, you can work out the mathematical side from these examples. Most of the functions you need are available in OpenCV. Here are some examples in Python, with a consolidated sketch after the steps:
To detect the printed paper, you can use the cv2.findContours function. The outermost contour is probably the paper, but you need to test on actual images. https://docs.opencv.org/3.1.0/d4/d73/tutorial_py_contours_begin.html
If the paper is skewed (not photographed at a perfect angle), you can find the rotation angle with cv2.minAreaRect, which returns the angle of the contour found above. https://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html (part 7b).
If you want to rotate the paper, use cv2.warpAffine. https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.html
To detect the objects on the paper, there are several methods. The easiest is to reuse the contours above; if the objects have distinctive colors, you can also detect them with a color filter. https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html
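For illustration, here is a minimal sketch tying those steps together (the filename, threshold choices, and HSV bounds are placeholder assumptions to be tuned on real photos):

    import cv2
    import numpy as np

    img = cv2.imread("page.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 1. Find contours; assume the largest one is the sheet of paper.
    #    (OpenCV 4 returns two values; OpenCV 3 returns the image first.)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    paper = max(contours, key=cv2.contourArea)

    # 2. Get the paper's rotation angle from its minimum-area rectangle.
    (cx, cy), (w, h), angle = cv2.minAreaRect(paper)

    # 3. Rotate the image so the paper sits axis-aligned.
    M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    rotated = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))

    # 4. Detect objects on the paper by color, e.g. something red-ish in
    #    HSV space (these bounds are illustrative, not universal).
    hsv = cv2.cvtColor(rotated, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
    objects, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

Note that warpAffine only corrects in-plane rotation; for a photo taken at an oblique angle you would eventually want a full perspective correction (cv2.getPerspectiveTransform plus cv2.warpPerspective) keyed off the four corner squares.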

Threejs edge helper

I'm new to Threejs and I have been using the EdgesHelper which achieves the look I want for now. But I have two questions...
What is the default edge width/how is it calculated?
Can the edge width be changed...
I have searched around and I'm pretty sure that, due to some limitation (not of three.js but of Windows, I believe), there is no simple way to change the thickness of the edges. A lot of the examples I found with thicker edges only work on a particular geometry, so they don't seem universal.
Perhaps I am wrong, but I would have thought this would be a very common requirement. Rather than spend hours/days/weeks trying to build it myself (which I'm not even sure I could do), does anyone know of a way to control the edge thickness, an existing example, or a library that works on any shape (any imported OBJ, for example)?
Many thanks
Coming back to this: as Wilt mentioned, there are other threads on this. Basically you cannot change the thickness due to a limitation in ANGLE. There are some workarounds, like THREE.MeshLine (also mentioned in the link Wilt posted), but I found most workarounds had limitations for what I wanted.
https://mattdesl.svbtle.com/drawing-lines-is-hard explains what makes drawing lines difficult.
He also has a library, https://github.com/mattdesl/three-line-2d, which should make lines easier to work with.

Rendering CoreText within an irregular shape

I'm looking for guidance on implementing a view that renders an NSAttributedString within a polygon with holes, wrapping and reflowing text to fit the geometry. It's not CoreText that's the issue, but the general problem of partitioning an irregular shape into an ordered sequence of squat rectangles.
Similar questions haven't been answered fully:
How to fill a shape with text in Javascript
https://stackoverflow.com/q/3048305
CoreText's CTFramesetter does not support rendering into a CGPath
https://stackoverflow.com/q/3813318
CoreText handles an unbelievable amount of the grunt work associated with text layout and display, so I can't help but suspect that I'm reinventing a wheel. For the purposes of this question, please assume that I can check the substring that fits within a given rectangle, taking into account word wrap and hyphenation.
Edit: I've since decided to just sweep left-to-right drawing as much as fits between boundaries. It looks a bit haphazard even though I'm breaking at natural word boundaries, so I'd still appreciate guidance on how other applications wrap text.
Edit #2: It looks decent now that it supports basic word wrap and avoids rendering very short lines. My question must have been too vague. Thanks for looking.
Edit #3: Amorya points out that CTFramesetter now accepts any CGPath.
I wrote a blog post about achieving text wrap with Core Text:
http://blog.amyworrall.com/post/11098565269/text-wrap-with-core-text
The feature is new in iOS 4.3 and Mac OS X Lion: you can now draw inside non-rectangular paths, and you can pass in additional paths to mask the flow (i.e. act as the holes you wrap around).
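If you do still need to partition the shape yourself (e.g. on OS versions before that change), here is a rough sketch of the strip-slicing idea from the question's edits; it uses Python and shapely purely to illustrate the geometry, whereas a real implementation would query the CGPath instead:

    from shapely.geometry import Polygon, box

    def line_rects(poly, line_height):
        # Slice a polygon (holes included) into per-line rectangles:
        # intersect each horizontal strip with the polygon and keep the
        # bounding box of every piece, ordered left to right. A concave
        # piece's bounding box can overshoot the shape, so a production
        # version would shrink each box to fit.
        minx, miny, maxx, maxy = poly.bounds
        y = miny
        while y < maxy:
            strip = box(minx, y, maxx, y + line_height)
            pieces = poly.intersection(strip)
            geoms = getattr(pieces, "geoms", [pieces])
            yield sorted((g.bounds for g in geoms
                          if not g.is_empty and g.geom_type == "Polygon"),
                         key=lambda b: b[0])
            y += line_height

Each yielded list is one line of text: an ordered run of rectangles to fill before moving down to the next line.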

How does Content-Aware fill work?

In the upcoming version of Photoshop there is a feature called Content-Aware fill.
This feature fills a selection of an image based on the surrounding image, to the point that it can generate bushes and clouds that blend seamlessly with their surroundings.
See http://www.youtube.com/watch?v=NH0aEp1oDOI for a preview of the Photoshop feature I'm talking about.
My question is:
How does this feature work algorithmically?
I am a co-author of the PatchMatch paper previously mentioned here, and I led the development of the original Content-Aware Fill feature in Photoshop, along with Ivan Cavero Belaunde and Eli Shechtman in the Creative Technologies Lab, and Jeff Chien on the Photoshop team.
Photoshop's Content-Aware Fill uses a highly optimized, multithreaded variation of the algorithm described in the PatchMatch paper, and an older method called "SpaceTime Video Completion." Both papers are cited on the following technology page for this feature:
http://www.adobe.com/technology/projects/content-aware-fill.html
You can find out more about us on the Adobe Research web pages.
I'm guessing that for the smaller holes they are grabbing similarly textured patches surrounding the area to fill it in. This is described in a paper entitled "PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing" by Connelly Barnes and others in SIGGRAPH 2009. For larger holes they can exploit a large database of pictures with similar global statistics or texture, as described in "Scene Completion Using Millions of Photographs". If they could somehow fuse the two together, I think it should work like in the video.
A very similar algorithm has existed for GIMP for quite a long time. It is called Resynthesizer, and you should be able to find its source (perhaps at the project site).
EDIT: The source is also available in the Ubuntu repository.
And here you can see the same images being processed with GIMP: http://www.youtube.com/watch?v=0AoobQQBeVc&feature=related
Well, they are not going to tell, for obvious reasons. The general name for the technique is "inpainting"; you can look this up.
Specifically, if you look at what Criminisi did while in Microsoft http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.67.9407 and what Todor Georgiev does now at Adobe http://www.tgeorgiev.net/Inpainting.html, you'll be able to make a very good guess. A 90% guess, I'd say, which should be good enough.
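If you want to experiment with classical inpainting yourself, OpenCV ships two of the standard algorithms (Telea's fast-marching method and a Navier-Stokes based one). This is not Photoshop's patch-based approach, but it belongs to the same family of techniques; the filenames below are placeholders:

    import cv2

    # The mask marks the region to fill (non-zero = hole).
    img = cv2.imread("photo.jpg")
    mask = cv2.imread("hole_mask.png", cv2.IMREAD_GRAYSCALE)

    # Telea's fast-marching inpainting; use cv2.INPAINT_NS for the
    # Navier-Stokes variant. The third argument is the radius of the
    # neighbourhood considered around each pixel being filled.
    result = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
    cv2.imwrite("filled.jpg", result)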
I work on a similar problem. From what I've read, they use "PatchMatch" or "non-parametric patch sampling" in general.
"PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing"
As a guess (and that's all it would be), I'd expect that it does some frequency analysis (something like a Fourier transform) of the image. By looking only at the image near the edge of the selection and ignoring the middle, it could then extrapolate back into the middle. If the designers choose the correct color planes and whatnot, they should be able to generate a texture that blends seamlessly into the image at the edges.
edit: looking at the last example in the video: if you look at the top of the original image on either edge, you see that the selection line runs right down a "gap" in the clouds and that right in the middle there is a "bump". These are the kind of artifacts I'd expect to see if my guess is correct. (OTOH, I'd also expect to see them if it were using some kind of pseudo-mirroring across the selection boundary.)
The general approach is either content-aware fill or seam-carving. Ariel Shamir's group is responsible for the seminal work here, which was presented in SIGGRAPH 2007. See:
http://www.faculty.idc.ac.il/arik/site/subject-seam-carve.asp
Edit: Please see the answer from the co-author of Content-Aware Fill. I will be deleting this soon.

What's the best way to "smudge" an image programmatically?

I'm messing around with image manipulation, mostly using Python. I'm not too worried about performance right now, as I'm just doing this for fun. Thus far, I can load bitmaps, merge them (according to some function), and do some REALLY crude analysis (find the brightest/darkest points, that kind of thing).
I'd like to be able to take an image, generate a set of control points (which I can more or less do now), and then smudge the image, starting at a control point and moving in a particular direction. What I'm not sure of is the process of smudging itself. What's a good algorithm for this?
This question is pretty old but I've recently gotten interested in this very subject so maybe this might be helpful to someone. I implemented a 'smudge' brush using Imagick for PHP which is roughly based on the smudging technique described in this paper. If you want to inspect the code feel free to have a look at the project: Magickpaint
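For a rough idea of what such a brush does, here is my own sketch of the classic carry-and-blend approach in Python/NumPy (not necessarily what the paper or Magickpaint implement; the function and its parameters are illustrative):

    import numpy as np

    def smudge_stroke(img, start, direction, length=40, radius=8, strength=0.7):
        # img: float array (H, W, C) with values in [0, 1], modified in place.
        # `start` must be at least `radius` pixels from every border.
        dy, dx = direction
        norm = max(np.hypot(dy, dx), 1e-6)
        dy, dx = dy / norm, dx / norm
        y0, x0 = start
        # Pick up a patch of pixels under the brush...
        carried = img[y0 - radius:y0 + radius, x0 - radius:x0 + radius].copy()
        for step in range(length):
            y = int(round(y0 + dy * step))
            x = int(round(x0 + dx * step))
            region = img[y - radius:y + radius, x - radius:x + radius]
            if region.shape != carried.shape:
                break  # stroke ran off the image
            # ...deposit some of it, then pick canvas pixels back up, so
            # the smudge fades along the stroke like a finger drag.
            region[:] = strength * carried + (1 - strength) * region
            carried = strength * carried + (1 - strength) * region
        return img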
Try PythonMagick (ImageMagick library bindings for Python). If you can't find it on your distribution's repositories, get it here: http://www.imagemagick.org/download/python/
It has more effect functions than you can shake a stick at.
One method would be to apply a Gaussian blur (or some other type of blur) to each point in the region defined by your control points.
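A minimal sketch of that idea with SciPy (the function and parameter names are my own; the control points would come from your existing code):

    from scipy.ndimage import gaussian_filter

    def blur_around(img, center, radius=15, sigma=4.0):
        # Blur a square neighbourhood of `center` in place, leaving the
        # rest of the image untouched. img: float array (H, W, C).
        y, x = center
        ys = slice(max(y - radius, 0), y + radius)
        xs = slice(max(x - radius, 0), x + radius)
        # sigma=0 on the channel axis keeps colours from bleeding together.
        img[ys, xs] = gaussian_filter(img[ys, xs], sigma=(sigma, sigma, 0))
        return img

    # e.g. smudge along a path by blurring at every control point:
    # for pt in control_points:
    #     blur_around(img, pt)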
One method would be to create a grid that your control points move, and then use texture-mapping techniques to map the image back onto the distorted grid.
I can vouch for the Gaussian blur mentioned above; it is quite simple to implement and provides a fairly decent blur result.
James
