I stumbled across a novelty website called Sokumenzu / Side View Generator that produces "rotation anagrams" of a given input text, rendered as animated results such as the following:
I'm really keen to understand the algorithm behind it. I've tried to see whether any of the publicly exposed JavaScript could help me, but it's a mess of obfuscation, and my grasp of the language isn't firm enough to rule out the real work being done server-side.
I have a rough outline of how I think a similar system could be built, but it would have its own shortcomings (and maybe a small advantage if the real approach is hard-coded):
Pre-computation
Define an n×n×n cube composed of equally sized sub-cubes
Each sub-cube may either contain a sphere or not
Create a virtual camera orthogonal to one of the cube's faces a fixed distance away
For each of the possible states of the cube:
Cast rays from the camera and build up an n×n matrix of which cells appear occupied from the camera's point of view (a projection sketch follows the outline below).
Feed this matrix into a neural network or other recognizer that has been pre-trained on the Latin alphabet.
If the recognizer matches a character:
Add the state which triggered recognition to a hashtable indexed on the character it recognized.
Handle collisions (there should be many) by keeping the highest confidence recognition
For every key in the hashtable
Rotate the corresponding state in fixed increments recognizing characters as before
If a character other than the current key is recognized:
Add that character and the amount of rotation performed as a tuple to a list.
Store each of these lists in the hashtable indexed on the current key.
Query
Generate all of the permutations achieved by substituting, at each position, any of the characters linked in the list associated with the input character at that position.
Find the first dictionary word in the list of permutations
Visualize using the rotation information stored for each character
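To make the projection step concrete (the one flagged in the outline above), here is a minimal numpy sketch of what I have in mind; the array layout and names are just mine for illustration, and an axis-aligned camera reduces the ray casting to a simple orthographic projection:

import numpy as np

def silhouette(occupancy, axis=0):
    # Project an n*n*n boolean occupancy grid onto the plane perpendicular
    # to `axis`: a cell of the n*n result appears occupied if any sub-cube
    # along that viewing direction contains a sphere.
    return occupancy.any(axis=axis)

# Tiny example: a single sphere at (1, 0, 2) inside a 3x3x3 cube.
cube = np.zeros((3, 3, 3), dtype=bool)
cube[1, 0, 2] = True
print(silhouette(cube, axis=0).astype(int))  # the view along the first axis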
Obviously this isn't the same algorithm as is used, since it operates on a character-by-character basis. I suppose you could use a similar approach on a word-by-word basis, taking the face of the entire volume as input to a text recognizer, but I suspect they have done something simpler, cleverer, and far, far more efficient.
The one advantage of this terrible terrible idea is that by retraining the recognizer you could support other character sets.
Anyone know how this actually works?
I think it is way simpler than that.
For each pair of letters (which are 2d objects), you can try to find a 3d object that projects to one or the other depending on whether it is seen from a 0° angle or a 90° one.
Finding a set of 3d points on a 3d grid that project to two given sets of 2d points depending on the projection looks like a problem of discrete tomography, which you can read about on Wikipedia: https://en.wikipedia.org/wiki/Discrete_tomography
Note that you can process the 3d shape line by line, and actually only solve 2d instances.
Once that pre-computation is done, and you have a graph of letters where two letters are linked if there is a 3d shape that produces the two of them from different angles, I suspect the algorithm works as follows:
Compute the set of letters of the original word. Then explore all the sets of letters you can get by changing a letter of the input to one it is linked to. When you find a set of letters that can make a word, stop. (There is probably a pre-computed dictionary that maps sets of letters to words.)
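A rough sketch of that search in Python, assuming the letter graph and the dictionary (keyed here by the sorted multiset of letters) have already been precomputed; LINKS and WORDS_BY_LETTERS are made-up stand-ins, not the site's actual data:

from itertools import product

# Hypothetical precomputed data: which letters can share a 3d shape
# (one seen from each face), and a dictionary keyed by sorted letters.
LINKS = {'d': {'u'}, 'u': {'v'}, 'c': {'e'}, 'a': {'e'}}
WORDS_BY_LETTERS = {tuple(sorted('uneven')): 'uneven'}

def find_partner_word(word):
    # Each input letter either keeps its identity or switches to a letter
    # it is linked to; check every resulting multiset against the dictionary.
    original = tuple(sorted(word.lower()))
    choices = [{ch} | LINKS.get(ch, set()) for ch in word.lower()]
    for candidate in product(*choices):
        key = tuple(sorted(candidate))
        if key != original and key in WORDS_BY_LETTERS:
            return WORDS_BY_LETTERS[key]
    return None

print(find_partner_word('duncan'))  # -> 'uneven' with the toy links above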
If a 3d shape needs to project to a different part of the word (that is, a shape that projects to position 2 or position 4 depending on whether you are looking at the original word or the other one, like u and v in D(u)ncan - Une(v)en), you compute the appropriate permutation matrix. Like this for your name:
DU__________
______uv____
__nn________
____ce______
________ae__
__________nn
(first letter of each pair is the letter projected to the left, second letter the one projected to the bottom)
It is computed from the permutation matrix:
100000
000100
010000
001000
000010
000001
And from the matching of the letters (D-U, c-e, ...).
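If it helps to see the bookkeeping, here is a tiny sketch that rebuilds that matrix from the position pairing (the pairing itself is simply hard-coded for the Duncan/Uneven example):

# Position pairing for Duncan -> Uneven: input position i is produced by the
# same 3d shape as output position perm[i] (D-U, u-v, n-n, c-e, a-e, n-n).
perm = [0, 3, 1, 2, 4, 5]

matrix = [[1 if perm[i] == j else 0 for j in range(len(perm))]
          for i in range(len(perm))]
for row in matrix:
    print(''.join(map(str, row)))
# prints the 100000 / 000100 / 010000 / ... matrix shown above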
The linked demo appears to work with precomputed or maybe even handcrafted letter pairs. A.N. has already shown how to rearrange the letter pairs. Note that the Sokumenzu sometimes fails to generate another word, for example with "Lydia"; it then maps the word to itself. It also tries an all-uppercase version when it doesn't find a match with the given case.
If you want a more general solution, you can combine arbitrary bitmaps of the same height, provided that each row that has at least one pixel in the first bitmap also has a pixel in the second bitmap. (So you can't have an i on one face and an l on the other, because the l will have a pixel in the row that is empty between the tittle and the stem of the i.)
You can create the layers of your cube independently. Your aim is to have, for every pixel of the two bitmaps, at least one filled cell that projects onto it.
Create a list of the positions of the filled pixels of each bitmap. If one of the lists is empty, there is no solution. Create pairs out of the lists so that all pixels from the longer list are used and each pixel from the shorter list is used at least once.
Depending on the top-down appearance you want to achieve, you can use various algorithms. If you want to create a diagonal, you can use Bresenham arithmetic. You can also create a scattered appearance by assigning the pairs randomly, as long as you use every pixel at least once.
To illustrate:
· · · · · · · · · ·
· · ● · · · · · ○ ○ #
· · · · · · · · · ·
· ● · ○ · · ○ · ● · #
· ○ ○ ● · · ● · · ● #
# # # # # #
Both the discs and the circles will produce the hash patterns, although the circles will look tidier when seen from above and will probably also work better in perspective.
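A sketch of that row-by-row pairing in Python, assuming the two bitmaps are given as equal-height lists of rows of 0/1 values; each returned cell (x_front, row, x_side) projects to (x_front, row) on one face and (x_side, row) on the other:

from itertools import cycle

def combine(front, side):
    # front and side are equal-height bitmaps (lists of rows of 0/1 values).
    # Returns filled cells (x_front, y, x_side) whose two orthographic
    # projections reproduce the two input bitmaps.
    voxels = set()
    for y, (row_f, row_s) in enumerate(zip(front, side)):
        xs_f = [x for x, v in enumerate(row_f) if v]
        xs_s = [x for x, v in enumerate(row_s) if v]
        if bool(xs_f) != bool(xs_s):
            raise ValueError("row %d is empty in one bitmap but not the other" % y)
        if not xs_f:
            continue
        # Pair every pixel of the longer list with pixels of the shorter list,
        # reusing the shorter list cyclically so every pixel is covered.
        longer, shorter = (xs_f, xs_s) if len(xs_f) >= len(xs_s) else (xs_s, xs_f)
        for a, b in zip(longer, cycle(shorter)):
            xf, xs = (a, b) if longer is xs_f else (b, a)
            voxels.add((xf, y, xs))
    return voxels

Replacing cycle() with random (but still covering) choices gives the scattered top-down look mentioned above; a Bresenham-style pairing gives the tidier diagonal.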
Given tabular data, I understand how to create a decision tree from scratch. I program in Python: we create a class called Node, and inside the class we add functions for making children, the root, splits, etc.
Maybe videos like this explain the process:
https://www.youtube.com/watch?v=sgQAhG5Q7iY&ab_channel=NormalizedNerd
I want to write a class for a decision space, one that represents the data points from the tabular data in a plane or higher-dimensional space and then creates a partition based on the decision tree.
So let's say we have the decision tree given in the following picture. The data set has two features, age and degree. There is a split at age = 4 and then a split at degree = 2. I want to define a class that returns the right-hand side of the picture.
A decision space is more general than a decision tree. Is there an implementation in Python from scratch? I am guessing we convert a decision tree to a rule set, and then from the rule set we can form the rule space?
Definition 1. A decision space is an m-dimensional (product) space $D_{attr_1} \times \cdots \times D_{attr_i} \times \cdots \times D_{attr_m}$, where m is the number of attributes and $D_{attr_i}$ is the space covered by the i-th attribute, specified as a bounded poset (i.e., a partially ordered set with a least and a greatest element).
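To make concrete what I am after, here is a rough sketch of the conversion I imagine, with a hand-rolled Node class and made-up feature names matching the age/degree example; I would like to know whether something like this already exists as a proper from-scratch implementation:

import math

class Node:
    # An internal node splits on feature <= threshold; a leaf stores a label.
    def __init__(self, feature=None, threshold=None, left=None, right=None, label=None):
        self.feature, self.threshold = feature, threshold
        self.left, self.right, self.label = left, right, label

def decision_space(node, bounds):
    # Walk the tree, narrowing the per-feature (low, high) bounds; each leaf
    # yields one axis-aligned region (hyper-rectangle) of the decision space.
    if node.label is not None:
        yield dict(bounds), node.label
        return
    lo, hi = bounds[node.feature]
    yield from decision_space(node.left, {**bounds, node.feature: (lo, min(hi, node.threshold))})
    yield from decision_space(node.right, {**bounds, node.feature: (max(lo, node.threshold), hi)})

# Toy tree: split on age <= 4, then on degree <= 2 in the right branch.
tree = Node('age', 4,
            left=Node(label='A'),
            right=Node('degree', 2, left=Node(label='B'), right=Node(label='C')))
start = {'age': (-math.inf, math.inf), 'degree': (-math.inf, math.inf)}
for region, label in decision_space(tree, start):
    print(label, region)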
Sorry for choosing a title that isn't really expressive but I couldn't come up with anything better.
I'm looking for an algorithmic way to find combinations with repetition and order them spatially in the following way:
order 2 (2x2):
AA AB
AD AC/BD
order 4 (4x4):
AAAA AAAB AABB ABBB
AAAD AAAC/AABD AABC/ABBD ABBC
AADD AACD/ABDC ABCD/AACC ABCC
ADDD ACDD ACCD ACCC
Especially those two-option fields (e.g. AAAC/AABD) are giving me headaches. The combinations will be used to compute correlations between real pixels. For instance, the combination AA would be the autocorrelation of A, while AB would be the cross-correlation of A and B.
This scheme is required for a microscopy algorithm I'm currently trying to implement (see the section on cross cumulants):
http://en.wikipedia.org/wiki/Super-resolution_optical_fluctuation_imaging
On top of that, I also need to do this in as efficient a manner as possible, as I would typically have to perform the operation on thousands of images, each at least 512x512 (I'm doing this in CUDA). EDIT: well, I just realized that high efficiency is not really required, as the pattern won't change once an order has been chosen.
This image may help:
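For the plain enumeration part (not the spatial arrangement, which is the bit that is giving me trouble) I know itertools already covers combinations with repetition; a small sketch:

from itertools import combinations_with_replacement

# All order-4 combinations with repetition of the four real pixels.
combos = [''.join(c) for c in combinations_with_replacement('ABCD', 4)]
print(len(combos))   # 35 combinations for order 4
print(combos[:5])    # ['AAAA', 'AAAB', 'AAAC', 'AAAD', 'AABB']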
EDIT: I found a MATLAB toolbox for Balanced SOFI analysis (under the GNU General Public License) on this page: http://documents.epfl.ch/users/l/le/leuteneg/www/BalancedSOFI/index.html -- even if you can't use MATLAB code, it may be possible to convert it to your programming language.
Of the eight references in the Wikipedia article, some are behind paywalls, but figures (or thumbnails of them) are freely viewable for all but the two oldest articles. Only one article gives free access to a figure resembling the one from Wikipedia (with AAAB, AABB, etc.):
http://www.opticsinfobase.org/boe/abstract.cfm?uri=boe-2-3-408
If I were in your position, I would attempt to contact the authors of that paper directly and ask them what the figure means, because I did not see how they decided to use these combinations. Within the 3x3 set of interpolated pixels that are bounded symmetrically by pixels A, B, C, D at their four corners, why is A the only letter that is required to occur in every single combination? If I understand what the letters signify (relative weightings of the real pixels in each interpolated pixel's value), then the pixel on the intersection of the diagonals, labeled ABCD/AACC, gives twice as much weight to A and C as to B and D; in fact, on average that entire 3x3 block of pixels gives A and C twice as much weight as B and D. That does not seem consistent with the underlying symmetries of the problem.
The mathematics in the wiki link you provided (about Super-resolution optical fluctuation imaging) may require quite a bit of study for me to understand, but I observe at least one possible way to think of diagram A:
I think of A as an anchor from which vectors extend to B, C, and D. The direct vectors are probably fairly obvious - as we move from A directly toward D, the Ds accumulate (i.e., AAAD, AADD, ADDD); as with A to B and A to C.
I see the middle combinations as vectors extending from the border either horizontally or vertically. For example, going down from AAAB, we add one D, then another; going to the right from AADD, we add one C, then reach the middle, ABCD.
I think any one combination in the diagram is sorted alphabetically, which would account for the apparent discrepancy in the positions of some accumulations: for example, the position of the accumulating C when moving from ADDD to ACCC, compared with the position of the accumulating C when moving from ABBB to ACCC.
Let's say I have a very big matrix with 10000x10000 elements, all having the value '0'. Let's say there are some big 'nests' of '1's. Those areas might even be connected, but only very weakly connected by a 'pipe' of '1's.
I want an algorithm that very quickly (and dirtily, if necessary) finds these 'nests' of '1's. It shouldn't 'cut apart' two weakly connected 'nests'.
Any idea how I should do such an algorithm?
A pathfinding algorithm like A* (or something simpler like a BFS or DFS) may work in this case.
You can:
find starting points for your searches by locating small nests (ignoring pipes), e.g. at least a 3x3 block of 1's
then pathfind from there through 1's until you have exhausted the "connected component" (poetic license) inside the matrix
repeat starting from another small 1's block
I would say it depends on how the data is needed. If, given two points, you need to check whether they are in the same block of 1's, I think Jack's answer is best. This is also true if you have some knowledge of where blocks are initially, as you can use those as starting points for your algorithm.
If you don't have any other information, maybe one of these would be a possibility:
If, given a point, you wish to find all elements in the same block, a flood fill would be appropriate. You could then cache each nest as you find it: when you get another point, first see if it's in a known nest, and if it isn't, do a flood fill to find its nest and add it to the cache.
As an implementation detail, as you traverse the matrix each row should have available the set of nests present on the previous row. Then you would only need to check new points against those nests, rather than the complete set, to determine if a new point is in a known set or not.
Be sure that you use a set implementation with a very low lookup cost such as a hashtable or possibly a Bloom filter if you can deal with the probabilistic effects.
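A rough sketch of that flood-fill idea on a plain list-of-lists matrix (the labelling scheme is my own framing, not a particular library's):

from collections import deque

def label_nests(matrix):
    # Label each connected component ('nest') of 1's with an integer id using
    # an iterative BFS flood fill; returns a dict {(row, col): nest_id}.
    rows, cols = len(matrix), len(matrix[0])
    labels, next_id = {}, 0
    for r in range(rows):
        for c in range(cols):
            if matrix[r][c] != 1 or (r, c) in labels:
                continue
            next_id += 1
            labels[(r, c)] = next_id
            queue = deque([(r, c)])
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and matrix[ny][nx] == 1 and (ny, nx) not in labels):
                        labels[(ny, nx)] = next_id
                        queue.append((ny, nx))
    return labels

This version labels the whole matrix up front; for the cached variant you would run the inner fill on demand per query point and keep the labels dict between calls.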
1. Turn the matrix into a black & white bitmap.
2. Scale the bitmap so that nests of size N become a single pixel (so if you look for 10x10 nests, scale by a factor of N=10).
3. Use the remaining pixels of the output to locate the nests. Use the center coordinate (multiplied by the factor above) to locate the same nest in the matrix.
4. Use a low-pass filter to get rid of all "pipes" that connect the nests.
5. Find the border of each nest with a contrast filter on the bitmap.
6. Create a bitmap which doesn't contain the nests (i.e. set all pixels of the nests to 0).
7. Use a filter that widens single pixels to grow the outline of the nests.
8. Bitwise AND the output of 7 and 5 to get the connection points of all pipes.
9. Follow the pipes to see how they connect the nests.
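If SciPy is available, here is a hedged sketch of roughly the first half of that recipe: a morphological opening stands in for the scale-down/low-pass step (it removes the thin pipes and anything smaller than a nest), and labelling locates what is left. It approximates steps 1-4, not all nine.

import numpy as np
from scipy import ndimage

def find_nests(matrix, min_size=10):
    # Opening with a min_size x min_size block erases 1-pixel 'pipes' (and
    # anything smaller than a nest); labelling the survivors gives one id per
    # nest, and the centres locate the nests back in the original matrix.
    opened = ndimage.binary_opening(matrix.astype(bool),
                                    structure=np.ones((min_size, min_size)))
    labelled, count = ndimage.label(opened)
    centres = ndimage.center_of_mass(opened, labelled, range(1, count + 1))
    return labelled, centres

# Toy example: two 12x12 nests joined by a 1-pixel-wide pipe.
m = np.zeros((100, 100), dtype=int)
m[10:22, 10:22] = 1
m[60:72, 60:72] = 1
m[21:66, 15] = 1          # vertical part of the weak connection
m[65, 15:61] = 1          # horizontal part reaching the second nest
print(ndimage.label(m)[1])          # 1: everything is connected before filtering
labelled, centres = find_nests(m)
print(len(centres), centres)        # 2 nests, roughly at their block centres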
I'm displaying a small Google map on a web page using the Google Maps Static API.
I have a set of 15 co-ordinates, which I'd like to represent as points on the map.
Due to the map being fairly small (184 x 90 pixels) and the upper limit of 2000 characters on a Google Maps URL, I can't represent every point on the map.
So instead I'd like to generate a small list of co-ordinates that represents an average of the big list.
So instead of having 15 sets, I'd end up with 5 sets whose positions approximate the positions of the 15. If, say, there are 3 points that are closer to each other than to any other point on the map, those points would be collapsed into 1 point.
So I guess I'm looking for an algorithm that can do this.
Not asking anyone to spell out every step, but perhaps point me in the direction of a mathematical principle or general-purpose function for this kind of thing?
I'm sure a similar function is used in, say, graphics software, when pixellating an image.
(If I solve this I'll be sure to post my results.)
I recommend K-means clustering when you need to cluster N objects into a known number K < N of clusters, which seems to be your case. Note that one cluster may end up with a single outlier point and another with say 5 points very close to each other: that's OK, it will look closer to your original set than if you forced exactly 3 points into every cluster!-)
If you are searching for such functions/classes, have a look at MarkerClusterer and MarkerManager utility classes. MarkerClusterer closely matches the described functionality, as seen in this demo.
In general I think the area you need to search around in is "Vector Quantization". I've got an old book titled Vector Quantization and Signal Compression by Allen Gersho and Robert M. Gray which provides a bunch of examples.
From memory, the Lloyd Iteration was a good algorithm for this sort of thing. It can take the input set and reduce it to a fixed sized set of points. Basically, uniformly or randomly distribute your points around the space. Map each of your inputs to the nearest quantized point. Then compute the error (e.g. sum of distances or Root-Mean-Squared). Then, for each output point, set it to the center of the set that maps to it. This will move the point and possibly even change the set that maps to it. Perform this iteratively until no changes are detected from one iteration to the next.
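From memory again, so treat it as a sketch: the loop above is essentially k-means, and with numpy it comes out to something like this (coordinates treated as plain 2D points, which is fine at city scale; the random test data is made up):

import numpy as np

def lloyd(points, k, iters=100, seed=0):
    # Reduce `points` (an N x 2 array) to k representative points: assign each
    # point to its nearest centre, move each centre to the mean of its assigned
    # points, and repeat until the assignment stops changing.
    rng = np.random.default_rng(seed)
    centres = points[rng.choice(len(points), size=k, replace=False)]
    assignment = None
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
        new_assignment = dists.argmin(axis=1)
        if np.array_equal(new_assignment, assignment):
            break
        assignment = new_assignment
        for j in range(k):
            members = points[assignment == j]
            if len(members):
                centres[j] = members.mean(axis=0)
    return centres

coords = np.random.default_rng(1).uniform([51.3, -0.3], [51.7, 0.1], size=(15, 2))
print(lloyd(coords, 5))   # 5 representative lat/lng pairs for the map URL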
Hope this helps.
I have a huge set of arbitrary natural-language strings. For my tool to analyze them I need to convert each string to a unique color value (RGB or other). I need the color contrast to depend on string similarity (the more different two strings are, the more different their respective colors should be). It would be perfect if I always got the same color value for the same string.
Any advice on how to approach this problem?
Update on distance between strings
I probably need "similarity" defined as a Levenshtein-like distance. No natural language parsing is required.
That is:
"I am going to the store" and
"We are going to the store"
Similar.
"I am going to the store" and
"I am going to the store today"
Similar as well (but slightly less).
"I am going to the store" and
"J bn hpjoh up uif tupsf"
Quite dissimilar.
(Thanks, Welbog!)
I'll probably only know exactly what distance function I need once I see the program output. So let's start with simpler things.
Update on task simplification
I've removed my own suggestion to split the task into two parts (absolute distance calculation and color distribution). This would not work well, since we would first be reducing the dimensional information to a single dimension and then trying to synthesize it back up to three dimensions.
You need to elaborate more on what you mean by "similar strings" in order to come up with an appropriate conversion function. Are the strings
"I am going to the store" and
"We are going to the store"
considered similar? What about the strings
"I am going to the store" and
"J bn hpjoh up uif tupsf"
(all of the letters in the original +1), or
"I am going to the store" and
"I am going to the store today"
? Based on what you mean by "similar", you might consider different functions.
If the difference can be based solely on the values of the characters (in Unicode or whatever space they are from), then you can try summing the values up and using the result as a hue in HSV space. If having a longer string should cause the colours to be more different, you might consider weighting characters by their position in the string.
If the difference is more complex, such as by the occurrences of certain letters or words, then you need to identify this. Maybe you can decide red, green and blue values based on the number of Es, Ss and Rs in a string, if your domain has a lot of these. Or pick a hue based on the ratio of vowels to consonants, or words to syllables.
There are many, many different ways to approach this, but the best one really depends on what you mean by "similar" strings.
It sounds like you want a hash of some sort. It doesn't need to be secure (so nothing as complicated as MD5 or SHA) but something along the lines of:
(char1 + char2 + char3 + ... + charN) % MAX_COLOUR_VALUE
would work as a simple first step. You could also do fancier things along the lines of having each character act as an 'amplitude' for R,G and B (e could be +1R, +2G and -4B, etc.) and then simply add up all the values in a string... clamp them at the end and you have a method of turning arbitrary length strings into colours as a 'colour hash' sort of process.
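A quick sketch of both variants in Python (the per-character 'amplitudes' here are made up; the point is just the shape of the computation):

def colour_hash(s, max_value=0xFFFFFF):
    # Plain 'colour hash': sum of character codes wrapped into the RGB range.
    return sum(map(ord, s)) % max_value

def amplitude_colour(s):
    # Each character nudges R, G and B by a small (made-up) amplitude;
    # clamp the channels to 0..255 at the end.
    r = g = b = 128
    for ch in s:
        v = ord(ch)
        r += (v % 7) - 3
        g += (v % 11) - 5
        b += (v % 13) - 6
    clamp = lambda x: max(0, min(255, x))
    return clamp(r), clamp(g), clamp(b)

print(hex(colour_hash("I am going to the store")))
print(amplitude_colour("I am going to the store"))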
First, you'll need to pick a way to measure string similarity. Minimal edit distance is traditional, but is not sufficient to well-order the strings, which is what you will need if you want to allocate the same colours to the same strings every time - perhaps you could weight the edit costs by alphabetic distance. Also minimal edit distance by itself may not be very useful if what you are after is similarity in speech rather than in written form (if so, consider a stemming/soundex pass first), or some other sense of "similarity".
Then you need to pick a way of traversing the visible colour space based on that metric. It may be helpful to consider using HSL or HSV colour representation - the algorithm could then become as simple as picking a starting hue and walking the sorted corpus, assigning the current hue to each string before offsetting it by the string's difference from the previous one.
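A sketch of that hue walk, assuming plain Levenshtein distance as the metric and colorsys for the HSV-to-RGB conversion (the step size is arbitrary):

import colorsys

def levenshtein(a, b):
    # Standard dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def assign_colours(strings, step=0.02):
    # Sort the corpus, then walk the hue circle: each string gets the current
    # hue, which is then advanced in proportion to its edit distance from the
    # previous string. The same corpus always gets the same colours.
    colours, hue, prev = {}, 0.0, None
    for s in sorted(strings):
        if prev is not None:
            hue = (hue + step * levenshtein(prev, s)) % 1.0
        colours[s] = tuple(round(c * 255) for c in colorsys.hsv_to_rgb(hue, 0.8, 0.9))
        prev = s
    return colours

print(assign_colours(["I am going to the store",
                      "We are going to the store",
                      "J bn hpjoh up uif tupsf"]))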
How important is it that you never end up with two dissimilar strings having the same colour?
If it's not that important then maybe this could work?
You could pick a 1 dimensional color space that is "homotopic" to the circle: Say the color function c(x) is defined for x between 0 and 1. Then you'd want c(0) == c(1).
Now you take the sum of all character values modulo some scaling factor and wrap this back to the color space:
c( (SumOfCharValues(word) modulo ScalingFactor) / ScalingFactor )
This might work even better if you defined a "wrapping" color space of higher dimensions and for each dimension pick different SumOfCharValues function; someone suggested alternating sum and length.
Just a thought... HTH
Here is my suggestion (I think there is a general name for this algorithm, but I'm too tired to remember it):
You want to transform each string to a 3D point node(r, g, b) (you can scale the values so that they fit your range) such that the following error is minimized:
Error = \sum_i{\sum_j{(dist(node_i, node_j) - dist(str_i, str_j))^2}}
You can do this:
First assign each string a random color (r, g, b)
Repeat until you see fit (e.g. until the error changes by less than \epsilon = 0.0001):
Pick a random node
Adjust its position (r, g, b) such that the error is minimized
Scale the coordinate system such that each node's coordinates are in the range [0, 1) or [0, 255]
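A rough sketch of those steps (this is in the spirit of multidimensional scaling); the "adjust its position" part is done here with a simple accept-if-better random nudge rather than a proper gradient step, and the string distance in the usage line is a throwaway stand-in:

import random

def node_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def embed(strings, str_dist, steps=20000, seed=0):
    # Place one 3D point per string so that distances between points roughly
    # match str_dist; random nudges are kept only when they lower the error.
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(3)] for _ in strings]
    target = {(i, j): str_dist(strings[i], strings[j])
              for i in range(len(strings)) for j in range(i + 1, len(strings))}

    def error():
        return sum((node_dist(pts[i], pts[j]) - d) ** 2 for (i, j), d in target.items())

    best = error()
    for _ in range(steps):
        i = rng.randrange(len(pts))
        old = pts[i][:]
        pts[i] = [c + rng.gauss(0, 0.05) for c in old]
        new = error()
        if new < best:
            best = new
        else:
            pts[i] = old

    # Scale every coordinate into [0, 255] so each point can be read as RGB.
    lo = min(c for p in pts for c in p)
    hi = max(c for p in pts for c in p)
    return [tuple(round(255 * (c - lo) / (hi - lo or 1)) for c in p) for p in pts]

words = ["store", "stork", "zebra"]
print(embed(words, lambda a, b: sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))))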
You can use something like MinHash or some other LSH method and define similarity as the intersection between sets of shingles, measured by the Jaccard coefficient.
There is a good description in Mining of Massive Datasets, Ch. 3, by Rajaraman and Ullman.
I would maybe define some delta between two strings. I don't know what you define as the difference (or "unequality") of two strings, but the most obvious things I could think of would be string length and the number of occurrences of particular letters (and their indices in the string). It should not be tricky to implement it such that it returns the same color code for equal strings (if you do an equality check first and return before further comparison).
When it comes to the actual RGB value, I would try to convert the string data into 4 bytes (RGBA), or 3 bytes if you only use the RGB. I don't know if every string would fit into them (as that may be language specific?).
Sorry, but you can't do what you're looking for with Levenshtein distance or similar. RGB and HSV are 3-dimensional geometric spaces, but Levenshtein distance describes a metric space, a much looser set of constraints with no fixed number of dimensions. There's no way to map a metric space into a fixed number of dimensions while always preserving locality.
As far as approximations go, though, for single terms you could use a modification of an algorithm like soundex or metaphone to pick a color; for multiple terms, you could, for example, apply soundex or metaphone to each word individually, then sum them up (with overflow).