I've upgraded to the latest version and noticed that some things have changed.
The renderer doesn't seem to support indexed geometries with more than ~65k indices, and the immediateRenderCallback doesn't work anymore.
So how can I add a custom object to the scene?
In previous versions I inherited from THREE.ImmediateRenderObject and put everything in the rendercallback function.
If you have more than 65k vertices and something is wrong, I would suspect that the index is in a Uint16Array. Make sure you have your index in a Uint32Array.
With a Uint16Array, every integer is represented by 16 bits, so the possible values range from 0 to 2^16 - 1 = 65535; therefore the index is unable to reference vertices higher than that. With 32-bit integers you can go up to 2^32 - 1 = 4294967295, and that should suffice.
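As a generic illustration of that range limit (plain C++ rather than Three.js, purely to show the arithmetic), a 16-bit unsigned index simply cannot hold anything above 65535:

#include <cstdint>
#include <iostream>

int main() {
    std::uint16_t idx16 = 65535;   // largest value a 16-bit index can hold
    ++idx16;                       // wraps around to 0 instead of reaching 65536
    std::uint32_t idx32 = 70000;   // a 32-bit index stores this without trouble
    std::cout << idx16 << " " << idx32 << "\n";   // prints "0 70000"
}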
I'm attempting to retrieve H3 index keys directly adjacent to my current location. I'm wondering if this can be done by mutating/calculating the coordinate directly or if I have to use the library bindings to do this?
Take this example:
./bin/geoToH3 --resolution 6 --latitude 43.6533055 --longitude -79.4018915
This would return the key 862b9bc77ffffff. I now want to retrieve the keys of all 6 relevant neighbors (the values of the kRing, I believe, is how to describe it?).
A tangential though equally curious question might render the above irrelevant: if I were attempting to query entries across all 7 indexes, is there a better way than using an OR statement to seek out all 7 values? Since the index is numeric, I'm wondering if I could just check for a range within the numeric representation?
The short answer is that you need to use kRing (either through the bindings or the command-line tools) to get the neighbors. While there are some limited cases where you could get the neighbors through bit manipulation of the index, in many cases the numeric index of a neighbor might be distant. The basic rule is that while indexes that are numerically close are geographically close, the reverse is not necessarily true.
For the same reason, you generally can't use a range query to look for nearby hexagons. The general lookup pattern is to find the neighboring cells of interest in code, using kRing, then query for all of them in your database.
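For illustration, here is a minimal sketch of that lookup against the H3 v3 C API (the same library the geoToH3 binary ships with); the header path and error handling are simplified, and the starting index is the one from the example above:

#include <h3/h3api.h>   // header name/path may differ per install
#include <cstdio>
#include <vector>

int main() {
    H3Index origin = stringToH3("862b9bc77ffffff");
    int k = 1;                                      // distance 1: the origin plus its neighbors
    std::vector<H3Index> ring(maxKringSize(k), 0);  // 7 slots for k = 1
    kRing(origin, k, ring.data());
    for (H3Index h : ring) {
        if (h == 0) continue;                       // unused slots stay zero
        char buf[17];
        h3ToString(h, buf, sizeof buf);
        std::printf("%s\n", buf);
    }
}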
I have a 2D matrix where I want to modify every value by applying a function that is only dependent on the coordinates in the matrix and values set at compile-time. Since no synchronization is necessary between each such calculation, it seems to me like the work group size could really be 1, and the number of work groups equal to the number of elements in the matrix.
My question is whether this will actually yield the desired result, or whether other forces are at play here that might make a different setting for these values better?
My recommendation: just set the global size to your 2D matrix size, and the local size to NULL. This lets the OpenCL implementation select an optimal local size for you.
In your specific case, the local size does not need to have any particular shape. In fact, any valid value will do the work, but the performance may differ. You can tune it manually for different hardware, but it is easier to let the implementation do this job for you, and it is even more portable.
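A minimal sketch of that launch (assuming the queue and kernel are already built); passing NULL/nullptr as the local work size is what lets the implementation pick the work-group shape:

#include <CL/cl.h>
#include <cstddef>

// One work-item per matrix element; the implementation chooses the work-group size.
void run_over_matrix(cl_command_queue queue, cl_kernel kernel,
                     std::size_t width, std::size_t height) {
    std::size_t global[2] = { width, height };   // global size = matrix dimensions
    clEnqueueNDRangeKernel(queue, kernel,
                           2,          // work_dim
                           nullptr,    // global_work_offset
                           global,
                           nullptr,    // local_work_size == NULL -> chosen by the implementation
                           0, nullptr, nullptr);
    clFinish(queue);
}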
Abstract Description:
I have a set of strings, call it the "active set", and a set of sets of strings - call that the "possible set". When a new string is added to the active set, sets from the possible set may suddenly be subsets of the active set because the active set lacked only that string to be a superset of one of the possibles. I need an algorithm to efficiently find these when I add a new string to the active set. Bonus points if the same data structure allows me to efficiently find which of these possible sets are invalidated (no longer a subset) when a string is removed from the active set.
(The reason I framed the problem described below in terms of sets and subsets of strings in the abstract above is that the language I'm writing this in (Io) is dynamically typed. Objects do have a "type" field but it is a string with the name of the object type in it.)
Background:
In my game engine I have GameObjects which can have several types of Representation objects added to them. For instance if a GameObject has physical presence it might have a PhysicsRepresentation added to it (or not if it's not a solid object). It might have various kinds of GraphicsRepresentations added to it, such as a mesh or particle effect (and you can have more than one if you have multiple visual effects attached to the same game object).
The point of this is to separate subsystems, but you can't completely separate everything: for instance when a GameObject has both a PhysicsRepresentation and a GraphicsRepresentation, something needs to create a 3rd object which connects the position of the GraphicsRepresentation to the location of the PhysicsRepresentation. To serve this purpose while still keeping all the components separate, I have Interaction objects. The Interaction object encapsulates the cross-cutting knowledge about how two system components have to interact.
But in order to protect GameObject from having to know too much about Representations and Interactions, GameObject just provides a generic registry where Interaction prototype objects can register to be called when a particular combination of Representations is present in the GameObject. When a new Representation is added to the GameObject, GameObject should look in its registry and activate just those Interaction objects which are newly enabled by the presence of the new Representation, plus the existing Representations.
I'm just stuck on what data structure should be used for this registry and how to search it.
Errata:
The sets of strings are not necessarily sorted, but I can choose to store them sorted.
Although an Interaction most commonly will be between two Representations, I do not want to limit it to that; I should be able to have Interactions that trigger with 3 or more different representations, or even interactions that trigger based on just 1 representation.
I want to optimize this for the case of making it as fast as possible to add/remove representations.
I will have many active sets (each game object has an active set), but I have only one possible set (the set of all registered interaction types). So I don't care how long it takes to build the data structure that represents the possible set, because it only needs to be done once provided the algorithm for comparing different active sets is non-destructive of the possible set data structure.
If your sets are really small, the best representation is using bit sets. First, you build a map from strings to consecutive integers 0..N-1, where N is the number of distinct strings. Then you build your sets by bitwise OR-ing 1<<k into a number for each string's integer k. This lets you turn your set operations into bitwise operations, which are extremely fast (an intersection is an &; a union is an |, and so on).
Here is an example: Let's say you have two sets, A={quick, brown, fox} and B={brown, lazy, dog}. First, you build a string-to-number map, like this:
quick - 0
brown - 1
fox - 2
lazy - 3
dog - 4
Then your sets would become A=00111b and B=11010b. Their intersection is A&B = 00010b, and their union is A|B = 11111b. You know a set X is a subset of set Y if X == X&Y.
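A rough sketch of that scheme in C++ (names are illustrative; a 64-bit mask handles up to 64 distinct strings, beyond which a wider bitset type would be needed):

#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

std::unordered_map<std::string, int> bit_of;   // string -> bit position

std::uint64_t to_mask(const std::vector<std::string>& strings) {
    std::uint64_t mask = 0;
    for (const std::string& s : strings) {
        auto it = bit_of.find(s);
        if (it == bit_of.end())
            it = bit_of.emplace(s, static_cast<int>(bit_of.size())).first;
        mask |= std::uint64_t{1} << it->second;
    }
    return mask;
}

bool is_subset(std::uint64_t x, std::uint64_t y) {   // is X a subset of Y?
    return (x & y) == x;
}

int main() {
    std::uint64_t active          = to_mask({"PhysicsRepresentation", "MeshRepresentation"});
    std::uint64_t physics_mesh    = to_mask({"PhysicsRepresentation", "MeshRepresentation"});
    std::uint64_t needs_particles = to_mask({"ParticleRepresentation"});
    std::cout << std::boolalpha
              << is_subset(physics_mesh, active) << " "      // true
              << is_subset(needs_particles, active) << "\n"; // false
}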
One way to do this would be to keep, for each possible set, a count of how many of its strings are not yet in the active set, plus a map from strings to lists of the possible sets containing that string. That way you can update the counts when you add or remove a string in the active set, and notice when a count goes down to zero.
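A hedged sketch of that bookkeeping (illustrative names; it assumes the active set is empty when the possible sets are registered and that a string is never added twice):

#include <string>
#include <unordered_map>
#include <vector>

struct PossibleSet {
    std::vector<std::string> strings;
    int missing;   // how many of its strings the active set still lacks
};

std::vector<PossibleSet> possibles;                            // all registered possible sets
std::unordered_map<std::string, std::vector<int>> containing;  // string -> possible sets holding it

// Register one possible set (assumes the active set is empty at this point).
void register_possible(const std::vector<std::string>& strings) {
    int id = static_cast<int>(possibles.size());
    possibles.push_back({strings, static_cast<int>(strings.size())});
    for (const std::string& s : strings) containing[s].push_back(id);
}

// Add a string to the active set (assumes it was not already present); returns
// the possible sets that have just become subsets of the active set.
std::vector<int> add_to_active(const std::string& s) {
    std::vector<int> newly_enabled;
    for (int id : containing[s])
        if (--possibles[id].missing == 0) newly_enabled.push_back(id);
    return newly_enabled;
}

// Remove a string from the active set; counts go back up, which identifies the
// possible sets that are no longer subsets (the bonus case).
void remove_from_active(const std::string& s) {
    for (int id : containing[s]) ++possibles[id].missing;
}

int main() {
    register_possible({"PhysicsRepresentation", "MeshRepresentation"});
    add_to_active("PhysicsRepresentation");
    std::vector<int> enabled = add_to_active("MeshRepresentation");  // contains {0}
}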
This problem reminds me of firing rules in a rule-based system when a fact becomes true, which corresponds to a new string being added to the active set. Many of these systems use the Rete algorithm (http://en.wikipedia.org/wiki/Rete_algorithm). Drools Expert (http://www.jboss.org/drools/drools-expert.html) is an open source rule-based system, although it looks like there is a lot of enterprise-system wrapping around it these days.
I have a data structure that stores, amongst other things, a 24-bit wide value. I have a lot of these objects.
To minimize storage cost, I calculated the 2^7 most important values out of the 2^24 possible values and stored them in a static array. Thus I only have to save a 7-bit index to that array in my data structure.
The problem is: I get these 24-bit values and I have to convert them to my 7-bit index on the fly (no preprocessing possible). The computation is basically a search for which one of the 2^7 values fits best. Obviously, this takes some time for a big number of objects.
An obvious solution would be to create a simple mapping array of bytes with the length 2^24. But this would take 16 MB of RAM. Too much.
One observation of the 16 MB array: On average 31 consecutive values are the same. Unfortunately there are also a number of consecutive values that are different.
How would you implement this conversion from a 24-bit value to a 7-bit index saving as much CPU and memory as possible?
Hard to say without knowing what the definition is of "best fit". Perhaps a kd-tree would allow a suitable search based on proximity by some metric or other, so that you quickly rule out most candidates, and only have to actually test a few of the 2^7 to see which is best?
This sounds similar to the problem that an image processor has when reducing to a smaller colour palette. I don't actually know what algorithms/structures are used for that, but I'm sure they're look-up-able, and might help.
As an idea...
Up the index table to 8 bits, then XOR all 3 bytes of the 24-bit word into it.
Then your table would consist of this 8-bit hash value, plus the index back to the original 24-bit value.
Since your data is RGB-like, a more sophisticated hashing method may be needed.
bit24var & 0xff gives you the rightmost byte.
(bit24var >> 8) & 0xff gives you the byte beside it.
(bit24var >> 16) & 0xff gives you the byte beside that.
Yes, you are thinking correctly. It is quite likely that one or more of the 24-bit values will hash to the same index, due to the pigeonhole principle.
One method of resolving a hash clash is to use some sort of chaining.
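For illustration, a rough sketch of that XOR hash with chaining (names are illustrative; note a hash like this finds exact matches among the 2^7 important values, so the "best fit" case still needs a fallback search on a miss):

#include <cstdint>
#include <cstdio>
#include <vector>

struct Entry { std::uint32_t value24; std::uint8_t index7; };
std::vector<Entry> buckets[256];                   // chaining resolves hash clashes

std::uint8_t hash24(std::uint32_t v) {
    return (v & 0xff) ^ ((v >> 8) & 0xff) ^ ((v >> 16) & 0xff);
}

void add_important(std::uint32_t value24, std::uint8_t index7) {
    buckets[hash24(value24)].push_back({value24, index7});
}

// Returns the 7-bit index for an exact hit, or -1 if the value is not one of
// the important values (the caller then falls back to the best-fit search).
int lookup(std::uint32_t value24) {
    for (const Entry& e : buckets[hash24(value24)])
        if (e.value24 == value24) return e.index7;
    return -1;
}

int main() {
    add_important(0x123456, 5);
    std::printf("%d %d\n", lookup(0x123456), lookup(0x654321));   // prints "5 -1"
}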
Another idea would be to put your important values in a different array, then simply search it first. If you don't find an acceptable answer there, then you can, shudder, search the larger array.
How many of the 2^24 values do you actually have? Can you sort these values and count them by counting the runs of consecutive values?
Since you already know which of the 2^24 values you need to keep (i.e. the 2^7 values you have determined to be important), we can simply just filter incoming data and assign a value, starting from 0 and up to 2^7-1, to these values as we encounter them. Of course, we would need some way of keeping track of which of the important values we have already seen and assigned a label in [0,2^7) already. For that we can use some sort of tree or hashtable based dictionary implementation (e.g. std::map in C++, HashMap or TreeMap in Java, or dict in Python).
The code might look something like this (I'm using a much smaller range of values):
import random

def make_mapping(data, important):
    mapping = dict()  # dictionary to hold the final mapping
    next_index = 0    # the next free label that can be assigned to an incoming value
    for elem in data:
        if elem in important:        # check that the element is important
            if elem not in mapping:  # check that this element hasn't been assigned a label yet
                mapping[elem] = next_index
                next_index += 1      # this label is assigned, the next new important value will get the next label
    return mapping

if __name__ == '__main__':
    important_values = [1, 5, 200000, 6, 24, 33]
    data = list(range(0, 300000))
    random.shuffle(data)
    answer = make_mapping(data, important_values)
    print(answer)
You can make the search much faster by using a hash- or tree-based set data structure for the set of important values. That would make the entire procedure O(n*log(k)) (or O(n) if it is a hashtable), where n is the size of the input and k is the number of important values.
Another idea is to represent the 24BitValue array in a bit map. A nice unsigned char can hold 8 bits, so one would need 2^24 / 8 = 2^21 array elements. That's 2,097,152 bytes (2 MB). If the corresponding bit is set, then you know that that specific 24BitValue is present in the array, and needs to be checked.
One would need an iterator, to walk through the array and find the next set bit. Some machines actually provide a "find first bit" operation in their instruction set.
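For illustration, a sketch of that bitmap plus a scan built on a find-first-set-bit intrinsic (GCC/Clang's __builtin_ctzll here; other compilers have equivalents). It only records which 24-bit values are present, so it complements rather than replaces the value-to-index lookup:

#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

std::vector<std::uint64_t> bitmap((1u << 24) / 64, 0);   // 2^24 bits = 2 MB

void mark(std::uint32_t v)    { bitmap[v >> 6] |= std::uint64_t{1} << (v & 63); }
bool present(std::uint32_t v) { return (bitmap[v >> 6] >> (v & 63)) & 1; }

// Walk all marked values, jumping straight to the next set bit in each word.
template <typename F>
void for_each_marked(F f) {
    for (std::size_t w = 0; w < bitmap.size(); ++w) {
        std::uint64_t bits = bitmap[w];
        while (bits) {
            unsigned b = __builtin_ctzll(bits);          // position of the lowest set bit
            f(static_cast<std::uint32_t>(w * 64 + b));
            bits &= bits - 1;                            // clear that bit
        }
    }
}

int main() {
    mark(0x123456);
    mark(0xABCDEF);
    for_each_marked([](std::uint32_t v) { std::printf("%06X\n", v); });
}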
Good luck on your quest.
Let us know how things turn out.
I would like to quickly retrieve the median value from a boost multi_index container with an ordered_unique index, however the index iterators aren't random access (I don't understand why they can't be, though this is consistent with std::set...).
Is there a faster/neater way to do this other than incrementing an iterator container.size() / 2 times?
Boost.MultiIndex provides random access indexes, but these indexes don't maintain any ordering by themselves. You can, however, sort such an index using its sort member function after inserting new elements, so you will be able to get the median efficiently.
It might be worth making a feature request to Boost.MultiIndex so the insertion can maintain an order directly, as this should be much more efficient.
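A hedged sketch of that suggestion (illustrative types; it assumes re-sorting after a batch of insertions rather than after every single element):

#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/random_access_index.hpp>
#include <boost/multi_index/identity.hpp>
#include <iostream>

namespace bmi = boost::multi_index;

typedef bmi::multi_index_container<
    int,
    bmi::indexed_by<
        bmi::ordered_unique<bmi::identity<int> >,   // index 0: the existing ordered view
        bmi::random_access<>                        // index 1: adds operator[]
    >
> Samples;

int main() {
    Samples s;
    for (int v : { 42, 7, 19, 3, 88 }) s.insert(v);

    Samples::nth_index<1>::type& ra = s.get<1>();
    ra.sort();   // random_access indexes keep insertion order, so sort before reading
    std::cout << "median: " << ra[ra.size() / 2] << "\n";   // prints 19
}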
I ran into the same problem in a different context. It seems that the STL and Boost don't provide an ordered container that has random access to make use of the ordering (e.g. for comparing).
My (not so pretty) solution was to use a class that performed the input and "filtered" it into a set. After the input operation was finished, it just copied all iterators of the set into a vector and used that for random access.
This solution only works in a very limited context: you perform input on the container once. If you add to the container again, all the iterators have to be copied again. It really was very clumsy to use, but it worked.
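For illustration, a minimal sketch of that workaround (the snapshot has to be rebuilt whenever the set changes, which is exactly the limitation described above):

#include <iostream>
#include <set>
#include <vector>

int main() {
    std::set<int> ordered = { 42, 7, 19, 3, 88 };

    // Snapshot the set's iterators for random access to the ordered values.
    std::vector<std::set<int>::const_iterator> index;
    index.reserve(ordered.size());
    for (std::set<int>::const_iterator it = ordered.begin(); it != ordered.end(); ++it)
        index.push_back(it);

    std::cout << "median: " << *index[index.size() / 2] << "\n";   // prints 19
}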