I am wondering if it is possible to have data in Elasticsearch indices without a timestamp attached to it.
I need a list of two columns as a drop-down. This list is cross-checked against another index to generate maps, but when I zoom in the graph breaks, because the drop-down list exists from time a to b but not from c to d.
My MacGyver solution is to just re-index the list every few minutes so that the data on the graph is reasonably dense. This lets the user zoom in fairly well on different parts of the graph, but over time it is going to make my index unreasonably large.
Related
I'm wondering what would be an efficient way to detect the last modified timestamp of an index in Elasticsearch. I have read posts about adding a timestamp field via an ingest pipeline, but that solution has limitations (e.g. it seems only newly created indices support the timestamp update?).
If I only need to track the last modified time of a handful of indices, what would be the most efficient way? Would periodically querying and comparing the results between queries give an approximate last modified time? Are there other ways to track these events?
There is a creation_date setting, but no comparable update_date one. The reasoning behind this is that updating it for every indexing event would be very expensive, even more so in a distributed environment.
You could use something like the index _meta field, but it has the same limitation as adding a timestamp to individual documents.
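If polling is acceptable for a handful of indices, a rough sketch of the periodic-compare approach is below (host, index names and interval are placeholders); the resolution is of course limited to the polling interval:

# Poll each index's stats and remember when they last changed.
import time
import requests

ES = "http://localhost:9200"          # placeholder host
INDICES = ["index-a", "index-b"]      # the handful of indices to watch

last_seen = {}        # index -> (index_total, delete_total, docs_count)
last_modified = {}    # index -> time of the poll at which a change was first observed

while True:
    for idx in INDICES:
        stats = requests.get(f"{ES}/{idx}/_stats/indexing,docs").json()
        total = stats["indices"][idx]["total"]
        snapshot = (
            total["indexing"]["index_total"],   # counts index and update operations
            total["indexing"]["delete_total"],
            total["docs"]["count"],
        )
        if last_seen.get(idx) != snapshot:
            last_modified[idx] = time.time()
            last_seen[idx] = snapshot
    time.sleep(60)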
I am trying to design a data model for geodetic data that performs decently as the data grows over time. However, every idea I have come up with has hit a couple of limitations in Oracle.
These are the requirements:
The data to store are 2-dimensional points with latitude and longitude (being able to handle any geometry would be a plus).
New data are added on a monthly basis. This conceptually updates the position of the points, deletes old points or creates new ones. New data for new time-instants come in batches, and as such are conceptually ordered and labelled. Say: t1, t3, t4. (it is not a form of asset tracking, it is more of an evolving snapshot of data).
SELECTs on current or historical data must execute in real time (e.g. to be depicted on an interactive map). SELECT statements will query data like "return all points belonging to a given region, as of t3", expecting to return an image of the initial data with the changes applied at times t1 and t3.
It is not known in advance what proportion of the original points are left unchanged. So for instance 100% of the points might be altered when receiving a new batch of data, or just 0%.
The data are conceptually tuples of (t, geometry), and the main problem is that you cannot create a spatial index on t and geometry, but only on geometry.
Overall, the point is that indexing geographical data that are conceptually "sliced" by another column is apparently not supported.
By the way, in spite of appearances, whether all data are saved as complete snapshots or the changes are saved as incremental deltas is not really the point of the whole matter, and makes no real difference.
Below are my failed attempts to solve the problem. If anyone has good ideas on how to efficiently model the data, just skip the remainder of this post (otherwise I would appreciate it if someone could elaborate on the options I have tried so far).
First unsuccessful attempt - multi-column spatial index
I would save complete "snapshots" of the data at times t1, t3, t4, and the data would be identified by two columns, the geometry and an identifier of the slice of data:
create table geocoded_data (
geom SDO_GEOMETRY,
snapshot_id number(5,0)
);
Of course, geometries require a spatial index to be operated on efficiently, and the obvious choice would have been a two-column index. This is where the idea fails, because a domain index (such as a spatial one) cannot be built on more than one column:
CREATE INDEX my_index ON geocoded_data(snapshot_id, geom)
INDEXTYPE IS mdsys.spatial_index;
SQL Error: ORA-29851: "cannot build a domain index on more than one column"
Second unsuccessful attempt - adding another dimension to the geometries
Another option would be to model the column snapshot_id as a third dimension embedded in the geom field. However, unless this is documented somewhere, one cannot assume that the resulting index will work properly on such a data structure.
After all, the third dimension would be just a marker, with no geometrical significance, so it would potentially hinder the performance of the index.
This option would be conceptually similar to using LRS (Linear Referencing System) measures on points.
And in fact it might be no coincidence that the docs about LRS indexing state:
Do not include the measure dimension in a spatial index, because this causes additional processing overhead and produces no benefit
Third unsuccessful attempt - interval partitioning
A third way to go would be partitioning the data according to the column snapshot_id and creating a local spatial index. In this case, hopefully, partition elimination would ensure only the relevant portion of the spatial index is used, disregarding the data in other snapshots.
The partition would be an "interval partition" (a new partition would be nicely and automatically created upon receiving a new batch of data). However this is what I get when I try to create a spatial index on an interval-partitioned table:
SQL Error: ORA-14762: "Domain index creation on interval partitioned tables is not permitted"
That's true, intended and documented: partitions are generally OK, except that interval partitions are not supported by spatial indexes. Indeed, Using Partitioned Spatial Indexes explicitly says:
Only range partitioning is supported on the underlying table. All other kinds of partitioning are not currently supported for partitioned spatial indexes.
So, I should make do with a range partition. However, I'd rather exclude this option because it would entail some amount of "system" maintenance (creating new partitions manually, or as part of the application logic, which would be awkward).
Ideally I wanted a new partition for each snapshot, and I'd like to have the partition created automatically whenever a new snapshot is introduced.
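For completeness, if I did accept range partitioning, the maintenance could be reduced to a single DDL statement issued by the loading job. A minimal sketch, assuming the python-oracledb driver and a geocoded_data table range-partitioned on snapshot_id with a LOCAL spatial index (connection details and partition names are illustrative):

# Create the range partition for a new snapshot from application code.
import oracledb

def add_snapshot_partition(user, password, dsn, snapshot_id):
    ddl = (
        f"ALTER TABLE geocoded_data "
        f"ADD PARTITION p_snap_{snapshot_id} "
        f"VALUES LESS THAN ({snapshot_id + 1})"
    )
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(ddl)  # the LOCAL spatial index should get a matching index partition

Still, that is exactly the kind of application-level bookkeeping I was hoping to avoid.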
Fourth unsuccessful attempt - representing the data in an incremental fashion
The last option, which is the most CPU intensive, would persist the initial snapshot of the data, while saving the new batches in the form of deltas.
However, as a matter of fact, this wouldn't address the fundamental issue at the heart of the problem (the inability to spatially index geometries discriminated by the content of another column).
For instance, when the application had to reconstruct the content of a given portion of the map up to t2, it would have to retrieve all data that are related to that map portion, up to the deltas belonging to t2.
Unfortunately the spatial index would fetch all deltas in the relevant portion of the map, including those that are irrelevant because they were added later than t2. For instance the index would identify:
point A, OK: part of the initial snapshot
point B, OK: it was added to the same map portion by t2
points C, D, E, F: KO! They were added to the same map portion, but at a later t3, so they will be excluded only by a predicate, not by the index.
On the other hand, even if the index returned only the records we need, this wouldn't be sustainable, because changes would add up over time and the cost of returning the current image after, say, 10 years of changes might be outrageous (to get the right image, one should combine all deltas that were introduced after the initial snapshot).
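To make the delta idea concrete, here is a toy sketch of what the reconstruction would look like in application code (the names and the delta format are invented); the point is that every delta up to the requested time must be replayed, so the cost keeps growing with the history:

# Rebuild the image of the data "as of" a given snapshot from a base snapshot plus ordered deltas.
def reconstruct(base_points, deltas, up_to):
    # base_points: dict point_id -> geometry at the initial snapshot
    # deltas: iterable of (t, op, point_id, geometry) ordered by t, op in {"add", "move", "delete"}
    points = dict(base_points)
    for t, op, point_id, geometry in deltas:
        if t > up_to:
            break
        if op == "delete":
            points.pop(point_id, None)
        else:  # "add" or "move"
            points[point_id] = geometry
    return points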
In a KNN-like algorithm we need to load the model data into a cache for predicting records.
Here is the example for KNN.
So if the model is a large file, say 1 or 2 GB, will we be able to load it into the distributed cache?
Example:
In order to predict one outcome, we need to find the distance between that single record and all the records in the model result, and take the minimum distance. So we need the model result in our hands, and if it is a large file it cannot be loaded into the distributed cache for computing the distances.
One way is to split/partition the model result into several files, perform the distance calculation against all records in each file, and then take the minimum distance and the most frequent class label to predict the outcome.
How can we partition the file and perform the operation on these partitions?
i.e. 1st record <distance> file1, file2, ..., fileN
2nd record <distance> file1, file2, ..., fileN
This is what came to my mind.
Is there any better way?
Any pointers would help me.
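To illustrate the split/compute/merge idea described above, here is a plain-Python sketch (the partition layout and k are assumptions; in Hadoop the per-partition step would map naturally onto a mapper and the merge onto a reducer):

import heapq
import math

def distance(a, b):
    # Euclidean distance between two equal-length feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def partial_topk(query, partition_rows, k):
    # partition_rows: iterable of (features, label) pairs from one model partition.
    # Returns the k nearest (distance, label) pairs within this partition.
    return heapq.nsmallest(k, ((distance(query, f), lbl) for f, lbl in partition_rows))

def predict(query, partitions, k=5):
    # Combine the per-partition candidates, keep the global k nearest, and vote on their labels.
    candidates = []
    for part in partitions:
        candidates.extend(partial_topk(query, part, k))
    labels = [lbl for _, lbl in heapq.nsmallest(k, candidates)]
    return max(set(labels), key=labels.count)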
I think the way you partition the data mainly depends on the data itself.
Given that you have a model with a bunch of rows, and that you want to find the k closest ones to your input data, the trivial solution is to compare them one by one. This can be slow because it means going through 1-2 GB of data millions of times (I assume you have a large number of records that you want to classify, otherwise you don't need Hadoop).
That is why you need to prune your model efficiently (your partitioning) so that you can compare only those rows that are most likely to be the closest. This is a hard problem and requires knowledge of the data you operate on.
Additional tricks that you can use to squeeze out performance are:
Pre-sorting the input data so that the input items that will be compared against the same partition come together. Again, this depends on the data you operate on.
Using random-access indexed files (like Hadoop's MapFiles) to find the data faster and cache it.
In the end it may actually be easier to store your model in a Lucene index, so you can achieve the effect of partitioning by looking up the index. Pre-sorting the data is still helpful there.
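As a concrete (and entirely assumed) example of such pruning: bucket the model rows by a coarse key and pre-sort the inputs by the same key, then compare an input only against its own bucket and its immediate neighbours. Whether that is safe depends on your data and distance function:

# One assumed pruning scheme: a coarse key derived from the first feature.
def partition_key(features, cell_size=10.0):
    return int(features[0] // cell_size)

def candidate_partitions(features, cell_size=10.0):
    # Own cell plus its neighbours; correct only if near neighbours share or
    # border the same coarse cell, which is an assumption about the data.
    key = partition_key(features, cell_size)
    return [key - 1, key, key + 1]

def presort_inputs(records, cell_size=10.0):
    # Group inputs that will hit the same model partitions, so each partition
    # file is loaded (or cached) once per run rather than once per record.
    return sorted(records, key=lambda r: partition_key(r, cell_size))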
I have a very large and very sparse matrix, composed of only 0s and 1s. I then basically handle (row-column) pairs. I have at most 10k pairs per row/column.
My needs are the following:
Parallel insertion of (row-column) pairs
Quick retrieval of an entire row or column
Quick querying the existence of a (row-column) pair
A Ruby client if possible
Are there existing databases adapted for these kind of constraints?
If not, what would get me the best performance:
A SQL database, with a table like this:
row(indexed) | column(indexed) (but the indexes would have to be constantly refreshed)
A NoSQL key-value store, with two tables like this:
row => columns ordered list
column => rows ordered list
(but with parallel insertion of elements to the lists)
Something else
Thanks for your help!
A sparse 0/1 matrix sounds to me like an adjacency matrix, which is used to represent a graph. Based on that, it is possible that you are trying to solve some graph problem and a graph database would suit your needs.
Graph databases, like Neo4J, are very good for fast traversal of the graph, because retrieving the neighbors of a vertex takes O(number of neighbors of that vertex), so it is not related to the number of vertices in the whole graph. Neo4J is also transactional, so parallel insertion is not a problem. You can use the REST API wrapper in MRI Ruby, or a JRuby library for more seamless integration.
On the other hand, if you are trying to analyze the connections in the graph, and it would be enough to do that analysis once in a while and just make the results available, you could try your luck with a framework for graph processing based on Google Pregel. It's a little bit like Map-Reduce, but aimed toward graph processing. There are already several open source implementations of that paper.
However, if a graph database or graph processing framework does not suit your needs, I recommend taking a look at HBase, which is an open-source, column-oriented data store based on Google BigTable. Its data model is in fact very similar to what you described (a sparse matrix), it has row-level transactions, and it does not require you to retrieve the whole row just to check if a certain pair exists. There are some Ruby libraries for that database, but I imagine it would be safer to use JRuby instead of MRI for interacting with it.
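To give an idea of the mapping, here is a sketch with the happybase Python client (table, column family and host names are made up; a cell's mere existence encodes the 1):

import happybase

conn = happybase.Connection('hbase-host')   # placeholder host
table = conn.table('sparse_matrix')         # one HBase row per matrix row

def set_pair(row, col):
    table.put(str(row).encode(), {b'c:' + str(col).encode(): b'1'})

def has_pair(row, col):
    # Fetch just that one cell rather than the whole row.
    return bool(table.row(str(row).encode(), columns=[b'c:' + str(col).encode()]))

def get_row(row):
    # All set columns of one matrix row.
    return [int(qual[2:]) for qual in table.row(str(row).encode())]

Note that retrieving a whole matrix column in this layout would need a scan or a second, transposed table.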
If your matrix is really sparse (i.e. the nodes only have a few interconnections) then you would get reasonably efficient storage from an RDBMS such as Oracle, PostgreSQL or SQL Server. Essentially you would have a table with two fields (row, col) and an index or key each way.
Set up the primary key one way round (depending on whether you mostly query by row or by column) and make another index on the fields the other way round. This will only store data where a connection exists, and it will be proportional to the number of edges in the graph.
The indexes will allow you to efficiently retrieve either a row or column, and will always be in sync.
If you have 10,000 nodes and 10 connections per node, the database will only have 100,000 entries. 100 edges per node gives 1,000,000 entries, and so on. For sparse connectivity this should be fairly efficient.
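A minimal sketch of that two-column layout, using SQLite from Python purely for illustration (any of the RDBMSs above would look the same; for genuinely parallel insertion you would want one of the server databases):

import sqlite3

conn = sqlite3.connect("matrix.db")
conn.executescript("""
    CREATE TABLE IF NOT EXISTS cell (
        row_id INTEGER NOT NULL,
        col_id INTEGER NOT NULL,
        PRIMARY KEY (row_id, col_id)      -- fast row retrieval and pair lookup
    );
    CREATE INDEX IF NOT EXISTS ix_cell_col_row ON cell (col_id, row_id);  -- fast column retrieval
""")

def set_pair(r, c):
    conn.execute("INSERT OR IGNORE INTO cell (row_id, col_id) VALUES (?, ?)", (r, c))

def has_pair(r, c):
    return conn.execute("SELECT 1 FROM cell WHERE row_id = ? AND col_id = ?", (r, c)).fetchone() is not None

def get_row(r):
    return [c for (c,) in conn.execute("SELECT col_id FROM cell WHERE row_id = ?", (r,))]

def get_col(c):
    return [r for (r,) in conn.execute("SELECT row_id FROM cell WHERE col_id = ?", (c,))]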
A back-of-the-envelope estimate
This table will essentially have a row and a column field. If the clustered index goes (row, column, value) then the other covering index would go (column, row, value). If the additions and deletions were random (i.e. not batched by row or column), the I/O would be approximately double that for just the table.
If you batched the inserts by row or column then you would get less I/O on one of the indexes as the records are physically located together in one of the indexes. If the matrix really is sparse then this adjacency list representation is by far the most compact way to store it, which will be much faster than storing it as a 2D array.
A 10,000 x 10,000 matrix with a 64-bit value would take 800 MB plus the row index. Updating one value would require a write of at least 80 KB (writing out the whole row). You could optimise writes by rows if your data can be grouped by rows on insert. If the inserts are real-time and random, then you will write out an 80 KB row for each insert.
In practice, these writes would have some efficiency because they would all be written out in a mostly contiguous area, depending on how your NoSQL platform physically stores its data.
I don't know how sparse your connectivity is, but if each node had an average of 100 connections, then you would have 1,000,000 records. This would be approximately 16 bytes per row (Int4 row, Int4 column, Double value), plus a few bytes of overhead for both the clustered table and the covering index. The structure would take around 32 MB plus a little overhead to store.
Updating a single record on a row or column would cause two single disk block writes (8 KB, in practice a segment) for random access, assuming the inserts aren't row- or column-ordered.
Adding 1 million randomly ordered entries to the array representation would result in approximately 80 GB of writes plus a little overhead. Adding 1 million entries to the adjacency list representation would result in approximately 32 MB of writes (16 GB in practice, because the whole block will be written for each index leaf node), plus a little overhead.
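For reference, the rough arithmetic behind those figures (my reading of it, assuming 8 KB blocks and two block writes per insert, one for the clustered table and one for the covering index):

n = 10_000                        # nodes; the dense matrix is n x n
dense_bytes = n * n * 8           # 64-bit value per cell   -> 800,000,000 bytes ~ 800 MB
row_bytes = n * 8                 # one full matrix row     -> 80,000 bytes ~ 80 KB
edges = n * 100                   # 100 edges per node      -> 1,000,000 entries
adj_bytes = edges * 16 * 2        # 16 bytes each, table + covering index -> ~32 MB
dense_insert_io = edges * row_bytes     # rewriting a row per random insert -> ~80 GB
adj_insert_io = edges * 2 * 8 * 1024    # two 8 KB block writes per insert  -> ~16 GB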
For that level of connectivity (10,000 nodes, 100 edges per node) the adjacency list will be more efficient in storage space, and probably in I/O as well. You will get some optimisation from the platform, so some sort of benchmark might be appropriate to see which is faster in practice.
This is applicable to Google App Engine, but not necessarily constrained to it.
On Google App Engine, the database isn't relational, so no aggregate functions (such as sum, average, etc.) are provided. Each row is independent of the others. To calculate a sum or average, the app simply has to amortize the calculation by updating it on each individual new write to the database, so that it's always up to date.
How would one go about calculating percentiles and a frequency distribution (i.e. density)? I'd like to make a graph of the density of a field of values, and this set of values is probably on the order of millions. It may be feasible to loop through the whole dataset (the limit for each query is 1,000 rows returned) and calculate based on that, but I'd rather use a smarter approach.
Is there some algorithm to calculate or approximate the density/frequency/percentile distribution that can be updated incrementally over time?
By the way, the data is indeterminate in that the maximum and minimum may be all over the place. So the distribution would have to take approximately 95% of the data and compute the density based only on that.
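One incremental approach, sketched here without any GAE specifics: keep a fixed set of histogram buckets as counters that are updated on every write; density and approximate percentiles then come straight from the counts, and the outermost buckets can be dropped to ignore the extreme values mentioned above. The bucket edges are an assumption you would pick once from a sample of the data:

import bisect

class StreamingHistogram:
    # Fixed-bucket histogram that can be kept up to date one value at a time.
    def __init__(self, edges):
        self.edges = sorted(edges)             # bucket boundaries
        self.counts = [0] * (len(edges) + 1)   # one counter per bucket, plus under/overflow

    def add(self, value):
        self.counts[bisect.bisect_right(self.edges, value)] += 1

    def percentile(self, p):
        # Approximate percentile: the bucket edge where the cumulative count crosses p%.
        total = sum(self.counts)
        target = total * p / 100.0
        cumulative = 0
        for i, count in enumerate(self.counts):
            cumulative += count
            if cumulative >= target:
                return self.edges[min(i, len(self.edges) - 1)]
        return self.edges[-1]

On each write you would load (or cache) the counter state, call add(), and save it back; the counts are tiny compared to the raw data.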
Fetching whole entities (with that limit of 1,000 at a time...) over and over again just to get a single number per entity is surely unappealing. So denormalize the data by recording that single number in a separate entity that holds a list of numbers (up to a limit of, I believe, 1 MB per entity, so with 4-byte numbers no more than 250,000 numbers per list).
So when adding a number, also fetch the latest "added data values" list entity (if it's full, make a new one instead), append the new number, and save it. There's probably no need to be transactional if a tiny error in the statistics is no killer, as you appear to imply.
If the data for an item can be changed, have separate entities of the same kind recording the "deleted" data values; to change one item's value from 23 to 45, add 23 to the latest "deleted values" list and 45 to the latest "added values" one -- this covers item deletion as well.
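A sketch of what those list entities might look like with the ndb client (the kind, property names and size cap are assumptions):

from google.appengine.ext import ndb

MAX_VALUES = 100000   # keep each entity comfortably under the ~1 MB entity size limit

class ValueList(ndb.Model):
    added = ndb.BooleanProperty(required=True)    # True for "added values", False for "deleted values"
    values = ndb.FloatProperty(repeated=True)
    created = ndb.DateTimeProperty(auto_now_add=True)

def record(value, added=True):
    latest = (ValueList.query(ValueList.added == added)
              .order(-ValueList.created).get())
    if latest is None or len(latest.values) >= MAX_VALUES:
        latest = ValueList(added=added, values=[])
    latest.values.append(value)
    latest.put()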
It may be feasible to loop through the whole dataset (the limit for each query is 1,000 rows returned) and calculate based on that, but I'd rather use a smarter approach.
This is the most obvious approach to me; why are you trying to avoid it?