I have a requirement in which I just need a single row to be returned while querying a table in DynamoDB.
I can see a parameter in aws-cli named 'max-items' which apparently limits the result size of the query. Here is the sample query:
aws dynamodb query --table-name testTable \
    --key-condition-expression "CompositePartitionKey = :pk" \
    --expression-attribute-values '{ ":pk": { "S": "1234_125" }, ":ps": { "S": "SOME_STATE" } }' \
    --filter-expression 'StateAttribute IN (:ps) AND attribute_not_exists(AnotherAttribute)' \
    --index-name GSI_PK_SK --endpoint-url http://localhost:8000 --max-items 1
But I am not able to figure out any similar keyword/attribute in Go.
The most relevant thing I could find is this question: How to set limit of matching items returned by DynamoDB using Java?
As you can see in DynamoDB's official description of the Query operation - https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html - the parameter you are looking for is Limit. Please consult your Go library's documentation on how exactly to pass this parameter to the query.
By the way, note that Limit doesn't quite limit the number of returned results, but rather the number of items read on the server side. If your query has a filter, it can return fewer than Limit results. I don't know whether this matters to you or not.
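For example, with the AWS SDK for Go (v1) this is the Limit field on dynamodb.QueryInput. A minimal sketch, assuming the same table, index, and local endpoint as the CLI example above (the region value is arbitrary for local DynamoDB):

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	// Session pointed at the same local endpoint as the CLI example.
	sess := session.Must(session.NewSession(&aws.Config{
		Region:   aws.String("us-east-1"),
		Endpoint: aws.String("http://localhost:8000"),
	}))
	svc := dynamodb.New(sess)

	out, err := svc.Query(&dynamodb.QueryInput{
		TableName:              aws.String("testTable"),
		IndexName:              aws.String("GSI_PK_SK"),
		KeyConditionExpression: aws.String("CompositePartitionKey = :pk"),
		ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
			":pk": {S: aws.String("1234_125")},
		},
		// Limit caps how many items the server reads, not how many
		// survive a filter expression (see the caveat above).
		Limit: aws.Int64(1),
	})
	if err != nil {
		fmt.Println("query failed:", err)
		return
	}
	fmt.Println(out.Items)
}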
You might want to look into pagination. You can use page-size to control how much you get in each request.
More details on pagination can be found in Paginating Table Query Results.
You need to paginate through DynamoDB's responses until your "user limit" is fulfilled, as in the sketch below.
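A hedged sketch of that loop with the Go SDK (v1): QueryPages keeps issuing requests with the returned LastEvaluatedKey until the callback returns false. The package name, the function name queryUpTo, and the want parameter are placeholders for illustration:

package dynamodbutil

import (
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

// queryUpTo collects items from successive Query pages until `want` items
// have passed the server-side filter, then stops paginating.
func queryUpTo(svc *dynamodb.DynamoDB, input *dynamodb.QueryInput, want int) ([]map[string]*dynamodb.AttributeValue, error) {
	var collected []map[string]*dynamodb.AttributeValue
	err := svc.QueryPages(input, func(page *dynamodb.QueryOutput, lastPage bool) bool {
		collected = append(collected, page.Items...)
		return len(collected) < want // returning false stops pagination
	})
	if len(collected) > want {
		collected = collected[:want] // trim overshoot from the final page
	}
	return collected, err
}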
I have a few records in my Elasticsearch collection and I want to do a group-by aggregation through the Elasticsearch query string. I want to know if it is possible, because my attempts to google it haven't turned up an answer.
I want to use something like the following in the query string, which can give me the records in each group.
For example:
http://localhost:9200/_all/tweets/_count?q=user:Pu*+user:Kim*
This gives me the count of all records whose user name starts with Pu or Kim, but I want to know how many records have a name starting with Pu, and how many have a name starting with Kim.
Aggregations need to be specified in the body of the search request, in addition to the query; you cannot specify them as part of a query string query.
You could also just execute two count queries, one per prefix, to cover this particular requirement.
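A sketch of the aggregation approach, using a filters aggregation containing two prefix filters (the index/type come from the URL in the question; the aggregation name users_by_prefix is a placeholder, and the filters aggregation requires Elasticsearch 1.4+):

POST /_all/tweets/_search
{
  "size": 0,
  "aggs": {
    "users_by_prefix": {
      "filters": {
        "filters": {
          "pu": { "prefix": { "user": "Pu" } },
          "kim": { "prefix": { "user": "Kim" } }
        }
      }
    }
  }
}

The response carries one bucket per named filter, each with its own doc_count. Note that if the user field is analyzed, the indexed terms are usually lowercased, so the prefixes may need to be pu and kim.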
I'm working on a simple side project, and have a tech stack that involves both a SQL database and ElasticSearch. I only have ElasticSearch because I assumed that as my project grows, my full text searching would be most efficiently performed by ES. My ES schema is very simple - documents that I insert into ES have 2 fields, one being the id and the other being the field with the body of text to search. The id being inserted into ES corresponds to that document's primary key id from the SQL database.
insert record into SQL -> insert record into ES using PK from SQL
Searching would be the reverse of that. Query ES and grab all the matching ids, and then turn around and use those ids to get records from SQL.
search ES can get all PK ids -> use those ids to get documents from SQL
The problem that I am facing though, is that ES can only return documents in a paginated manner. This is a problem because I also have a WHERE clause on my SQL query, beyond just the ids. My SQL query might look like this ...
SELECT * FROM foo WHERE id IN (1,2,3,4,5) AND bar != 'baz'
Well, with ES paginating the results, my WHERE clause will always only be querying a subset of the full results from ES. Even if I utilize ES' skip and take, I'm still only querying SQL using a subset of document ids.
Is there a way to get ElasticSearch to return only the document ids, but for the entire list of matches? I realize this restriction exists to keep me from shooting myself in the foot, because doing this across all shards and many, many documents is not efficient. Is there no way, though?
After putting in some hours on this project, I've only now realized that I've poorly engineered this, unless I can get all of these ids from ES. Some alternative implementations that I've thought of would be to store the things that I'm filtering on, in SQL, in ES as well. A problem there is that I'd have to update the ES document every time I update the document in SQL. This would require a pretty big rewrite to some of my data access code. I could scrap ElasticSearch all together and just perform searching in Postgres, for now, until I can think of a better way to structure this.
Elasticsearch does not support returning every document that matches your query in a single response, because that would overload the system. Instead, use the scroll concept in Elasticsearch; it is like the cursor concept in databases:
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/scan-scroll.html
For more examples, refer to this GitHub repo: https://github.com/sidharthancr/elasticsearch-java-client
Hope it helps.
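A rough sketch of one scroll round trip (the index name my_index, the 1m keep-alive, and the page size are assumptions; the linked guide describes the older scan variant of the same idea):

POST /my_index/_search?scroll=1m
{
  "size": 1000,
  "_source": false,
  "query": { "match_all": {} }
}

POST /_search/scroll
{
  "scroll": "1m",
  "scroll_id": "<_scroll_id from the previous response>"
}

Each response returns a fresh _scroll_id; keep issuing the second request until it comes back with no hits, collecting the document ids as you go. (The JSON form of the follow-up request is for Elasticsearch 2.0+; older versions pass the scroll_id as a query parameter.)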
Please have a look at the Elasticsearch documentation, where you can specify that only particular fields are returned from the matched documents:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-fields.html
Hope this resolves your problem:
{
  "fields" : ["user", "postDate"],
  "query" : {
    "term" : { "user" : "kimchy" }
  }
}
Is it possible to query for a distinct/unique count of a field using Kibana? I am using Elasticsearch as my backend to Kibana.
If so, what is the syntax of the query? Here's a link to the Kibana interface where I would like to make my query: http://demo.kibana.org/#/dashboard
I am parsing nginx access logs with logstash and storing the data into elastic search. Then, I use Kibana to run queries and visualize my data in charts. Specifically, I want to know the count of unique IP addresses for a specific time frame using Kibana.
For Kibana 4, go to this answer.
This is easy to do with a terms panel:
If you want the count of distinct IPs that are in your logs, specify clientip as the field, put a big enough number in length (otherwise it will join different IPs under the same group), and pick table as the style. After adding the panel, you will have a table with each IP and the count for that IP.
Now Kibana 4 allows you to use aggregations. Apart from building a panel like the one explained in this answer for Kibana 3, now we can see the number of unique IPs in different periods, which was (IMO) what the OP wanted in the first place.
To build a dashboard like this you should go to Visualize -> Select your Index -> Select a Vertical Bar chart and then in the visualize panel:
In the Y axis we want the unique count of IPs (select the field where you stored the IP) and in the X axis we want a date histogram with our timefield.
After pressing the Apply button, we should have a graph that shows the unique count of IP distributed on time. We can change the time interval on the X axis to see the unique IPs hourly/daily...
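For reference, the raw query behind such a visualization is a date_histogram with a cardinality sub-aggregation, roughly like this (the @timestamp and clientip field names are assumptions from the question, the aggregation names are placeholders, and newer Elasticsearch versions spell the interval parameter calendar_interval):

{
  "size": 0,
  "aggs": {
    "per_hour": {
      "date_histogram": { "field": "@timestamp", "interval": "1h" },
      "aggs": {
        "unique_ips": { "cardinality": { "field": "clientip" } }
      }
    }
  }
}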
Just take into account that the unique counts are approximate. For more information check also this answer.
Be aware that with Unique Count you are using the cardinality metric, which does not always guarantee an exact unique count. :-)
The cardinality metric is an approximate algorithm. It is based on the HyperLogLog++ (HLL) algorithm. HLL works by hashing your input and using the bits from the hash to make probabilistic estimations on the cardinality.
Depending on the amount of data, I have seen differences of 700+ entries missing from a 300k dataset via Unique Count in Elasticsearch, even though they are genuinely unique.
Read more here: https://www.elastic.co/guide/en/elasticsearch/guide/current/cardinality.html
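If the approximation bites, the cardinality aggregation accepts a precision_threshold parameter (maximum 40000): below that many distinct values, counts are close to exact, at the cost of more memory. A sketch, assuming the clientip field from the question:

{
  "size": 0,
  "aggs": {
    "unique_ips": {
      "cardinality": {
        "field": "clientip",
        "precision_threshold": 40000
      }
    }
  }
}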
Create "topN" query on "clientip" and then histogram with count on "clientip" and set "topN" query as source. Then you will see count of different ips per time.
Unique counts of field values are achieved by using facets. See ES documentation for the full story, but the gist is that you will create a query and then ask ES to prepare facets on the results for counting values found in fields. It's up to you to customize the fields used and even describe how you want the values returned. The most basic of facet types is just to group by terms, which would be like an IP address above. You can get pretty complex with these, even requiring a query within your facet!
{
  "query": {
    "match_all": {}
  },
  "facets": {
    "ip_addresses": {
      "terms": {
        "field": "ip_address"
      }
    }
  }
}
Using aggs you can easily do that. Here is the query:
GET index/_search
{
  "size": 0,
  "aggs": {
    "source": {
      "terms": {
        "field": "field",
        "size": 100000
      }
    }
  }
}
This returns the distinct values of the field along with their doc counts.
For Kibana 7.x, Unique Count is available in most visualizations: for example in Lens, in aggregation-based visualizations, and even in TSVB (which supports normal fields as well as Runtime Fields; Scripted Fields are not supported).
We're running Solr 3.6 and are trying to apply a conditional sort on the result set. To clarify, the data is a set of bids, and we want to add the option to sort by the current user's bid, so it can't function as a regular sort (as the bid will be different for each user that runs the query).
The documents in the result set include a "CurrentUserId" and "CurrentBid" field, so I think we need something like the following to sort:
sort=((CurrentUserId = 12345) ? CurrentBid : 0) desc
This is just pseudocode, but the idea is that if the currentUserId in Solr matches the user Id (12345 in this example), then sort by CurrentBid, otherwise, just use 0.
It seems like doing a sort by query might be the way to go with achieving this (or at least form part of the solution), using something like the following query:
http://localhost:8080/solr/select/?q=*:*&sort=query(CurrentUserId:10330 AND CurrentBid:[1 TO *])+desc
This doesn't seem to be working for me though, and results in the following error:
sort param could not be parsed as a query, and is not a field that exists in the index: ...
The Solr documentation indicates that the query function can be used as a sort parameter from Solr 1.4 onwards, so this seems like it should work.
Any advice on how to go about achieving this would be greatly appreciated.
According to the Solr Documentation link you provided,
Any type of subquery is supported through either parameter dereferencing $otherparam or direct specification of the query string in the LocalParams via "v".
So based on the examples and your query, I think one or both of the following should work:
http://localhost:8080/solr/select/?q=*:*&sort=query($qq)+desc&qq=(CurrentUserId:10330 AND CurrentBid:[1 TO *])
http://localhost:8080/solr/select/?q=*:*&sort=query({!v='CurrentUserId:10330 AND CurrentBid:[1 TO *]'})+desc
In MySQL I can do something like:
SELECT id FROM table WHERE field = 'foo' LIMIT 5
If the table has 10,000 rows, then this query is way way faster than if I left out the LIMIT part.
In ElasticSearch, I've got the following:
{
  "query": {
    "fuzzy_like_this_field": {
      "body": {
        "like_text": "REALLY LONG (snip) TEXT HERE",
        "max_query_terms": 1,
        "min_similarity": 0.95,
        "ignore_tf": true
      }
    }
  }
}
When I run this search, it takes a few seconds, whereas mysql can return results for the same query in far, far less time.
If I pass in the size parameter (set to 1), it successfully only returns 1 result, but the query itself isn't any faster than if I had set the size to unlimited and returned all the results. I suspect the query is being run in its entirety and only 1 result is being returned after the query is done processing. This means the "size" attribute is useless for my purposes.
Is there any way to have my search stop searching as soon as it finds a single record that matches the fuzzy search, rather than processing every record in the index before returning a response? Am I misunderstanding something more fundamental about this?
Thanks in advance.
You are correct, the query is being run in its entirety. Queries by default return data sorted by score, so your query is going to score each document. The docs state that the fuzzy query isn't going to scale well, so you might want to consider other queries.
A limit filter might give you behavior similar to what you're looking for:
A limit filter limits the number of documents (per shard) to execute on.
To replicate MySQL's field = 'foo', try using a term filter. You should use filters when you don't care about scoring; they are faster and cacheable. See the sketch below.
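A sketch combining both suggestions in the pre-2.0 filtered-query syntax that matches this era of Elasticsearch (the body field comes from the question; the limit value of 1 is an assumption, and remember it applies per shard):

{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "and": [
          { "term": { "body": "foo" } },
          { "limit": { "value": 1 } }
        ]
      }
    }
  }
}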