I have been struggling to build a faceted search in MongoDB with the C# driver. I have gone through many tutorials but haven't found a suitable solution.
My documents/collection are as follows:
db.products.insert([
{"product_name": "Product 1", "year":2014,"Manufacturer":"manufacturer1"},
{"product_name": "Product 2", "year":2015,"Manufacturer":"manufacturer2"},
{"product_name": "Product 3", "year":2014,"Manufacturer":"manufacturer1"},
{"product_name": "Product 4", "year":2015,"Manufacturer":"manufacturer2"},
{"product_name": "Product 5", "year":2014,"Manufacturer":"manufacturer1"}
])
I want the output to look like this:
Year:
2014 : 3
2015 : 2
Manufacturer:
manufacturer1 : 3
manufacturer2 : 2
Could anyone please help me solve the above problem using the C# driver?
Using the mongodb shell, this can be done with $group in two separate queries:
db.products.aggregate([{$group:{_id:"$year",count:{$sum:1}}}])
db.products.aggregate([{$group:{_id:"$Manufacturer",count:{$sum:1}}}])
You can put multiple pipelines inside a $facet stage, but remember that you cannot pass the output of one sub-pipeline to another.
Each sub-pipeline within $facet is passed the exact same set of input documents. These sub-pipelines are completely independent of one another and the document array output by each is stored in separate fields in the output document. The output of one sub-pipeline can not be used as the input for a different sub-pipeline within the same $facet stage. If further aggregations are required, add additional stages after $facet and specify the field name of the desired sub-pipeline output.
You can try out the following query to get the desired result.
db.products.aggregate([
  {
    $facet : {
      year : [
        { $group : { _id : '$year', count : { $sum : 1 } } }
      ],
      manufacturer : [
        { $group : { _id : '$Manufacturer', count : { $sum : 1 } } }
      ]
    }
  }
])
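Since the question asks specifically for the C# driver, here is a minimal sketch of the same $facet pipeline built with raw BsonDocument stages. The connection string, database name ("test") and collection name ("products") are assumptions for illustration, and this is only one way to express it.

using System;
using MongoDB.Bson;
using MongoDB.Driver;

class FacetExample
{
    static void Main()
    {
        // Connection details are placeholders; adjust to your environment.
        var client = new MongoClient("mongodb://localhost:27017");
        var products = client.GetDatabase("test").GetCollection<BsonDocument>("products");

        // One $facet stage with two independent $group sub-pipelines,
        // mirroring the shell query above.
        var facetStage = new BsonDocument("$facet", new BsonDocument
        {
            { "year", new BsonArray
                {
                    new BsonDocument("$group", new BsonDocument
                    {
                        { "_id", "$year" },
                        { "count", new BsonDocument("$sum", 1) }
                    })
                }
            },
            { "manufacturer", new BsonArray
                {
                    new BsonDocument("$group", new BsonDocument
                    {
                        { "_id", "$Manufacturer" },
                        { "count", new BsonDocument("$sum", 1) }
                    })
                }
            }
        });

        // The pipeline returns a single document whose "year" and "manufacturer"
        // fields are arrays of { _id, count } documents.
        PipelineDefinition<BsonDocument, BsonDocument> pipeline = new[] { facetStage };
        var result = products.Aggregate(pipeline).First();
        Console.WriteLine(result.ToJson());
    }
}

The result["year"] and result["manufacturer"] arrays map directly onto the desired "Year" and "Manufacturer" counts. Newer driver versions also expose a typed Facet() helper on the fluent aggregation API, but the BsonDocument form above mirrors the shell pipeline most closely.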
Related
I am using Elasticsearch to get relevant blog articles from a database of articles. I want results that contain particular words to be given a higher score than results that do not have them.
I have tried adding stop words and giving more weight to other fields, but the results are not quite as expected. I am using the developer console of the Kibana interface to Elasticsearch.
"""
GET blog-desc/_search
{
"query": {
"more_like_this" : {
"fields" : ["Meta description","Title^5",
"Short title^0.5"],
"like" : "Harry had a silver wand he likes to play with! Among his friends he has the most expensive one. The only difference between his wand and his sister's is that in the color",
"min_term_freq" : 1,
"max_query_terms" : 12,
"minimum_should_match": "30%",
"stop_words": ["difference", "play", "among"]
, "boost_terms": 1
}
}
}
"""
In the sample code above, I would want search results that contain the word "silver" to be given a higher score than other articles that do not have that word.
I have a production_order document type, for example:
{
part_number: "abc123",
start_date: "2018-01-20"
},
{
part_number: "1234",
start_date: "2018-04-16"
}
I want to create a commodity document type, for example:
{
part_number: "abc123",
commodity: "1 meter machining"
},
{
part_number: "1234",
commodity: "small flat & form"
}
Production orders are data-warehoused every week and are immutable.
Commodities, on the other hand, could change over time, e.g. abc123 could change from "1 meter machining" to "5 meter machining", so I don't want to store this data with the production_order records.
If a user searches for "small flat & form" in the commodity document type, I want to pull all matching records from the production_order document type, the match being between part number.
Obviously I can do this in a relational database with a join. Is it possible to do the same in elasticsearch?
If it helps, we have about 500k part numbers that will be commoditized and our production order data warehouse currently holds 20 million records.
I have found that you can indeed now query across indexes in Elasticsearch; however, you have to ensure your data is stored correctly. Here is an example from the Elasticsearch 6.1 docs:
Terms lookup twitter example
At first we index the information for user with id 2, specifically, its followers, then index a tweet from user with id 1. Finally we search on all the tweets that match the followers of user 2.
PUT /users/user/2
{
"followers" : ["1", "3"]
}
PUT /tweets/tweet/1
{
"user" : "1"
}
GET /tweets/_search
{
"query" : {
"terms" : {
"user" : {
"index" : "users",
"type" : "user",
"id" : "2",
"path" : "followers"
}
}
}
}
Here is the link to the original page
https://www.elastic.co/guide/en/elasticsearch/reference/6.1/query-dsl-terms-query.html
In my case above, I need to set up my storage so that commodity is a field and its values are an array of part numbers.
For example:
{
"1 meter machining": ["abc1234", "1234"]
}
I can then look up the "1 meter machining" part numbers against my production_order documents.
I have tested this and it works.
There are no joins supported in Elasticsearch.
You can query twice: first get all the part numbers matching "small flat & form", then use those part numbers to query the other index, as sketched below.
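A rough sketch of that two-query approach from C#, using plain HTTP requests against Elasticsearch; the cluster URL, the index names (commodities, production_orders), the field names and the result size are all assumptions for illustration:

using System;
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class TwoStepLookup
{
    static async Task Main()
    {
        var http = new HttpClient { BaseAddress = new Uri("http://localhost:9200") };

        // Step 1: find the part numbers whose commodity matches the search text.
        var commodityQuery = @"{
            ""size"": 10000,
            ""_source"": [""part_number""],
            ""query"": { ""match"": { ""commodity"": ""small flat & form"" } }
        }";
        var resp1 = await http.PostAsync("/commodities/_search",
            new StringContent(commodityQuery, Encoding.UTF8, "application/json"));
        using var hits = JsonDocument.Parse(await resp1.Content.ReadAsStringAsync());
        var partNumbers = hits.RootElement
            .GetProperty("hits").GetProperty("hits").EnumerateArray()
            .Select(h => h.GetProperty("_source").GetProperty("part_number").GetString())
            .ToArray();

        // Step 2: feed those part numbers into a terms query on the production orders.
        var orderQuery = JsonSerializer.Serialize(new
        {
            query = new { terms = new { part_number = partNumbers } }
        });
        var resp2 = await http.PostAsync("/production_orders/_search",
            new StringContent(orderQuery, Encoding.UTF8, "application/json"));
        Console.WriteLine(await resp2.Content.ReadAsStringAsync());
    }
}

Keep in mind that a broad commodity could match many part numbers, so the terms list in the second query may get large; that is one reason the terms-lookup approach above or a merged index may scale better.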
Otherwise, try to find a way to merge these into a single index; that would be better. If you combine the two, updating the commodities would not cause you any problems.
I am very new to Mongo, but I have SQL experience, so I am trying to wrap my head around this concept. I am attempting to remove a whole document based on a value inside a subdocument.
The document/row looks close to the following:
{
"_id" : ObjectId("5a7e04e3809303035bf6437a"),
"receivedTime" : ISODate("2018-02-09T20:30:27.118Z"),
"status" : "NORMALIZED",
"originalHeaders" : {
"name" : "My Alert Name",
"description" : null,
"version" : 0,
"severity" : 3
},
"partOfIncident" : false
}
I want to remove all documents that have the name = "My Alert Name". I have been trying something like the following by calling it from a bash script. This is the command after variable substitution has been performed:
++ mongo admin -u admin -p password --eval 'db.getSiblingDB("database_name").collection.deleteMany({originalHeaders: {name: "I ALERT EVERYTHING"} })'
After calling it, nothing is removed. Any pointers on how to accomplish my end goal would be greatly appreciated. I suppose it is possible to run a find, save all of the matching _id values, and then run the deletions, but that sounds terribly inefficient.
When accessing a nested field you need to use dot notation.
db.collection_name.deleteMany( { "originalHeaders.name" : "My Alert Name" } )
This will delete all documents where originalHeaders.name equals "My Alert Name".
If I have a document in elasticsearch that looks like the following:
{
"_id" : 1
"sentences" : [
"The cat lives in Chicago",
"The dog lives in Milan",
"The pig lives in Mexico"
]
}
How can I perform a search/query that will only match if all conditions are met in the same sentence?
For example, if I search sentences:(+Chicago +cat) I would get a match, but if I search sentences:(+Mexico +dog) I want to get no match.
I'm implementing a grouped search in Solr. I'm looking for a way of summing one field and sorting the results by this sum. I hope the following data example makes it clearer.
[
{
"id" : 1,
"parent_id" : 22,
"valueToBeSummed": 3
},
{
"id" : 2,
"parent_id" : 22,
"valueToBeSummed": 1
},
{
"id" : 3,
"parent_id" : 33,
"valueToBeSummed": 1
},
{
"id" : 4,
"parent_id" : 5,
"valueToBeSummed": 21
}
]
If the search is made over this data I'd like to obtain
[
{
"numFound": 1,
"summedValue" : 21,
"parent_id" : 5
},
{
"numFound": 2,
"summedValue" : 4,
"parent_id" : 22
},
{
"numFound": 1,
"summedValue" : 1,
"parent_id" : 33
}
]
Do you have any advice on this?
Solr 5.1+ introduces Solr facet functions (with more of the syntax arriving in 5.3) to solve this exact issue.
From Yonik's introduction of the feature:
$ curl http://localhost:8983/solr/query -d 'q=*:*&
json.facet={
categories:{
type : terms,
field : cat,
sort : "x desc", // can also use sort:{x:desc}
facet:{
x : "avg(price)",
y : "sum(price)"
}
}
}
'
So the suggestion would be to upgrade to the newest version of Solr (the most recent version is currently 5.2.1; be advised that some of the syntax shown at the above link will land in 5.3, the current release target).
So you want to group your results on the field parent_id, sum up the field valueToBeSummed inside each group, and then sort the entire result set (the groups) by this new summed value. That is a very interesting use case...
Unfortunately, I don't think there is a built-in way of doing what you have asked.
There are function queries which you can use to sort, and there is a group.func parameter as well, but they will not do what you have asked.
Have you already indexed this data, or are you still in the process of charting out how to store it? If it's the latter, then one possible way would be to have a summedvalue field in each document and calculate it as and when a document gets indexed. For example, given the sample documents in your question, the first document would be indexed as
{
"id" : 1,
"parent_id" : 22,
"valueToBeSummed": 3
"summedvalue": 3
"timestamp": current-timestamp
},
Before indexing the second document (id:2 with parent_id:22), you would run a Solr query to get the last indexed document with parent_id:22:
Solr Query q=parent_id:22&sort=timestamp desc&rows=1
and add the summedvalue of id:1 to the valueToBeSummed of id:2.
So the next document will be indexed as
{
"id" : 2,
"parent_id" : 22,
"valueToBeSummed": 1
"summedvalue": 4
"timestamp": current-timestamp
}
and so on.
Once you have documents indexed this way, you can run a regular Solr query with &group=true&group.field=parent_id&sort=summedvalue desc.
Please do let us know how you decide to implement it. Like I said its a very interesting use case! :)
You can use the query below:
select?q=*:*&stats=true&stats.field={!tag=piv1 sum=true}valueToBeSummed&facet=true&facet.pivot={!stats=piv1 facet.sort=index}parent_id&wt=json&indent=true
You need to use the Stats Component for this requirement; see the Solr Stats Component documentation for more information. The idea is to first define what you need stats on. Here it is valueToBeSummed, and then we need to group on parent_id. We use facet.pivot for this functionality.
Regarding sort: when we do grouping, the default sort order is based on the count in each group. We can also sort based on a value. I have done this above using facet.sort=index, so it is sorted on parent_id, which is the field we used for grouping. But your requirement is to sort on the summed valueToBeSummed, which is different from the grouping attribute.
As of now I am not sure if we can achieve that, but I will look into it and let you know.
In short, you have the grouping and the sum above; only the sort is pending.