Below is the URL that hits the middle layer; the middle layer then builds the Solr query and fires it.
listing?limit=20&q=Chennai~Tambaram~450000~25000000~24
Chennai-->city
Tambaram-->locality
450000~25000000-->price_min and max
24-->lux_amenities
Here, everything except lux_amenities goes through fq, so the problem is the sorting.
I am sorting the results using bscore and relscore like below:
$bscore desc,$relscore desc
The first time it works fine.
The bscore and relscore values change based on lux_amenities, but lux_amenities is part of neither fq nor q. So if, the second time, we change only lux_amenities and fire the query, the results come back in the same order as the first query, even though bscore and relscore are different.
So I disabled the queryResultCache in solrconfig.xml.
Now it works fine, but I need a better solution than disabling the cache for all queries. For example, I want to disable it for only a few queries, not for all of them.
Could someone help me, please?
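One option, rather than disabling the cache globally: Solr builds the queryResultCache key from the main query, the filter list, and the sort, so anything that influences bscore/relscore has to appear in one of those three for the cache to tell the requests apart. A minimal sketch below has the middle layer add the lux_amenities value as an extra filter marked with the {!cache=false} local param, so the cache key changes whenever lux_amenities changes while the filter itself stays out of the filterCache. The field names are taken from the mapping above and are assumptions about your schema; verify the cache-key behaviour against your Solr version.

q=*:*
fq=city:"Chennai"
fq=locality:"Tambaram"
fq=price:[450000 TO 25000000]
fq={!cache=false}lux_amenities:24
sort=$bscore desc,$relscore desc

If filtering on lux_amenities would change the result set in a way you don't want, embedding its value directly in the bscore/relscore expressions (so the sort itself differs per request) may have the same cache-key effect, but the explicit fq is the most predictable variant.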
I have a large dataset to query and display on a website in an array.
I built a pagination system on top of a scroll, but I can only display a maximum of 100 items at a time, so I run into trouble when I want to display page 200 and beyond: I have to scroll all the way up to it, and that takes too long.
I have checked the other parts of my code and didn't find any other performance issue; it's just the scroll queries that make my API call take so long. I tried raising the request size from 100 to 10000, but it doesn't change anything.
I don't think sliced scroll can be a solution, or else I didn't understand the feature.
I'm desperately searching for a way to skip the scroll queries that come before the data I'm looking for, even if it's not a precise method.
Hoping someone has a solution or at least a clue.
Edit:
More details about what I'm trying to achieve.
I log some of my users' actions, such as calls, in Elasticsearch indexes. They perform millions of actions per month, so Elasticsearch seems like a good option to store them, especially since I never have to update them after they are stored.
I'm creating a page where my users can search for actions they've performed, but they build the "query" themselves: they can select the period and many other parameters, order by many parameters, etc. The number of results can be 1 or 100,000 items, but I can't show 100,000 items on my page for UI reasons, so I have to manage pagination and send only part of the result to the page.
For now I do this with a scroll query of size 1000, and I scroll until I reach the current page of my pagination. I tried varying the size, but it's not really conclusive, because I can't know the number of results before the query is made.
And the deeper my users go into the pagination, the longer the query takes.
I could increase index.max_result_window to an unreachably large number (though I don't know what that implies), use a simple query with a from, and keep a second scroll query for the export case, but I wonder: is there a way to skip some steps in a scroll when I know I'm going to take 100 items after the 1,000,000th one?
Edit: I looked at how Google designs its pagination and noticed that if you want to go deep into the search results, you can't do it unless you go step by step. You can't jump directly to the 500th page.
This is how I did mine.
So I just redesigned my pagination to do the same as Google and force my users to use more precise filters to get fewer results. Thank you @Val for getting me to ask the right questions :)
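For this kind of step-by-step pagination, Elasticsearch's search_after is usually a better fit than scroll: each request carries the sort values of the last hit of the previous page, so the server never walks through the skipped pages and there is no scroll context to keep open. A minimal sketch, assuming an index called user-actions, a timestamp field, and a unique action_id field used as tie-breaker (all three names are illustrative):

GET user-actions/_search
{
  "size": 100,
  "sort": [
    { "timestamp": "desc" },
    { "action_id": "asc" }
  ]
}

The next page repeats the exact same request and adds the sort values of the last hit that was returned, for example:

GET user-actions/_search
{
  "size": 100,
  "sort": [
    { "timestamp": "desc" },
    { "action_id": "asc" }
  ],
  "search_after": [1704067200000, "a1b2c3"]
}

This only supports next/previous-style navigation, not jumping straight to page 500, which matches the Google-style design described above.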
I have a simple task:
Get all the items from an Elasticsearch index with status 'paid' or 'done' for the last week.
What I tried is this:
GET /my_index/_search?q=((status:paid or status:done) and (created_at > "now-7d/d"))
The interesting part is, if I do
GET /my_index/_search?q=((status:paid or status:done)
I get around 4k results, but if I run the whole query, I get 600k. It appears that once I add the second part, something stops working properly.
I have tested the query in the Discover tab of Kibana and it works properly there, but for some reason it does not through the API. Any help will be appreciated.
PS: I cannot put the query in the request body, as there are additional aggregation filters there that I, at least, haven't found a way to combine with the filters above.
You're on the right track but you have three tiny syntax mistakes that make the query not work as intended.
Change (created_at > "now-7d/d") to (created_at:>"now-7d/d"); the query string syntax needs the colon directly before the range operator.
Change the and into AND. Currently (x and y) is parsed as x OR and OR y (the lowercase and is treated as an ordinary term and the default operator is OR), which is why you're getting so many results.
Change the or into OR. Same idea: the lowercase or is treated as a term and produces false matches.
To summarize, change your query to this:
GET /my_index/_search?q=((status:paid OR status:done) AND (created_at:>"now-7d/d"))
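If the greater-than form gives you trouble, the same date filter can also be written with the query string's bracket range syntax, which is equivalent; the backslash below escapes the / in the date math, since / is a reserved character in query_string syntax (and the whole q value may still need URL-encoding depending on your client):

GET /my_index/_search?q=((status:paid OR status:done) AND created_at:[now-7d\/d TO *])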
I use Oracle BI 12c. We had to create an analysis. Let's say we need to show all the clients with an account code starting with '202%', and at the same time check whether that client also has an account_code starting with '226%'. If they do, show them; otherwise, don't.
It may sound simple, but none of the filter options I tried gave the wanted output. I also tried selection steps, but didn't succeed either. I could have done it using FILTER in the column formula, but since the column is not a measure attribute it throws the error 'Filter function requires at least one measure attribute in its first argument'.
Has somebody had this kind of issue?
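One workaround that sometimes gets past that error is to wrap the attribute in an aggregate, so that FILTER's first argument really is a measure, and then require the result to be greater than zero. A rough sketch of such a column formula, assuming a presentation column "Clients"."Account Code" (the names are made up and the exact syntax may need adjusting for your subject area):

FILTER(COUNT("Clients"."Account Code") USING "Clients"."Account Code" LIKE '226%')

An analysis filter of 'this expression > 0', combined with the existing "Clients"."Account Code" LIKE '202%' condition, would then keep only the clients that have both kinds of account codes; whether the aggregation grain matches what you need depends on how the account codes are modelled.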
I was hoping to create a sort of 'time trigger' using RethinkDB changefeeds:
return r.
	Table("Checks").
	Filter(r.Row.Field("ScheduledFor").Le(r.Now())).
	Changes(r.ChangesOpts{
		IncludeInitial: true,
	}).Run(db)
However, while it picks up things that initially fulfill the Filter predicate, it does not appear to pick up records where ScheduledFor goes from being in the future to being in the past.
I.e., r.Now() appears to be evaluated when the query is received by the server, and never again.
Is there any way to make the Now() term dynamically evaluated? Or should I just do a table scan?
Currently r.now always evaluates to the time the server received the query. It's probably best to repeatedly do a table scan for any documents scheduled between the last table scan and the current time.
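A rough sketch of that polling approach in Go with the gorethink driver the question uses (the interval, import path, and session handling are illustrative and will differ per setup):

package main

import (
	"log"
	"time"

	r "gopkg.in/gorethink/gorethink.v4"
)

// pollDue periodically scans the Checks table for documents whose
// ScheduledFor timestamp fell between the previous scan and now.
func pollDue(session *r.Session) {
	lastScan := time.Now()
	for range time.Tick(5 * time.Second) {
		now := time.Now()
		cursor, err := r.Table("Checks").
			Filter(r.Row.Field("ScheduledFor").Ge(lastScan).
				And(r.Row.Field("ScheduledFor").Lt(now))).
			Run(session)
		if err != nil {
			log.Println("scan failed:", err)
			continue
		}
		var due []map[string]interface{}
		if err := cursor.All(&due); err != nil {
			log.Println("cursor error:", err)
		}
		cursor.Close()
		for _, doc := range due {
			// Handle the newly-due check here.
			log.Println("due:", doc["id"])
		}
		lastScan = now
	}
}

If ScheduledFor has a secondary index, the Filter can be replaced with Between(lastScan, now, r.BetweenOpts{Index: "ScheduledFor"}) so each tick reads only the relevant range instead of scanning the whole table.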
After one of the managers performs a product evaluation, others can change the scoring for certain categories. These changes in scoring are stored in the database for reference.
The structure of the evaluation is like this:
Evaluation
- Category
  - Scoring point
An evaluation can have many categories, each of which can have many scoring points.
My problem is the following:
If I change a scoring point a few times, everything is entered in the database, but in the report I only see the first scoring point. The rest of them with the same name are left blank but still take up space, just as they would if all were visible. The stored procedure that delivers the data is working fine; it brings all the data to the report, which then displays it wrong.
=Fields.CategoryName is working fine... every category name is displayed correctly
=Fields.ScoringPointName is not working... it displays only the first one and leaves all the rest blank... if, for example, a scoring point name is Product robustness, it displays only the first change of scoring and not the rest.
Any ideas?
Found out what the problem was. Maybe it will be helpful for other people.
I was showing the data in a group header section grouped by =Fields.DefinitionText, so it only repeats when Fields.DefinitionText is distinct. The empty space is caused by the detail section, which repeats for every data record. So, to display all of the data records, I had to move the group header section's textboxes into the report's detail section.
Cheers