I have ~7 million docs in a bucket and I am struggling to write the right query/index combination to keep it from running for more than 5 seconds.
Here is a similar scenario to the one I am trying to solve:
I have multiple coffee shops, each making coffee with different container/lid combos. The field keys also differ between document types. As each sale is generated, I keep track of these combos.
Here are a few example docs:
[{
"shopId": "x001",
"date": "2022-01-01T08:49:00Z",
"cappuccinoContainerId": "a001",
"cappuccinoLidId": "b001"
},
{
"shopId": "x001",
"date": "2022-01-02T08:49:00Z",
"latteContainerId": "a002",
"latteLidId": "b002"
},
{
"shopId": "x001",
"date": "2022-01-02T08:49:00Z",
"espressoContainerId": "a003",
"espressoLidId": "b003"
},
{
"shopId": "x002",
"date": "2022-01-01T08:49:00Z",
"cappuccinoContainerId": "a001",
"cappuccinoLidId": "b001"
},
{
"shopId": "x002",
"date": "2022-01-02T08:49:00Z",
"latteContainerId": "a002",
"latteLidId": "b002"
},
{
"shopId": "x002",
"date": "2022-01-02T08:49:00Z",
"espressoContainerId": "a003",
"espressoLidId": "b003"
}]
What I need to get out of the query is the following:
[{
"shopId": "x001",
"day": "2022-01-01",
"uniqueContainersLidsCombined": 2
},
{
"shopId": "x001",
"day": "2022-01-01",
"uniqueContainersLidsCombined": 4
},
{
"shopId": "x002",
"day": "2022-01-01",
"uniqueContainersLidsCombined": 2
},
{
"shopId": "x002",
"day": "2022-01-01",
"uniqueContainersLidsCombined": 4
}]
I.e. I want the total number of unique containers and lids combined, per shop and per day.
I have tried using composite, adaptive and FTS indexes, but I am unable to figure this one out.
Does anybody have a different suggestion? Can someone please help?
CREATE INDEX ix1 ON default(shopId, DATE_FORMAT_STR(date,"1111-11-11"), [cappuccinoContainerId, cappuccinoLidId]);
If you are using EE and shopId is immutable, add PARTITION BY HASH(shopId) to the above index definition (with a higher number of partitions).
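For example, a partitioned variant could look like the following (the index name and partition count here are just illustrative):
CREATE INDEX ix1p ON default(shopId, DATE_FORMAT_STR(date,"1111-11-11"), [cappuccinoContainerId, cappuccinoLidId])
PARTITION BY HASH(shopId)
WITH {"num_partition": 16};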
SELECT d.shopId,
DATE_FORMAT_STR(d.date,"1111-11-11") AS day,
COUNT(DISTINCT [d.cappuccinoContainerId, d.cappuccinoLidId]) AS uniqueContainersLidsCombined
FROM default AS d
WHERE d.shopId IS NOT NULL
GROUP BY d.shopId, DATE_FORMAT_STR(d.date,"1111-11-11");
Adjust the index key order of shopId and day based on the query predicates.
https://blog.couchbase.com/understanding-index-grouping-aggregation-couchbase-n1ql-query/
Update:
Based on the EXPLAIN, you have a date predicate and need all shopIds, so use the following index:
CREATE INDEX ix2 ON default( DATE_FORMAT_STR(date,"1111-11-11"), shopId, [cappuccinoContainerId, cappuccinoLidId]);
As you need a DISTINCT over cappuccinoContainerId and cappuccinoLidId, store them as a single index key (an array of 2 elements): [cappuccinoContainerId, cappuccinoLidId]. The advantage of this is that you can reference that key directly in COUNT(DISTINCT ...), which allows index aggregation to be used. (Do not use DISTINCT inside the index definition; that turns it into an array index and things will not work as expected.)
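With ix2 the query would then be shaped roughly like this (the date range shown is only illustrative, since the actual predicate wasn't posted):
SELECT d.shopId,
       DATE_FORMAT_STR(d.date,"1111-11-11") AS day,
       COUNT(DISTINCT [d.cappuccinoContainerId, d.cappuccinoLidId]) AS uniqueContainersLidsCombined
FROM default AS d
WHERE DATE_FORMAT_STR(d.date,"1111-11-11") BETWEEN "2022-01-01" AND "2022-01-31"
GROUP BY d.shopId, DATE_FORMAT_STR(d.date,"1111-11-11");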
I assume:
that the cup types and lid types can be used for any drink type, and
that you don't want to add any precomputed fields to your data.
Perhaps an index like the one below would help (my collection keyspace is bulk.sales.amer). Note that I am not sure if this performs better or worse than (or is even equivalent to) the solution posted by vsr:
CREATE INDEX `adv_shopId_concat_nvls`
ON `bulk`.`sales`.`amer`(
  `shopId` INCLUDE MISSING,
  (nvl(`cappuccinoContainerId`, "") ||
   nvl(`cappuccinoLidId`, "") ||
   nvl(`latteContainerId`, "") ||
   nvl(`latteLidId`, "") ||
   nvl(`espressoContainerId`, "") ||
   nvl(`espressoLidId`, "")),
  substr0(`date`, 0, 10)
)
And then, using the covering index above, run your query like this:
SELECT
shopId,
CONCAT(
NVL(cappuccinoContainerId,""),
NVL(cappuccinoLidId,""),
NVL(latteContainerId,""),
NVL(latteLidId,""),
NVL(espressoContainerId,""),
NVL(espressoLidId,"")
) AS uniqueContainersLidsCombined,
SUBSTR(date,0,10) AS day,
COUNT(*) AS cnt
FROM `bulk`.`sales`.`amer`
GROUP BY
shopId,
CONCAT(
NVL(cappuccinoContainerId,""),
NVL(cappuccinoLidId,""),
NVL(latteContainerId,""),
NVL(latteLidId,""),
NVL(espressoContainerId,""),
NVL(espressoLidId,"")
),
SUBSTR(date,0,10)
Note I used the following 16 lines of data:
{"amer":"amer","date":"2022-01-01T08:49:00Z","cappuccinoContainerId":"a001","cappuccinoLidId":"b001","sales":"sales","shopId":"x001"}
{"amer":"amer","date":"2022-01-01T08:49:00Z","cappuccinoContainerId":"a001","cappuccinoLidId":"b001","sales":"sales","shopId":"x002"}
{"amer":"amer","date":"2022-01-02T08:49:00Z","latteContainerId":"a002","latteLidId":"b002","sales":"sales","shopId":"x001"}
{"amer":"amer","date":"2022-01-02T08:49:00Z","latteContainerId":"a002","latteLidId":"b002","sales":"sales","shopId":"x002"}
{"amer":"amer","date":"2022-01-02T08:49:00Z","espressoContainerId":"a003","espressoLidId":"b003","sales":"sales","shopId":"x001"}
{"amer":"amer","date":"2022-01-02T08:49:00Z","espressoContainerId":"a003","espressoLidId":"b003","sales":"sales","shopId":"x002"}
{"amer":"amer","date":"2022-01-03T08:49:00Z","cappuccinoContainerId":"a007","cappuccinoLidId":"b004","sales":"sales","shopId":"x001"}
{"amer":"amer","date":"2022-01-03T08:49:00Z","cappuccinoContainerId":"a007","cappuccinoLidId":"b004","sales":"sales","shopId":"x002"}
{"amer":"amer","date":"2022-01-03T08:49:00Z","latteContainerId":"a007","latteLidId":"b004","sales":"sales","shopId":"x001"}
{"amer":"amer","date":"2022-01-03T08:49:00Z","latteContainerId":"a007","latteLidId":"b004","sales":"sales","shopId":"x002"}
{"amer":"amer","date":"2022-01-03T01:49:00Z","espressoContainerId":"a007","espressoLidId":"b005","sales":"sales","shopId":"x001"}
{"amer":"amer","date":"2022-01-03T02:49:00Z","espressoContainerId":"a007","espressoLidId":"b005","sales":"sales","shopId":"x002"}
{"amer":"amer","date":"2022-01-03T03:49:00Z","espressoContainerId":"a007","espressoLidId":"b005","sales":"sales","shopId":"x002"}
{"amer":"amer","date":"2022-01-03T04:49:00Z","espressoContainerId":"a007","espressoLidId":"b005","sales":"sales","shopId":"x002"}
{"amer":"amer","date":"2022-01-03T05:49:00Z","espressoContainerId":"a007","espressoLidId":"b005","sales":"sales","shopId":"x002"}
{"amer":"amer","date":"2022-01-03T06:49:00Z","espressoContainerId":"a007","espressoLidId":"b005","sales":"sales","shopId":"x002"}
Applying some sorting by wrapping the above query with
SELECT T1.* FROM
(
-- paste above --
) AS T1
ORDER BY T1.day, T1.shopId, T1.uniqueContainersLidsCombined
We get
cnt day shopId uniqueContainersLidsCombined
1 "2022-01-01" "x001" "a001b001"
1 "2022-01-01" "x002" "a001b001"
1 "2022-01-02" "x001" "a002b002"
1 "2022-01-02" "x001" "a003b003"
1 "2022-01-02" "x002" "a002b002"
1 "2022-01-02" "x002" "a003b003"
1 "2022-01-03" "x001" "a007b005"
2 "2022-01-03" "x001" "a007b004"
2 "2022-01-03" "x002" "a007b004"
5 "2022-01-03" "x002" "a007b005"
If you still don't get the performance you need, you could possibly use the Eventing service to do a continuous map/reduce and an occasional update query to make sure things stay perfectly in sync.
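If you do go the precomputed route, the "occasional update query" might look roughly like this untested sketch (the daily_stats keyspace and the key format are hypothetical, and the COUNT(DISTINCT ...) follows the combo interpretation used above):
UPSERT INTO `bulk`.`sales`.`daily_stats` (KEY k, VALUE v)
SELECT d.shopId || "::" || SUBSTR(d.date, 0, 10) AS k,
       {"shopId": d.shopId,
        "day": SUBSTR(d.date, 0, 10),
        "uniqueContainersLidsCombined": COUNT(DISTINCT CONCAT(
            NVL(d.cappuccinoContainerId, ""), NVL(d.cappuccinoLidId, ""),
            NVL(d.latteContainerId, ""), NVL(d.latteLidId, ""),
            NVL(d.espressoContainerId, ""), NVL(d.espressoLidId, "")))} AS v
FROM `bulk`.`sales`.`amer` AS d
GROUP BY d.shopId, SUBSTR(d.date, 0, 10);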
Imagine the following GraphQL request:
{
books(
first:10,
filter: [{field: TITLE, contains: "Potter"}],
orderBy: [{sort: PRICE, direction: DESC}, {sort: TITLE}]
)
}
The result will return a connection with the Relay cursor information.
Should the cursor contain the filter and orderBy details?
Meaning that querying the next set of data would only require:
{
books(first:10, after:"opaque-cursor")
}
Or should the filter and orderBy be repeated?
In the latter case the user can specify different filter and/or orderBy details which would make the opaque cursor invalid.
I can't find anything in the Relay spec about this.
I've seen this done multiple ways, but I've found that with cursor-based pagination, your cursor exists only within your dataset, and to change the filters would change the dataset, making it invalid.
If you're using SQL (or something without cursor-based pagination), then you would need to include enough information in your cursor to be able to recover it. Your cursor would need to include all of your filter/order information, and you would need to disallow any additional filtering.
You'd have to throw an error if they sent "after" along with "filter / orderBy". You could, optionally, check to see if the arguments are the same as the ones in your cursor, in case of user error, but there simply is no use-case to get "page 2" of a DIFFERENT set of data.
I came across the same question/problem, and came to the same conclusion as @Dan Crews. The cursor must contain everything you need to execute the database query, except for LIMIT.
When your initial query is something like
SELECT *
FROM DataTable
WHERE filterField = 42
ORDER BY sortingField ASC
LIMIT 10
-- with implicit OFFSET 0
then you could basically (don't do this in a real app, because of SQL Injections!) use exactly this query as your cursor. You just have to remove LIMIT x and append OFFSET y for every node.
Response:
{
edges: [
{
cursor: "SELECT ... WHERE ... ORDER BY ... OFFSET 0",
node: { ... }
},
{
cursor: "SELECT ... WHERE ... ORDER BY ... OFFSET 1",
node: { ... }
},
...,
{
cursor: "SELECT ... WHERE ... ORDER BY ... OFFSET 9",
node: { ... }
}
]
pageInfo: {
startCursor: "SELECT ... WHERE ... ORDER BY ... OFFSET 0"
endCursor: "SELECT ... WHERE ... ORDER BY ... OFFSET 9"
}
}
The next request will then use after: CURSOR, first: 10. Then you'll take the after argument and set the LIMIT and OFFSET:
LIMIT = first
OFFSET = (the OFFSET stored in the after cursor) + 1
Then the resulting database query would be this when using after = endCursor:
SELECT *
FROM DataTable
WHERE filterField = 42
ORDER BY sortingField ASC
LIMIT 10
OFFSET 10
As already mentioned above: This is only an example, and it's highly vulnerable to SQL Injections!
In a real world app, you could simply encode the provided filter and orderBy arguments within the cursor, and add offset as well:
function handleGraphQLRequest(first, after, filter, orderBy) {
let offset = 0; // initial offset, if after isn't provided
if(after != null) {
// combination of after + filter/orderBy is not allowed!
if(filter != null || orderBy != null) {
throw new Error("You can't combine after with filter and/or orderBy");
}
// parse filter, orderBy, offset from after cursor
const cursorData = fromBase64String(after);
filter = cursorData.filter;
orderBy = cursorData.orderBy;
offset = cursorData.offset;
}
const databaseResult = executeDatabaseQuery(
filter, // = WHERE ...
orderBy, // = ORDER BY ...
first, // = LIMIT ...
offset // = OFFSET ...
);
const edges = []; // this is the resulting edges array
let currentOffset = offset; // this is used to calc the offset for each node
for(let node of databaseResult.nodes) { // iterate over the database results
currentOffset++;
const currentCursor = createCursorForNode(filter, orderBy, currentOffset);
edges.push({
cursor: currentCursor,
node: node
});
}
return {
edges: edges,
pageInfo: buildPageInfo(edges, totalCount, offset) // totalCount is assumed to
// be available separately; instead of providing totalCount, you could also fetch
// (limit + 1) rows from the database to check if there is a next page available
}
}
// this function returns the cursor string
function createCursorForNode(filter, orderBy, offset) {
return toBase64String({
filter: filter,
orderBy: orderBy,
offset: offset
});
}
// function to build pageInfo object
function buildPageInfo(edges, totalCount, offset) {
return {
startCursor: edges.length ? edges[0].cursor : null,
endCursor: edges.length ? edges[edges.length - 1].cursor : null,
hasPreviousPage: offset > 0 && totalCount > 0,
hasNextPage: offset + edges.length < totalCount
}
}
The content of the cursor depends mainly on your database and your database layout.
The code above emulates a simple pagination with limit and offset. But you could (if supported by your database) of course use something else.
In the meantime I came to another conclusion: I think it doesn't really matter whether you use an all-in-one cursor, or if you repeat filter and orderBy with each request.
There are basically two types of cursors:
(1.) You can treat a cursor as a "pointer to a specific item". This way the filter and sorting can change, but your cursor can stay the same. Kinda like the pivot element in quicksort, where the pivot element stays in place and everything around it can move.
Elasticsearch's Search After works like this. Here the cursor is just a pointer to a specific item in the dataset. But filter and orderBy can change independently.
The implementation for this style of cursor is dead simple: Just concat every sortable field. Done. Example: If your entity can be sorted by price and title (plus of course id, because you need some unique field as tie breaker), your cursor always consists of { id, price, title }.
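For illustration, the database-side "search after" condition for the books example above could look roughly like this (the Books table and the :cursor* parameters are assumptions, not part of the original question); the cursor only carries the last row's price, title and id:
SELECT id, title, price
FROM Books
WHERE title LIKE '%Potter%'   -- the filter can change without invalidating the cursor
  AND (   price < :cursorPrice
       OR (price = :cursorPrice AND title > :cursorTitle)
       OR (price = :cursorPrice AND title = :cursorTitle AND id > :cursorId))
ORDER BY price DESC, title ASC, id ASC
LIMIT 10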
(2.) The "all-in-one cursor" on the other hand acts like a "pointer to an item within a filtered and sorted result set". It has the benefit, that you can encode whatever you want. The server could for example change the filter and orderBy data (for whatever reason) without the client noticing it.
For example you could use Elasticsearch's Scroll API, which caches the result set on the server and though doesn't need filter and orderBy after the initial search request.
But aside from Elasticsearch's Scroll API, you always need filter, orderBy, limit, pointer in every request. Though I think it's an implementation detail and a matter of taste, whether you include everything within your cursor, or if you send it as separate arguments. The outcome is the same.
I'm new to Cascading and trying to find a way to get the top N tuples based on a sort/order. For example, I'd like to know the top 100 first names people are using.
Here's how I'd do something similar in Teradata SQL:
select top 100 first_name, num_records
from
(select first_name, count(1) as num_records
from table_1
group by first_name) a
order by num_records DESC
And here's something similar in Hadoop Pig:
a = load 'table_1' as (first_name:chararray, last_name:chararray);
b = foreach (group a by first_name) generate group as first_name, COUNT(a) as num_records;
c = order b by num_records DESC;
d = limit c 100;
It seems very easy to do in SQL or Pig, but I'm having a hard time finding a way to do it in Cascading. Please advise!
Assuming you just need the Pipe setup for how to do this:
In Cascading 2.1.6,
Pipe firstNamePipe = new GroupBy("topFirstNames", InPipe,
new Fields("first_name"));
firstNamePipe = new Every(firstNamePipe, new Fields("first_name"),
new Count(new Fields("num_records")), Fields.ALL);
firstNamePipe = new GroupBy(firstNamePipe,
new Fields("first_name"),
new Fields("num_records"),
true); //where true is descending order
firstNamePipe = new Every(firstNamePipe, new Fields("first_name", "num_records"),
new First(Fields.ARGS, 100), Fields.ALL);
Here, InPipe is formed from your incoming Tap that holds the tuple data you are referencing above, namely "first_name". The "num_records" field is created when new Count() is called.
If you have the "num_records" and "first_name" data in separate taps (tables or files) then you can set up two pipes that point to those two Tap sources and join them using CoGroup.
The definitions I used are from Cascading 2.1.6:
GroupBy(String groupName, Pipe pipe, Fields groupFields, Fields sortFields, boolean reverseOrder)
Count(Fields fieldDeclaration)
First(Fields fieldDeclaration, int firstN)
Method 1
Use a GroupBy and group on the required columns. You can make use of the secondary sorting provided by Cascading; by default it sorts in ascending order, and if you want descending order you can use reverseOrder().
To get the top N tuples or rows:
It's quite simple: just use a counter variable in a custom Filter and increment it by 1 for each tuple, then check whether it is greater than N.
Return true (remove the tuple) when the count is greater than N, otherwise return false.
This will produce output containing the first N tuples.
Method 2
Cascading provides a built-in Unique assembly, which uses a FirstNBuffer.
See the link below:
http://docs.cascading.org/cascading/2.2/javadoc/cascading/pipe/assembly/Unique.html
I've been at this for a while. I have a data set that has a recurring key and a sequence, similar to this:
id status sequence
1 open 1
1 processing 2
2 open 1
2 processing 2
2 closed 3
A new row is added for each 'action' that happens, so the various ids can have variable sequences. I need to get the max sequence number for each id, but I still need to return the complete record.
I want to end up with sequence 2 for id 1, and sequence 3 for id 2.
I can't seem to get this to work without selecting the distinct ids, then looping through the results, ordering the values and then adding the first item to another list, but that's so slow.
var ids = this.ObjectContext.TNTP_FILE_MONITORING.Select(i => i.FILE_EVENT_ID).Distinct();
List<TNTP_FILE_MONITORING> vals = new List<TNTP_FILE_MONITORING>();
foreach (var id in ids)
{
    vals.Add(this.ObjectContext.TNTP_FILE_MONITORING
        .Where(mfe => mfe.FILE_EVENT_ID == id)
        .OrderByDescending(mfe => mfe.FILE_EVENT_SEQ)
        .First<TNTP_FILE_MONITORING>());
}
There must be a better way!
Here's what worked for me:
var ts = new[] { new T(1,1), new T(1,2), new T(2,1), new T(2,2), new T(2,3) };
var q =
from t in ts
group t by t.ID into g
let max = g.Max(x => x.Seq)
select g.FirstOrDefault(t1 => t1.Seq == max);
(Just need to apply that to your datatable, but the query stays about the same)
Note that with your current method, because you are iterating over all records, you also pull all records from the datastore. By using a query like this, you allow for translation into a query against the datastore, which is not only faster, but also returns only the results you need (assuming you are using Entity Framework or Linq2SQL).
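For reference, the grouped query can translate into roughly this shape on the database side (illustrative SQL only; the exact statement EF or Linq2SQL generates will differ):
SELECT m.*
FROM TNTP_FILE_MONITORING m
JOIN (SELECT FILE_EVENT_ID, MAX(FILE_EVENT_SEQ) AS MAX_SEQ
      FROM TNTP_FILE_MONITORING
      GROUP BY FILE_EVENT_ID) x
  ON m.FILE_EVENT_ID = x.FILE_EVENT_ID
 AND m.FILE_EVENT_SEQ = x.MAX_SEQ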