I have an object model whose structure is:
Dashboard
  List of panels
    List of containers
      List of widgets
If I get a whole dashboard, with panels + containers + widgets, from the database, multiple I/Os are required. I want to get it in one I/O. For this I prepared a query which gives me this result set:
DASHBOARDID  PANELID  CONTAINERID  WIDGETID
13           11       5            2
13           11       5            3
13           11       6            4
13           11       6            5
13           12       7            6
13           12       7            7
13           12       8            8
13           12       8            9
Using a List data structure the model can be filled, but it takes time. Is there any way to fill the above object model from this result set efficiently?
The strict ordering and nesting in the data makes this fairly straightforward. The solution is a loop that looks for changes in the IDs; when an ID changes, a new instance is created. Each iteration of the loop maintains the current dashboard, panel, container and widget. For example:
Dashboard dashboard = null;
Panel panel = null;
Container container = null;
List<Dashboard> dashboards = new ArrayList<>();
ResultSet rs = ...;
while (rs.next())
{
    int dashboardID = rs.getInt(1);
    int panelID = rs.getInt(2);
    int containerID = rs.getInt(3);
    int widgetID = rs.getInt(4);
    if (dashboard == null || dashboardID != dashboard.id()) {
        dashboard = new Dashboard(dashboardID);
        dashboards.add(dashboard);
        panel = null;     // force fresh panel/container for the new dashboard
        container = null;
    }
    if (panel == null || panelID != panel.id()) {
        panel = new Panel(panelID);
        dashboard.add(panel);
        container = null; // force fresh container for the new panel
    }
    if (container == null || containerID != container.id()) {
        container = new Container(containerID);
        panel.add(container);
    }
    container.add(new Widget(widgetID)); // similarly for widget
}
// all dashboards in "dashboards" initialized with hierarchy
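The loop above depends on a live JDBC ResultSet. Here is a self-contained sketch of the same ID-change technique against an in-memory array of rows; the generic Node class and build method are illustrative stand-ins for the Dashboard/Panel/Container/Widget classes. Note how the child cursors are reset whenever a parent ID changes:

```java
import java.util.ArrayList;
import java.util.List;

public class HierarchyDemo {
    static class Node {
        final int id;
        final List<Node> children = new ArrayList<>();
        Node(int id) { this.id = id; }
    }

    // Builds dashboard -> panel -> container -> widget from flat rows,
    // relying on rows being ordered by dashboard, panel, container, widget.
    static List<Node> build(int[][] rows) {
        List<Node> dashboards = new ArrayList<>();
        Node dashboard = null, panel = null, container = null;
        for (int[] r : rows) {
            if (dashboard == null || r[0] != dashboard.id) {
                dashboard = new Node(r[0]);
                dashboards.add(dashboard);
                panel = null;     // new dashboard invalidates the panel cursor
                container = null;
            }
            if (panel == null || r[1] != panel.id) {
                panel = new Node(r[1]);
                dashboard.children.add(panel);
                container = null; // new panel invalidates the container cursor
            }
            if (container == null || r[2] != container.id) {
                container = new Node(r[2]);
                panel.children.add(container);
            }
            container.children.add(new Node(r[3])); // widget
        }
        return dashboards;
    }

    public static void main(String[] args) {
        int[][] rows = {
            {13,11,5,2},{13,11,5,3},{13,11,6,4},{13,11,6,5},
            {13,12,7,6},{13,12,7,7},{13,12,8,8},{13,12,8,9}
        };
        List<Node> ds = build(rows);
        System.out.println(ds.size());                             // 1 dashboard
        System.out.println(ds.get(0).children.size());             // 2 panels
        System.out.println(ds.get(0).children.get(0).children.size()); // 2 containers
    }
}
```

The whole result set is consumed in a single pass, so the cost is one query plus O(rows) object construction.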
If you are not strictly limited to using JDBC, an ORM solution would resolve your issue. E.g. in Hibernate, you can easily configure your entity mapping to use lazy or eager loading. In the latter case, when you load a Dashboard object, the persistence framework fetches all the contained panels, containers and widgets with 1 to 4 queries total (depending on configuration - note that a single join query over 4 tables might not be optimal if there are many rows), and initializes the whole object graph for you automatically.
I'm trying to understand why the query below is taking so long to retrieve results. I have mocked up the values used, but the query below is correct and returns 40 records (the a node has 8 different values and the z node has 5 different values, so 40 combinations in total). It takes 2.5 minutes to return those 40 records. Please let me know what the issue is here. I suspect it is the Neo4j version and the infrastructure we're using right now in production.
After the query below we run algo.kShortestPaths.stream, so the whole thing together takes more than 5 minutes. What do you suggest? Is there no other way to handle such combinations (a and z node combinations > 40) within 5 minutes?
Infrastructure details: Neo4j 3.5 community edition
2 separate datacenters, sync job - 64GB mem 16GB CPU 4 cores
Cypher Query:
MATCH (s:SiteNode {siteName: 'siteName1'})-[rl:CONNECTED_TO]-(a:EquipmentNode)
WHERE a.locationClli = s.siteName AND toUpper(a.networkType) = 'networkType1' AND NOT (toUpper(a.equipmentTid) CONTAINS 'TEST')
WITH a.equipmentTid AS tid_A
MATCH pp = (a:EquipmentNode)-[rel:CONNECTED_TO]-(a1:EquipmentNode)
WHERE a.equipmentTid = tid_A AND ALL( t IN relationships(pp)
WHERE t.type IN ['Type1'] AND (t.totalChannels > 0 AND t.totalChannelsUsed < t.totalChannels) AND t.networkId IN ['networkId1'] AND t.status IN ['status1', 'status2'] )
WITH a
MATCH (d:SiteNode {siteName: 'siteName2'})-[rl:CONNECTED_TO]-(z:EquipmentNode)
WHERE z.locationClli = d.siteName AND toUpper(z.networkType) = 'networkType2' AND NOT (toUpper(z.equipmentTid) CONTAINS 'TEST')
WITH z.equipmentTid AS tid_Z, a
MATCH pp = (z:EquipmentNode)-[rel:CONNECTED_TO]-(z1:EquipmentNode)
WHERE z.equipmentTid=tid_Z AND ALL(t IN relationships(pp)
WHERE t.type IN ['Type2'] AND (t.totalChannels > 0 AND t.totalChannelsUsed < t.totalChannels) AND t.networkId IN ['networkId2'] AND t.status IN ['status1', 'status2'])
WITH DISTINCT z, a
RETURN a.equipmentTid, z.equipmentTid
This query was built to handle small combination counts (up to 4 total a and z node combinations), but today we might have more than 10, 40 or 100 combinations, so it is timing out. I'm not sure if there's a better way to write the query to improve performance, assuming the community edition is good enough for our case.
My table is as below; this is the data I want to show:
name    score
Riyal   17
demo2   11
demo3   9
demo1   1
I want to show scores from highest to lowest, and show only 10 rows from the table.
My code is:
public function index()
{
$score_board = ScoreBoard::orderBy('id')->max('score');
return new ScoreBoardResource($score_board);
}
but it gives me nothing. Any idea how I can do it?
->max('score') returns a single aggregate value (17 here), not a set of records. If you want 10 records sorted highest to lowest, you need to use the proper methods:
$scoreBoard = ScoreBoard::orderBy('score', 'DESC')->limit(10)->get();
Please read the Documentation:
https://laravel.com/docs/8.x/queries#ordering-grouping-limit-and-offset
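To make explicit what "order by score descending, take 10" computes, here is the same selection sketched in plain Java (class and record names are illustrative, not part of Laravel):

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class TopScores {
    record Row(String name, int score) {}

    // Equivalent of orderBy('score', 'DESC')->limit(10)->get():
    // sort by score descending, then keep at most 10 rows.
    static List<Row> topTen(List<Row> rows) {
        return rows.stream()
                .sorted(Comparator.comparingInt(Row::score).reversed())
                .limit(10)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(
            new Row("demo1", 1), new Row("Riyal", 17),
            new Row("demo3", 9), new Row("demo2", 11));
        topTen(rows).forEach(r -> System.out.println(r.name() + " " + r.score()));
        // prints Riyal 17, demo2 11, demo3 9, demo1 1 (one per line)
    }
}
```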
I have the following LINQ statement. It is returning everything from allStores. I removed the DefaultIfEmpty and then it only returned the stores that are in both the subs table AND the allStores table. I need it to return all stores that are in the subs table even if they are not in the allStores table. I have tried a few different things and have moved the DefaultIfEmpty around, but can't seem to get it to return what I need.
I need stores 1, 2, 4 and 5 returned (everything in table 2), and I need to pull the DivisionId for the StoreId from table 3 and the address from table 1. Even if the store is not in the address table or not in the DivisionId table, I still need the StoreId returned in the query.
from a in allStores
join sub in subs on a.DivisionId equals sub.DivisionId
join d in divs on new { a.DivisionId, a.StoreId } equals new { d.DivisionId, d.StoreId } into s
from selected in s.DefaultIfEmpty()
Table 1:
StoreId  Address
1        1234 Elm St.
2        5678 Maple St.
3        9101 Bella Ave.
4        1234 Meadow Dr.
Table 2:
StoreId  StoreStatus
1        Closed
2        Open
4        Open
5        Open
Table 3:
StoreId  DivisionId
1        12
2        14
3        16
4        18
5        20
You're looking for multiple left joins. I've noticed that in this area of questions there are a lot of near duplicates. But many of these only discuss one left join, others don't describe a need to deal with null reference exceptions, and still others describe linq-to-sql or similar and not linq-to-objects, which seems to be your case here.
Your sample data is not complete, so I'm just going to trust that you are joining on the right fields. You're close with your join on divs. But you need to do the same thing for subs, and you need to add a select segment that outputs the final shape of each row extracting properties from each source.
var results =
    from a in allStores
    // left join allStores with "subs"
    join sub in subs on a.DivisionId equals sub.DivisionId into subG
    from sub in subG.DefaultIfEmpty()
    // left join allStores with "divs"
    join div in divs
        on new { a.DivisionId, a.StoreId }
        equals new { div.DivisionId, div.StoreId }
        into divG
    from div in divG.DefaultIfEmpty()
    // select properties from all sources, keeping in mind some are null
    select new {
        a.StoreId,
        a.StoreStatus,
        Address = sub?.Address,  // null-conditional operator needs C# 6+
        DivisionId = div == null ? null : (int?)div.DivisionId // works on older C# too
    };
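Since the sample data leaves the join keys ambiguous, here is a self-contained illustration of the left-join shape using the question's sample tables, joining everything on StoreId (an assumption); missing matches come back as null, which is exactly what DefaultIfEmpty() produces in the LINQ version. It is written in Java only to show the semantics:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class LeftJoinDemo {
    // Left join: keep every store row; lookups that find nothing yield null.
    static List<String> leftJoin(Map<Integer, String> stores,
                                 Map<Integer, String> addresses,
                                 Map<Integer, Integer> divisions) {
        List<String> rows = new ArrayList<>();
        for (Map.Entry<Integer, String> s : stores.entrySet()) {
            String address = addresses.get(s.getKey());   // null when absent
            Integer division = divisions.get(s.getKey()); // null when absent
            rows.add(s.getKey() + " " + s.getValue() + " " + address + " " + division);
        }
        return rows;
    }

    public static void main(String[] args) {
        Map<Integer, String> stores = new LinkedHashMap<>(); // table 2
        stores.put(1, "Closed"); stores.put(2, "Open");
        stores.put(4, "Open");   stores.put(5, "Open");

        Map<Integer, String> addresses = Map.of(             // table 1
            1, "1234 Elm St.", 2, "5678 Maple St.",
            3, "9101 Bella Ave.", 4, "1234 Meadow Dr.");

        Map<Integer, Integer> divisions = Map.of(            // table 3
            1, 12, 2, 14, 3, 16, 4, 18, 5, 20);

        leftJoin(stores, addresses, divisions).forEach(System.out::println);
        // store 5 is printed with a null address, but it is still returned
    }
}
```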
Scenario
We have over 5 million documents in a bucket, each of them nested JSON under a simple UUID key. We want to add one extra field to ALL of the documents.
Example
ee6ae656-6e07-4aa2-951e-ea788e24856a
{
"field1":"data1",
"field2":{
"nested_field1":"data2"
}
}
After adding extra field
ee6ae656-6e07-4aa2-951e-ea788e24856a
{
"field1":"data1",
"field3":"data3",
"field2":{
"nested_field1":"data2"
}
}
It has only one index, the primary index: CREATE PRIMARY INDEX idx ON bucket.
Problem
It takes ages. We tried it with N1QL, UPDATE bucket SET field3 = data3, and also sub-document mutation, but all of it takes hours. It's written in Go, so we could put it into goroutines, but it's still too much time.
Question
Is there any solution to reduce that time?
As you need to add a new field, not modify any existing field, it is better to use the SDK's sub-document API than a N1QL UPDATE (a N1QL UPDATE is a whole-document update and requires fetching each document).
The best option is to use N1QL to get the document keys, then use the SDK sub-document API to add the field you need. You can use the reactive API (asynchronously).
As you have 5M documents and a primary index, use keyset pagination:
val = ""
In a loop:
    SELECT RAW META().id FROM mybucket WHERE META().id > $val LIMIT 10000;
    SDK sub-document update for each returned key
    val = last value from the SELECT
https://blog.couchbase.com/offset-keyset-pagination-n1ql-query-couchbase/
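The keyset-pagination loop above can be sketched as follows. Note that fetchKeysAfter stands in for the N1QL SELECT, and the comment in the inner loop marks where the SDK sub-document upsert would go; both are placeholders, not real Couchbase SDK calls:

```java
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;
import java.util.stream.Collectors;

public class KeysetPagination {
    // Stand-in for: SELECT RAW META().id FROM mybucket
    //               WHERE META().id > $val LIMIT batchSize
    static List<String> fetchKeysAfter(SortedSet<String> allKeys, String val, int batchSize) {
        // tailSet(val + "\0") = all keys strictly greater than val
        return allKeys.tailSet(val + "\0").stream()
                .limit(batchSize)
                .collect(Collectors.toList());
    }

    static int updateAll(SortedSet<String> allKeys, int batchSize) {
        String val = "";   // the keyset cursor
        int updated = 0;
        while (true) {
            List<String> batch = fetchKeysAfter(allKeys, val, batchSize);
            if (batch.isEmpty()) break;
            for (String key : batch) {
                // real code: issue the sub-document upsert of "field3" for this key
                updated++;
            }
            val = batch.get(batch.size() - 1); // advance cursor past this batch
        }
        return updated;
    }

    public static void main(String[] args) {
        SortedSet<String> allKeys = new TreeSet<>();
        for (int i = 0; i < 25; i++) allKeys.add(String.format("doc-%03d", i));
        System.out.println(updateAll(allKeys, 10)); // 25: every key visited once
    }
}
```

Because each SELECT resumes strictly after the last key seen, no document is fetched twice and the query never has to skip an ever-growing OFFSET.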
The Eventing Service can be quite performant for this sort of enrichment task. Even a low-end system should be able to do 5M rows in under two (2) minutes.
// Note src_bkt is an alias to the source bucket for your handler
// in read+write mode supported for version 6.5.1+, this uses DCP
// and can be 100X more performant than N1QL.
function OnUpdate(doc, meta) {
// optional filter to be more selective
// if (!doc.type || doc.type !== "mytype") return;
// test if we already have the field we want to add
if (doc.field3) return;
doc.field3 = "data3";
src_bkt[meta.id] = doc;
}
For more details on Eventing refer to https://docs.couchbase.com/server/current/eventing/eventing-overview.html. I typically enrich 3/4 of a billion documents. The Eventing function will also run faster (enrich more documents per second) if you increase the number of workers in your Eventing function's settings from, say, 3 to 16, provided you have 8+ physical cores on your Eventing node.
I tested the above Eventing function and it enriches 5M documents (modeled on your example) on my non-MDS single node couchbase test system (12 cores at 2.2GHz) in just 72 seconds. Obviously if you have a real multi node cluster it will be faster (maybe all 5M docs in just 5 seconds).
I am querying an ArangoDB collection of about 500k documents via arangojs's query() with this very simple query:
"FOR c IN Entity FILTER c.id == 261764 RETURN c"
It is a node in a node-link graph.
But sometimes it takes more than 10 seconds, and the ArangoDB log also has a warning about the query taking too long. It often happens when a new session is used in the browser. Is this a problem with ArangoDB, with arangojs, or is my query itself not optimized?
-------------------Edit----------------------
Added db.explain
Query string:
FOR c IN Entity FILTER c.id == 211764 RETURN c
Execution plan:
Id NodeType Est. Comment
1 SingletonNode 1 * ROOT
2 EnumerateCollectionNode 140270 - FOR c IN Entity /* full collection scan */
3 CalculationNode 140270 - LET #1 = (c.`id` == 211764) /* simple expression */ /* collections used: c : Entity */
4 FilterNode 140270 - FILTER #1
5 ReturnNode 140270 - RETURN c
Indexes used:
none
Optimization rules applied:
none
As your explain output shows, your query doesn't utilize indexes, but does a full collection scan.
Depending on where it finds the match (at the start or the end of the collection), execution times may vary.
See the Indexing chapter for creating indexes, and the AQL Execution and Performance chapter for how to analyse the output of db._explain().
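For instance, assuming the filtered attribute really is c.id, a persistent index on it turns the full collection scan into an index lookup (shown as an arangosh command; older ArangoDB versions would use a hash index instead):

```
db.Entity.ensureIndex({ type: "persistent", fields: ["id"] });
```

After creating the index, re-running the explain should show an IndexNode with an estimate near 1 instead of the EnumerateCollectionNode over all 140270 documents.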