Requirement:
Set of dashboards to be shown on an ERP home screen. Data is filtered according to the current user's permissions.
As of Now:
Highcharts is used for data visualization. The backing page is C#/.NET.
Problem:
Every time a user changes the filter, it hits the live DB and fetches the data.
Every morning, users log in at almost the same time, so a huge number of requests hits the SQL Server simultaneously, which causes performance issues. And there are a lot more charts to come.
We're planning to implement SQL Server Analysis Services (SSAS) cubes for the data.
Can somebody please tell me whether that's the right approach, or suggest a better one - a better architecture for this?
Thank you.
So first things first - you need to know what the real issue is. If it's resources, just add more as you need them and try to optimise your code. But from my experience this is not the case - I believe you have a classic example of row locks while concurrent transactions try to access the same data. If your problem is deadlocks, you might want to try the snapshot transaction isolation level; if it's just concurrent reads, you might want to create a simple replication and copy the most heavily accessed objects into a separate, read-only DB. If you are already taking log backups, using log shipping and reading from the copy when possible sounds like a bargain to me as well.
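For reference, switching a database to snapshot-based reads is a one-off change; a minimal sketch, assuming your database is called ErpDb (a placeholder name):
-- Placeholder database name; both options rely on row versioning in tempdb.
ALTER DATABASE ErpDb SET ALLOW_SNAPSHOT_ISOLATION ON;
-- Optionally make plain READ COMMITTED readers use row versions too
-- (needs exclusive access, so run it when no other sessions are active):
ALTER DATABASE ErpDb SET READ_COMMITTED_SNAPSHOT ON;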
If you are happy to put some effort into fixing it properly, I would recommend considering a Data Warehouse solution and linking your application/reporting to it.
About the data cubes and/or SSAS solution: this helps, but once you do it you will realise that you need the DW anyway, and to see a real advantage from it your customers would have to do a lot of dimensional aggregation, not just a simple "refresh report to download today's data".
There will be a lot of work for you, and I recommend analysing the wait stats as a starting point to understand where exactly you are right now and what the real problem is. As a gift, please find the code below to get these stats:
-- Background/benign wait types to exclude from the analysis.
DECLARE @Wait_Types_Excluded TABLE([wait_type] nvarchar(60) PRIMARY KEY);
INSERT INTO @Wait_Types_Excluded([wait_type]) VALUES
(N'BROKER_EVENTHANDLER'), (N'BROKER_RECEIVE_WAITFOR'), (N'BROKER_TASK_STOP'), (N'BROKER_TO_FLUSH'), (N'BROKER_TRANSMITTER')
,(N'CHECKPOINT_QUEUE'), (N'CHKPT'), (N'CLR_AUTO_EVENT'), (N'CLR_MANUAL_EVENT'), (N'CLR_SEMAPHORE') ,(N'DIRTY_PAGE_POLL')
,(N'DISPATCHER_QUEUE_SEMAPHORE'), (N'EXECSYNC'), (N'FSAGENT'), (N'FT_IFTS_SCHEDULER_IDLE_WAIT'), (N'FT_IFTSHC_MUTEX')
,(N'KSOURCE_WAKEUP'), (N'LAZYWRITER_SLEEP'), (N'LOGMGR_QUEUE'), (N'MEMORY_ALLOCATION_EXT'), (N'ONDEMAND_TASK_QUEUE')
,(N'PREEMPTIVE_XE_GETTARGETSTATE'), (N'PWAIT_ALL_COMPONENTS_INITIALIZED'), (N'PWAIT_DIRECTLOGCONSUMER_GETNEXT')
,(N'QDS_PERSIST_TASK_MAIN_LOOP_SLEEP'), (N'QDS_ASYNC_QUEUE'), (N'QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP')
,(N'QDS_SHUTDOWN_QUEUE'), (N'REDO_THREAD_PENDING_WORK'), (N'REQUEST_FOR_DEADLOCK_SEARCH'), (N'RESOURCE_QUEUE')
,(N'SERVER_IDLE_CHECK'), (N'SLEEP_BPOOL_FLUSH'), (N'SLEEP_DBSTARTUP'), (N'SLEEP_DCOMSTARTUP'), (N'SLEEP_MASTERDBREADY')
,(N'SLEEP_MASTERMDREADY'), (N'SLEEP_MASTERUPGRADED'), (N'SLEEP_MSDBSTARTUP'), (N'SLEEP_SYSTEMTASK'), (N'SLEEP_TASK')
,(N'SLEEP_TEMPDBSTARTUP'), (N'SNI_HTTP_ACCEPT'), (N'SP_SERVER_DIAGNOSTICS_SLEEP'), (N'SQLTRACE_BUFFER_FLUSH')
,(N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP'), (N'SQLTRACE_WAIT_ENTRIES'), (N'WAIT_FOR_RESULTS'), (N'WAITFOR')
,(N'WAITFOR_TASKSHUTDOWN'), (N'WAIT_XTP_RECOVERY'), (N'WAIT_XTP_HOST_WAIT'), (N'WAIT_XTP_OFFLINE_CKPT_NEW_LOG')
,(N'WAIT_XTP_CKPT_CLOSE'), (N'XE_DISPATCHER_JOIN'), (N'XE_DISPATCHER_WAIT'), (N'XE_TIMER_EVENT')
,(N'DBMIRROR_DBM_EVENT'), (N'DBMIRROR_EVENTS_QUEUE'), (N'DBMIRROR_WORKER_QUEUE'), (N'DBMIRRORING_CMD'),
(N'HADR_CLUSAPI_CALL'), (N'HADR_FILESTREAM_IOMGR_IOCOMPLETION'), (N'HADR_LOGCAPTURE_WAIT'),
(N'HADR_NOTIFICATION_DEQUEUE'), (N'HADR_TIMER_TASK'), (N'HADR_WORK_QUEUE');
SELECT
[Approx_Wait_Stats_Restart_Date] = CAST(DATEADD(minute, -CAST((CAST(ws.[wait_time_ms] as decimal(38,18)) / 60000.0) as int), SYSDATETIME()) as smalldatetime)
,[SQL_Server_Last_Restart_Date] = CAST(si.[sqlserver_start_time] as smalldatetime)
FROM sys.dm_os_wait_stats ws, sys.dm_os_sys_info si
WHERE ws.[wait_type] = N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP';
SELECT TOP 25
ws.[wait_type]
,[Total_Wait_(s)] = CAST(SUM(ws.[wait_time_ms]) OVER (PARTITION BY ws.[wait_type]) / 1000.0 as decimal(19,3))
,[Resource_(s)] = CAST(SUM([wait_time_ms] - [signal_wait_time_ms]) OVER (PARTITION BY ws.[wait_type]) / 1000.0 as decimal(19,3))
,[Signal_(s)] = CAST(SUM(ws.[signal_wait_time_ms]) OVER (PARTITION BY ws.[wait_type]) / 1000.0 as decimal(19,3))
,[Avg_Total_Wait_(ms)] = CASE WHEN SUM(ws.[waiting_tasks_count]) OVER (PARTITION BY ws.[wait_type]) > 0 THEN SUM(ws.[wait_time_ms]) OVER (PARTITION BY ws.[wait_type]) / SUM(ws.[waiting_tasks_count]) OVER (PARTITION BY ws.[wait_type]) END
,[Avg_Resource_Wait_(ms)] = CASE WHEN SUM(ws.[waiting_tasks_count]) OVER (PARTITION BY ws.[wait_type]) > 0 THEN SUM(ws.[wait_time_ms] - ws.[signal_wait_time_ms]) OVER (PARTITION BY ws.[wait_type]) / SUM(ws.[waiting_tasks_count]) OVER (PARTITION BY ws.[wait_type]) END
,[Avg_Signal_Wait_(ms)] = CASE WHEN SUM(ws.[waiting_tasks_count]) OVER (PARTITION BY ws.[wait_type]) > 0 THEN SUM(ws.[signal_wait_time_ms]) OVER (PARTITION BY ws.[wait_type]) / SUM(ws.[waiting_tasks_count]) OVER (PARTITION BY ws.[wait_type]) END
,[Waiting_Tasks_QTY] = SUM(ws.[waiting_tasks_count]) OVER (PARTITION BY ws.[wait_type])
,[Percent_of_Total_Waits_Time] = CAST(CAST(SUM(ws.[wait_time_ms]) OVER (PARTITION BY ws.[wait_type]) as decimal) / CAST(SUM(ws.[wait_time_ms]) OVER() as decimal) * 100.0 as decimal(5,2))
,[Percent_of_Total_Waits_QTY] = CAST(CAST(SUM(ws.[waiting_tasks_count]) OVER (PARTITION BY ws.[wait_type]) as decimal)/ CAST(SUM(ws.[waiting_tasks_count]) OVER() as decimal) * 100.0 as decimal(5,2))
FROM sys.dm_os_wait_stats ws
LEFT JOIN @Wait_Types_Excluded wte ON ws.[wait_type] = wte.[wait_type]
WHERE wte.[wait_type] IS NULL
AND ws.[waiting_tasks_count] > 0
ORDER BY [Total_Wait_(s)] DESC;
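If you change something and want a fresh baseline, the accumulated counters can be cleared (note this resets the server-wide wait statistics, so capture the output above first):
-- Resets sys.dm_os_wait_stats for the whole instance.
DBCC SQLPERF (N'sys.dm_os_wait_stats', CLEAR);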
Related
I'm brand-spankin' new to SQL and was asked to help write a query for a report. I need to limit the data to the last 10 services done by each clinician, and then subtotal the difference between the two times (time in and time out) for each clinician.
I'm guessing I need a "LIMIT" clause to limit the data, but I'm not sure how or where to put it. I'm also thinking I need to use "GROUP BY", but I'm not positive on that either. Any help would be appreciated.
I tried simplifying the existing query that my boss started but I'm getting error messages about the GROUP BY clause because I don't have an aggregate.
Select CV.emp_name,
CV.Visittype,
CV.clientvisit_id,
CV.client_id,
CV.rev_timein,
CV.rev_timeout,
Convert(varchar(25),Cast(CV.rev_timein As Time),8) As Start_Time,
CV.program_id,
CV.cptcode
From ClientVisit CV
Where CV.visittype = 'Mobile Therapy' And CV.program_id = 31
And CV.cptcode <> 'NB' And CV.rev_timein <=
Convert(datetime,IsNull(#param2, GetDate())) And CV.rev_timein >=
Convert(datetime,IsNull(#param1, GetDate())) And
Cast(CV.rev_timein As time) > '15:59'
Group By CV.emp_name,
CV.rev_timein
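For what it's worth, on SQL Server the usual pattern for "last N per group" is ROW_NUMBER rather than LIMIT. A minimal sketch, reusing the column names from the query above (the filters are trimmed down, and the "10 most recent by rev_timein" rule is an assumption):
-- Rank each clinician's visits newest-first, keep the last 10, subtotal minutes.
With Ranked As (
    Select CV.emp_name,
           CV.rev_timein,
           CV.rev_timeout,
           Row_Number() Over (Partition By CV.emp_name
                              Order By CV.rev_timein Desc) As rn
    From ClientVisit CV
    Where CV.visittype = 'Mobile Therapy' And CV.program_id = 31
)
Select emp_name,
       Sum(DateDiff(minute, rev_timein, rev_timeout)) As total_minutes
From Ranked
Where rn <= 10
Group By emp_name;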
There is a question with a similar title here: https://dba.stackexchange.com/questions/271237/postgresql-refreshing-materialized-view-fails-with-no-space-left-on-device-an
However, the symptoms I'm experiencing seem different:
when refreshing with REFRESH MATERIALIZED VIEW mySchema.myView from within a Spring Boot application, PostgreSQL writes tmp files as long as there is space on the device, currently 128 GB.
SELECT * FROM pg_stat_activity shows that only this query is currently being executed.
when executing the very same refresh query via any SQL console (like pgAdmin), all is well. The view is refreshed within a few seconds, writing no more than a few hundred MB of tmp data.
We currently use PostgreSQL 13.3, but 11.11 showed the same behaviour.
JDBC Driver versions that were used:
42.2.14
42.2.18
So I have no idea where this is coming from. Since refreshing via a SQL console works as intended, I don't see optimizing the query as the first thing to do. Also, logging all SQL generated by the application shows nothing suspicious.
If it helps, here is the query for the view. It depends on other views (*_c), which all refresh fine:
WITH RECURSIVE chapters (code_id, chapter_id) AS (
SELECT Ziffer.tael_id AS code_id, Ziffer.tael_id as chapter_id
FROM tael_c AS Ziffer
INNER JOIN tael_c AS Tariff ON Ziffer.tael_idt = Tariff.tael_id AND Ziffer.tael_typ = 25 AND Ziffer.validfrom < COALESCE(Tariff.invalidfrom, DATE '9999-12-31') AND Tariff.validfrom < COALESCE(Ziffer.invalidfrom, DATE '9999-12-31')
UNION
SELECT chapters.code_id, tete_c.tael_idp as chapter_id
FROM chapters
INNER JOIN tete_c ON chapters.chapter_id = tete_c.tael_idc
INNER JOIN tael_c ON tete_c.tael_idc = tael_c.tael_id AND tael_c.tael_typ IN (3, 25)
)
SELECT Ziffer.tael_id,
GREATEST(Ziffer.wsid, Kapitel.wsid, Conn.wsid, InExTael.wsid, InEx.wsid) AS wsid,
GREATEST(Ziffer.validfrom, Kapitel.validfrom, Conn.validfrom, InExTael.validfrom, InEx.validfrom) AS validfrom,
LEAST(Ziffer.invalidfrom, Kapitel.invalidfrom, Conn.invalidfrom, InExTael.invalidfrom, InEx.invalidfrom) AS invalidfrom,
string_agg(InEx.txtx_txt, ',') FILTER (WHERE InExTael.tael_typ = 40) AS Incl,
string_agg(InEx.txtx_txt, ',') FILTER (WHERE InExTael.tael_typ = 46) AS Excl
FROM chapters
INNER JOIN tael_c AS Ziffer ON Ziffer.tael_id = chapters.code_id
INNER JOIN tael_c AS Kapitel ON chapters.chapter_id = Kapitel.tael_id AND Kapitel.validfrom < COALESCE(Ziffer.invalidfrom, DATE '9999-12-31') AND Ziffer.validfrom < COALESCE(Kapitel.invalidfrom, DATE '9999-12-31')
INNER JOIN tete_c AS Conn ON Kapitel.tael_id = Conn.tael_idp AND Conn.validfrom < COALESCE(LEAST(Ziffer.invalidfrom, Kapitel.invalidfrom), DATE '9999-12-31') AND GREATEST(Ziffer.validfrom, Kapitel.validfrom) < COALESCE(Conn.invalidfrom, DATE '9999-12-31')
INNER JOIN tael_c AS InExTael ON Conn.tael_idc = InExTael.tael_id AND InExTael.tael_typ IN (40, 46) AND InExTael.validfrom < COALESCE(LEAST(Ziffer.invalidfrom, Kapitel.invalidfrom, Conn.invalidfrom), DATE '9999-12-31') AND GREATEST(Ziffer.validfrom, Kapitel.validfrom, Conn.validfrom) < COALESCE(InExTael.invalidfrom, DATE '9999-12-31')
INNER JOIN txtx_c AS InEx ON InEx.tael_id = InExTael.tael_id AND InEx.validfrom < COALESCE(LEAST(Ziffer.invalidfrom, Kapitel.invalidfrom, Conn.invalidfrom, InExTael.invalidfrom), DATE '9999-12-31') AND GREATEST(Ziffer.validfrom, Kapitel.validfrom, Conn.validfrom, InExTael.validfrom) < COALESCE(InEx.invalidfrom, DATE '9999-12-31')
GROUP BY 1, 2, 3, 4
Any ideas on how I could debug this?
Update:
The refresh is part of a batch job within a Spring Boot application. It's part of a bigger project where everything is new and shiny. The application and PostgreSQL are to be deployed within OpenShift. Just to add some more context.
The same behavior occurs in a locally installed PostgreSQL (13.2, Windows), as well as in the containerized version (13.3, Linux).
The original developer put the refresh logic inside an @AfterStep listener. I have now moved it to a separate Spring Batch tasklet in order to have it in a separate transaction context (see In Spring Batch, can I insert data in a beforeStep implementation). Also, we separated the WITH part into its own view. Still the same: OK from the console, not OK from the app.
I will check to see if anything from @borchvm's link helps.
Also, I have an older version of the view which produces the same data but is much, much slower. So at least I still have something to fall back on. And the original developer, who knows the data structure like the back of his hand, will try to find another, more efficient way.
You do not have enough free temp space. I guess the tables have too much data. Try to rewrite your SELECT more efficiently.
Or, as a last resort, separate your code into different materialized views, then merge them into a single materialized view :)
You can check temp file usage per database like this:
psql> SELECT datname, temp_files AS "Temporary files", temp_bytes AS "Size of temporary files" FROM pg_stat_database;
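To narrow down where the temp data comes from, PostgreSQL can also log each temp file it creates; a minimal sketch, assuming superuser access (the thresholds are placeholder values):
-- Log every temp file larger than 10 MB (value in kB; 0 logs all of them).
ALTER SYSTEM SET log_temp_files = '10240';
-- Optionally cap temp usage per process so a runaway refresh fails fast.
ALTER SYSTEM SET temp_file_limit = '20GB';
SELECT pg_reload_conf();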
I'm working with my DBA to try to figure out a way to roll up all costs associated with a Work Order. Since any Work Order can have multiple child work orders (through multiple "generations") as well as related work orders (through the RELATEDRECORDS table), I need to be able to get the total of the ACTLABORCOST and ACTMATERIALCOST fields for all child and related work orders (as well as each of their child and related work orders). I've worked through a hierarchical query (using CONNECT BY PRIOR) to get all the children, grandchildren, etc., but I'm stuck on the related work orders. Since every work order can have a related work order with its own children and related work orders, I need an Oracle function that drills down through the children and the related work orders and their children and related work orders. Since I would think this is something fairly common, I'm hoping someone has done this and can share what they've done.
Another option would be a recursive query, as suggested by Francisco Sitja. Since my Oracle didn't allow two UNION ALLs, I had to join to the WOANCESTOR table in both child queries instead of dedicating a UNION ALL to the WO hierarchy. I was then able to use the one permitted UNION ALL for the RELATEDRECORD hierarchy. And it seems to run pretty quickly.
with mywos (wonum, parent, taskid, worktype, description, origrecordid, woclass, siteid) as (
-- normal WO hierarchy
select wo.wonum, wo.parent, wo.taskid, wo.worktype, wo.description, wo.origrecordid, wo.woclass, wo.siteid
from woancestor a
join workorder wo
on a.wonum = wo.wonum
and a.siteid = wo.siteid
where a.ancestor = 'MY-STARTING-WONUM'
union all
-- WO hierarchy associated via RELATEDRECORD
select wo.wonum, wo.parent, wo.taskid, wo.worktype, wo.description, wo.origrecordid, wo.woclass, wo.siteid
from mywos
join relatedrecord rr
on mywos.woclass = rr.class
and mywos.siteid = rr.siteid
and mywos.wonum = rr.recordkey
-- prevent cycle / going back up the hierarchy
and rr.relatetype not in ('ORIGINATOR')
join woancestor a
on rr.relatedrecsiteid = a.siteid
and rr.relatedreckey = a.ancestor
join workorder wo
on a.siteid = wo.siteid
and a.wonum = wo.wonum
)
select * from mywos
;
Have you considered the WOGRANDTOTAL object? Its description in MAXOBJECT is "Non-Persistent table to display WO grandtotals". There is a dialog in the Work Order Tracking application that you can get to from the Select Action / More Actions menu. Since you mentioned it repeatedly, I should note that WOGRANDTOTAL values do not include joins across RELATEDRECORDS to other work order hierarchies.
You can also save yourself the complication of CONNECT BY PRIOR by joining to WOANCESTOR, which is effectively a dump from a CONNECT BY PRIOR query. (There are other %ANCESTOR tables for other hierarchies.)
I think a recursive automation script would be the best way to do what you want, if you need the results in Maximo. If you need the total cost outside of Maximo, maybe a recursive function would work.
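If the recursive-function route is of interest, here is a rough PL/SQL sketch of the idea. Table and column names are taken from the question; skipping ORIGINATOR rows to avoid walking back up the hierarchy mirrors the query below and is otherwise an assumption, so treat cycle safety as unverified:
-- Sketch: sum ACTLABORCOST + ACTMATERIALCOST for a work order,
-- its children, and its related records, recursively.
CREATE OR REPLACE FUNCTION wo_total_cost(p_wonum IN VARCHAR2,
                                         p_siteid IN VARCHAR2)
  RETURN NUMBER
IS
  v_total NUMBER := 0;
BEGIN
  SELECT NVL(ACTLABORCOST, 0) + NVL(ACTMATERIALCOST, 0)
    INTO v_total
    FROM MAXIMO.WORKORDER
   WHERE WONUM = p_wonum AND SITEID = p_siteid;
  -- Children via the PARENT column.
  FOR c IN (SELECT WONUM, SITEID
              FROM MAXIMO.WORKORDER
             WHERE PARENT = p_wonum AND SITEID = p_siteid) LOOP
    v_total := v_total + wo_total_cost(c.WONUM, c.SITEID);
  END LOOP;
  -- Related work orders via RELATEDRECORD, skipping ORIGINATOR links.
  FOR r IN (SELECT RELATEDRECKEY, RELATEDRECSITEID
              FROM MAXIMO.RELATEDRECORD
             WHERE RECORDKEY = p_wonum AND SITEID = p_siteid
               AND RELATETYPE NOT IN ('ORIGINATOR')) LOOP
    v_total := v_total + wo_total_cost(r.RELATEDRECKEY, r.RELATEDRECSITEID);
  END LOOP;
  RETURN v_total;
END wo_total_cost;
/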
We finally figured out how to pull this off.
WITH WO (WONUM, PARENT) AS (
  (SELECT X.WONUM, X.PARENT
     FROM (SELECT R.RECORDKEY WONUM,
                  R.RELATEDRECKEY PARENT
             FROM MAXIMO.RELATEDRECORD R
            WHERE R.RELATEDRECKEY = '382418'
           UNION ALL
           SELECT W.WONUM, W.PARENT
             FROM MAXIMO.WORKORDER W
            START WITH W.PARENT = '382418'
          CONNECT BY PRIOR W.WONUM = W.PARENT) X)
  UNION ALL
  SELECT W.WONUM, W.PARENT
    FROM MAXIMO.WORKORDER W, WO
   WHERE W.WONUM = WO.PARENT
)
SELECT DISTINCT WONUM FROM WO;
This returns a list of all of the child and related work orders for a given work order.
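To get the rolled-up costs the question asks about, that list can be joined back to WORKORDER; a sketch reusing the WO CTE above (replace its final SELECT with something like this):
SELECT SUM(W.ACTLABORCOST) TOTAL_LABOR,
       SUM(W.ACTMATERIALCOST) TOTAL_MATERIAL
  FROM MAXIMO.WORKORDER W
 WHERE W.WONUM IN (SELECT WONUM FROM WO);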
I've been trying for a day and a half now to figure out how to combine the same measure, calculated in two different ways, into a single measure. I've broken it into parts, tried to UNION them, tried calculating with IF statements; I even thought I could UNION three summary tables to get the right output. I'm stuck using Excel 365 ProPlus (which I believe to be 2016, since Get & Transform and Power Pivot are built in).
The goal: I need to trick a Power Pivot table connected to the data model into displaying a) a running total with b) a total line with c) a flat, non-running Goal/Target line, all in the same measure. I've been able to do a and b; however, c is elusive.
I tried to calculate the data in stages; the first two steps are below. No matter what I try, I can't seem to get two filters to work at the same time:
Occbase:=CALCULATE([Occurrences],
FILTER('Final Dataset',
'Final Dataset'[MainFilter] = ""))
CumOcc:=CALCULATE([Occbase],
FILTER(ALL(DimDate[DateValue]),
DimDate[DateValue] <= MAX(DimDate[DateValue])))
These two measures do part 1: filter the dataset, then calculate a simple running total from that filter. I've tried to do it in a single step, but if the filter works, then the running total won't:
CombinedMakesRunningTotolStopWorking:=CALCULATE(SUM('Final Dataset'[xOccurrences]), FILTER(
ALL(Dimdate[DateValue]),
DimDate[DateValue] <= MAX(DimDate[DateValue]))
,FILTER(
'Final Dataset',
'Final Dataset'[MainFilter] = ""
|| 'Final Dataset'[Region] = "Ttl Occ MPR" //I couldn't figure out how to calculate on the fly
) //so I generated this total in PowerQuery
)
The SQL dev in me decided to try pulling all three of the above separately and then using UNION and SUMMARIZE by the date value and the region value, but received an even worse result...
TryHarder:=SUMX(UNION(
SUMMARIZE(FILTER('Final Dataset',
'Final Dataset'[Region] = "Ttl Occ MPR"),
[Region],
[DateValue],
"OccurrencesXXX", CALCULATE([Occbase],
FILTER(ALL(DimDate[DateValue]),
DimDate[DateValue] <= MAX(DimDate[DateValue]))))
,
SUMMARIZE(FILTER(ALL('Final Dataset'),
'Final Dataset'[Region] = "PR Occ Goal"),
[Region],
[DateValue],
"OccurrencesXXX", [Occurrences])
,
SUMMARIZE(FILTER('Final Dataset',
'Final Dataset'[MainFilter] = ""),
[Region],
[DateValue],
"OccurrencesXXX", CALCULATE([Occbase],
FILTER(ALL(DimDate[DateValue]),
DimDate[DateValue] <= MAX(DimDate[DateValue]))))
), [OccurrencesXXX])
With a comically defeating result (the screenshot isn't reproduced here).
I could give up and just generate a table for each chart in Power Query... but I would have to generate a ton of tables. I have to assume I'm doing something wrong with scope/context, and I have a feeling my C#/SQL mindset is putting me at a huge disadvantage in learning DAX. I'd like to understand what I'm doing wrong and learn the DAX pattern and terminology to fix it.
One way to do this is to set up a table that is not connected to the model, and then use that to determine which value you return. The example below is for a unit of measure (UOM). The idea is that the measure returned depends on the Unit of Measure field, so adding it to the legend part of the pivot chart would return unit, case, and ESU volume. It also means you could use a slicer to toggle which fields are returned in the chart.
Volume:=IF( HASONEVALUE( 'Unit of Measure'[UOM] ),
SWITCH(TRUE(),
VALUES('Unit of Measure'[Order]) = 1, [Unit Volume],
VALUES('Unit of Measure'[Order]) = 2, [Case Volume],
VALUES('Unit of Measure'[Order]) = 3, [ESU Volume]
),
[ESU Volume]
)
I'm trying to build a query that shows only non-unique duplicates. I've already built a query that shows all the records under consideration:
SELECT tbl_tm.title, lp_index.starttime, musicsound.archnr
FROM tbl_tm
INNER JOIN musicsound on tbl_tm.fk_tbl_tm_musicsound = musicsound.pk_musicsound
INNER JOIN lp_index ON musicsound.pk_musicsound = lp_index.fk_index_musicsound
INNER JOIN plan ON lp_index.fk_index_plan = plan.pk_plan
WHERE tbl_tm.FK_tbl_tm_title_type_music = '22' AND plan.airdate
BETWEEN to_date ('15-01-13') AND to_date('17-01-13')
GROUP BY tbl_tm.title, lp_index.starttime, musicsound.archnr
HAVING COUNT (tbl_tm.title) > 0;
The corresponding result set looks like this:
title                    starttime  archnr
============================================
Pumped up kicks          05:05:37   0616866
People Help The People   05:09:13   0620176
I can't dance            05:12:43   0600109
Locked Out Of Heaven     05:36:08   0620101
China in your hand       05:41:33   0600053
Locked Out Of Heaven     08:52:50   0620101
It gives me the music titles played within a certain timespan, along with their starting time and archive ID.
What I want to achieve is something like this:
title                    starttime  archnr
============================================
Locked Out Of Heaven     05:36:08   0620101
Locked Out Of Heaven     08:52:50   0620101
There would only be two rows left: both share the same title and archive number but differ in the time part. Increasing the HAVING COUNT threshold gives me a zero-row result set, since there aren't any entries that are exactly the same.
What I've found out so far is that the solution for this problem will most likely have a nested subquery, but I can't seem to get it done. Any help on this would be greatly appreciated.
Note: I'm on an Oracle 11g server. My user has read-only privileges. I use SQL Developer on my workstation.
You can try something like this:
SELECT title, starttime, archnr
FROM (
SELECT title, starttime, archnr, count(*) over (partition by title) cnt
FROM (your_query))
WHERE cnt > 1
Here is a sqlfiddle demo
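Since the desired rows share both the title and the archive number, a variant that partitions by both columns may match the intent more precisely (same your_query placeholder as above):
SELECT title, starttime, archnr
FROM (
  SELECT title, starttime, archnr,
         count(*) over (partition by title, archnr) cnt
  FROM (your_query))
WHERE cnt > 1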