I have a table like this:
+------------+
| EMP_CODE   |
+------------+
| CODEA      |
| CODEA1     |
| CODEA2     |
| CODEB      |
| CODEC      |
| CODEC2     |
| CODED      |
| CODED1     |
| CODEE      |
| CODEE1     |
| CODEE2     |
+------------+
My multi-row block in Forms:
What I want is: if I add, for example, the EMP_CODE CODEE, it should automatically add the related EMP_CODE(s) starting with CODEE (CODEE1, CODEE2) on the next rows, and so on. Like this:
Just tell me if I missed out something or if there's something unclear on my explanation. Thank you!
I'll try to explain one approach; perhaps there are better ways to achieve the desired functionality.
The below is pseudo-code; improvise and change it as per your requirement.
KEY-NEXT-ITEM
-- A single SELECT INTO would raise TOO_MANY_ROWS here, since several
-- codes share the prefix; instead, loop and stamp one code per row.
DECLARE
  CURSOR c_codes IS
    SELECT emp_code
      FROM your_table                                 -- your source table
     WHERE emp_code LIKE :your_block.emp_code || '%'
       AND emp_code <> :your_block.emp_code;
BEGIN
  FOR r IN c_codes LOOP
    NEXT_RECORD;                                      -- Forms built-in: move to the next row
    :your_block.emp_code := r.emp_code;
  END LOOP;
EXCEPTION
  WHEN OTHERS THEN
    NULL;  -- consider raising or handle as per your logic
END;
Having answered this, I believe the table design is not correct, which perhaps warrants a separate discussion to dissect.
I need a calculated column (because this will be used in a slicer) that returns the employee's most recent supervisor.
Data sample (table 'Performance'):
EMPLOYEE | DATE       | SUPERVISOR
---------+------------+-----------
Jim      | 2018-11-01 | Bob
Jim      | 2018-11-02 | Bob
Jim      | 2018-11-03 | Bill
Mike     | 2018-11-01 | Steve
Mike     | 2018-11-02 | Gary
Desired Output:
EMPLOYEE | DATE       | SUPERVISOR | LAST SUPER
---------+------------+------------+-----------
Jim      | 2018-11-01 | Bob        | Bill
Jim      | 2018-11-02 | Bob        | Bill
Jim      | 2018-11-03 | Bill       | Bill
Mike     | 2018-11-01 | Steve      | Gary
Mike     | 2018-11-02 | Gary       | Gary
I tried to use
LAST SUPER =
LOOKUPVALUE (
    Performance[SUPERVISOR],
    Performance[DATE], MAXX ( Performance, [DATE] )
)
but I get the error:
Calculation error in column 'Performance'[]: A table of multiple values was supplied where a single value was expected.
After doing more research, it appears this approach was doomed from the start. According to this, the search value cannot refer to any column in the same table being searched. However, even when I changed the search value to TODAY() or a static date as a test, I got the same error about multiple values. MAXX() is also returning the maximum date in the entire table, not just for that employee.
I wondered if it was a many to many issue, so I went back into Power Query, duplicated the original query, grouped by EMPLOYEE to get MAX(DATE), matched both fields against the original query to get the SUPERVISOR on MAX(DATE), and can treat this like a regular lookup table. While it does work, unsurprisingly the refresh is markedly slower.
I can't decide if I'm over-complicating, over-simplifying, or just wildly off base with either approach, but I would be grateful for any suggestions.
What I'd like to know is:
Is it possible to use a simple function like LOOKUPVALUES() to achieve the desired output?
If not, is there a more efficient approach than duplicating the query?
The reason LOOKUPVALUE is giving that particular error is that it's doing a lookup on the whole table, not just the rows associated with that particular employee. So if you have multiple supervisors matching the same maximal date, then you've got a problem.
If you want to use the LOOKUPVALUE function for this, I suggest the following:
Last Super =
VAR EmployeeRows =
    // only the rows for the current row's employee
    FILTER ( Performance, Performance[Employee] = EARLIER ( Performance[Employee] ) )
VAR MaxDate =
    // latest date among that employee's rows
    MAXX ( EmployeeRows, Performance[Date] )
RETURN
    LOOKUPVALUE (
        Performance[Supervisor],
        Performance[Date], MaxDate,
        Performance[Employee], Performance[Employee]
    )
There are two key differences here:
I'm taking the maximal date over only the rows for that particular employee (EmployeeRows).
I'm including Employee in the lookup function, so that it only matches rows for the appropriate employee.
For other possible solutions, please see this question:
Return top value ordered by another column
I have a problem producing an SSRS report that looks like this:
This is what the output from my stored procedure looks like:
Company Code | Company Name | Product Code | Product Name
ICE001       | Nestle       | ICE001a      | Drumstick Chocolate
ICE001       | Nestle       | ICE001b      | Drumstick KitKat
ICE001       | Nestle       | ICE001c      | Drumstick Chocolate
ICE002       | Walls        | ICE002a      | Cornetto Chocolate
ICE002       | Walls        | ICE002b      | Cornetto Latte
ICE002       | Walls        | ICE002c      | Cornetto La Liga
So how can I achieve this report structure in SSRS with the current stored procedure? Is it achievable?
Actually, yes you can. These are the steps:
Drag a table onto the SSRS report design surface.
Make it 2 columns and 1 row.
In the first column, select the field you want as the sub data, which in this case is Product Code.
In the 2nd column, put Product Name.
Now for the main data. Right-click on the first column and select:
Insert Row > Outside Group - Above
Put the main data field, which is Company Code, in the first row, first column.
Put the Company Name expression in the second column, first row of the table.
Design as you prefer and generate your report :)
Doubt it. But if you adjust your dataset as below and use a plain table report, it is achievable:
select distinct code, company from icecream
union
select productcode, productname from icecream
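If you take this route, you will probably also want each company's product rows to sort directly under their company row. One sketch (same assumed icecream table; the extra sort columns are mine and can be hidden in the report):
select distinct code as col1, company as col2, code as grp_key, 1 as sort_ord
  from icecream
union all
select productcode, productname, code, 2
  from icecream
 order by grp_key, sort_ord, col1;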
Every month I run a simple UPDATE statement on my Oracle database. But since Monday it takes very long. The table grows by 5 percent every month; right now there are 8 million records stored.
The Statement:
update /*+ parallel(destination_tab, 4) */ destination_tab dest
   set (full_name, state) =
       ( select /*+ parallel(source_tab, 4) */ dest.name, src.state
           from source_tab src
          where src.city = dest.city );
In reality there are 20 fields to update, not only two, but this way the problem is easier to describe.
explain plan:
------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT              |                 |  8517K|  3167M|  579M  (50)|999:59:59 |
|   1 |  UPDATE                       | DESTINATION_TAB |       |       |            |          |
|   2 |   PX COORDINATOR              |                 |       |       |            |          |
|   3 |    PX SEND QC (RANDOM)        | :TQ10000        |  8517K|  3167M|  6198   (1)| 00:01:27 |
|   4 |     PX BLOCK ITERATOR         |                 |  8517K|  3167M|  6198   (1)| 00:01:27 |
|   5 |      TABLE ACCESS FULL        | DESTINATION_TAB |  8517K|  3167M|  6198   (1)| 00:01:27 |
|   6 |   TABLE ACCESS BY INDEX ROWID | SOURCE_TAB      |     1 |    56 |     1   (0)| 00:00:01 |
|*  7 |    INDEX UNIQUE SCAN          | CITY_PK         |     1 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------
Could anyone explain to me how this can be? The plan looks very bad! Thank you very much.
You didn't say how long is too long. You are joining an 8 million row table. Not sure how many rows are in source_tab.
I noticed the execution plan indicates a full table scan of destination_tab. Is the city column on the destination_tab table indexed? If not, try adding an index. If it is, Oracle may be ignoring it because it knows it needs to return every value anyway and destination_tab is the driving table.
No matter how you optimize it, this will always degrade in performance as the tables grow because you are updating every row by fetching a value from the same table joined to another. That is, you are always doing N operations where N is the number of rows in destination_tab.
High-level questions/suggestions:
Do you need to update every row every time? Are only certain rows likely to have changed values? If so, can you somehow predict which rows you need to update and limit your updates to them? (See the sketch after the code below.)
Why are the hints there? If performance changes, I would experiment with dropping the hints. It's the optimizer's job to find the best plan for you. By using hints, you are telling the optimizer how to do its job. You'd better be right.
You are updating the full_name column on destination_tab to the name column of the same row. But you are obtaining the name column through a join to the table. It may be quicker to take that out of your select and use something like below. This is a guess. It may not matter.
update destination_tab dest
   set full_name = name,
       state = ( select src.state
                   from source_tab src
                  where src.city = dest.city );
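On the first point above: if only some rows actually drift, you can restrict the update to rows whose values would change. A sketch using just the state column (decode compares null-safely; with your 20 real columns you would extend the predicate accordingly):
update destination_tab dest
   set state = ( select src.state
                   from source_tab src
                  where src.city = dest.city )
 where exists ( select 1
                  from source_tab src
                 where src.city = dest.city
                   and decode(src.state, dest.state, 1, 0) = 0 );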
Try the following:
merge into destination_tab d
using source_tab s
   on (d.city = s.city)
when matched then
  update
     set d.state = s.state
   where decode(d.state, s.state, 1, 0) = 0;  -- null-safe: skip rows whose state is already current
If this is a data warehouse, I wouldn't do updates, especially not of every row in a large table. I'd probably create a materialized view combining the pieces from the various base tables, and do a full refresh when needed (non-atomic: truncate + insert append).
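A minimal sketch of that approach, reusing the table names from the question (the MV name and refresh options are assumptions):
create materialized view destination_mv
build immediate
refresh complete on demand
as
select dest.name as full_name,
       src.state,
       dest.city
  from destination_tab dest
  join source_tab src
    on src.city = dest.city;

-- non-atomic complete refresh: truncate + direct-path insert under the hood
begin
  dbms_mview.refresh('DESTINATION_MV', method => 'C', atomic_refresh => false);
end;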
Edit:
As for WHY the current update approach is taking much longer than usual, my guess is that in previous runs Oracle found a good number of the blocks needed for the update in the buffer cache, and lately Oracle has had to pull a lot from disk into the buffer first. You can look at consistent gets and db block gets (logical I/O) vs. physical reads (disk I/O).
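For example, you can snapshot the session's statistics before and after the update and compare (standard v$ views; requires SELECT privilege on them):
select sn.name, ms.value
  from v$mystat ms
  join v$statname sn
    on sn.statistic# = ms.statistic#
 where sn.name in ('consistent gets', 'db block gets', 'physical reads');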
I understand the comments about whether this makes sense in a data warehouse and so on. However, I have to do the update this way. The update is part of an ETL workflow. Every month I have to copy the complete 8 million records into the "destination" table. After this step I have to do the UPDATE that is causing problems.
I do not understand why the performance is so bad from one day to the next. Usually the update runs in 45 minutes; now it runs about 4 hours. But why? No sorting is necessary, so the famous reason "sorting on disk instead of in main memory" cannot apply. What is the problem in my case?
Could there be a performance difference between a normal UPDATE (the way I do it) and the MERGE update?
I am trying to achieve the following result (the first line is the header):
Level 1    | Level 2     | Level 3  | Level 4     | Person
Technicals | Development | Software | Team leader | Eric
Technicals | Development | Software | Team leader | Steven
Technicals | Development | Software | Team leader | Jana
How can I do so? I tried the following code. The first part, which creates the hierarchy, works fine. The second part, getting the data into the table shown above, is pretty painful.
SELECT * FROM ( /* level2 */
    SELECT * FROM ( /* level1 */
        SELECT * FROM arc.localnode /* create hierarchy */
        WHERE tree_id = 2408362
        CONNECT BY PRIOR node_id = parent_id
        START WITH parent_id IS NULL ) l1node
    LEFT JOIN names ON l1node.parent_id = names.name_id ) l2node
At this point, I am quite lost. A bit of guidance and suggestion would be a lot of help :-)
There are two tables. The first table has data like this:
NODE_ID | PREV_ID | NEXT_ID | PARENT_ID
1421864 |         | 3482917 | 1421768
3482981 | 3482917 | 1421866 | 1421768
3482911 | 3060402 | 3482913 | 1421768
3482917 | 1421864 | 3482981 | 1421768
This is complicated because the data is hierarchical. So obviously a PARENT_ID can be the NODE_ID of some other row. Similarly, a PARENT_ID can appear as a PREV_ID or NEXT_ID.
The names are in a separate table, keyed by NAME_ID. The NAME_ID in that table corresponds to the NODE_ID of the main hierarchy table.
You can use the STRAGG package mentioned on AskTom at the link below:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2196162600402
You can also refer to this thread on the Oracle forum:
https://forums.oracle.com/forums/thread.jspa?threadID=2258996
Kindly post CREATE and INSERT statements for your requirement so that we can test it and confirm.
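For what it's worth, if you are on 10g or later you could also skip string aggregation and flatten the hierarchy with SYS_CONNECT_BY_PATH. A sketch under those assumptions (the names.name column is a guess; people are assumed to sit at level 5):
select regexp_substr(path, '[^|]+', 1, 1) as level_1,
       regexp_substr(path, '[^|]+', 1, 2) as level_2,
       regexp_substr(path, '[^|]+', 1, 3) as level_3,
       regexp_substr(path, '[^|]+', 1, 4) as level_4,
       person
  from ( select sys_connect_by_path(n.name, '|') as path,
                n.name as person
           from arc.localnode l
           join names n
             on n.name_id = l.node_id
          where connect_by_isleaf = 1   -- keep only the Person rows
            and l.tree_id = 2408362
          start with l.parent_id is null
        connect by prior l.node_id = l.parent_id );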
Let's say we have the following table structures:
documents       documentStatusHistory      status
+---------+     +--------------------+     +----------+
| docId   |     | docStatusHistoryId |     | statusId |
+---------+     +--------------------+     +----------+
| ...     |     | docId              |     | ...      |
+---------+     | statusId           |     +----------+
                | ...                |
                +--------------------+
It may be obvious, but it's worth mentioning that the current status of a document is the last status history entry.
The system was slowly but surely degrading in performance and I suggested changing the above structure to:
documents          documentStatusHistory      status
+--------------+   +--------------------+     +----------+
| docId        |   | docStatusHistoryId |     | statusId |
+--------------+   +--------------------+     +----------+
| currStatusId |   | docId              |     | ...      |
| ...          |   | statusId           |     +----------+
+--------------+   | ...                |
                   +--------------------+
This way we'd have the current status of a document right where it should be.
Because of the way the legacy applications were built, I could not change their code to maintain the current status on the documents table.
In this case I had to make an exception to my rule of avoiding triggers at all costs, simply because I don't have access to the legacy applications' code.
I created a trigger that updates the current status of a document every time a new status is added to the status history, and it works like a charm.
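For reference, such a trigger can be as simple as this sketch (the trigger name is mine; column names are taken from the diagrams above; note it never reads documentStatusHistory itself, which is why it doesn't hit the mutating table problem):
create or replace trigger trgI_History
after insert on documentStatusHistory
for each row
begin
  -- the newly inserted history row is, by definition, the latest status
  update documentos
     set currStatusId = :new.statusId
   where docId = :new.docId;
end;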
However, in an obscure and rarely used situation there is a need to DELETE the last status history, instead of simply adding a new one. So, I created the following trigger:
create or replace trigger trgD_History
after delete on documentStatusHistory
for each row
declare
  currentStatusId number;
begin
  select statusId
    into currentStatusId
    from documentStatusHistory
   where docStatusHistoryId = ( select max(docStatusHistoryId)
                                  from documentStatusHistory
                                 where docId = :old.docId );

  update documentos
     set currStatusId = currentStatusId
   where docId = :old.docId;
end;
And that's where I got the infamous error ORA-04091.
I understand WHY I'm getting this error, even though I configured the trigger as an AFTER trigger.
The thing is that I can't see a way around this error. I have searched the net for a while and couldn't find anything helpful so far.
By the way, we're using Oracle 9i.
The standard workaround to a mutating table error is to create:
A package with a collection of keys (i.e. docIds in this case). A temporary table would also work.
A before statement trigger that initializes the collection.
A row-level trigger that populates the collection with each docId that has changed.
An after statement trigger that iterates over the collection and does the actual UPDATE.
So something like
CREATE OR REPLACE PACKAGE pkg_document_status
AS
    -- collection of the docIds affected by the current statement
    TYPE typ_changed_docids IS TABLE OF documentos.docId%TYPE;
    changed_docids typ_changed_docids := typ_changed_docids();
    <<other methods>>
END;

CREATE OR REPLACE TRIGGER trg_init_collection
    BEFORE DELETE ON documentStatusHistory
BEGIN
    pkg_document_status.changed_docids.delete();
END;

CREATE OR REPLACE TRIGGER trg_populate_collection
    BEFORE DELETE ON documentStatusHistory
    FOR EACH ROW
BEGIN
    pkg_document_status.changed_docids.extend();
    pkg_document_status.changed_docids( pkg_document_status.changed_docids.count() ) := :old.docId;
END;

CREATE OR REPLACE TRIGGER trg_use_collection
    AFTER DELETE ON documentStatusHistory
BEGIN
    FOR i IN 1 .. pkg_document_status.changed_docids.count()
    LOOP
        <<fix the current status for pkg_document_status.changed_docids(i) >>
    END LOOP;
    pkg_document_status.changed_docids.delete();
END;
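The body of that last trigger might be filled in roughly like this (a sketch using the names from the question; note that if a document's last remaining history row was deleted, the subquery returns NULL, which you may want to handle differently):
CREATE OR REPLACE TRIGGER trg_use_collection
    AFTER DELETE ON documentStatusHistory
BEGIN
    FOR i IN 1 .. pkg_document_status.changed_docids.count()
    LOOP
        -- recompute the current status from the surviving history rows;
        -- a statement-level trigger may query the table without mutating
        UPDATE documentos d
           SET d.currStatusId =
               ( SELECT h.statusId
                   FROM documentStatusHistory h
                  WHERE h.docStatusHistoryId =
                        ( SELECT MAX(h2.docStatusHistoryId)
                            FROM documentStatusHistory h2
                           WHERE h2.docId = pkg_document_status.changed_docids(i) ) )
         WHERE d.docId = pkg_document_status.changed_docids(i);
    END LOOP;
    pkg_document_status.changed_docids.delete();
END;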
Seems to be a duplicate of this question.
Check out Tom Kyte's take on that.