Hi, I have the following table and I want to update rows within the same group: if the row with count zero has a value, I want to copy that value into any row in the group whose Name is null; if the row already has a value, no change should be made.
Table 1 before update:
Group | Count | Name  | lastName |
Grp1  | 0     | tyler | calark   |
Grp1  | 1     | lara  | jason    |
Grp1  | 2     | null  | spark    |
Table 1 after update:
Group | Count | Name  | lastName |
Grp1  | 0     | tyler | calark   |
Grp1  | 1     | lara  | jason    |
Grp1  | 2     | tyler | spark    |
This is how I understood it.
Before:
SQL> select * From test;
CGROUP         CCOUNT NAME       LAST_NAME
---------- ---------- ---------- ----------
D1                  0 tyler      clark
D1                  1 jason      lara
D1                  2            spark
Update:
SQL> update test a set
2 a.name = (select b.name
3 from test b
4 where b.cgroup = a.cgroup
5 and b.ccount = 0
6 )
7 where a.name is null;
1 row updated.
After:
SQL> select * From test;
CGROUP         CCOUNT NAME       LAST_NAME
---------- ---------- ---------- ----------
D1                  0 tyler      clark
D1                  1 jason      lara
D1                  2 tyler      spark
SQL>
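The same update can also be written as a single MERGE (a sketch only, not part of the original answer; it assumes ccount = 0 occurs at most once per group, as in the test data above):

-- Hedged sketch: use the count-0 row of each group as the MERGE source.
merge into test t
using (select cgroup, name
       from   test
       where  ccount = 0) src
on    (t.cgroup = src.cgroup)
when matched then
  update set t.name = src.name
  where t.name is null;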
When I look at tkprof output, I see a "starts=1" entry in the execution plan area, between the time and cost data. What does that mean?
Basically, this part:
(cr=15 pr=0 pw=0 time=514 us starts=1 cost=3 size=7383 card=107)
********************************************************************************
SQL ID: 7jk33n4f4mpy9 Plan Hash: 1445457117
select *
from
hr.employees
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.04       0.03          0        351          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        9      0.00       0.00          0         15          0        107
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total       11      0.04       0.03          0        366          0        107
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS
Number of plan statistics captured: 1
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
107 107 107 TABLE ACCESS FULL EMPLOYEES (cr=15 pr=0 pw=0 time=514 us starts=1 cost=3 size=7383 card=107)
********************************************************************************
STARTS is the number of times that line in the plan was started. It is easier to see when using (say) a join. Here's an example:
SQL> select /*+ leading(d) use_nl(e) gather_plan_statistics */
2 e.ename, d.dname
3 from scott.dept d,
4 scott.emp e
5 where e.deptno = d.deptno
6 and e.sal > 1000;
ENAME      DNAME
---------- --------------
CLARK      ACCOUNTING
KING       ACCOUNTING
MILLER     ACCOUNTING
JONES      RESEARCH
SCOTT      RESEARCH
ADAMS      RESEARCH
FORD       RESEARCH
ALLEN      SALES
WARD       SALES
MARTIN     SALES
BLAKE      SALES
TURNER     SALES
12 rows selected.
SQL>
SQL> select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------
SQL_ID 37nwzk5qypud3, child number 0
-------------------------------------
select /*+ leading(d) use_nl(e) gather_plan_statistics */
e.ename, d.dname from scott.dept d, scott.emp e where
e.deptno = d.deptno and e.sal > 1000
Plan hash value: 4192419542
-------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 12 |00:00:00.01 | 32 |
| 1 | NESTED LOOPS | | 1 | 13 | 12 |00:00:00.01 | 32 |
| 2 | TABLE ACCESS FULL| DEPT | 1 | 4 | 4 |00:00:00.01 | 7 |
|* 3 | TABLE ACCESS FULL| EMP | 4 | 3 | 12 |00:00:00.01 | 25 |
-------------------------------------------------------------------------------------
We scanned DEPT and got 4 rows. For each of those 4 rows, we then did a full scan of EMP, hence line 3 started 4 times.
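For contrast (a sketch, not part of the trace above): with a hash join both tables are scanned once, so the EMP line would report Starts = 1. Something along these lines would show it:

-- Hedged sketch: same query forced to a hash join; the EMP full scan starts only once.
select /*+ leading(d) use_hash(e) gather_plan_statistics */
       e.ename, d.dname
from   scott.dept d,
       scott.emp  e
where  e.deptno = d.deptno
and    e.sal    > 1000;

select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));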
Need help with an Oracle SELECT statement.
I have a table like this with used days. A user can have 11 days.
+----+----------+-----+-----------+
| ID | NAME | USED| DATE |
+----+----------+-----+-----------+
| 1 | John | 1 |01/01/2018 |
| 2 | John | 2 |01/03/2018 |
| 3 | John | 2 |01/05/2018 |
+----+----------+-----+-----------+
So in a SELECT query I want to get the remaining days, like this:
+----+----------+------+------+------+
| ID | NAME     | DAYS | USED | LEFT |
+----+----------+------+------+------+
| 1  | John     | 11   | 1    | 10   |
| 2  | John     | 10   | 2    | 8    |
| 3  | John     | 8    | 2    | 6    |
+----+----------+------+------+------+
Any help on how to achieve this result?
The main thing you need is the analytic version of sum. Also, try not to use Oracle keywords (like date) as column names.
-- sample data
with ex as (select 1 as id, 'John' as name, 1 as used, to_date('01/01/2018','mm/dd/yyyy') as date1 from dual
            union select 2 as id, 'John' as name, 2 as used, to_date('01/03/2018','mm/dd/yyyy') from dual
            union select 3 as id, 'John' as name, 2 as used, to_date('01/05/2018','mm/dd/yyyy') from dual)
-- main query
select id,
       name,
       11 - sum(used) over (partition by name order by date1) + used as days,
       used,
       11 - sum(used) over (partition by name order by date1) as left
from ex;
Output:
        ID NAME       DAYS       USED       LEFT
---------- ---- ---------- ---------- ----------
         1 John         11          1         10
         2 John         10          2          8
         3 John          8          2          6
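A variation on the same analytic SUM (a sketch on the same sample data; I've aliased the last column left_days instead of left, in the spirit of the keyword advice above): the DAYS column can be computed from the running total of the preceding rows only, instead of adding used back in.

-- Hedged sketch: window over the preceding rows only for the "days" column.
with ex as (select 1 as id, 'John' as name, 1 as used, to_date('01/01/2018','mm/dd/yyyy') as date1 from dual
            union all select 2, 'John', 2, to_date('01/03/2018','mm/dd/yyyy') from dual
            union all select 3, 'John', 2, to_date('01/05/2018','mm/dd/yyyy') from dual)
select id,
       name,
       11 - nvl(sum(used) over (partition by name
                                order by date1
                                rows between unbounded preceding and 1 preceding), 0) as days,
       used,
       11 - sum(used) over (partition by name order by date1) as left_days
from ex;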
I have this situation in Oracle while trying to do a select from 2 tables:
Table1
-------------
Col1 Col2
-------------
1 abc
2 aa
3 lab
4 mm
5 nn
6 kk
7 pp
Table 2
-----------
Col1 Col2
----------
4 xxx
7 yyy
(null) zzz
I want all rows including (null) zzz of Table2 in my select.
What I am doing is -
SELECT column1, column2,….FROM Table1, Table2 Where table1.col1 = Table2.col1(+)
But it does not give me the row with NULL value in Table2.
I tried putting OR conditions with left outer join on col1 but it does not work.
SELECT column1, column2,…. FROM Table1, Table2 Where table1.col1 = Table2.col1(+) OR Table2.Col1 IS NULL
This is expected output:
Col1 Table1.Col2 Table2.Col2
--------------------------------
1 abc (null)
2 aa (null)
3 lab (null)
4 mm xxx
5 nn (null)
6 kk (null)
7 pp yyy
(null) (null) zzz
It seems what you want is a FULL OUTER JOIN.
SQL Fiddle
Query 1:
SELECT
t1.COl1
,t1.col2
,t2.col2
FROM Table1 t1
FULL OUTER JOIN Table2 t2 ON
( t1.col1 = t2.col1 )
ORDER BY Col1
Results:
| COL1 | COL2 | COL2 |
|--------|--------|--------|
| 1 | abc | (null) |
| 2 | aa | (null) |
| 3 | lab | (null) |
| 4 | mm | xxx |
| 5 | nn | (null) |
| 6 | kk | (null) |
| 7 | pp | yyy |
| (null) | (null) | zzz |
One important piece of advice: get rid of the old (+) outer join notation and use the ANSI JOIN ... ON syntax.
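To see why that advice matters (a sketch only, not from the query above): with the old (+) notation a full outer join has to be emulated, for example with a one-sided outer join plus the unmatched rows of the other table:

-- Hedged sketch: emulating FULL OUTER JOIN in the old (+) style.
SELECT t1.col1, t1.col2 AS t1_col2, t2.col2 AS t2_col2
FROM   Table1 t1, Table2 t2
WHERE  t1.col1 = t2.col1(+)
UNION ALL
SELECT t2.col1, NULL, t2.col2
FROM   Table2 t2
WHERE  NOT EXISTS (SELECT 1 FROM Table1 t1 WHERE t1.col1 = t2.col1);

The second branch picks up the (null) zzz row as well, because a NULL col1 never matches anything in Table1.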
I am working on a trigger which needs INSERT INTO with WHERE logic.
I have three tables.
Absence_table:
-----------------------------
| user_id | absence_reason |
-----------------------------
| 1234567 | 40 |
| 1234567 | 50 |
| 1213 | 40 |
| 1314 | 20 |
| 1111 | 20 |
-----------------------------
company_table:
-----------------------------
| user_id | company_id |
-----------------------------
| 1234567 | 10201 |
| 1213 | 10200 |
| 1314 | 10202 |
| 1111 | 10200 |
-----------------------------
employment_table:
--------------------------------------
| user_id | emp_type | emp_no |
--------------------------------------
| 1234567 | Int | 1 |
| 1213 | Int | 2 |
| 1314 | Int | 3 |
| 1111 | Ext | 4 |
--------------------------------------
and finally I have the table out, which should receive data only for users who have emp_type = Int in employment_table and company_id = 10200 in company_table.
out:
--------------------------------
| employee_id | absence_reason |
--------------------------------
| 1 | 40 |
| 1 | 50 |
| 2 | 40 |
| 3 | 20 |
--------------------------------
Here is my trigger:
CREATE OR REPLACE TRIGGER "INOUT"."ABSENCE_TRIGGER"
AFTER INSERT ON absence_table
FOR EACH ROW
DECLARE
BEGIN
CASE
WHEN INSERTING THEN
INSERT INTO out (absence_reason, employee_id)
VALUES (:NEW.absence_reason, (SELECT employee_id FROM employment_table WHERE user_id = :NEW.user_id)
WHERE user_id IN
(SELECT user_id FROM employment_table WHERE employment_type = 'INT')
AND user_id IN
(SELECT user_id FROM company_table WHERE company_id = '10200');
END CASE;
END absence_trigger;
It is obviously not working and I can't figure out what I should do to make it work. Any suggestions?
Change the insert to this:
insert into out (absence_reason, employee_id)
select :NEW.absence_reason, e.emp_no
from employment_table e
inner join company_table c
on c.user_id = e.user_id
where e.user_id = :NEW.user_id
and e.emp_type = 'INT'
and c.company_id = '10200';
That should work. Note that you had emp_no in your sample structure yet employee_id in the trigger insert; I've assumed emp_no is right. The same goes for emp_type vs employment_type.
Finally, in your trigger you have company_id in quotes. Is it really a VARCHAR2? If so, OK; if not, don't use the quotes.
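Putting it together, the whole trigger would then look something like this (a sketch that simply wraps the INSERT ... SELECT above in the trigger shell from the question; adjust the literals if the case or data types differ in your real tables):

CREATE OR REPLACE TRIGGER "INOUT"."ABSENCE_TRIGGER"
AFTER INSERT ON absence_table
FOR EACH ROW
BEGIN
  -- Insert only when the new user is an internal employee of company 10200.
  INSERT INTO out (absence_reason, employee_id)
  SELECT :NEW.absence_reason, e.emp_no
  FROM   employment_table e
         INNER JOIN company_table c
                 ON c.user_id = e.user_id
  WHERE  e.user_id    = :NEW.user_id
  AND    e.emp_type   = 'INT'       -- sample data shows 'Int'; match whatever you actually store
  AND    c.company_id = '10200';    -- drop the quotes if company_id is a number
END absence_trigger;
/

The CASE WHEN INSERTING block isn't needed, because the trigger only fires AFTER INSERT anyway.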
The parentheses are not balanced: the one opened for VALUES is never closed. That is the cause of your specific error, but @DazzaL's answer looks like the correct solution.
I have the following two tables:
TableOne
========
Id1|ColA1|ColB1|ColC1|ColD1|ColE1
--------------------------------
1| AFoo|BFoo |CFoo | DFoo| EFoo
2| AFoo|BBar |CFoo | DFoo| EFoo
TableTwo
========
Id2|ColA2|ColB2|ColC2
---------------------
11| 1 |ABC |NOP |
12| 1 |ABC |QRS |
13| 1 |DEF |TUV |
14| 1 |DEF |WXY |
15| 1 |DEF |FGH |
16| 2 |ABC |NOP |
I run the following query:
select t1.*, t2.*
from TableOne t1
inner join TableTwo t2 on t2.ColA2=t1.Id1
where t1.ColA1='AFoo'
and get the following result:
Result
======
Id1|ColA1|ColB1|ColC1|ColD1|ColE1|Id2|ColA2|ColB2|ColC2
-------------------------------------------------------
1| AFoo|BFoo |CFoo | DFoo| EFoo| 11| 1 | ABC | NOP
1| AFoo|BFoo |CFoo | DFoo| EFoo| 12| 1 | ABC | QRS
1| AFoo|BFoo |CFoo | DFoo| EFoo| 13| 1 | DEF | TUV
1| AFoo|BFoo |CFoo | DFoo| EFoo| 14| 1 | DEF | WXY
1| AFoo|BFoo |CFoo | DFoo| EFoo| 15| 1 | DEF | FGH
2| AFoo|BBar |CFoo | DFoo| EFoo| 16| 2 | ABC | NOP
What I really want returned is:
Desired Result
======
Id1|MaxDup
----------------------------------------
1| 3 (This is because there are 3 DEF records)
2| 1 (This is because there is 1 ABC record)
So, I am trying to find, for each TableOne row, the maximum number of occurrences of any single ColB2 value. In the example above, the Id1 of 1 has two ABC records and three DEF records associated with it. Since there are more DEF records than ABC records, I want the count of the DEF records returned.
Can anybody provide a working example that can demonstrate this?
Try this one (I haven't tested it):
with t2 as (
  select ColA2, ColB2, count(*) cnt
  from TableTwo
  group by ColA2, ColB2
)
select t1.Id1,
       (select max(cnt)
        from t2
        where t2.ColA2 = t1.Id1) as MaxDup
from TableOne t1
Here is a result based on your test data.
SQL> select
cola2
, cnt as maxDup
from (
select
cola2
, colb2
, cnt
, rank() over (partition by cola2 order by cnt desc ) cnt_rnk
from (
select cola2
, colb2
, count(*) as cnt
from TableTwo
group by cola2, colb2
)
)
where cnt_rnk = 1
/
COLA     MAXDUP
---- ----------
   1          3
   2          1
SQL>
The innermost query counts the duplicates, the middle query ranks those counts, and the outermost query keeps the combination with the highest count.
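If all you need is the Id1 and MaxDup columns of the desired result, the same idea collapses to a plain MAX over the grouped counts (a sketch along the same lines, untested against your real tables):

-- Hedged sketch: take the largest per-(ColA2, ColB2) count for each Id1 directly.
select cola2 as id1,
       max(cnt) as maxdup
from (
       select cola2, colb2, count(*) as cnt
       from   TableTwo
       group  by cola2, colb2
     )
group by cola2
order by cola2;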
Maybe this one?
SELECT t2.ColA2 id, count(t2.ColB2) MaxDup
FROM TableTwo t2
WHERE EXISTS(
      SELECT * FROM TableOne t1
      WHERE t1.Id1 = t2.ColA2
      AND t1.ColA1 = 'AFoo')
GROUP BY t2.ColA2, t2.ColB2
ORDER BY MaxDup desc;
It's SQL Server code, but I think it can be safely transferred to Oracle:
select id1,cola1,colb1,colc1,cold1,cole1,max(dup) as dup2 from (
select t1.id1, t1.cola1, t1.colb1, t1.colc1,t1.cold1,t1.cole1, t2.colb2,count(*) as dup
from t1
inner join t2 on t2.cola2=t1.Id1
where t1.cola1='Afoo'
group by t1.id1, t1.cola1, t1.colb1, t1.colc1,t1.cold1,t1.cole1, t2.colb2)
tab
group by id1,cola1,colb1,colc1,cold1,cole1
It works on SQL Server, and by wrapping it in a subquery you can extract only id1 and dup2. I tried expanding your dataset and it still works.
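For the Oracle side it might look like this (a sketch only: I've swapped in the question's table names TableOne and TableTwo and matched the case of the 'AFoo' literal, since Oracle string comparisons are case sensitive):

-- Hedged Oracle adaptation of the SQL Server query above.
select id1, cola1, colb1, colc1, cold1, cole1, max(dup) as dup2
from (
       select t1.id1, t1.cola1, t1.colb1, t1.colc1, t1.cold1, t1.cole1,
              t2.colb2, count(*) as dup
       from   TableOne t1
              inner join TableTwo t2 on t2.cola2 = t1.id1
       where  t1.cola1 = 'AFoo'
       group  by t1.id1, t1.cola1, t1.colb1, t1.colc1, t1.cold1, t1.cole1, t2.colb2
     ) tab
group by id1, cola1, colb1, colc1, cold1, cole1;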