I have an Oracle database, and I want to create a table with two columns: one containing an id and the other containing incremented dates, one per row.
I want to specify the limit dates in my PL/SQL code, and have the code generate the rows between the two limits (from and to).
This is an example of the output I want:
+-----+---------------------+
| id  | dates               |
+-----+---------------------+
|   1 | 01/02/2011 04:00:00 |
|   2 | 01/02/2011 05:00:00 |
|   3 | 01/02/2011 06:00:00 |
|   4 | 01/02/2011 07:00:00 |
|   5 | 01/02/2011 08:00:00 |
| ... | ...                 |
| 334 | 05/03/2011 23:00:00 |
+-----+---------------------+
You haven't exactly deluged us with details, but this is the sort of construct you want:
select level as id
, &&start_date + ((level-1) * (1/24)) as dates
from dual
connect by level <= ((&&end_date - &&start_date)*24)
/
This assumes your input values are whole days. You will need to adjust the maths if your start or end date contains a time component.
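For example, a minimal sketch assuming the substitution variables are passed as strings carrying a time portion (the format mask here is an assumption):
select level as id
     , to_date('&&start_date', 'DD/MM/YYYY HH24:MI:SS') + ((level-1) * (1/24)) as dates
from   dual
connect by level <= ( to_date('&&end_date',   'DD/MM/YYYY HH24:MI:SS')
                    - to_date('&&start_date', 'DD/MM/YYYY HH24:MI:SS') ) * 24 + 1
/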
You would need to start with a date baseline:
vBaselineDate := TRUNC(SYSDATE);
OR
vBaselineDate := TO_DATE('28-03-2013 12:00:00', 'DD-MM-YYYY HH24:MI:SS');
Then increment the baseline by adding fractions of a day depending on how large you want the range, eg: 1 minute, 1 hour etc.
FOR i IN 1..334 LOOP
  INSERT INTO mytable
    (id, dates)
  VALUES
    (i, (vBaselineDate + i/24));
END LOOP;
COMMIT;
1/24 = 1 hour.
1/1440 = 1 minute.
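Putting it together, a minimal runnable sketch (assuming a target table mytable(id NUMBER, dates DATE) already exists):
DECLARE
  vBaselineDate DATE := TO_DATE('28-03-2013 12:00:00', 'DD-MM-YYYY HH24:MI:SS');
BEGIN
  FOR i IN 1..334 LOOP
    INSERT INTO mytable (id, dates)
    VALUES (i, vBaselineDate + i/24);  -- i/24 advances one hour per row
  END LOOP;
  COMMIT;
END;
/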
Hope this helps.
I have a 3 node cluster. There is 1 database and 1 table. I have not created a projection. If I load 100 rows in the table using copy command then:
How many projections would be created? I suspect only 1 super projection, am I correct?
If I am using segmentation, would that distribute the data evenly (~33 rows) per node? Does that mean I now have 3 Read Optimised Storage (ROS) containers, one per node, and the projection has 3 ROSes?
If I use a KSafety value of 1, then a copy of each ROS (the buddy) would be stored on another node? So do I have 6 ROSes now, each containing ~33 rows?
Well, let's play the scenario ...
You will see that you get a projection and its identical buddy projection ...
And you can query the catalogue to count the rows and identify the projections ..
-- load a file with 100 random generated rows into table example;
-- generate the rows from within Vertica, and export to file
-- then create a new table and see what the projections look like
CREATE TABLE rows100 AS
SELECT
(ARRAY['Ann','Lucy','Mary','Bob','Matt'])[RANDOMINT(5)] AS fname,
(ARRAY['Lee','Ross','Smith','Davis'])[RANDOMINT(4)] AS lname,
'2001-01-01'::DATE + RANDOMINT(365*10) AS hdate,
(10000 + RANDOM()*9000)::NUMERIC(7,2) AS salary
FROM (
SELECT tm FROM (
SELECT now() + INTERVAL ' 1 second' AS t UNION ALL
SELECT now() + INTERVAL '100 seconds' AS t -- Creates 100 rows
) x TIMESERIES tm AS '1 second' OVER(ORDER BY t)
) y
;
-- set field separator to vertical bar (the default, actually...)
\pset fieldsep '|'
-- toggle to tuples only .. no column names and no row count
\tuples_only
-- spool to example.bsv - in bar-separated-value format
\o example.bsv
SELECT * FROM rows100;
-- spool to file off - closes output file
\o
-- create a table without bothering with projections matching the test data
DROP TABLE IF EXISTS example;
CREATE TABLE example LIKE rows100;
-- load the new table ...
COPY example FROM LOCAL 'example.bsv';
-- check the nodes ..
SELECT node_name FROM nodes;
-- out node_name
-- out ----------------
-- out v_sbx_node0001
-- out v_sbx_node0002
-- out v_sbx_node0003
SELECT
node_name
, projection_schema
, anchor_table_name
, projection_name
, row_count
FROM v_monitor.projection_storage
WHERE anchor_table_name='example'
ORDER BY projection_name, node_name
;
-- out node_name | projection_schema | anchor_table_name | projection_name | row_count
-- out ----------------+-------------------+-------------------+-----------------+-----------
-- out v_sbx_node0001 | public | example | example_b0 | 38
-- out v_sbx_node0002 | public | example | example_b0 | 32
-- out v_sbx_node0003 | public | example | example_b0 | 30
-- out v_sbx_node0001 | public | example | example_b1 | 30
-- out v_sbx_node0002 | public | example | example_b1 | 38
-- out v_sbx_node0003 | public | example | example_b1 | 32
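Reading that output against the three questions: the single COPY created one superprojection (example_b0) and, with K-safety 1, an identical buddy (example_b1); segmentation spread the 100 rows roughly evenly (38/32/30, not exactly 33/33/33) across the three nodes; and with the buddy included there are six row containers in total, two per node. A sketch of a catalog query to confirm the pairing (column names from v_catalog.projections):
SELECT projection_name, is_segmented, is_super_projection
FROM   v_catalog.projections
WHERE  anchor_table_name = 'example';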
I am trying to display data as such:
In our database we have events (with unique ID) and then a start date. The events do not overlap, and each one starts on the date the last one ended. However we don't have 'end date' in the database.
I have to feed the data into another system so that it shows event ID, start date, and end date (which is just the next start date).
I want to avoid creating a custom view as that's really frowned upon here for this database. So I'm wondering if there's a good way to do this in a query.
Essentially it would be:
EventA | Date1 | Date2
EventB | Date2 | Date3
EventC | Date3 | Date4
The events are planned years in advance, and I only need the next few months pulled for the query, so there is no worry about running out of 'next event start dates'. In case it matters, this query will be part of a webservice call.
The basic pseudo code for event and date would be:
select Event.ID, Event.StartDate
from Event
where Event.StartDate > sysdate and Event.StartDate < sysdate+90
Essentially I want to take the next row's Event.StartDate and make it the current row's Event.EndDate
Use the LEAD analytic function:
Oracle Setup:
A table with 10 rows:
CREATE TABLE Event ( ID, StartDate ) AS
SELECT LEVEL, TRUNC( SYSDATE ) + LEVEL
FROM DUAL
CONNECT BY LEVEL <= 10;
Query:
select ID,
StartDate,
LEAD( StartDate ) OVER ( ORDER BY StartDate ) AS EndDate
from Event
where StartDate > sysdate and StartDate < sysdate+90
Output:
ID | STARTDATE | ENDDATE
-: | :-------- | :--------
1 | 22-JUN-19 | 23-JUN-19
2 | 23-JUN-19 | 24-JUN-19
3 | 24-JUN-19 | 25-JUN-19
4 | 25-JUN-19 | 26-JUN-19
5 | 26-JUN-19 | 27-JUN-19
6 | 27-JUN-19 | 28-JUN-19
7 | 28-JUN-19 | 29-JUN-19
8 | 29-JUN-19 | 30-JUN-19
9 | 30-JUN-19 | 01-JUL-19
10 | 01-JUL-19 | null
I have attached tables, product and date.
Let's say my product table has data up to yesterday, i.e. 05/31/2018.
I am trying to populate a season table where, up to 5/31/2018, I can do the calculation value = (value on the same day last year / value on the previous day last year) with Ch(a) and P(pen); however, the data set only runs to 5/31/2018. My aim is to get the data/calculation for 06/01/2018 through 12/31/2018 as well. How do I get rows for these future dates, given that the prod table holds the data needed to calculate them?
I appreciate any help.
Thank you!
You can generate a series of dates using CONNECT BY subquery like this one:
SELECT Start_date + level - 1 as my_date
FROM (
SELECT date '2018-01-01' as Start_date FROM dual
)
CONNECT BY Start_date + level - 1 <= date '2018-01-05'
Demo: http://www.sqlfiddle.com/#!4/072359/1
| MY_DATE |
|----------------------|
| 2018-01-01T00:00:00Z |
| 2018-01-02T00:00:00Z |
| 2018-01-03T00:00:00Z |
| 2018-01-04T00:00:00Z |
| 2018-01-05T00:00:00Z |
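To extend the calculation to future dates, you can outer-join the generated calendar to your product table; a minimal sketch, assuming a prod table with columns pdate and value (these names are illustrative, not from the question):
SELECT d.my_date,
       p.value,
       prev.value AS value_same_day_last_year  -- available even where p.value is null
FROM   ( SELECT DATE '2018-06-01' + level - 1 AS my_date
         FROM   dual
         CONNECT BY DATE '2018-06-01' + level - 1 <= DATE '2018-12-31' ) d
LEFT JOIN prod p    ON p.pdate = d.my_date
LEFT JOIN prod prev ON prev.pdate = ADD_MONTHS(d.my_date, -12)
ORDER BY d.my_date;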
It's difficult to explain the question well in the title.
I am inserting 6 values from (or based on values in) one row.
I also need to insert a value from a second row where:
The values in one column (ID) must be equal
The values in column (CODE) in the main source row must be IN (100,200), whereas the other row must have a value of 300 or 400
The value in another column (OBJID) in the secondary row must be the lowest value above that in the primary row.
Source Table looks like:
OBJID | CODE | ENTRY_TIME | INFO | ID | USER
---------------------------------------------
1 | 100 | x timestamp| .... | 10 | X
2 | 100 | y timestamp| .... | 11 | Y
3 | 300 | z timestamp| .... | 10 | F
4 | 100 | h timestamp| .... | 10 | X
5 | 300 | g timestamp| .... | 10 | G
So to provide an example..
In my second table I want to insert OBJID, OBJID2, CODE, ENTRY_TIME, substr(INFO(...)), ID, USER
i.e. from my example a line inserted in the second table would look like:
OBJID | OBJID2 | CODE | ENTRY_TIME | INFO | ID | USER
-----------------------------------------------------------
1 | 3 | 100 | x timestamp| substring | 10 | X
4 | 5 | 100 | h timestamp| substring2| 10 | X
My insert for everything that just comes from one row works fine.
INSERT INTO TABLE2
  (ID, OBJID, INFO, USER, ENTRY_TIME)
SELECT ID, OBJID,
       DECODE(CODE, 100, SUBSTR(INFO, 12, LENGTH(INFO) - 27),
                    600, 'CREATE') INFO,
       USER, ENTRY_TIME
FROM   TABLE1
WHERE  CODE IN (100, 200);
I'm aware that I'll need to use an alias on TABLE1, but I don't know how to get the rest to work, particularly in an efficient way. There are 2 million rows right now, but there will be closer to 20 million once I start using production data.
You could try this (using p and s as aliases, since PRIMARY is a reserved word in Oracle):
select p.* ,
       ( select min(s.objid)
         from   table1 s
         where  s.objid > p.objid
         and    s.code in (300,400)
         and    s.id = p.id
       ) objid2
from   table1 p
where  p.code in (100,200);
Ok, I've come up with:
select OBJID,
min(case when code in (300,400) then objid end)
over (partition by id order by objid
range between 1 following and unbounded following
) objid2,
CODE, ENTRY_TIME, INFO, ID, USER1
from table1;
So you need an INSERT ... SELECT over the above query, with a filter of objid2 IS NOT NULL and code IN (100,200), as sketched below.
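A minimal sketch of that final statement (assuming TABLE2 has the seven columns shown in the desired output; the SUBSTR on INFO is taken from the question's own insert):
INSERT INTO table2
  (objid, objid2, code, entry_time, info, id, user1)
SELECT objid, objid2, code, entry_time,
       SUBSTR(info, 12, LENGTH(info) - 27) AS info,
       id, user1
FROM (
  select objid,
         min(case when code in (300,400) then objid end)
           over (partition by id order by objid
                 range between 1 following and unbounded following
                ) objid2,
         code, entry_time, info, id, user1
  from table1
)
WHERE objid2 IS NOT NULL
  AND code IN (100, 200);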
I'm having a hard time creating a query to do the following:
I have this table, called LOG:
ID | INSERT_TIME | LOG_VALUE
----------------------------------------
1 | 2013-04-29 18:00:00.000 | 160473
2 | 2013-04-29 21:00:00.000 | 154281
3 | 2013-04-30 09:00:00.000 | 186552
4 | 2013-04-30 14:00:00.000 | 173145
5 | 2013-04-30 14:30:00.000 | 102235
6 | 2013-05-01 11:00:00.000 | 201541
7 | 2013-05-01 23:00:00.000 | 195234
What I want to do is build a query that returns, for each day, the last values inserted (using the max value of INSERT_TIME). I'm only interested in the date part of that column, and in the column LOG_VALUE. So, this would be my resultset after running the query:
2013-04-29 154281
2013-04-30 102235
2013-05-01 195234
I guess that I need to use GROUP BY over the INSERT_TIME column, along with MAX() function, but by doing that, I can't seem to get the LOG_VALUE. Can anyone help me on this, please?
(I'm on Oracle 10g)
SELECT trunc(insert_time),
log_value
FROM (
SELECT insert_time,
log_value,
rank() over (partition by trunc(insert_time)
order by insert_time desc) rnk
FROM log)
WHERE rnk = 1
is one option. This uses the analytic function rank to identify the row with the latest insert_time on each day.
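If you would rather stay with the GROUP BY approach the question hints at, Oracle's MAX(...) KEEP (DENSE_RANK LAST ...) aggregate does the same job in a single pass (available on 10g); a sketch:
SELECT TRUNC(insert_time) AS insert_day,
       MAX(log_value) KEEP (DENSE_RANK LAST ORDER BY insert_time) AS log_value
FROM   log
GROUP  BY TRUNC(insert_time)
ORDER  BY insert_day;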