How to show timestamp with time zone for mobile clients - Spring

I have a DB and an application server located in NYC. I want to write a mobile application which will be used by users from different cities (Los Angeles, New York, Miami). I use PostgreSQL as the database, Spring MVC for the backend, and Ionic for the mobile application. My question is: when user A from Los Angeles inserts data into the database for 10:00 AM, how should I show that data to a user from New York, given the time zone difference? How should I store this kind of data in Postgres, and how should I process it in Spring MVC? Should my REST API return time in milliseconds, or should I use timestamp or timestamptz?
Thanks in advance!

You should store the data as TIMESTAMPTZ. This is what that data type is for. This blog post explains it a bit.
TIMESTAMPTZ stores all timestamps in absolute time, and displays them based on the user's timezone settings. For example:
postgres=# create table times ( some_time timestamptz );
CREATE TABLE
postgres=# set timezone = 'US/Eastern';
SET
postgres=# insert into times values ( '2017-09-15 10:00:00' );
INSERT 0 1
postgres=# set timezone = 'US/Pacific';
SET
postgres=# select * from times;
some_time
------------------------
2017-09-15 07:00:00-07
(1 row)
postgres=# set timezone = 'US/Eastern';
SET
postgres=# select * from times;
some_time
------------------------
2017-09-15 10:00:00-04
(1 row)
This allows you to display the correct time from the user's perspective, even for a user who moves around and changes time zones. You just need to remember to set the time zone in JDBC when you connect.
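As for returning milliseconds from your REST layer: that is a design choice rather than a storage question. If the client prefers raw epoch milliseconds, you can extract the absolute instant directly in SQL. A minimal sketch against the times table from the example above (extract(epoch from ...) returns seconds, hence the multiplication):
select (extract(epoch from some_time) * 1000)::bigint as epoch_millis
from times;
Epoch milliseconds are timezone-independent and push localization work to the client; timestamptz plus a session time zone lets the database do the rendering instead.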

Related

Oracle TO_DATE with only time input will add date component based on what logic?

Running this code in Oracle 11/12:
select to_date('101200', 'hh24miss') from dual
will return a DATE value whose date component Oracle fills in automatically. Based on what logic?
Eg:
select to_char(to_date('101200', 'hh24miss'), 'yyyymmdd') from dual
returns
20160701
We see the added date component is always set to the first day of the current month. Where does this logic come from?
Thanks in advance
A value of the DATE data type always has both date and time components. If you specify only the time portion of the datetime value, as you did, the date portion defaults to the first day of the current month.
Here is one of the places (7th paragraph) in the Oracle documentation where this behavior is documented.
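A quick way to verify this (a sketch) is to compare the defaulted date component with trunc(sysdate, 'MM'), which is the first day of the current month; both columns should be equal:
select to_char(to_date('101200', 'hh24miss'), 'yyyy-mm-dd') as defaulted_date
     , to_char(trunc(sysdate, 'MM'), 'yyyy-mm-dd') as first_of_month
from dual;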
There are also an undocumented TIME literal and TIME data type (the latter needs to be enabled via the 10407 (datetime TIME datatype creation) event) if you need to use and store just a time, without a date part.
Here is a small demonstration of using the TIME literal and the TIME data type. But again, it is an undocumented and unsupported feature.
SQL> select time '11:32:00' as res
2 from dual;
res
------------------------
11.32.00.000000000 AM
You can use the TIME literal without enabling the 10407 event, but in order to define a column of the TIME data type, the 10407 event needs to be enabled:
SQL> create table time_table(time_col time);
create table time_table(time_col time)
*
ERROR at line 1:
ORA-00902: invalid datatype
-- enable 10407 event
SQL> alter session set events '10407 trace name context forever, level 1';
Session altered.
Now we can create a table with a column of time data type:
SQL> create table time_table(time_col time);
Table created.
SQL> insert into time_table(time_col)
2 values(time '11:34:00');
1 row created.
SQL> select * from time_table;
TIME_COL
---------------
11.34.00 AM
SQL> alter session set events '10407 trace name context off';
Session altered.

Best way to create tables for huge data using oracle

Functional requirement
We work with devices. Each device, roughly speaking, has a unique identifier, an IP address, and a type.
I have a routine that pings every device that has an IP address.
This routine is nothing more than a C# console application that runs every 3 minutes, trying to ping the IP address of each device.
I need to store the result of each ping in the database, along with the date of the check (regardless of the result of the ping).
Then we get to the technical side.
Technical part:
Assuming my ping routine and database structure are in place as of 01/06/2016, I need to do two things:
Daily extraction
Extraction in real time (last 24 hours)
Both should return the same thing:
Devices that are unavailable for more than 24 hours.
Devices that are unavailable for more than 7 days.
A device is considered unavailable if it was pinged AND did not respond.
A device is considered available if it was pinged AND responded successfully.
What I have today, and it works very badly:
A table with the following structure:
create table history (id_device number, response number, check_date date);
This table holds a large amount of data (currently 60 million rows, and the trend is exponential growth).
Here are the questions:
How can I achieve these objectives without running into slow queries?
How can I create a table structure that is prepared to receive millions or billions of records in my corporate environment?
Partition the table based on the date.
For the partitioning strategy, consider performance vs. maintenance.
For easy maintenance, use automatic INTERVAL partitions by month or week.
You can even do it by day, or manually pre-define 2-day intervals.
Your query only needs 2 calendar days.
select id_device,
       min(case when response is null then 'N' else 'Y' end),
       max(case when response is not null then check_date end)
from   history
where  check_date > sysdate - 1
group by id_device
having min(case when response is null then 'N' else 'Y' end) = 'N'
   and sysdate - max(case when response is not null then check_date end) > ?;
If you write a default value instead of NULL for missing responses, you may try building it as an index-organized table; a sketch follows.
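Here is what that alternative could look like (the NOT NULL constraints and the choice of primary key are assumptions; an index-organized table stores the rows in the primary key index itself, which suits this kind of append-and-range-scan workload):
create table history_iot (
  id_device  number not null,
  check_date date   not null,
  response   number not null,  -- a default value instead of NULL, as suggested
  constraint history_iot_pk primary key (id_device, check_date)
) organization index;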
You need to read about Oracle partitioning.
This statement will create your HISTORY table partitioned by calendar day.
create table history (id_device number, response number, check_date date)
PARTITION BY RANGE (check_date)
INTERVAL(NUMTODSINTERVAL(1, 'DAY'))
( PARTITION p0 VALUES LESS THAN (TO_DATE('24-05-2016', 'DD-MM-YYYY')),
  PARTITION p1 VALUES LESS THAN (TO_DATE('25-05-2016', 'DD-MM-YYYY')) );
All your old data will be in the P0 partition.
Starting 24/05/2016, a new partition is automatically created each day.
HISTORY is now a single logical object, but physically it is a collection of identical tables stacked on top of each other.
Because each partition's data is stored separately, when a query asks for one day's worth of data, only a single partition needs to be scanned.
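You can confirm that the daily partitions are being created by querying the data dictionary (a sketch using the standard USER_TAB_PARTITIONS view):
select partition_name, high_value
from   user_tab_partitions
where  table_name = 'HISTORY'
order by partition_position;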

Oracle database and time zones

I'm working with an Oracle database.
Every table has a creationdate and a lastmodified field.
lastmodified holds the actual local time (UTC+1, plus summer time when applicable), while creationdate holds the same moment at UTC+0, leaving a difference of 2 hours during summer time and 1 hour otherwise.
Is there a way to change creationdate so that it also uses UTC+1, plus summer time when needed? Thanks
You can use this one:
ALTER SESSION SET TIME_ZONE = 'Europe/Zurich';
SELECT
TO_TIMESTAMP_TZ(TO_CHAR(creationdate,'yyyymmddhh24miss"UTC"'), 'yyyymmddhh24missTZR') AT LOCAL AS creationdate_local_TIMESTAMP,
CAST(TO_TIMESTAMP_TZ(TO_CHAR(creationdate,'yyyymmddhh24miss"UTC"'), 'yyyymmddhh24missTZR') AT LOCAL AS DATE) AS creationdate_local_DATE
FROM your_table;
You must set your session time zone to a region, as above. If you use a static offset (e.g. ALTER SESSION SET TIME_ZONE = '+02:00';), it will not work properly, because a fixed offset cannot follow daylight saving time changes.
Important note: if you update your table with the converted time values, you must ensure that you don't do it several times, because every additional update would shift your times by 1 or 2 hours again.
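If you do decide to rewrite the stored values once, a sketch of such a one-time update (your_table and the Europe/Zurich region are carried over from the example above; per the warning, run it exactly once):
update your_table
set    creationdate = cast(
         from_tz(cast(creationdate as timestamp), 'UTC')
         at time zone 'Europe/Zurich' as date);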

Oracle change dbtimezone

My database was configured with dbtimezone=+02:00.
When my application sends a date which has a timezone, does Oracle automatically translate the date to its dbtimezone and store it in the column?
When my application asks for a field date, does Oracle automatically translate it to the application timezone?
In order to be consistent with business rules, I wanted to change this dbtimezone to UTC. So I ran the alter database set time_zone='UTC' command, restarted the Oracle server, and now the select dbtimezone from dual; command returns "UTC".
But none of the DATE fields in the DB have changed (no -2 hour shift from GMT+2 to UTC). When I ask for sysdate, it returns the GMT+2 date. I tried changing my SQL Developer timezone configuration to UTC, but it didn't change anything. Do I have an issue with Oracle session parameters converting my DB data to GMT+2 before displaying it?
Finally, does anyone have a good practice for making this change (changing the database time zone and converting the existing dates to the new time zone)?
If all you're doing is changing the database time zone setting, then you will only notice a change in output if your data is stored with the TIMESTAMP WITH LOCAL TIME ZONE type.
I don't recommend that, though. It would be much better if your data were just stored in a regular TIMESTAMP field and were already set to UTC.
You should read the documentation about all of the different date and time datatypes, so you understand how each of these types works and differs from the other.
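As a small illustration of the difference (a sketch; the table name is hypothetical): a TIMESTAMP WITH LOCAL TIME ZONE value is normalized when stored and rendered in the current session time zone when read, while DATE and TIMESTAMP values are returned exactly as stored.
create table tsltz_demo (ts timestamp with local time zone);
insert into tsltz_demo values (timestamp '2016-06-01 10:00:00');
alter session set time_zone = 'UTC';
select ts from tsltz_demo;   -- rendered in UTC
alter session set time_zone = '+02:00';
select ts from tsltz_demo;   -- same instant, rendered at +02:00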

Oracle 10g - Determine the average of concurrent connections

Is it possible to determine the average number of concurrent connections on a large 10g database installation?
Any ideas?
This is probably more of a ServerFault question.
On a basic level, you could do this by regularly querying v$session to count the number of current sessions, store that number somewhere, and average it over time.
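A minimal sampling query could look like this (restricting to TYPE = 'USER' is an assumption on my part, since V$SESSION also lists Oracle's own background processes):
select count(*) as current_sessions
from   v$session
where  type = 'USER';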
But there are already good utilities available to help with this. Look into STATSPACK. Then look at the scripts shown here to get you started.
Alternatively you could install a commercial monitoring application like Spotlight on Oracle.
If you have Oracle Enterprise Manager set up you can create a User Defined Metric which records SELECT COUNT(*) FROM V$SESSION. Select Related Links -> User Defined Metrics to set up a new User Defined Metric. Once it collects some data you can get the data out in raw form or it will do some basic graphing for you. As a bonus you can also set up alerting if you want to be e-mailed when the metric reaches a certain value.
The tricky bit is recording the connections. Oracle doesn't do this by default, so if you haven't got anything in place then you won't have a historical record.
The easiest way to start recording connections is with Oracle's built in audit functionality. It's as simple as
audit session
/
We can see the records of each connection in a view called dba_audit_session.
Now what? The following query uses a Common Table Expression to generate a range of datetime values spanning 8th July 2009 in five-minute chunks. The output of the CTE is joined to the audit view for that date; a count is calculated of the connections which span each five-minute increment.
with t as
     ( select to_date('08-JUL-2009', 'DD-MON-YYYY') + ((level-1) * (300/86400)) as five_mins
       from dual connect by level <= 288)
select to_char(t.five_mins, 'HH24:MI') as five_mins
     , sum(case when t.five_mins between ssn.timestamp and ssn.logoff_time
                then 1
                else 0 end) as connections
from   t
     , dba_audit_session ssn
where  trunc(ssn.timestamp) = to_date('08-JUL-2009', 'DD-MON-YYYY')
group by t.five_mins
order by t.five_mins
/
You can then use this query as the input into a query which calculates the average number of connections.
This is a fairly crude implementation: I chose five-minute increments for display purposes, but obviously the finer-grained the increment, the more accurate the measure. Be warned: if you make the increments too fine-grained and you have a lot of connections, the resulting cross join will take a long time to run!
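For completeness, a sketch of that outer query, wrapping the five-minute counts from above and averaging them:
with t as
     ( select to_date('08-JUL-2009', 'DD-MON-YYYY') + ((level-1) * (300/86400)) as five_mins
       from dual connect by level <= 288)
select avg(connections) as avg_connections
from  (
        select t.five_mins
             , sum(case when t.five_mins between ssn.timestamp and ssn.logoff_time
                        then 1 else 0 end) as connections
        from   t
             , dba_audit_session ssn
        where  trunc(ssn.timestamp) = to_date('08-JUL-2009', 'DD-MON-YYYY')
        group by t.five_mins
      )
/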
