Query time of MS Access on network - visual-studio-2005

An MS Access 2000 .mdb file is hosted on a network drive. Inside this .mdb, the tbl_History table has 42,223 records, and 2,400 new records are added to it every month.
A custom application developed in VS 2005 queries this table. The query looks like this:
SELECT Top 1 MEMonth, MEYear FROM tbl_History WHERE SnapDate IN (SELECT MAX(SnapDate) FROM tbl_History WHERE CompanyCode = 'ABC')
Getting the query result takes a minute. Is there any way to fine-tune the query time?
Thank you.
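One thing worth trying (an assumption, since the question does not say which indexes exist): a composite index on the columns used by the filter and the MAX, so the Jet engine can resolve the subquery without dragging the whole table across the network. In Access SQL:
CREATE INDEX idxCompanySnap ON tbl_History (CompanyCode, SnapDate);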

Related

How can I rebuild the index in an IOT (index-organized table)?

Dear all experts,
I have an IOT with 7 million records in an Oracle database. An IOT is supposed to give fast access via the primary key, but in my case selecting a single primary key column takes 4-5 seconds.
My query is:
Select Emp_Refno from Emp_master where Rownum = 1 order by Emp_Refno asc;
I have also used the SQL Tuning Advisor to optimize it and applied the index it suggested, but that index does not appear in the explain plan and the query takes the same time as before.
I'm curious whether the following query has the same execution time:
select * from (select Emp_Refno from Emp_master order by Emp_Refno asc) where rownum = 1
This is how I usually write top-n queries for Oracle. Note that your original query applies ROWNUM = 1 before the ORDER BY, so it grabs one arbitrary row and then sorts that single row; pushing the ORDER BY into an inline view makes Oracle sort first and only then take the top row.
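If you are on Oracle 12c or later (an assumption, since the question does not state the version), the same top-n query can be written with the FETCH FIRST clause, which the optimizer can typically satisfy with a range scan on the primary key index:
select Emp_Refno
from Emp_master
order by Emp_Refno asc
fetch first 1 row only;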

Best way to create tables for huge data using oracle

Functional requirement
We work with devices. Each device, roughly speaking, has a unique identifier, an IP address, and a type.
I have a routine that pings every device that has an IP address.
This routine is nothing more than a C# console application, which runs every 3 minutes and tries to ping the IP address of each device.
I need to store the result of each ping in the database, along with the date of the check (regardless of the ping's outcome).
That brings us to the technical side.
Technical part:
Assuming my ping routine and database structure are in place as of 01/06/2016, I need to do two things:
Daily extraction
Extraction in real time (last 24 hours)
Both should return the same thing:
Devices that are unavailable for more than 24 hours.
Devices that are unavailable for more than 7 days.
A device is considered unavailable if it was pinged AND did not respond.
A device is considered available if it was pinged AND responded successfully.
What I have today and works very badly:
A table with the following structure:
create table history (id_device number, response number, check_date date);  -- DATE is a reserved word in Oracle, so the date column is renamed check_date here and below
This table holds a large amount of data (60 million rows today, and it keeps growing fast).
Here are the questions:
How do I achieve these objectives without running into slow queries?
How do I create a table structure that is prepared to receive millions or billions of records in my corporate environment?
Partition the table based on date.
For the partitioning strategy, weigh performance against maintenance.
For easy maintenance, use automatic INTERVAL partitions by month or week.
You can even do it by day, or manually pre-define 2-day intervals.
Your query only needs 2 calendar days:
select id_device,
min(case when response is null then 'N' else 'Y' end),        -- 'N' if at least one ping went unanswered
max(case when response is not null then check_date end)       -- time of the last successful ping
from history
where check_date > sysdate - 1
group by id_device
having min(case when response is null then 'N' else 'Y' end) = 'N'
and sysdate - max(case when response is not null then check_date end) > ?;
If for missing responses you write a default value instead of NULL, you may try building it as an index-organized table.
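A minimal sketch of that idea (the default value 0 and the primary key columns are assumptions; an IOT requires a primary key to organize on):
create table history (
  id_device  number,
  check_date date,
  response   number default 0 not null,  -- assumed sentinel value for a missed ping
  constraint history_pk primary key (id_device, check_date)
) organization index;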
You need to read about Oracle partitioning.
This statement will create your HISTORY table partitioned by calendar day.
create table history (id_device number, response number, check_date date)
PARTITION BY RANGE (check_date)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))   -- NUMTODSINTERVAL, not NUMTOYMINTERVAL, for day intervals
( PARTITION p0 VALUES LESS THAN (TO_DATE('24-05-2016', 'DD-MM-YYYY')),
  PARTITION p1 VALUES LESS THAN (TO_DATE('25-05-2016', 'DD-MM-YYYY')) );
All your old data will be in the P0 partition.
Starting 25/05/2016, a new partition will be created automatically each day.
HISTORY is now a single logical object, but physically it is a collection of identical tables stacked on top of each other.
Because each partition's data is stored separately, when a query asks for one day's worth of data, only a single partition needs to be scanned.
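To verify that the interval mechanism is creating daily partitions, you can list them from the data dictionary (assuming the table belongs to the current schema):
select partition_name, high_value
from user_tab_partitions
where table_name = 'HISTORY'
order by partition_position;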

Selecting all User Created Tables in ORACLE

I am using Oracle Express Edition. I want to know how to select only user-created tables in an Oracle DB.
I am using this query:
select * from user_tables;
But it shows 24 rows, while I have created only 6 tables. I don't know why or from where the other tables (like APEX$_WS_FILES, DEPT, DEMO_USERS, APEX$_ACL, APEX$_WS_HISTORY, etc.) are showing up.
How can I filter out those extraneous tables?
These tables were presumably created during an Oracle APEX-related installation. You can use the steps below to separate them from your own.
SELECT * FROM ALL_OBJECTS WHERE OBJECT_TYPE = 'TABLE' AND OWNER = 'your_user' ORDER BY created;
As these tables were installed by an application, they were most probably created within a small and coherent time window, say a span of 30 minutes to an hour. So if you order them by creation time, they will all land on consecutive rows in the output of the above query. Identify the time frame in which their installation started and finished, then run the above query once again with that time frame filtered out. You should then get only your own tables.
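For example, a minimal sketch of that final filter; the cutoff timestamps here are hypothetical placeholders for the installation window you identified:
SELECT object_name
FROM all_objects
WHERE object_type = 'TABLE'
  AND owner = 'your_user'
  AND created NOT BETWEEN TO_DATE('01-06-2015 10:00', 'DD-MM-YYYY HH24:MI')
                      AND TO_DATE('01-06-2015 11:00', 'DD-MM-YYYY HH24:MI')
ORDER BY created;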

Tuning the below SQL query

I have a database on the old Oracle version 8.1.7, where I have been running the UNION query below:
select c_ordine_es
,c_ordine_salesnet
,v_oyov
,v_annuale
,v_oneoff
,v_canone
,c_operatore_tam
from v_ordine_cliente_easysell
where d_ultima_modifica>DataRif
union
select c_ordine_es
,c_ordine_salesnet
,v_oyov
,v_annuale
,v_oneoff
,v_canone
,c_operatore_tam
from v_ordine_cliente_easysell v
,scarti_interfaccia_easysell_o s
where s.c_codice_es=v.c_ordine_es
and s.t_tabella_es=pkType.K_SCARTO_ORDINE_CLIENTE;
Every time I run this query, the SQL client (I am using Toad) hangs. I should mention that the data in v_ordine_cliente_easysell and scarti_interfaccia_easysell_o (views/synonyms) is fetched over a DB link to another Siebel DB. I guess the problem occurs while fetching data via the DB link, as the Siebel DB is always very busy. Could you please suggest how I could tune the above query?
The explain plan goes like below:
OPERATION         OPTIONS          OBJECT_NODE  POSITION  COST  CARDINALITY  BYTES
SELECT STATEMENT                                28        28    478912       61779648
HASH JOIN                                       1         28    478912       61779648
INDEX             FAST FULL SCAN                1         2     11354        102186
REMOTE                             SIEB.WORLD   2         23    4218         506160
Make a local copy of the table behind v_ordine_cliente_easysell.
Make a local copy of scarti_interfaccia_easysell_o, filtered with
t_tabella_es = pkType.K_SCARTO_ORDINE_CLIENTE.
Run the query on the local copies, but using UNION ALL; unlike UNION, it skips the sort needed to eliminate duplicates. A sketch of the copy step follows.
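A minimal sketch of those copy steps, assuming the synonyms resolve over the DB link exactly as in the original query; the literal value is a hypothetical stand-in for pkType.K_SCARTO_ORDINE_CLIENTE, since plain SQL cannot reference a package constant directly:
create table local_ordine_cliente as
select * from v_ordine_cliente_easysell;

create table local_scarti as
select *
from scarti_interfaccia_easysell_o
where t_tabella_es = 'ORDINE_CLIENTE';  -- hypothetical literal for pkType.K_SCARTO_ORDINE_CLIENTE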

Oracle 10g - Determine the average of concurrent connections

Is it possible to determine the average number of concurrent connections on a large 10g database installation?
Any ideas?
This is probably more of a ServerFault question.
On a basic level, you could do this by regularly querying v$session to count the number of current sessions, store that number somewhere, and average it over time.
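For example, a single sample could be as simple as counting user sessions while excluding Oracle's own background processes:
select count(*) as current_sessions
from v$session
where type = 'USER';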
But there are already good utilities available to help with this. Look into STATSPACK. Then look at the scripts shown here to get you started.
Alternatively you could install a commercial monitoring application like Spotlight on Oracle.
If you have Oracle Enterprise Manager set up you can create a User Defined Metric which records SELECT COUNT(*) FROM V$SESSION. Select Related Links -> User Defined Metrics to set up a new User Defined Metric. Once it collects some data you can get the data out in raw form or it will do some basic graphing for you. As a bonus you can also set up alerting if you want to be e-mailed when the metric reaches a certain value.
The tricky bit is recording the connections. Oracle doesn't do this by default, so if you haven't got anything in place then you won't have a historical record.
The easiest way to start recording connections is with Oracle's built in audit functionality. It's as simple as
audit session
/
We can see the records of each connection in a view called dba_audit_session.
Now what? The following query uses a common table expression to generate a range of datetime values spanning 8th July 2009 in five-minute chunks. The output of the CTE is joined to the audit view for that date, and for each five-minute increment a count is calculated of the connections whose lifetime spans it.
with t as
( select to_date('08-JUL-2009', 'DD-MON-YYYY') + ((level-1) * (300/86400)) as five_mins
from dual connect by level <= 288)   -- 288 five-minute slots cover 24 hours
select to_char(t.five_mins, 'HH24:MI') as five_mins
, sum(case when t.five_mins between timestamp and logoff_time
then 1
else 0 end) as connections
from t
, dba_audit_session ssn
where trunc(ssn.timestamp) = to_date('08-JUL-2009', 'DD-MON-YYYY')
group by to_char(t.five_mins, 'HH24:MI')
order by to_char(t.five_mins, 'HH24:MI')
/
You can then use this query as the input into a query which calculates the average number of connections.
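For instance, a minimal sketch of that outer query, reusing the CTE from above:
with t as
( select to_date('08-JUL-2009', 'DD-MON-YYYY') + ((level-1) * (300/86400)) as five_mins
from dual connect by level <= 288)
select avg(connections) as avg_concurrent_connections
from ( select sum(case when t.five_mins between ssn.timestamp and ssn.logoff_time
then 1
else 0 end) as connections
from t
, dba_audit_session ssn
where trunc(ssn.timestamp) = to_date('08-JUL-2009', 'DD-MON-YYYY')
group by to_char(t.five_mins, 'HH24:MI') )
/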
This is a fairly crude implementation: I chose five-minute increments out of display considerations, but obviously the finer-grained the increment, the more accurate the measure. Be warned: if you make the increments too fine-grained and you have a lot of connections, the resulting cross join will take a long time to run!
