Coding a view in a Progress OpenEdge database

I have started exploring Progress databases and would like to ask if someone knows how to code a view (like in SQL) in the OpenEdge Procedure Editor?

Rule #1 -- Progress is not SQL.
There is some very limited and very old-style SQL support in the Procedure Editor (it is SQL-89-ish).
It is not something you should use for anything other than a very quick and extremely simple ad-hoc query, such as:
select count( * ) from customer.
Anything fancier is going to lead to endless pain and frustration.

There are no views in Progress, but there are plenty of other constructs, like FOR statements and joins.
In an ABL editor you can write:
FOR EACH table1 NO-LOCK,
    EACH table2 NO-LOCK WHERE table2.id = table1.id:
    DISPLAY table1.field2 table2.field3 WITH FRAME frOne 20 DOWN.
END.
However, storing that as a "view" that you can query later is not possible.
Don't think in terms of what is or isn't possible in SQL - focus on Progress instead.
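The closest reusable substitute is to put the query in its own procedure file and run that wherever you would otherwise have queried the view. A minimal sketch, reusing the made-up tables above (the file name is illustrative):
/* custview.p -- a view-like wrapper around the join */
FOR EACH table1 NO-LOCK,
    EACH table2 NO-LOCK WHERE table2.id = table1.id:
    DISPLAY table1.field2 table2.field3.
END.
Elsewhere you then just write RUN custview.p. wherever you need that "view".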

CREATE VIEW ne_customer AS
SELECT cust_no, last_name, street, city, state
FROM SPORTS.customer
WHERE state in ('NH', 'MA', 'ME', 'CT', 'RI', 'VT') ;
See:
http://knowledgebase.progress.com/articles/Article/000035978
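Note that CREATE VIEW goes through OpenEdge's SQL-92 engine (see the KB article above for the details), not the ABL Procedure Editor. Once the view exists there, you query it like any table; a quick example against the view just defined:
SELECT cust_no, last_name
FROM ne_customer
WHERE state = 'NH' ;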

Related

SQL Plan change reasons

One of the job schedulers runs in the production environment on a daily basis. Based on past execution history it used to take only 20 minutes, but today it has been running for more than 2 hours and still has not completed.
a) How can I check whether the SQL plan has changed today?
b) What could be the reasons for the plan change? One I know of is a code change. What else could cause a plan change?
You can check whether the SQL execution plan has changed by using the Automatic Workload Repository (AWR). First, you need to find the SQL_ID for the relevant query. The view GV$SQL contains the most recently executed SQL. If you can't find the query in this view, try DBA_HIST_SQLTEXT instead.
select sql_id, sql_text
from gv$sql
where lower(sql_fulltext) like '%some unique string%';
With the SQL_ID, you can start investigating historical information. The table DBA_HIST_SQLSTAT contains lots of summary information about the SQL. The most important column is PLAN_HASH_VALUE; if that value changes, then the execution plan has changed.
select snap_id, sql_id, plan_hash_value, executions_delta, elapsed_time_delta/1000000 seconds_delta
,dba_hist_sqlstat.*
from dba_hist_sqlstat
--join to dba_hist_snapshot if you want to find precise times instead of SNAP_IDs.
where sql_id = '&SQL_ID'
order by dba_hist_sqlstat.snap_id;
If the plan has changed, you can view both plans with this:
select * from table(dbms_xplan.display_awr(sql_id => '&SQL_ID'));
Unfortunately, the most difficult part of query tuning with Oracle is that there are a dozen different ways to view the execution plans, and each of them provides slightly different data.
This query only returns numbers for the last execution, but it returns actual numbers and times, which helps you focus on the specific operation and wait events that caused the problem.
select dbms_sqltune.report_sql_monitor(sql_id => '&SQL_ID', type => 'text') from dual;
This query returns some additional execution plan information, specifically the Note section. Most graphical IDEs leave out that section, but it's vital for complex troubleshooting. If something weird is going on, the Note section will often explain why.
select * from table(dbms_xplan.display_cursor(sql_id => '&SQL_ID'));
There are many reasons why execution plans can change. If you add additional information to the question I may be able to make an educated guess.
Quick check:
Please check whether the statistics are up to date, both system and table statistics.
Please check whether any changes to the table or its indexes have been made.
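For a quick look at both, here is a sketch against the standard dictionary views (substitute your table name for &TABLE_NAME):
select table_name, last_analyzed, stale_stats
from dba_tab_statistics
where table_name = '&TABLE_NAME';

select object_name, object_type, last_ddl_time
from dba_objects
where object_name = '&TABLE_NAME';
A LAST_DDL_TIME of today, or STALE_STATS = 'YES', would be a good lead.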

Combining 2 tables with the number of users on a different table in Oracle SQL

Is this considered good practice?
I have been trying to merge data from 2 different tables (dsb_nb_users_option & dsb_nb_default_options) for the number of users existing on the dsb_nb_users table.
Would a JOIN statement be the best option?
Or would a sub-query work better for me to access the data lying in the dsb_nb_users table?
This is an operation I will have to perform a few times, so I want to understand the mechanics of it.
INSERT INTO dsb_nb_users_option(dsb_nb_users_option.code, dsb_nb_users_option.code_value, dsb_nb_users_option.status)
SELECT dsb_nb_default_option.code, dsb_nb_default_option.code_value, dsb_nb_default_option.status
FROM dsb_nb_default_options
WHERE dsb_nb_users.user_id IS NOT NULL;
Thank you for your time!!
On the face of it, I see nothing wrong with your query to achieve your goal. That said, I see several things worth pointing out.
First, please learn to format your code - for your own sanity as well as that of others who have to read it. At the very least, put each column name on a line of its own, indented. A good IDE tool like SQL Developer will do this for you. Like this:
INSERT INTO dsb_nb_users_option (
    dsb_nb_users_option.code,
    dsb_nb_users_option.code_value,
    dsb_nb_users_option.status
)
SELECT
    dsb_nb_default_option.code,
    dsb_nb_default_option.code_value,
    dsb_nb_default_option.status
FROM
    dsb_nb_default_options
WHERE
    dsb_nb_users.user_id IS NOT NULL;
Now that I've made your code more easily readable, a couple of other things jump out at me. First, it is not necessary to prefix every column name with the table name. So your code gets even easier to read.
INSERT INTO dsb_nb_users_option (
    code,
    code_value,
    status
)
SELECT
    code,
    code_value,
    status
FROM
    dsb_nb_default_options
WHERE
    dsb_nb_users.user_id IS NOT NULL;
Note that there are times you need to qualify a column name, either because Oracle requires it to avoid ambiguity for the parser, or because the developer needs it to avoid ambiguity for himself and those who follow. In this case we usually use table aliases to shorten the code.
select a.userid,
       a.username,
       b.user_mobile_phone
from   users a
join   user_telephones b on a.userid = b.userid;
Finally, and more critical to your overall design, it appears that you are unnecessarily duplicating data across multiple tables. This goes against all the rules of data design. Have you read up on 'data normalization'? It's the foundation of relational database theory.
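To make that concrete, here is a hedged sketch of a normalized layout (names are illustrative, and it assumes dsb_nb_users has a primary key on user_id): each option is defined once, and a user row merely references it.
CREATE TABLE dsb_nb_option (
    code       VARCHAR2(30) PRIMARY KEY,
    code_value VARCHAR2(100),
    status     VARCHAR2(10)
);

CREATE TABLE dsb_nb_user_option (
    user_id NUMBER       REFERENCES dsb_nb_users (user_id),
    code    VARCHAR2(30) REFERENCES dsb_nb_option (code),
    PRIMARY KEY (user_id, code)
);
With this shape a default value lives in exactly one row instead of being copied into every user's options.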

Does TOAD for Oracle generate logs of executed SQL?

I cannot seem to find a view which I created in one of my schemas within TOAD. Let's assume I don't know the exact schema in which I created it. Is there any way I can find all the CREATE statements which have been executed within a period of time, let's say the last few days?
Thank you in advance.
If you created the view, just query ALL the views and order by the date on which they were created.
select * from dba_objects
where object_type = 'VIEW'
order by created desc, last_ddl_time desc
We're hitting the DBA_ views to make sure we look at EVERYTHING, not just the things you have privs for. Switch to the ALL_ views if you lack access, and hope you didn't create the view in a schema that your current logon can't see.
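For example, the ALL_ fallback would look like this (it only shows objects your logon can see):
select owner, object_name, created, last_ddl_time
from all_objects
where object_type = 'VIEW'
order by created desc;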
The other way to go is to query the text of the views themselves and key in on a table you think you included in the SQL behind the view.
SELECT *
FROM dba_views
WHERE UPPER (text_vc) LIKE '%EMPLOYEES%';
You might be looking for a feature called "SQL Recall" in Toad. Press F8 or View/SQL Recall. It will show you the SQL you ran in the last month or so.

What's the best practice to filter out specific year in query in Netezza?

I am a SQL Server guy and just started working on Netezza. One thing that pops up for me is a daily query to find out the size of a table filtered by year: 2016, 2015, 2014, ...
What I am using now is something like below and it works for me, but I wonder if there is a better way to do it:
select count(1)
from table
where extract(year from datecolumn) = 2016
extract is a built-in function, but applying a function to a table with 10 billion+ rows is unimaginable in SQL Server, to my knowledge.
Thank you for your advice.
The only problem I see with the query is the WHERE clause, which applies a function to the 'variable' side. That effectively disables zone maps and thus forces Netezza to scan all data pages, not only those with data from that year.
Instead write something like:
select count(1)
from table
where datecolumn between '2016-01-01' and '2016-12-31'
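One caveat to add: if datecolumn is a timestamp rather than a date, a half-open range is the safer equivalent, and it is still zone-map friendly:
select count(1)
from table
where datecolumn >= '2016-01-01'
and datecolumn < '2017-01-01';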
A more generic alternative is to create a 'date dimension table' with one row per day in your tables (and a couple of years into the future).
This is an example for Postgres: https://medium.com/@duffn/creating-a-date-dimension-table-in-postgresql-af3f8e2941ac
This enables you to write code like this:
select count(1)
from table t
join d_date d on t.datecolumn = d.date_actual
where year_actual = 2016;
You may not have the generate_series() function on your system, but a 'select row_number()...' can do the same trick. A download is available here: https://www.ibm.com/developerworks/community/wikis/basic/anonymous/api/wiki/76c5f285-8577-4848-b1f3-167b8225e847/page/44d502dd-5a70-4db8-b8ee-6bbffcb32f00/attachment/6cb02340-a342-42e6-8953-aa01cbb10275/media/generate_series.tgz
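A hedged sketch of the row_number() trick (some_big_table and some_col are placeholders for any table with at least as many rows as you need days, and any column in it):
select cast('2014-01-01' as date) + rn - 1 as date_actual
from (
    select row_number() over (order by some_col) as rn
    from some_big_table
) t
where rn <= 1100;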
A couple of further notes on 'date interval' WHERE clauses:
Those columns are the most likely candidates for zone-map optimization. Add an 'organize on (datecolumn)' at the bottom of your table DDL and reorganize the table. That will cause Netezza to move records around to pages with similar dates, and query times will improve.
Furthermore, you should ensure that the 'distribute on' clause for the table gives an even distribution across data slices if the table is big. The execution of the query will never be faster than the slowest data slice.
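A sketch of DDL combining both notes (table and column names are illustrative; GROOM is what actually rewrites existing pages after you add 'organize on'):
create table sales (
    sale_id  bigint,
    saledate date,
    amount   numeric(12,2)
)
distribute on (sale_id)
organize on (saledate);

groom table sales;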
I hope this helps

Is it possible to know what triggers would fire given a query?

I have a database with (too) many triggers. They can cascade.
I have a query which seems simple, and I can by no means remember the effect of all the triggers. So that simple query might actually not be simple at all and might not do what I expect.
Is there a way to know what triggers would fire before running the query, or what triggers have fired after running it (not committed yet)?
I am not really interested in queries like SELECT … FROM user_triggers WHERE … because I know them already, and also because they do not tell me whether the firing conditions of the triggers will be met by my query.
Thanks
"I have a database with (too) many triggers. They can cascade."
This is just one of the reasons why many people anathematize triggers.
"Is there a way to know what triggers would fire before running the
query"
No. Let's consider something which you might find in an UPDATE trigger body:
if :new.sal > :old.sal * 1.2 then
    insert into big_pay_rises values (:new.empno, :old.sal, :new.sal, sysdate);
end if;
How could we tell whether the trigger on BIG_PAY_RISES will fire? It might, it might not, depending on an algorithm we cannot parse out of the DML statement.
So, the best you can hope for is a recursive search of DBA_TRIGGERS and DBA_DEPENDENCIES to identify all the triggers which might feature in your cascade. But it's going to be impossible to identify which ones will definitely fire in any given scenario.
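As a starting point for that search, something like this against the standard dictionary views (seed it with the tables in your query, then repeat for whatever the triggers touch):
select trigger_name, triggering_event, status
from dba_triggers
where table_name = '&TABLE_NAME';

select name, referenced_type, referenced_name
from dba_dependencies
where type = 'TRIGGER'
and name in (select trigger_name from dba_triggers
             where table_name = '&TABLE_NAME');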
" or what triggers have fired after running it (not committed yet)?"
As others have pointed out, logging is one option. But if you are using Oracle 11g you have another option: the PL/SQL Hierarchical Profiler. This is a non-intrusive tool which tracks all the PL/SQL program units touched by a PL/SQL call, including triggers. One of the cool features of the Hierarchical Profiler is that it includes program units which belong to other schemas, which might be useful with cascading triggers.
So, you just need to wrap your SQL in an anonymous block and call it with the Hierarchical Profiler. Then you can filter your report to reveal only the triggers which fired. Find out more.
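A minimal sketch of that wrapping, assuming 11g+, a writable directory object (here called PLSHPROF_DIR) and the pay-rise example above:
begin
    dbms_hprof.start_profiling(location => 'PLSHPROF_DIR', filename => 'triggers.trc');
    update emp set sal = sal * 1.3 where empno = 7839;
    dbms_hprof.stop_profiling;
    rollback;  -- not committed yet, as the question requires
end;
/
The resulting trace file (or dbms_hprof.analyze) lists every program unit that ran, triggers included.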
Is there a way to know what triggers would fire before running the query, or what triggers have fired after running it (not committed yet)?
To address this I would run the query inside an anonymous block using a PL/SQL debugger.
There is no such thing as parsing through your query and giving you the triggers involved. But it can be almost as simple as this: pick the table names from the query you are running, and for each one list its triggers using the following query before you run yours. Isn't that simple enough?
select trigger_name
, trigger_type
, status
from dba_triggers
where owner = '&owner'
and table_name = '&table'
order by status, trigger_name
