Create parameterized view in Impala

My goal is to create a parameterized view in Impala so users can easily change values in a query. If I run the query below in Hue, for example, it is possible to enter a value.
SELECT * FROM customers WHERE customer_id = ${id}
But I would like to create a view as follows, so that when you run it, it asks for the value you want to search for. This does not work:
CREATE VIEW test AS SELECT * FROM customers WHERE customer_id = ${id}
Does anyone know if this is possible?
Many thanks

When you create a view, it captures the variable's value at creation time.
Two workarounds exist:
Create a real table where you will store/update the parameter (see the sketch below).
CREATE VIEW test AS SELECT * FROM customers JOIN id_table ON customer_id = id_table.id
Pass a parameter into the view with the help of a user-defined function (UDF). You will probably need two UDFs, set and get: the set UDF writes the variable to HDFS, and the get UDF reads it back.
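A minimal sketch of the first workaround in Impala SQL, assuming a single-row parameter table named id_table as in the view above; the column type and values are illustrative:
CREATE TABLE id_table (id BIGINT);
INSERT INTO id_table VALUES (42);   -- store the current parameter value here
SELECT * FROM test;                 -- the view above now filters on customer_id = 42
To change the parameter, overwrite or re-create that single row.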
Both workarounds above work, but neither is ideal. My suggestion is to use Hive for parameterized view creation: you can create a GenericUDF that reads a variable from the Hive configuration and performs the filtering. This approach cannot be used with Impala.
SELECT Generic_UDF(array(customer_id)) FROM customers
GenericUDF has a configure method that you can use to read the Hive variable:
public void configure(MapredContext mapredContext) {
    String name = mapredContext.getJobConf().get("name");
}
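As a hedged usage sketch, assuming the value is passed as a plain Hive session property named name (the property name and value are illustrative):
SET name=42;
SELECT Generic_UDF(array(customer_id)) FROM customers;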

You could do the opposite, i.e. leave the view unparameterized and parameterize the query that selects from the view instead.
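For example, reusing the Hue variable substitution from the question:
CREATE VIEW test AS SELECT * FROM customers;
SELECT * FROM test WHERE customer_id = ${id}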

Related

Switching between multiple tables at runtime in Oracle

I am working in an environment where we have separate tables for each client (this is something which I can't change due to security and other requirements). For example, if we have clients ACME and MEGAMART then we'd have an ACME_INFO table and a MEGAMART_INFO table, and both tables would have the same structure (let's say ID, SOMEVAL1, SOMEVAL2).
I would like to have a way to easily access the different tables dynamically.
To this point I've dealt with this in a few ways including:
Using dynamic SQL in procedures/functions (not fun)
Creating a view which does a UNION ALL on all of the tables and which adds a CLIENT_ID column (i.e. "CREATE VIEW COMBINED_VIEW AS SELECT 'ACME' CLIENT_ID, ID, SOMEVAL1, SOMEVAL2 FROM ACME_INFO UNION ALL SELECT 'MEGAMART' CLIENT_ID, ID, SOMEVAL1, SOMEVAL2 FROM MEGAMART_INFO"), which performs surprisingly well but is a pain to maintain and kind of defeats some of the requirements which dictate that we have separate tables for each client.
SYNONYMs won't work because we need different connections to act on different clients
A view which refers to a package which has a package variable for the active client. This is just evil and doesn't even work out all that well.
What I'd really like is to be able to create a table function, macro, or something else where I can do something like
SELECT * FROM FN_CLIENT_INFO('ACME');
or even
UPDATE (SELECT * FROM FN_CLIENT_INFO('ACME')) SET SOMEVAL1 = 444 WHERE ID = 3;
I know that I can partially achieve this with a pipelined function, but this mechanism will need to be used by a reporting platform and if the reporting platform does something like
SELECT * FROM FN_CLIENT_INFO('ACME') WHERE SOMEVAL1 = 4
then I want it to run efficiently (assuming SOMEVAL1 has an index for example). This is where a macro would do well.
Macros seem like a good solution, but the above won't work due to protections put in place to guard against SQL injection.
Is there a way to create a macro that somehow verifies that the passed in VARCHAR2 is a valid table name and therefore can be used or is there some other approach to address what I need?
I was thinking that if I had a function which could translate a client name to a DBMS_TF.TABLE_T then I could use a macro, but I haven't found a way to do that well.
A lesser-known method for such cases is to use a system-partitioned table. For instance, consider the following code:
Full example: https://dbfiddle.uk/UQsAgHCk
create table t_common(a int, b int)
partition by system (
partition ACME_INFO,
partition MEGAMART_INFO
);
insert into t_common partition(acme_info)
values(1,1);
insert into t_common partition(megamart_info)
values(2,2);
commit;
select * from t_common partition(acme_info);
select * from t_common partition(megamart_info);
As demonstrated, one common table can hold a separate partition per client while still being used like a regular table. You can create the system-partitioned table, use the exchange partition feature to move the data from the old per-client tables into it, then drop the old tables and create views with the same names. Older code continues to work against those views, while all new code works with the common table by specifying a partition.
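A rough sketch of that migration for one client, assuming the old ACME_INFO table has the same column layout (a, b) as t_common; the names are taken from the example above:
-- swap the old table's data into the matching partition
ALTER TABLE t_common EXCHANGE PARTITION acme_info WITH TABLE acme_info;
-- the old table is now empty; drop it and put a view with the same name in its place
DROP TABLE acme_info;
CREATE VIEW acme_info AS SELECT a, b FROM t_common PARTITION (acme_info);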

How to Create a VIEW in Oracle

So I'm supposed to create a view product_view that presents the information about how many products of a particular type are in each warehouse: product ID, product name, category_id, warehouse id, total quantity on hand for this warehouse.
So I used this query and have tried changing it many times, but I keep getting errors:
CREATE OR REPLACE VIEW PRODUCT_VIEW AS
SELECT p.product_id, p.product_name,
COUNT(p.product_id), SUM(i.quantity_on_hand)
FROM oe.product_information p JOIN oe.inventories i
ON p.product_id=i.product_id
ORDER BY i.warehouse_id;
ERROR at line 2:
ORA-00928: missing SELECT keyword
Please help... Thanks
Image showing the Tables in the OE schema
Image showing the error that occurs
When I get errors creating a view, I first drop the CREATE ... AS line and fix the query until it works on its own. Then you need to name all the columns: for instance, COUNT(p.product_id) won't work as a view column, so you'll need to write something like COUNT(p.product_id) AS product_count, or specify a list of column aliases in the CREATE statement, like so:
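A rough sketch of what the corrected view could look like, assuming the totals should be grouped per product and warehouse (the GROUP BY columns are my assumption, not from the original post):
CREATE OR REPLACE VIEW product_view (product_id, product_name, category_id, warehouse_id, total_quantity) AS
SELECT p.product_id,
       p.product_name,
       p.category_id,
       i.warehouse_id,
       SUM(i.quantity_on_hand)
FROM oe.product_information p
JOIN oe.inventories i ON p.product_id = i.product_id
GROUP BY p.product_id, p.product_name, p.category_id, i.warehouse_id;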
I'm not sure what the output of your query should look like. You'll get better answers more quickly on Stack Exchange if you post a minimal example including the CREATE statements, some input data and your desired output, leaving out columns that are not essential.

How to create Interactive/Classic Report with dynamic SQL?

I'm running APEX 19.2 and I would like to create a classic or interactive report based on a dynamic query.
The query I'm using is not known at design time. It depends on a page item value.
-- So I have a function that generates the SQL as follows
GetSQLQuery(:P1_MyItem);
This function may return something like
select Field1 from Table1
or
Select field1,field2 from Table1 inner join Table2 on ...
So it's not a SQL query that always has the same number of columns; it's completely variable.
I tried using PL/SQL Function Body returning SQL Query, but it seems like APEX needs to parse the query at design time.
Does anyone have an idea how to solve this?
Cheers, thanks.
Enable the Use Generic Column Names option, as Koen said.
Then set Generic Column Count to the upper bound of the number of columns the query might return.
If you need dynamic column headers too, go to the region attributes and set Type (under Heading) to the appropriate value. PL/SQL Function Body is the most flexible and powerful option, but it's also the most work. Just make sure you return the correct number of headings as per the query.
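As a hedged sketch, the heading function can return a colon-separated list matching whatever GetSQLQuery produces for the current item value; Field1/Field2 are reused from the question, while the 'SIMPLE' condition is made up for illustration:
DECLARE
  l_headings VARCHAR2(4000);
BEGIN
  -- hypothetical: pick headings matching the query returned by GetSQLQuery(:P1_MyItem)
  IF :P1_MyItem = 'SIMPLE' THEN
    l_headings := 'Field1';
  ELSE
    l_headings := 'Field1:Field2';
  END IF;
  RETURN l_headings;
END;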

How do I use a parameter within BI Publisher to update the table name I am querying in an Oracle Data set

I am trying to create a BI Publisher data model which runs the Oracle query below -
SELECT *
FROM audit_YYYYMM
(this should be the YYYYMM of the current date)
How do I setup a parameter default value within the datamodel to grab the YYYYMM from the SYSDATE?
How do I append this parameter within the data set SQL Query?
I tried SELECT * FROM audit_:Month_YYYYMM
(where I had a string parameter called Month_YYYYMM)
This did not work.
You are going to have to use something like EXECUTE IMMEDIATE, and you may have to write a separate PL/SQL package to call rather than use the built-in data set definition.
https://docs.oracle.com/cd/B19306_01/appdev.102/b14261/executeimmediate_statement.htm
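A minimal sketch of that idea, assuming the data model (or a package it calls) can consume a ref cursor; the function name get_audit_rows and the AUDIT_ prefix are illustrative:
CREATE OR REPLACE FUNCTION get_audit_rows RETURN SYS_REFCURSOR IS
  l_cur SYS_REFCURSOR;
  l_tab VARCHAR2(30) := 'AUDIT_' || TO_CHAR(SYSDATE, 'YYYYMM');  -- e.g. AUDIT_202401
BEGIN
  -- the table name cannot be a bind variable, so the statement is built dynamically
  OPEN l_cur FOR 'SELECT * FROM ' || l_tab;
  RETURN l_cur;
END;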

Need index with sys_connect_by_path function? How to emulate it?

I have a self-referencing table in Oracle 9i, and a view that gets data from it:
CREATE OR REPLACE VIEW config AS
SELECT c.node_id,
c.parent_node_id,
c.config_key,
c.config_value,
(SELECT c2.config_key
FROM vera.config_tab c2
WHERE c2.node_id = c.parent_node_id) AS parent_config_key,
sys_connect_by_path(config_key, '.') path,
sys_connect_by_path(config_key, '->') php_notation
FROM config_tab c
CONNECT BY c.parent_node_id = PRIOR c.node_id
START WITH c.parent_node_id IS NULL
ORDER BY LEVEL DESC
The table stores configuration for a PHP application. Now I need to use the same config in an Oracle view.
I would like to select some values from the view by path, but unfortunately this takes 0.15 s, which is an unacceptable cost:
SELECT * FROM some_table
WHERE some_column IN (
SELECT config_value FROM config WHERE path = 'a.path.to.config'
)
At first I thought of a function-based index on sys_connect_by_path, but that is impossible, as it also requires the CONNECT BY clause.
Any suggestions on how I can emulate an index on the path column of the 'config' view?
If your data doesn't change frequently in the config_tab, you could use a materialized view with the same query as your view. You could then index the path column of your materialized view.
CREATE MATERIALIZED VIEW config
REFRESH COMPLETE ON DEMAND
AS <your_query>;
CREATE INDEX ix_config_path ON config (path);
Since this is a complex query, you would need to do a full refresh of your materialized view every time the base table is updated so that the data in the MV doesn't become stale.
Update
Your path column will be defined as a VARCHAR2(4000). You could limit the size of this column in order to index it: in your query, replace sys_connect_by_path(...) with SUBSTR(sys_connect_by_path(...), 1, 1000), for example.
You won't be able to use REFRESH ON COMMIT on a complex MV, and a simple trigger won't work. You will have to modify the code that updates your base table to include a refresh somehow; I don't know whether this is practical in your environment.
You could also use a trigger that submits a job that will refresh the MV. The job will execute once you commit (this is a feature of dbms_job). This is more complex since you will have to check that you only trigger the job once per transaction (using a package variable for example). Again, this is only practical if you don't update the base table frequently.
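A rough sketch of that trigger-plus-job idea, assuming the materialized view is named CONFIG as above; it omits the once-per-transaction bookkeeping mentioned in the previous paragraph:
CREATE OR REPLACE TRIGGER trg_config_tab_refresh
  AFTER INSERT OR UPDATE OR DELETE ON config_tab
DECLARE
  l_job BINARY_INTEGER;
BEGIN
  -- dbms_job runs the submitted job only after the triggering transaction commits
  DBMS_JOB.SUBMIT(l_job, 'DBMS_MVIEW.REFRESH(''CONFIG'', ''C'');');
END;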
