Find all objects that use one keyword as a variable - Oracle

I have data in a config table as below.
param_key                      param_value
"RAL RREC INCLUDE TERR"        "GISS"
"XNA MIF DTT POSTFIX EXCL"     "GISS"
"NON CUST DTT POSTFIX INCL"    "GISS"
"GIS_TERRITORY_CHNL_XREF"      "GISS"
"GIS TERRITORY"                "GISS"
Now the last param_key, "GIS TERRITORY", is used as a variable in a procedure called "Update_xref", as below.
v_gis_territory VARCHAR2(4) := mckb_load.param_tools.get_hash_string('GIS TERRITORY');
Likewise, I have to identify all the objects where the other param_key values from the config table are used.
Is there any table similar to dba_dependencies to achieve this?
I'd appreciate any input.
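Note that dba_dependencies only records object-to-object references, so a string literal like 'GIS TERRITORY' will not show up there. One workaround is to scan the stored PL/SQL source through DBA_SOURCE (or ALL_SOURCE if you lack access to the DBA views). A minimal sketch, assuming your config table is called config_table (a placeholder for the real name):

SELECT c.param_key, s.owner, s.name, s.type, s.line, s.text
FROM   dba_source s
       JOIN config_table c
         ON UPPER(s.text) LIKE '%' || UPPER(c.param_key) || '%'
ORDER  BY c.param_key, s.owner, s.name, s.line;

This only catches keys that appear verbatim within a single source line; keys assembled dynamically at runtime would still be missed.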

How to update table from JSON flowfile

I have flowfiles with the below structure:
{
"PN" : "U0-WH",
"INPUT_DATE" : "44252.699895833335",
"LABEL" : "Marker",
"STATUS" : "Approved"
}
and I need to execute an update statement using some fields
update table1 set column1 = 'value' where pn=${PN}
I found ConvertJSONToSQL but am not sure how to use it in this case.
You can use the processor ConvertJSONToSQL. Using it, you can convert your JSON into an update query.
ConvertJSONToSQL Description
It takes the following parameters:
1. JDBC Connection Pool: a JDBC pool that takes DB connection information as input.
2. Statement Type: the type of statement you want to create. In your case it's 'UPDATE'.
3. Table Name: the name of the table for which the update query needs to be created.
4. Schema Name: the name of the schema of your database.
5. Translate Field Names: if true, the processor will attempt to translate JSON field names into the appropriate column names for the specified table. If false, the JSON field names must match the column names exactly, or the column will not be updated.
6. Unmatched Field Behaviour: if an incoming JSON element has a field that does not map to any of the database table's columns, this property specifies how to handle the situation.
7. Unmatched Column Behaviour: if an incoming JSON element does not have a field mapping for all of the database table's columns, this property specifies how to handle the situation.
8. Update Keys: a comma-separated list of column names that uniquely identifies a row in the database for UPDATE statements. If the Statement Type is UPDATE and this property is not set, the table's primary keys are used. In that case, if no primary key exists, the conversion to SQL will fail when Unmatched Column Behaviour is set to FAIL. This property is ignored if the Statement Type is INSERT. Supports Expression Language: true (evaluated using flowfile attributes and variable registry).
Read the description above and try to use the properties given. A detailed description of the processor is given in the link below, and a sketch of the statement it would generate for your flowfile follows.
ConvertJSONToSQL Description
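For the flowfile in the question, with Statement Type set to UPDATE, Table Name set to table1, and Update Keys set to PN, the processor would emit a parameterized statement roughly like this (a sketch of typical ConvertJSONToSQL output; the exact SET columns depend on how the JSON fields map to your table):

UPDATE table1 SET INPUT_DATE = ?, LABEL = ?, STATUS = ? WHERE PN = ?

The parameter values travel as flowfile attributes (sql.args.1.value, sql.args.2.value, and so on), so the flowfile is typically routed on to a PutSQL processor, which binds those attributes and executes the statement.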

Determine whether a SAS dataset is a table or view

I'm trying to determine, given a SAS dataset's name, whether it is a table or view.
The context is that I have a data step that iterates over a list of dataset names; if a dataset is a table (and not a view), I'd like to CALL EXECUTE a PROC SQL statement that drops it. As it stands, the code works as intended but throws several warnings of the form
WARNING: File WORK.datasetname.DATA does not exist.
Here is the code I'm using:
data _null_;
  set work.ds_list;
  tbl_loc = scan(tbl_name, 1, '.');
  if tbl_loc = 'WORK' then do;
    drop_string = catx(' ',
                       'proc sql; drop table',
                       tbl_name,
                       '; quit;');
    call execute(drop_string);
    put ' ** Queueing call to drop table ' tbl_name;
  end;
run;
So how do I determine from the dataset's name whether it is a view or table?
Thanks!
The EXIST function will help you here.
if exist(tbl_name, 'DATA') then memtype = 'TABLE';
else if exist(tbl_name, 'VIEW') then memtype = 'VIEW';
drop_statements = catx(' ', 'proc sql; drop', memtype, tbl_name, '; quit;');
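Folded into the data step from the question, that could look like the following sketch (ds_list and tbl_name are the names from the original code; members that are neither tables nor views are skipped):

data _null_;
  length memtype $8;
  set work.ds_list;
  tbl_loc = scan(tbl_name, 1, '.');
  if tbl_loc = 'WORK' then do;
    /* EXIST defaults to DATA, so check both member types explicitly */
    if exist(tbl_name, 'DATA') then memtype = 'TABLE';
    else if exist(tbl_name, 'VIEW') then memtype = 'VIEW';
    if not missing(memtype) then do;
      drop_string = catx(' ', 'proc sql; drop', memtype, tbl_name, '; quit;');
      call execute(drop_string);
      put ' ** Queueing call to drop ' memtype tbl_name;
    end;
  end;
run;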
From the docs:
Syntax
EXIST(member-name <, member-type <, generation>>)
Required Argument
member-name
is a character constant, variable, or expression that specifies the
SAS library member. If member-name is blank or a null string, then
EXIST uses the value of the LAST system variable as the member name.
Optional Arguments
member-type
is a character constant, variable, or expression that specifies the
type of SAS library member. A few common member types include ACCESS,
CATALOG, DATA, and VIEW. If you do not specify a member-type, then the
member type DATA is assumed.
Rather than testing each name individually, how about using SASHELP.VTABLE to determine whether each member is a VIEW or DATA?
data temp / view=temp;
  set sashelp.class;
run;

data check;
  set sashelp.vtable;
  where libname = 'WORK';
run;
Note that the memtype in this case is VIEW. You could join your dataset of names against this table, or do some form of lookup; a join would be pretty straightforward, as in the sketch below.
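For instance, the same metadata is exposed to PROC SQL as DICTIONARY.TABLES, so the lookup can be done in a single query (a sketch reusing ds_list and tbl_name from the question; note that the dictionary stores names in uppercase):

proc sql;
  create table tables_to_drop as
  select t.libname, t.memname
  from dictionary.tables as t
       inner join work.ds_list as d
         on t.memname = upcase(scan(d.tbl_name, 2, '.'))
  where t.libname = 'WORK'
    and t.memtype = 'DATA';
quit;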
Then once you have the datasets, you can use PROC DATASETS to drop them all at once rather than one at a time. You don't say what initially created this list, but how it was created is important and could possibly simplify this a lot.
proc datasets lib=work;
  delete temp / memtype=view;
run; quit;
So you'd like to delete all datasets, but not views, from a library?
Simply use the (documented) delete procedure:
proc delete lib=work data=_all_ (memtype=data) ;
run;

Drupal 7 | Query through multiple node references

Let me start with structure first:
[main_node]->field_reference_to_sub_node->[sub_node]->field_ref_to_sub_sub_node->[sub_sub_node]
[sub_sub_node]->field_type = ['wrong_type', 'right_type']
How do I efficiently query all [sub_sub_node] ids with right_type that are referenced by main_node (the currently open node)?
Doing node_load in a foreach seems like overkill for this. Does anybody have a better solution? Greatly appreciated!
If you want to query the field tables directly:
$query = db_select('node', 'n')->fields('n_sub_subnode', array('nid'));
$query->innerJoin('table_for_field_reference_to_sub_node', 'subnode', "n.nid = subnode.entity_id AND subnode.entity_type='node'");
$query->innerJoin('node', 'n_subnode', 'subnode.subnode_target_id = n_subnode.nid');
$query->innerJoin('table_for_field_ref_to_sub_sub_node', 'sub_subnode', "n_subnode.nid = sub_subnode.entity_id AND sub_subnode.entity_type='node'");
$query->innerJoin('node', 'n_sub_subnode', 'sub_subnode.sub_subnode_target_id = n_sub_subnode.nid');
$query->innerJoin('table_for_field_type', 'field_type', "n_sub_subnode.nid = field_type.entity_id AND field_type.entity_type='node'");
$query->condition('n.nid', 'your_main_node_nid');
$query->condition('field_type.field_type_value', 'right_type');
Here is the explanation of each line:
$query = db_select('node', 'n')->fields('n_sub_subnode', array('nid'));
We start by querying the base node table, with the alias 'n'. This is the table used for the 'main_node'. The node ids that are returned will, however, come from another alias (n_sub_subnode), as you will see below.
$query->innerJoin('table_for_field_reference_to_sub_node', 'subnode', "n.nid = subnode.entity_id AND subnode.entity_type='node'");
The first join is with the table of the field_reference_to_sub_node field, so you have to replace this with the actual name of the table. This is how we will get the references to the subnodes.
$query->innerJoin('node', 'n_subnode', 'subnode.subnode_target_id = n_subnode.nid');
A join back to the node table for the subnodes. You have to replace the 'subnode_target_id' with the actual field for the target id from the field_reference_to_sub_node table. The main purpose of this join is to make sure there are valid nodes in the subnode field.
$query->innerJoin('table_for_field_ref_to_sub_sub_node', 'sub_subnode', "n_subnode.nid = sub_subnode.entity_id AND sub_subnode.entity_type='node'");
The join to the table that contains references to the sub_sub_node, so you have to replace the 'table_for_field_ref_to_sub_sub_node' with the actual name of the table. This is how we get the references to the sub_sub_nodes.
$query->innerJoin('node', 'n_sub_subnode', 'sub_subnode.sub_subnode_target_id = n_sub_subnode.nid');
The join back to the node table for the sub_sub_nodes, to make sure we have valid references. You have to replace the 'sub_subnode_target_id' with the actual field for the target id from the 'field_ref_to_sub_sub_node' table.
$query->innerJoin('table_for_field_type', 'field_type', "n_sub_subnode.nid = field_type.entity_id AND field_type.entity_type='node'");
And we can now finally join the table with the field_type information. You have to replace the 'table_for_field_type' with the actual name of the table.
$query->condition('n.nid', 'your_main_node_nid');
You can now put a condition on the main node id if you want.
$query->condition('field_type.field_type_value', 'right_type');
And the condition for the field type. You have to replace the 'field_type_value' with the actual name of the table field for the value.
Of course, if you are really sure that you always have valid references, you can skip the joins to the node table and directly join the field tables using the target id and entity_id fields (basically the target_id from one field table has to be the entity_id for the next one), as in the sketch below.
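A sketch of that shortened version, reusing the placeholder table and column names from above (verify them against your actual field tables):

$query = db_select('table_for_field_reference_to_sub_node', 'subnode')
  ->fields('sub_subnode', array('sub_subnode_target_id'));
$query->innerJoin('table_for_field_ref_to_sub_sub_node', 'sub_subnode', "subnode.subnode_target_id = sub_subnode.entity_id AND sub_subnode.entity_type = 'node'");
$query->innerJoin('table_for_field_type', 'field_type', "sub_subnode.sub_subnode_target_id = field_type.entity_id AND field_type.entity_type = 'node'");
$query->condition('subnode.entity_id', 'your_main_node_nid');
$query->condition('subnode.entity_type', 'node');
$query->condition('field_type.field_type_value', 'right_type');
$sub_sub_node_ids = $query->execute()->fetchCol();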
I really hope I do not have typos, so please check the queries carefully.

Extracting an Array of Structs in Hive

I have an external table in hive
CREATE EXTERNAL TABLE FOO (
TS string,
customerId string,
products array< struct <productCategory:string, productId:string> >
)
PARTITIONED BY (ds string)
ROW FORMAT SERDE 'some.serde'
WITH SERDEPROPERTIES ('error.ignore'='true')
LOCATION 'some_locations'
;
A record of the table may hold data such as:
1340321132000, 'some_company', [{"productCategory":"footwear","productId":"nik3756"},{"productCategory":"eyewear","productId":"oak2449"}]
Does anyone know if there is a way to simply extract all the productCategory values from this record and return them as an array of product categories, without using explode? Something like the following:
["footwear", "eyewear"]
Or do I need to write my own GenericUDF? If so, can someone give me some hints? I do not know much Java (I'm a Ruby person). I read some instructions on UDFs from Apache Hive; however, I do not know which collection type is best for handling arrays, and which for handling structs.
===
I have partly answered this question by writing a GenericUDF, but I ran into two other problems. It is in this SO question.
You can use a JSON serde or the built-in functions get_json_object and json_tuple.
With rcongiu's Hive-JSON SerDe the usage will be:
Define the table:
CREATE TABLE complex_json (
DocId string,
Orders array<struct<ItemId:int, OrderDate:string>>)
Load sample JSON into it (it is important for this data to be on a single line):
{"DocId":"ABC","Orders":[{"ItemId":1111,"OrderDate":"11/11/2012"},{"ItemId":2222,"OrderDate":"12/12/2012"}]}
Then fetching the order ids is as easy as:
SELECT Orders.ItemId FROM complex_json LIMIT 100;
It will return the list of ids for you:
itemid
[1111,2222]
This is proven to return correct results in my environment. Full listing:
add jar hdfs:///tmp/json-serde-1.3.6.jar;
CREATE TABLE complex_json (
DocId string,
Orders array<struct<ItemId:int, OrderDate:string>>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe';
LOAD DATA INPATH '/tmp/test.json' OVERWRITE INTO TABLE complex_json;
SELECT Orders.ItemId FROM complex_json LIMIT 100;
Read more here:
http://thornydev.blogspot.com/2013/07/querying-json-records-via-hive.html
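Incidentally, the same dotted projection should work directly on the table from the question, regardless of the serde, and return exactly the asked-for array without explode (assuming the products column is parsed correctly):

SELECT products.productCategory FROM FOO;
-- e.g. ["footwear", "eyewear"]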
One way would be to use either the inline or explode functions, like so:
SELECT
  TS,
  customerId,
  pCat,
  pId
FROM FOO
LATERAL VIEW inline(products) p AS pCat, pId
Otherwise you can write a UDF. Check out this post and this post for that, along with the following resources:
Matthew Rathbone's guide to writing generic UDFs
Mark Grover's how to guide
the baynote blog post on generic UDFs
If the size of the array is fixed (say, 2), try:
products[0].productCategory, products[1].productCategory
If not, a UDF should be the right solution. I guess you could do it in JRuby. Good luck!

With Oracle XML Tables do XQuery selects use XmlIndexes?

I am trying to retrieve keys and parent keys from some structured XML stored as binary XML in Oracle. I have tried creating an unstructured index and also an index with a structured component. The structured component works fine when doing a SELECT against XMLTABLE(), but I cannot retrieve values of the parent node using XMLTable. I am therefore trying the following XQuery to retrieve the parent values, but it is not using the index at all. Does this style of query support XMLIndexes? I can't find anything in the docs that says either way.
SELECT y.*
FROM xml_data x, XMLTABLE(xmlnamespaces( DEFAULT 'namespace'),
'for $i in /foo/bar
return element r {
$i/someKey
,element parentKey { $i/../someKey }
}'
PASSING x.import_xml
COLUMNS
someKey VARCHAR2(100) PATH 'someKey'
,parentKey VARCHAR2(100) PATH 'parentKey'
) y
Thanks, Tom
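For what it's worth, the usual way to confirm whether the XMLIndex is being picked up is to inspect the execution plan; when the rewrite succeeds, the plan references the index's internal tables rather than evaluating the XQuery over the base table. A sketch using standard Oracle tooling (not specific to XMLIndex):

EXPLAIN PLAN FOR
SELECT y.*
FROM xml_data x, XMLTABLE(xmlnamespaces( DEFAULT 'namespace'),
       'for $i in /foo/bar
        return element r {
          $i/someKey
          ,element parentKey { $i/../someKey }
        }'
       PASSING x.import_xml
       COLUMNS
         someKey VARCHAR2(100) PATH 'someKey'
        ,parentKey VARCHAR2(100) PATH 'parentKey'
     ) y;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);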
