ORACLE: Table or view not found using semantic SPARQL query

On a Windows Server 2019-based Oracle 19c Enterprise Edition database, as the EE user, I created a triple table, a user-owned network, and a user-owned model:
CREATE TABLE EE.RDF_WORDNET (TRIPLE MDSYS.SDO_RDF_TRIPLE_S)
COLUMN TRIPLE NOT SUBSTITUTABLE AT ALL LEVELS TABLESPACE USERS
LOGGING COMPRESS NOCACHE PARALLEL MONITORING;
exec sem_apis.create_sem_network('semts', network_owner=>'EE', network_name=>'EE_WordNet' );
exec sem_apis.create_sem_model('wn','RDF_WORDNET','triple', network_owner=>'EE', network_name=>'EE_WordNet');
Then I bulk-loaded a ton of data from the Princeton WordNet, which all goes in without error... and created the entailment:
exec SEM_APIS.CREATE_ENTAILMENT('rdfs_rix_wn',
SEM_Models('wn'),
SEM_Rulebases('RDFS'),
network_owner=>'EE',
network_name=>'EE_WordNet');
I can check the tables and all the data looks good, and the views/tables created in the network owner (EE) schema look right, but when I run a SPARQL query as the EE user (the network/model owner), I get ORA-00942: table or view does not exist, and I can't figure out what it can't see...
Select *
From Table(Sem_Match('(?s <wn20schema:containsWordSense> ?ws)
( ?ws <wn20schema:word> ?w)
( ?w <wn20schema:lexicalForm> ?l )
( ?s <wn20schema:containsWordSense> ?ws2)
( ?ws2 <wn20schema:word> ?w2)
( ?w2 <wn20schema:lexicalForm> ?l2 )',
Sem_Models('wn'), Null, Null, Null))
Where Upper(L) = Upper('Gold');
Results in:
ORA-00942: table or view does not exist
ORA-06512: at "MDSYS.RDF_MATCH_IMPL_T", line 161
ORA-06512: at "MDSYS.RDF_APIS_INTERNAL", line 8702
ORA-06512: at "MDSYS.S_SDO_RDF_QUERY", line 26
ORA-06512: at "MDSYS.RDF_APIS_INTERNAL", line 8723
ORA-06512: at "MDSYS.RDF_MATCH_IMPL_T", line 144
ORA-06512: at line 4
There are 25 tables created in the EE schema when the model is created:
EE_WORDNET#RDF_CLIQUE$
EE_WORDNET#RDF_COLLISION$
EE_WORDNET#RDF_CRS_URI$
EE_WORDNET#RDF_DELTA$
EE_WORDNET#RDF_GRANT_INFO$
EE_WORDNET#RDF_HIST$
EE_WORDNET#RDF_LINK$
EE_WORDNET#RDF_MODEL$_TBL
EE_WORDNET#RDF_MODEL_INTERNAL$
EE_WORDNET#RDF_NAMESPACE$
EE_WORDNET#RDF_NETWORK_INDEX_INTERNAL$
EE_WORDNET#RDF_PARAMETER
EE_WORDNET#RDF_PRECOMP$
EE_WORDNET#RDF_PRECOMP_DEP$
EE_WORDNET#RDF_PRED_STATS$
EE_WORDNET#RDF_RI_SHAD_2$
EE_WORDNET#RDF_RULE$
EE_WORDNET#RDF_RULEBASE$
EE_WORDNET#RDF_SESSION_EVENT$
EE_WORDNET#RDF_SYSTEM_EVENT$
EE_WORDNET#RDF_TERM_STATS$
EE_WORDNET#RDF_TS$
EE_WORDNET#RDF_VALUE$
EE_WORDNET#RENAMED_APPTAB_RDF_MODEL_ID_1
EE_WORDNET#SEM_INDEXTYPE_METADATA$
and 41 views, including RDF_WORDNET, which was originally created as the triple table above...
EE_WORDNET#RDFI_RDFS_RIX_WN
EE_WORDNET#RDFM_WN
EE_WORDNET#RDFR_OWL2EL
EE_WORDNET#RDFR_OWL2RL
EE_WORDNET#RDFR_OWLPRIME
EE_WORDNET#RDFR_OWLSIF
EE_WORDNET#RDFR_RDF
EE_WORDNET#RDFR_RDFS
EE_WORDNET#RDFR_RDFS++
EE_WORDNET#RDFR_SKOSCORE
EE_WORDNET#RDFT_WN
EE_WORDNET#RDF_DTYPE_INDEX_INFO
EE_WORDNET#RDF_MODEL$
EE_WORDNET#RDF_PRIV$
EE_WORDNET#RDF_RULEBASE_INFO
EE_WORDNET#RDF_RULES_INDEX_DATASETS
EE_WORDNET#RDF_RULES_INDEX_INFO
EE_WORDNET#RDF_VMODEL_DATASETS
EE_WORDNET#RDF_VMODEL_INFO
EE_WORDNET#SEMI_RDFS_RIX_WN
EE_WORDNET#SEMM_WN
EE_WORDNET#SEMP_WN
EE_WORDNET#SEMR_OWL2EL
EE_WORDNET#SEMR_OWL2RL
EE_WORDNET#SEMR_OWLPRIME
EE_WORDNET#SEMR_OWLSIF
EE_WORDNET#SEMR_RDF
EE_WORDNET#SEMR_RDFS
EE_WORDNET#SEMR_RDFS++
EE_WORDNET#SEMR_SKOSCORE
EE_WORDNET#SEMT_WN
EE_WORDNET#SEM_DTYPE_INDEX_INFO
EE_WORDNET#SEM_INF_HIST
EE_WORDNET#SEM_MODEL$
EE_WORDNET#SEM_NETWORK_INDEX_INFO
EE_WORDNET#SEM_RULEBASE_INFO
EE_WORDNET#SEM_RULES_INDEX_DATASETS
EE_WORDNET#SEM_RULES_INDEX_INFO
EE_WORDNET#SEM_VMODEL_DATASETS
EE_WORDNET#SEM_VMODEL_INFO
RDF_WORDNET
As a test, I granted SELECT, INSERT, and UPDATE on all these tables and views to MDSYS; it made no difference...
I might add that on an Oracle 12.2 database with a system-owned network, and the same data bulk-loaded into the same model, this query returns the expected data, so this has something to do with it being a user-owned network...
Any thoughts as to who needs permissions to what for this to work?
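Not an answer, just something that might be worth ruling out: the SEM_MATCH signature in 18c/19c includes NETWORK_OWNER and NETWORK_NAME parameters for schema-private networks, and a call that leaves them out is (as far as I understand it) resolved against the default MDSYS-owned network, which would fail with ORA-00942 here. Assuming that signature, and that mixed positional/named notation is accepted, the call would look roughly like this:
Select *
From Table(Sem_Match('(?s <wn20schema:containsWordSense> ?ws)
( ?ws <wn20schema:word> ?w)
( ?w <wn20schema:lexicalForm> ?l )
( ?s <wn20schema:containsWordSense> ?ws2)
( ?ws2 <wn20schema:word> ?w2)
( ?w2 <wn20schema:lexicalForm> ?l2 )',
Sem_Models('wn'), Null, Null, Null,
network_owner => 'EE',        -- assumption: named arguments per the 19c SEM_MATCH signature
network_name  => 'EE_WordNet'))
Where Upper(L) = Upper('Gold');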

Related

How to use a Hive SET statement in SAS SQL?

I am not aware of SAS SQL, but one of our users is actually struggling with the syntax:
PROC SQL;
CONNECT TO IMPALA (USER="&TDUSER" PW="&***" DSN="BIGDATA" DATABASE= abc);
CREATE TABLE SIL_MONITORED AS SELECT * FROM CONNECTION TO IMPALA
(
SELECT DISTINCT a.partyid, a.baselinerecordstatuscode, cast(b.cin as decimal(10)) as cin, a.business_date
/*cast(unix_timestamp(a.business_date, "yyyy-MM-dd") as timestamp) as business_date*/
FROM abc.baseline_party as a
LEFT JOIN
abc.baseline_relationship as b
on a.partyid = b.partyid
where a.business_date = (select max(business_date) from abc.baseline_party)
and upper(a.baselinerecordstatuscode) = 'MONITORED') ;
The recommendation from the big data side is to use the property below to overcome the scratch-limit issue:
set mem_limit=1g
but we aren't sure how to incorporate it on the SAS client side to make it work. In Hue it can be set at the session level, but that isn't available in SAS.
He has tried the following, but the other property (SCRATCH_LIMIT) was being ignored on the big data side:
PROC SQL;
CONNECT TO IMPALA (USER="&TDUSER" PW="&***" DSN="BIGDATAPROD" DATABASE= abc
    conopts='SCRATCH_LIMIT=200g');
CREATE TABLE SIL_MONITORED AS SELECT * FROM CONNECTION TO IMPALA
What's the right way to make
set mem_limit=1g
work with the above SQL from the SAS side?
Thank you!
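One possibility (a sketch only; EXECUTE ... BY is standard SAS explicit pass-through syntax, but whether this Impala/ODBC setup honors a SET statement sent that way is an assumption) is to push the option to Impala as its own pass-through statement right after connecting, before the main query:
PROC SQL;
  CONNECT TO IMPALA (USER="&TDUSER" PW="&***" DSN="BIGDATA" DATABASE= abc);
  /* assumption: forward the session option to Impala before running the query */
  EXECUTE (set mem_limit=1g) BY IMPALA;
  CREATE TABLE SIL_MONITORED AS SELECT * FROM CONNECTION TO IMPALA
  ( SELECT DISTINCT a.partyid, a.baselinerecordstatuscode,
           cast(b.cin as decimal(10)) as cin, a.business_date
    FROM abc.baseline_party as a
    LEFT JOIN abc.baseline_relationship as b ON a.partyid = b.partyid
    WHERE a.business_date = (select max(business_date) from abc.baseline_party)
      AND upper(a.baselinerecordstatuscode) = 'MONITORED' );
  DISCONNECT FROM IMPALA;
QUIT;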

Oracle Application Express 32k limit - workaround

I am trying to insert 1,000,000 rows into a table in Oracle APEX, but there is a 32K limit and I can only insert 4,000 rows at a time. Is there a way to get around this limit? Otherwise I'm going to be doing inserts for the rest of my life.
Cheers
Brian
Example script:
INSERT ALL
INTO staff (staffid, forename, familyName, jobRole) values (staffid_auto_incr.nextval, 'Raymond', 'Mccoy', 'manager')
INTO staff (staffid, forename, familyName, jobRole) values (staffid_auto_incr.nextval, 'Janice', 'Cunningham', 'admin')
INTO staff (staffid, forename, familyName, jobRole) values (staffid_auto_incr.nextval, 'Christine', 'Reyes', 'sales')
etc.etc.
select * from dual
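If the 32K ceiling is the size of the statement pasted into the SQL Commands window, one workaround (assuming command-line access to the same schema, which the post doesn't mention) is to keep the inserts in a plain script file and run it with SQL*Plus or SQLcl, which impose no such cap. A minimal sketch:
-- staff_load.sql (hypothetical file name): ordinary single-row inserts,
-- so the script can be any length and committed in batches.
INSERT INTO staff (staffid, forename, familyName, jobRole)
  VALUES (staffid_auto_incr.nextval, 'Raymond', 'Mccoy', 'manager');
INSERT INTO staff (staffid, forename, familyName, jobRole)
  VALUES (staffid_auto_incr.nextval, 'Janice', 'Cunningham', 'admin');
-- ... remaining rows ...
COMMIT;
-- run from the shell:  sqlplus <user>/<password>@<db> @staff_load.sql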

More efficient way of reading data from two tables and writing them into a new one using batch

I'm trying to write a Spring Batch job to move data from two tables into a single table. I'm having a problem now, and although I have thought of several ways to solve it, I'm still wondering if there is a more efficient solution.
Basically the problem is this: I have two tables, let's call them table A and table B, with the following structure:
table A
column 1A column 2A
======== ========
bmw 123555
nissan 123456777
audi 12888
toyota 9800765
kia 85834945
table B
column 1B column 2B
======== ========
12 caraudi
123456 carnissan
123 carbmw
0125 carvvv
88963 carbbn
What I'm trying to do is create a table C from the batch's writer which holds all the data from table B (column 1B and column 2B) plus column 1A only, without losing any data from either table and without writing duplicated data based on column 2A and column 1B. Table A and table B have only one column in common (column 1B == column 2A), but column 2A has a 3-digit suffix added to each id, so if we do a join and compare I have to use a SUBSTR method, and it will be very slow because I have huge tables.
The other solution I thought of is to have a reader for table A that writes all results to a tempA table without the suffix, then another reader that compares tempA with table B and writes the data to table C as follows:
table c
column 1A (can be nullable because not all the records in column 2A exist in column 1B)
column 1B
column 2B
so the table will look like this
table C
column 1c column 2c column 3c
========= ========= =========
12 caraudi audi
123456 carnissan nissan
123 carbmw bmw
0125 carvv
88963 carbbn
9800765 toyota
85834945 kia
Is this the best way to solve the problem? Or is there another way that is more efficient?
Thanks in advance!
Before giving up on a LEFT OUTER JOIN from tableA to tableB (or a FULL OUTER JOIN if your query conditions require it) consider using db2expln or the Visual Explain utility in IBM Data Studio to determine the cost of some alternative ways to perform a "begins with" match on VARCHAR columns:
ON a.col2a LIKE b.col1b || '___'
ON a.col2a >= b.col1b || '000' AND a.col2a <= b.col1b || '999'
If 1b is a CHAR column, you might need to trim off its trailing spaces before concatenating additional characters to it: RTRIM( b.col1b ) || '000'
Assuming column 1b is indexed, one prefix-based matching predicate or another is bound to make a join between those two tables less expensive than creating, populating, and joining to your own temp table. If I'm wrong (or there are other complicating factors) and a temp table ends up being the best option, be sure to use a declared global temporary table (DGTT) so you can avoid the logging overhead of populating it.
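As a concrete sketch of that suggestion (table and column names are assumed from the post's A/B naming), the prefix match can go straight into the join instead of staging a temp table:
-- Sketch: left outer join from table A to table B, matching col2a
-- ("col1b" plus a 3-digit suffix) with a range predicate so no SUBSTR
-- has to be applied to the joined columns.
SELECT a.col1a, b.col1b, b.col2b
FROM   table_a a
       LEFT OUTER JOIN table_b b
         ON  a.col2a >= b.col1b || '000'
         AND a.col2a <= b.col1b || '999';
A FULL OUTER JOIN, as noted above, would also keep the rows that exist only in table B, if that is what the target table needs.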

Oracle Merge, not logging errors

I'm merging several tables in Oracle 10g into a consolidated table, like this:
table_A (will have all the records)
table_b - part of the data to be merged
table_c - part of the data to be merged
table_d - part of the data to be merged
Now I run it with error logging like this:
MERGE INTO TABLE_A A USING (SELECT * FROM TABLE_B) B
ON
(
A.NOMBRE=B.NOMBRE AND
A.PRIMER_APELLIDO=B.PRIMER_APELLIDO AND
A.SEGUNDO_APELLIDO=B.SEGUNDO_APELLIDO AND
TO_CHAR(A.FECHA_NACIMIENTO,'DD/MM/YYYY')=TO_CHAR(B.FECHA_NACIMIENTO,'DD/MM/YYYY') AND
A.SEXO=B.SEXO
)
WHEN MATCHED THEN
UPDATE SET DGP2011='1'
WHEN NOT MATCHED THEN
INSERT
(
A.FOLIO_RELACIONADO,
A.CVE_PROGRAMA,
A.FECHA_ALTA,
A.PRIMER_APELLIDO,
A.SEGUNDO_APELLIDO,
A.NOMBRE,
A.FECHA_NACIMIENTO,
A.SEXO,
A.CVE_NACIONALIDAD,
A.CVE_ENTIDAD_NACIMIENTO,
A.CVE_GRADO_ESCOLAR,
A.CVE_GRADO_ESTUDIOS,
A.CURP,
A.CALLE,
A.NUM_EXT,
A.NUM_INT,
A.CODIGO_POSTAL,
A.ENTRE_CALLE,
A.Y_CALLE,
A.OTRA_REFERENCIA,
A.TELEFONO,
A.COLONIA,
A.LOCALIDAD,
A.CVE_MUNICIPIO,
A.CVE_ENTIDAD_FEDERATIVA,
A.CVE_CCT,
A.PRIMER_APELLIDO_C,
A.SEGUNDO_APELLIDO_C,
A.NOMBRE_C,
A.FECHA_NACIMIENTO_C,
A.SEXO_C,
A.CVE_ESTADO_CIVIL_C,
A.CVE_GRADO_ESTUDIOS_C,
A.CVE_PARENTESCO_C,
A.CURP_C,
A.CVE_TIPO_ID_OFCL_C,
A.ID_DOCTO_OFL_C,
A.CVE_NACIONALIDAD_C,
A.CVE_ENTIDAD_NACIMIENTO_C,
A.CALLE_C,
A.NUM_EXT_C,
A.NUM_INT_C,
A.CODIGO_POSTAL_C,
A.ENTRE_CALLE_C,
A.Y_CALLE_C,
A.OTRA_REFERENCIA_C,
A.TELEFONO_C,
A.COLONIA_C,
A.LOCALIDAD_C,
A.CVE_MUNICIPIO_C,
A.CVE_ENTIDAD_FEDERATIVA_C,
A.E_MAIL_C,
A.DGP2011
)
VALUES
(
B.FOLIO_RELACIONADO,
B.CVE_PROGRAMA,
B.FECHA_ALTA,
B.PRIMER_APELLIDO,
B.SEGUNDO_APELLIDO,
B.NOMBRE,
TO_CHAR(B.FECHA_NACIMIENTO,'DD/MM/YYYY'),
B.SEXO,
B.CVE_NACIONALIDAD,
B.CVE_ENTIDAD_NACIMIENTO,
B.CVE_GRADO_ESCOLAR,
B.CVE_GRADO_ESTUDIOS,
B.CURP,
B.CALLE,
B.NUM_EXT,
B.NUM_INT,
B.CODIGO_POSTAL,
B.ENTRE_CALLE,
B.Y_CALLE,
B.OTRA_REFERENCIA,
B.TELEFONO,
B.COLONIA,
B.LOCALIDAD,
B.CVE_MUNICIPIO,
B.CVE_ENTIDAD_FEDERATIVA,
B.CVE_CCT,
B.PRIMER_APELLIDO_C,
B.SEGUNDO_APELLIDO_C,
B.NOMBRE_C,
TO_CHAR(B.FECHA_NACIMIENTO_C,'DD/MM/YYYY'),
B.SEXO_C,
B.CVE_ESTADO_CIVIL_C,
B.CVE_GRADO_ESTUDIOS_C,
B.CVE_PARENTESCO_C,
B.CURP_C,
B.CVE_TIPO_ID_OFCL_C,
B.ID_DOCTO_OFL_C,
B.CVE_NACIONALIDAD_C,
B.CVE_ENTIDAD_NACIMIENTO_C,
B.CALLE_C,
B.NUM_EXT_C,
B.NUM_INT_C,
B.CODIGO_POSTAL_C,
B.ENTRE_CALLE_C,
B.Y_CALLE_C,
B.OTRA_REFERENCIA_C,
B.TELEFONO_C,
B.COLONIA_C,
B.LOCALIDAD_C,
B.CVE_MUNICIPIO_C,
B.CVE_ENTIDAD_FEDERATIVA_C,
B.E_MAIL_C,
'1'
)LOG ERRORS INTO ELOG_SEGURO_ESCOLAR REJECT LIMIT UNLIMITED;
and it just raises the error "ORA-01722: invalid number", and Toad highlights the 'A.' part of the query.
Now, about the tables:
Table A has all its fields as VARCHAR2(4000).
Tables B to D are typed according to the data they hold (date, number, etc.).
The thing is, even with the error logging clause it raises the error and doesn't merge anything!
Plus, I have no idea what I should be looking for to find the 'invalid number' field.
Any advice would be deeply appreciated.
Found it!
It was the TO_CHAR(A.FECHA_NACIMIENTO,'DD/MM/YYYY') line. I just changed it to
A.FECHA_NACIMIENTO=B.FECHA_NACIMIENTO and it worked. Thanks anyway!
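For reference, an abbreviated sketch of the corrected statement (the column lists are shortened here; only the join condition actually changed from the original):
MERGE INTO table_a a
USING (SELECT * FROM table_b) b
ON (
      a.nombre           = b.nombre
  AND a.primer_apellido  = b.primer_apellido
  AND a.segundo_apellido = b.segundo_apellido
  AND a.fecha_nacimiento = b.fecha_nacimiento   -- direct comparison, no TO_CHAR on either side
  AND a.sexo             = b.sexo
)
WHEN MATCHED THEN
  UPDATE SET a.dgp2011 = '1'
WHEN NOT MATCHED THEN
  INSERT (a.nombre, a.primer_apellido, a.segundo_apellido,
          a.fecha_nacimiento, a.sexo, a.dgp2011)
  VALUES (b.nombre, b.primer_apellido, b.segundo_apellido,
          b.fecha_nacimiento, b.sexo, '1')
LOG ERRORS INTO elog_seguro_escolar REJECT LIMIT UNLIMITED;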

Oracle hierarchical query: NOCYCLE and CONNECT BY ROOT

Can somebody explain the use of the NOCYCLE and CONNECT BY ROOT clauses in hierarchical queries in Oracle? Also, when we don't use START WITH, in what order do we get the rows? I mean, when we don't use START WITH we get a lot more rows. Can anybody explain NOCYCLE and CONNECT BY ROOT (how is it different from START WITH?) using the simple emp table? Thanks for the help.
If your data has a loop in it (A -> B -> A -> B ...), Oracle will throw an exception (ORA-01436: CONNECT BY loop in user data) when you run a hierarchical query. NOCYCLE instructs Oracle to return rows even if such a loop exists.
CONNECT_BY_ROOT gives you access to the root element, even several layers down in the query. Using the HR schema:
select level, employee_id, last_name, manager_id ,
connect_by_root employee_id as root_id
from employees
connect by prior employee_id = manager_id
start with employee_id = 100
LEVEL EMPLOYEE_ID LAST_NAME MANAGER_ID ROOT_ID
---------- ----------- ------------------------- ---------- ----------
1 100 King 100
2 101 Kochhar 100 100
3 108 Greenberg 101 100
4 109 Faviet 108 100
...
Here, you see I started with employee 100 and then walked down to his employees. The CONNECT_BY_ROOT operator gives me access to King's employee_id even four levels down. I was very confused at first by this operator, thinking it meant "connect by the root element" or something. Think of it more like "the root of the CONNECT BY clause."
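To see that CONNECT_BY_ROOT simply reports whichever node START WITH selected, here is the same query (a sketch against the same HR schema) started one level down:
-- Starting at Kochhar (101) instead of King (100): the root of the
-- CONNECT BY is now 101, so CONNECT_BY_ROOT reports 101 for every row
-- in that subtree.
select level, employee_id, last_name,
       connect_by_root employee_id as root_id
from employees
connect by prior employee_id = manager_id
start with employee_id = 101;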
Here is an example of NOCYCLE use in a query.
Suppose we have a simple table with columns r1 and r2 and the values
first row: r1=a, r2=b
second row: r1=b, r2=a
Now we know a refers to b and b refers back to a.
Hence there is a loop, and if we write a hierarchical query such as
select r1 from table_name
start with r1='a'
connect by prior r2=r1;
we get a CONNECT BY loop error (ORA-01436).
Hence, use NOCYCLE to allow Oracle to return results even if a loop exists.
The query then becomes:
select r1 from table_name
start with r1='a'
connect by nocycle prior r2=r1;
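A runnable sketch of that two-row example (loop_demo is a made-up table name):
-- Two rows that point at each other, reproducing the loop described above.
CREATE TABLE loop_demo (r1 VARCHAR2(1), r2 VARCHAR2(1));
INSERT INTO loop_demo VALUES ('a', 'b');
INSERT INTO loop_demo VALUES ('b', 'a');

-- Without NOCYCLE this raises ORA-01436; with NOCYCLE it returns rows,
-- and CONNECT_BY_ISCYCLE marks the row where the cycle was detected.
SELECT r1, r2, LEVEL, CONNECT_BY_ISCYCLE AS is_cycle
FROM   loop_demo
START WITH r1 = 'a'
CONNECT BY NOCYCLE PRIOR r2 = r1;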
