Failed to execute query. Error: File 'https://track2gen2storage.blob.core.windows.net/\sourcedata\sample.csv' cannot be opened because it does not exist or it is used by another process.
We performed these steps:
create database SalesdataDemo
use salesdataDemo
-----create master key
CREATE MASTER KEY ENCRYPTION BY PASSWORD = <Password>;
SELECT *
FROM sys.symmetric_keys AS SK
WHERE SK.name = '##MS_DatabaseMasterKey##';
CREATE DATABASE SCOPED CREDENTIAL ADL_User
WITH
IDENTITY = '<client_id>@<OAuth_2.0_Token_EndPoint>', SECRET = <Key>
CREATE DATABASE SCOPED CREDENTIAL adls_credential
WITH IDENTITY ='SHARED ACCESS SIGNATURE',
SECRET = <azure_storage_account_key>
CREATE EXTERNAL DATA SOURCE adlsdatasource
WITH
( LOCATION = 'https://track2gen2storage.blob.core.windows.net',
CREDENTIAL = adls_credential
) ;
CREATE EXTERNAL FILE FORMAT adls_csv
WITH (
FORMAT_TYPE = DELIMITEDTEXT,
FORMAT_OPTIONS ( FIELD_TERMINATOR = ',', STRING_DELIMITER = '"', FIRST_ROW = 2 )
);
CREATE EXTERNAL TABLE sampledata ( <ColumnName><Datatype>)
WITH (
LOCATION = '/sourcedata/sample.csv',
DATA_SOURCE = adlsdatasource,
FILE_FORMAT = adls_csv
)
select * from sampledata
I think the problem is that your external table location starts with /. Try changing it to:
CREATE EXTERNAL TABLE sampledata ( <ColumnName><Datatype>)
WITH (
LOCATION = 'sourcedata/sample.csv',
DATA_SOURCE = adlsdatasource,
FILE_FORMAT = adls_csv
)
Here is the documentation you can also take a look at for reference:
https://learn.microsoft.com/en-us/azure/synapse-analytics/sql/create-use-external-tables
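As a concrete sketch with the leading slash removed (the column names and types below are illustrative, not taken from the actual file):

```sql
-- Hypothetical two-column schema for sample.csv
CREATE EXTERNAL TABLE sampledata (
    [Id]   INT,
    [Name] VARCHAR(100)
)
WITH (
    LOCATION = 'sourcedata/sample.csv',
    DATA_SOURCE = adlsdatasource,
    FILE_FORMAT = adls_csv
);
```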
One more question: why do you need the database scoped credential named ADL_User? Only adls_credential is referenced by the external data source.
Can someone let me know if there is a DECLARE equivalent in Databricks SQL?
The SQL code that I am trying to execute with Databricks SQL is as follows:
DECLARE
@EnrichedViewDatabase sysname,
@EnrichedViewSchema sysname,
@EnrichedColumnSuffix varchar(50),
@LanguageCode varchar(10),
@BaseTableSuffix varchar(50),
@PreviewOnly bit, --Indicate whether to preview the SQL Script (without creating the views) = 1 ; Create views = 0;
@CurrentDatabase sysname,
@CurrentDatabaseSchema sysname
SET @EnrichedViewDatabase = 'mydatabasenr1'
SET @EnrichedViewSchema = 'dbo'
SET @EnrichedColumnSuffix = 'code'
SET @LanguageCode = 1033
SET @BaseTableSuffix = ''
SET @PreviewOnly = 0
SET @CurrentDatabase = 'mydatabasenr2'
SET @CurrentDatabaseSchema = 'dbo'
DECLARE @ColumnMetadata nvarchar(MAX), @ColumnMetadataSQL nvarchar(MAX)
The above SQL gives me the following error:
mismatched input 'DECLARE'
== SQL ==
DECLARE
^^^
@EnrichedViewDatabase sysname,
@EnrichedViewSchema sysname,
@EnrichedColumnSuffix varchar(50),
@LanguageCode varchar(10),
@BaseTableSuffix varchar(50),
@PreviewOnly bit, --Indicate whether to preview the SQL Script (without creating the views) = 1
Any thoughts?
DECLARE is not supported in Databricks SQL. The equivalent of what you are trying to achieve above would be to use SET directly:
%sql
SET EnrichedViewDatabase = 'mydatabasenr1';
SET EnrichedViewSchema = 'dbo';
SET EnrichedColumnSuffix = 'code';
SET LanguageCode = 1033;
SET BaseTableSuffix = '';
SET PreviewOnly = 0 ;
SET CurrentDatabase = 'mydatabasenr2';
SET CurrentDatabaseSchema = 'dbo';
SET returns a key-value pair as its result. To access a value by its key (assigned above), you can do the following:
%sql
select ${hiveconf:CurrentDatabase} as x,${hiveconf:EnrichedViewDatabase} as y,${hiveconf:LanguageCode} as z
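Once set, the substituted values can also be used anywhere inside a statement, for example in an identifier or a predicate (the table and column names below are purely illustrative):

```sql
-- Hypothetical table; shows variable substitution in both the identifier and the filter
SELECT *
FROM ${hiveconf:CurrentDatabase}.some_table
WHERE language_code = '${hiveconf:LanguageCode}';
```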
I have a two-node GoldenGate setup, gg1 and gg2. It seems to work. However, when I insert a row into my table T1
connect sender/oracle
create table t1 (f1 char, f2 char);
alter table t1 add constraint t1_i1 primary key (f1);
I get an error
2022-01-30T19:16:05.705-0500 WARNING OGG-01154 Oracle GoldenGate Delivery for Oracle, d_rep.prm: SQL error 1 mapping SENDER.T1 to SENDER.T1 OCI Error ORA-00001: unique constraint (SENDER.T1_I1) violated (status = 1), SQL <INSERT INTO "SENDER"."T1" ("F1","F2") VALUES (:a0,:a1)>.
here are prm files for gg1
# cat s_ext.prm
extract s_ext
userid ggs_owner, password Newpassword_2
tranlogoptions excludeuser ggs_owner
exttrail /u01/gg/dirdat/lt
ddl include all
getupdatebefores
sequence sender.*;
table sender.*;
# cat s_pmp.prm
extract s_pmp
userid ggs_owner, password Newpassword_2
rmthost 10.10.0.216, mgrport 7809
rmttrail /u01/gg/dirdat/rt
passthru
sequence sender.*;
table sender.*;
# cat d_rep.prm
replicat d_rep
userid ggs_owner, password Newpassword_2
assumetargetdefs
discardfile /u01/gg/dirrpt/drep1.dsc, append
reperror (default, exception)
map sender.*, target sender.*;
MACRO #exception_handler
BEGIN
, TARGET GGS_OWNER.GGS_EXCEPTIONS
, COLMAP ( rep_name = @GETENV('GGENVIRONMENT', 'GROUPNAME')
, TABLE_NAME = @GETENV ('GGHEADER', 'TABLENAME')
, ERRNO = @GETENV ('LASTERR', 'DBERRNUM')
, DBERRMSG = @GETENV ('LASTERR', 'DBERRMSG')
, OPTYPE = @GETENV ('LASTERR', 'OPTYPE')
, ERRTYPE = @GETENV ('LASTERR', 'ERRTYPE')
, LOGRBA = @GETENV ('GGHEADER', 'LOGRBA')
, LOGPOSITION = @GETENV ('GGHEADER', 'LOGPOSITION')
, COMMITTIMESTAMP = @GETENV ('GGHEADER', 'COMMITTIMESTAMP')
, GGS_FILENAME = @GETENV('GGFILEHEADER', 'FILENAME')
, CDRFAIL = @GETENV('DELTASTATS','CDR_RESOLUTIONS_FAILED')
, CDRSUC = @GETENV('DELTASTATS','CDR_RESOLUTIONS_SUCCEEDED')
, CDRDETECT = @GETENV('DELTASTATS','CDR_CONFLICTS'))
, INSERTALLRECORDS
, EXCEPTIONSONLY;
END;
MAP sender.* #exception_handler();
Site gg2 is similar.
All rows that I insert get inserted with no abends, but the warning confuses me.
According to the message, you are getting a key violation. Does the target table already have a record with the same key? Your exceptions mapping is handling it, and you should see records in the GGS_OWNER.GGS_EXCEPTIONS table. Also, do a STATS on the Replicat:
STATS REPLICAT repname TOTAL
and look for discards or collisions.
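If the exceptions mapping is firing, the captured errors should also be visible in the exceptions table itself. A query along these lines (assuming the target named in the MAP above, GGS_OWNER.GGS_EXCEPTIONS, with the columns populated by the COLMAP) would show them:

```sql
-- Inspect rows captured by the exception handler, newest first
SELECT rep_name, table_name, errno, dberrmsg, optype, committimestamp
FROM ggs_owner.ggs_exceptions
ORDER BY committimestamp DESC;
```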
It seems like key "1" might already exist in the target database. Can you check that first?
Also, you might want to remove the REPERROR and exception mapping to test.
I get the error when I run a mapping.
I've created a "New Data Server"
I can successfully test the connection by clicking on "Test Connection" button.
But the mapping cannot be run successfully. The full error message is below:
ODI-1228: Task Merge rows-IKM Oracle Merge-Load USERS fails on the target connection DB-TARGET.
Caused By: java.sql.SQLException: ORA-12154: TNS:невозможно разрешить заданный идентификатор соединения
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:495)
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:447)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1055)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:624)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:253)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:613)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:214)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:38)
at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:891)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1194)
at oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1835)
at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1790)
at oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:301)
at oracle.odi.runtime.agent.execution.sql.SQLCommand.execute(SQLCommand.java:205)
at oracle.odi.runtime.agent.execution.sql.SQLExecutor.execute(SQLExecutor.java:142)
at oracle.odi.runtime.agent.execution.sql.SQLExecutor.execute(SQLExecutor.java:28)
at oracle.odi.runtime.agent.execution.TaskExecutionHandler.handleTask(TaskExecutionHandler.java:52)
at oracle.odi.runtime.agent.execution.SessionTask.processTask(SessionTask.java:206)
at oracle.odi.runtime.agent.execution.SessionTask.doExecuteTask(SessionTask.java:117)
at oracle.odi.runtime.agent.execution.AbstractSessionTask.execute(AbstractSessionTask.java:886)
at oracle.odi.runtime.agent.execution.SessionExecutor$SerialTrain.runTasks(SessionExecutor.java:2225)
at oracle.odi.runtime.agent.execution.SessionExecutor.executeSession(SessionExecutor.java:610)
at oracle.odi.runtime.agent.processor.TaskExecutorAgentRequestProcessor$1.doAction(TaskExecutorAgentRequestProcessor.java:718)
at oracle.odi.runtime.agent.processor.TaskExecutorAgentRequestProcessor$1.doAction(TaskExecutorAgentRequestProcessor.java:611)
at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:203)
at oracle.odi.runtime.agent.processor.TaskExecutorAgentRequestProcessor.doProcessStartAgentTask(TaskExecutorAgentRequestProcessor.java:800)
at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$1400(StartSessRequestProcessor.java:74)
at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:702)
at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:180)
at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:108)
at java.lang.Thread.run(Thread.java:748)
Caused by: Error : 12154, Position : 154, Sql =
MERGE
INTO TARGET.USERS USERS
USING
(
SELECT
USE.NAME AS NAME ,
USE.PASSWD AS PASSWD ,
USE.USER_ROLE AS USER_ROLE
FROM
SOURCE.USERS@"MySource" USE
) MERGE_SUBQUERY
ON
(
USERS.NAME = MERGE_SUBQUERY.NAME
)
WHEN NOT MATCHED THEN
INSERT
(
NAME ,
PASSWD ,
USER_ROLE
)
VALUES
(
MERGE_SUBQUERY.NAME ,
MERGE_SUBQUERY.PASSWD ,
MERGE_SUBQUERY.USER_ROLE
)
WHEN MATCHED THEN
UPDATE SET
PASSWD = MERGE_SUBQUERY.PASSWD ,
USER_ROLE = MERGE_SUBQUERY.USER_ROLE , OriginalSql =
MERGE
INTO TARGET.USERS USERS
USING
(
SELECT
USE.NAME AS NAME ,
USE.PASSWD AS PASSWD ,
USE.USER_ROLE AS USER_ROLE
FROM
SOURCE.USERS@"MySource" USE
) MERGE_SUBQUERY
ON
(
USERS.NAME = MERGE_SUBQUERY.NAME
)
WHEN NOT MATCHED THEN
INSERT
(
NAME ,
PASSWD ,
USER_ROLE
)
VALUES
(
MERGE_SUBQUERY.NAME ,
MERGE_SUBQUERY.PASSWD ,
MERGE_SUBQUERY.USER_ROLE
)
WHEN MATCHED THEN
UPDATE SET
PASSWD = MERGE_SUBQUERY.PASSWD ,
USER_ROLE = MERGE_SUBQUERY.USER_ROLE , Error Msg = ORA-12154: TNS:невозможно разрешить заданный идентификатор соединения
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:499)
... 30 more
Don't be confused by the Russian description of the error ORA-12154.
It means: TNS: could not resolve the connect identifier specified.
The tnsnames.ora file exists, and I can also connect using SQL Developer.
After some investigation it seems to me that the line
SOURCE.USERS@"MySource" USE
is the culprit
But before this script, ODI executed another script, and executed it successfully. Below are its contents:
create database link "MySource" connect to SOURCE identified by <@=odiRef.getInfo("SRC_PASS") @> using '***'
I ran both queries in SQL Developer and replaced <@=odiRef.getInfo("SRC_PASS") @> with the real value.
I could reproduce the error.
I ran the query below to be sure that the dblink had been created:
select * from all_db_links;
Then I came across a discussion where someone suggested creating dblinks in this way:
create database link "MySource" connect to SOURCE identified by *** using
'(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = oracle.****.***.**)
)
)';
It worked. I could use this link in subsequent queries. But I noticed that a shorter way was to create a dblink using the SID only:
create database link "MySource" connect to SOURCE identified by *** using 'oracle';
So I switched to ODI and changed the connection strings used in all the data servers from jdbc:oracle:thin:@<host>:<port>/<service name> to the format jdbc:oracle:thin:@<host>:<port>:<sid>
This resolved the issue. I could run the mapping successfully.
Not sure why a dblink created from a service name cannot be used.
Oracle doesn't forbid it, according to the CREATE DATABASE LINK documentation:
https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_5005.htm
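One likely explanation: the connect identifier in a database link's USING clause is resolved on the database server host (via the server's own tnsnames.ora, or as a full descriptor), not on the client, so a service-name alias that works in SQL Developer with the client-side tnsnames.ora may still fail to resolve on the server. A simple test of any link, independent of ODI:

```sql
-- If this succeeds, the database server can resolve the link's connect identifier
SELECT * FROM dual@"MySource";
```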
I can't find a complete example of how to load a CSV file directly into a SQL Data Warehouse with an external table.
The file is on a Storage account https://tstodummy.blob.core.windows.net/
Blob container referencedata-in, folder csv-uploads, file something.csv.
This is my code
CREATE DATABASE SCOPED CREDENTIAL tstodummy_refdata_credential
WITH IDENTITY = 'USER',
SECRET = '....'
GO
CREATE EXTERNAL DATA SOURCE tstodummy_referencedata
WITH ( TYPE = HADOOP,
LOCATION = 'wasb://referencedata-in@tstodummy.blob.core.windows.net',
CREDENTIAL = tstodummy_refdata_credential);
GO
CREATE EXTERNAL FILE FORMAT aps_bma_referencedata_ff
WITH (FORMAT_TYPE = DELIMITEDTEXT,
FORMAT_OPTIONS(
FIELD_TERMINATOR = ';',
STRING_DELIMITER = '"',
FIRST_ROW = 2,
USE_TYPE_DEFAULT = True)
)
CREATE EXTERNAL TABLE [stg_aps_bma_refdata].[PlanDeMaintenance]
( [Version] VARCHAR(255) NULL
, [Description] VARCHAR(255) NULL
, [Date_Start] VARCHAR(255) NULL
, [Date_Stop] VARCHAR(255) NULL
) WITH ( LOCATION = '\referencedata-in\csv-uploads\PlanDeMaintanance'
, DATA_SOURCE = tstodummy_referencedata
, FILE_FORMAT = aps_bma_referencedata_ff
, REJECT_TYPE = VALUE
, REJECT_VALUE = 0
)
I've been playing with all kinds of combinations in the LOCATION... but no go.
The error is:
Msg 105002, Level 16, State 1, Line 26
EXTERNAL TABLE access failed because the specified path name '/referencedata-in/csv-uploads/PlanDeMaintanance.csv' does not exist. Enter a valid path and try again.
I can't see your storage structure, but I think you'll find that the problem is the inclusion of "/referencedata-in" in the external table location.
One small thing, you might also want to consider a "wasbs" prefix on your storage URL, so that SSL encryption is applied to the transfer.
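The wasbs change suggested above would look like this (a sketch of the data source only; everything else stays unchanged):

```sql
-- Same external data source, but with SSL-encrypted transfer via wasbs
CREATE EXTERNAL DATA SOURCE tstodummy_referencedata
WITH ( TYPE = HADOOP,
       LOCATION = 'wasbs://referencedata-in@tstodummy.blob.core.windows.net',
       CREDENTIAL = tstodummy_refdata_credential);
```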
Finally, this did the trick, in case others encounter the same troubles.
At first I had not yet incorporated the remark I received into this code; that has since been done.
CREATE EXTERNAL DATA SOURCE tsto_referencedata
WITH ( TYPE = HADOOP,
LOCATION = 'wasb://referencedata-in@tsto.blob.core.windows.net',
CREDENTIAL = tsto_refdata_credential);
GO
CREATE EXTERNAL FILE FORMAT aps_bma_referencedata_ff
WITH (FORMAT_TYPE = DELIMITEDTEXT,
FORMAT_OPTIONS(
FIELD_TERMINATOR = ',',
STRING_DELIMITER = '"',
FIRST_ROW = 2,
USE_TYPE_DEFAULT = True)
)
CREATE EXTERNAL TABLE [stg_aps_bma_refdata].[PlanDeMaintenance.csv]
( [Version] VARCHAR(255) NULL
, [Description] VARCHAR(255) NULL
, [Date_Start] VARCHAR(255) NULL
, [Date_Stop] VARCHAR(255) NULL
) WITH ( LOCATION = '/csv-uploads/PlanDeMaintenance.csv'
, DATA_SOURCE = tsto_referencedata
, FILE_FORMAT = aps_bma_referencedata_ff
, REJECT_TYPE = VALUE
, REJECT_VALUE = 0
)
I have the following problem in Oracle 12c Release 2.
I have a Database Trigger on the After Logon system event created and owned by SYS. It inserts database connection custom audit logon records into a table, named AUDIT_LOGON (owned not by SYS).
If the user who is connecting to the database is under standard (Traditional) audit for INSERT TABLE, then a new record is created in the SYS.AUD$ table, related to the insert made by the Database trigger, but on the account of the connecting user - field USERID = connecting user.
As said above, the Database Trigger on After Logon is owned (and created) by SYS.
All actions performed inside it should generate no audit trail in the standard audit trail table SYS.AUD$, as they are performed under SYS authority and as such are unaffected by Standard (Traditional) Audit.
Up to Oracle 11g R2 everything worked correctly: no record was created in SYS.AUD$ for INSERT (table) actions performed inside the DB trigger when invoked by connecting users being audited for INSERT TABLE.
In 12c (R2 verified) I do get a record in SYS.AUD$ for every insert into the AUDIT_LOGON table when (again) the connecting user is under INSERT TABLE audit.
This is not the expected behaviour, as operations performed inside a trigger body owned by SYS are not supposed to generate anything in SYS.AUD$!
Hints:
The illustration with the audited INSERT TABLE is just one case. The same would happen if more system privileges were audited for the user[s] and the DB trigger performed them. In a real-life case, some of the connecting users are under full audit (all audit options) and the trigger is more complex. Thus getting tens (if not more) of records in AUD$ for the actions inside the DB trigger, just to produce a single row in the custom logon audit table, is not acceptable.
It was precisely for this reason that the DB trigger was created (and owned) by SYS: to avoid extra trigger-internal logs in AUD$. And it worked this way, correctly, until 11g R2.
The new (12c) field CURRENT_USER shows SYS in the use case above.
How can I have the 11g behaviour on the above?
at your disposal for further queries
best regards
Altin
USE-CASE-CODE:
-- = = = = = = = = = = = = = = SETUP = = = = = = = = = = = = = =
-- Create the audit repository schema user
create user TESTREPO identified by manager
default tablespace USERS
temporary tablespace TEMP
profile DEFAULT
password expire
quota unlimited on USERS;
-- Create the audit repository table
create table TESTREPO.AUDIT_LOGON
(
logon_date DATE,
username VARCHAR2(100),
hostname VARCHAR2(200)
)
tablespace USERS
pctfree 10
initrans 1
maxtrans 255
storage
(
initial 64K
next 1M
minextents 1
maxextents unlimited
);
-- Create the testing user
create user TESTUSER identified by manager
default tablespace USERS
temporary tablespace TEMP
profile DEFAULT
password expire;
-- Grant/Revoke system privileges
grant create session to TESTUSER;
-- Create Database After Logon Trigger as SYS (owned by SYS!)
CREATE OR REPLACE TRIGGER "LOGON_TRIGGER"
AFTER LOGON ON DATABASE
BEGIN
insert into TESTREPO.AUDIT_LOGON
values (sysdate, USER, SYS_CONTEXT('userenv', 'host'));
END;
/
-- = = = = = = = = = = = = = = SETUP = = = = = = = = = = = = = =
-- = = = = = = = = = = = = = = TEST PREREQUISITES = = = = = = = = = = = = = =
-- Audit the TESTUSER for System Privilege INSERT TABLE
audit insert table by testuser by access;
-- = = = = = = = = = = = = = = TEST PREREQUISITES = = = = = = = = = = = = = =
-- = = = = = = = = = = = = = = TEST = = = = = = = = = = = = = =
1. Logon as TESTUSER
2. Query the TESTREPO.AUDIT_LOGON table. A new record is created by TESTUSER - expected behaviour
3. Query the SYS.AUD$ table.
Oracle 11g R2 and below: NO standard audit record created by TESTUSER - expected behaviour
Oracle 12c R1 and above: Standard audit record IS created by TESTUSER - unexpected (!) behaviour
-- = = = = = = = = = = = = = = TEST = = = = = = = = = = = = = =
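To see exactly which unexpected rows appear in the trail, the standard audit records can be queried through the documented DBA view rather than SYS.AUD$ directly (a sketch; run as a privileged user):

```sql
-- DBA_AUDIT_TRAIL is the documented view over SYS.AUD$
SELECT username, userhost, action_name, obj_name, timestamp
FROM dba_audit_trail
WHERE obj_name = 'AUDIT_LOGON'
ORDER BY timestamp DESC;
```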