java.sql.SQLException: ORA-01157: cannot identify/lock data file - oracle

I was getting the error below when I ran the application:
Caused by: org.hibernate.exception.GenericJDBCException: could not execute native bulk manipulation query
.
.
Caused by: java.sql.SQLException: ORA-01157: cannot identify/lock data file - see DBWR trace file
ORA-01110: data file : '/fld1/fld2/mytemp_tablespace.dbf'
I tried to locate this file and found that the folders did not exist. I then created the respective folders and a new, empty mytemp_tablespace.dbf file, but I am still getting the same error.
Any idea why this error is happening? If it were an ordinary SQL problem, it should have shown up right at the beginning.
What I have done is create a new schema and export the database from the old one into it.
Also, how can I see or get the DBWR trace file?

This could be the result of a restored database: during the restore, RMAN was not able to create the tempfiles because of a missing directory.
The solution is quite simple. Once the directories are created, just add one or more tempfiles:
alter tablespace mytemp_tablespace add tempfile '/fld1/fld2/mytemp_tablespace01.dbf';
Once the temp tablespace has its storage, your actions can succeed.
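Note that the empty .dbf file you created by hand will not help: Oracle only accepts tempfiles it has formatted itself. As a quick check before adding the new tempfile, you can see which tempfiles the instance still expects and drop the stale entry. This is only a sketch using the standard dictionary view, with the path taken from the error message:
-- which tempfiles does the database think it has, and in what state?
SELECT file_name, tablespace_name, status FROM dba_temp_files;

-- drop the entry for the missing file, then add the new one as shown above
ALTER DATABASE TEMPFILE '/fld1/fld2/mytemp_tablespace.dbf' DROP;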

Related

Reg: database is not starting up, getting an error

I am getting the below error while starting the database:
startup
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/mis/PARAMETERFILE/spfile.276.967375255'
ORA-17503: ksfdopn:10 Failed to open file +DATA/mis/PARAMETERFILE/spfile.276.967375255
ORA-04031: unable to allocate 56 bytes of shared memory ("shared pool","unknown object","KKSSP^24","kglseshtSegs")
Your database cannot find the SPFILE (the newer form of init.ora) with the actual system parameters inside ASM, or it has no permission to access it.
Either your Grid Infrastructure stack or the dbs/spfile.ora is pointing to the wrong file.
To find out what the Grid Infrastructure stack is using, run srvctl, which displays the parameter file the database should be using:
srvctl config database -d <dbname>
...
Spfile: +DATA/<dbname>/PARAMETERFILE/spfile.269.1066152225
...
Then check (as the grid user, using asmcmd) whether the file is indeed visible:
asmcmd
ASMCMD> ls +DATA/<dbname>/PARAMETERFILE/
spfile.269.1066152225
If the names differ, you have found the issue (and you have to point the database to the correct file).
If the name is correct, the cause could be wrong permissions on the oracle executable(s); check My Oracle Support:
RAC Database Can't Start: ORA-01565, ORA-17503: ksfdopn:10 Failed to open file +DATA/BPBL/spfileBPBL.ora (Doc ID 2316088.1)
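If the registered name is indeed stale, one way to recover (a sketch, assuming the file listed by asmcmd is the correct SPFILE) is to point a minimal pfile in $ORACLE_HOME/dbs at it, start the instance, and then fix the clusterware registration. Note that the srvctl option for the spfile is -spfile with the newer full-word syntax and -p in older releases:
# $ORACLE_HOME/dbs/init<SID>.ora - contains only a pointer to the SPFILE in ASM
SPFILE='+DATA/<dbname>/PARAMETERFILE/spfile.269.1066152225'

# start the instance with it, then update the clusterware configuration
sqlplus / as sysdba
SQL> startup
srvctl modify database -db <dbname> -spfile '+DATA/<dbname>/PARAMETERFILE/spfile.269.1066152225'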

How to resolve thousands of errors in an Oracle import using impdp when I don't know what parameters were used during expdp?

I am trying to import an Oracle 11g dump file using the impdp utility, but while doing so I am facing, among other things, two major errors.
First, it shows the following error:
Processing object type DATABASE_EXPORT/TABLESPACE
ORA-39083: Object type TABLESPACE:"HIS_USER" failed to create with error:
ORA-01119: error in creating database file '/oracle/app/oracle/oradata/dwhrajdr1/his_user13.dbf'
ORA-27040: file create error, unable to create file
OSD-04002: unable to open file
O/S-Error: (OS 3) The system cannot find the path specified.
To solve this, I created a tablespace with the same name, but now it says that tablespace 'HIS_USER' already exists.
Second, I am getting thousands of errors saying that a user or role does not exist:
Failing sql is:
GRANT EXECUTE ANY ASSEMBLY TO "DSS"
ORA-39083: Object type SYSTEM_GRANT failed to create with error:
ORA-01917: user or role 'DSS' does not exist
Please suggest how to solve these errors!
How can I import the dump file without manually creating hundreds of users/roles or tablespaces?
You can generate the SQL statements contained in the dump using impdp, as shown here:
http://www.dba-oracle.com/t_convert_expdp_dmp_file_sql.htm
Then adjust the parameters accordingly.
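A sketch of what that looks like in practice (the directory object, dump file name and target tablespace below are placeholders, not values from the question): SQLFILE writes out the DDL the import would execute instead of running it, so you can see which tablespaces, users and grants the dump expects; REMAP_TABLESPACE and EXCLUDE can then remove whole classes of errors on the real run.
# 1. extract the DDL only, to inspect what the export actually contains
impdp system/password DIRECTORY=dump_dir DUMPFILE=export.dmp SQLFILE=ddl_preview.sql FULL=Y

# 2. run the real import, remapping the missing tablespace and skipping the grants
impdp system/password DIRECTORY=dump_dir DUMPFILE=export.dmp REMAP_TABLESPACE=HIS_USER:USERS EXCLUDE=GRANT FULL=Y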

What path should be used to locate a CSV file referenced in an SQL statement (LOAD DATA LOCAL INFILE) when the WAR file is deployed on Tomcat?

I have been working on a Spring Boot project and am using Flyway for database version control. In the migration folder there are some SQL files containing LOAD DATA LOCAL INFILE statements that reference CSV files.
Example:
load data local infile 'C:/Program Files (x86)/Apache Software Foundation/Tomcat 8.5/webapps/originator/WEB-INF/classes/insertData/subject.csv' INTO TABLE subject
How can I make this path relative?
I have tried
'./classes/insertData/subject.csv'
'./insertData/subject.csv'
and some other combinations as well, but could not fix the issue.
Error:
Caused by: java.sql.SQLException: Unable to open file '../../insertData/subject.csv'for 'LOAD DATA LOCAL INFILE' command.Due to underlying IOException:
BEGIN NESTED EXCEPTION java.io.FileNotFoundException MESSAGE:
....\insertData\subject.csv (The system cannot find the path
specified) STACKTRACE: java.io.FileNotFoundException:
....\insertData\subject.csv (The system cannot find the path
specified)
I was able to insert data into tables from CSV files within a Flyway migration using a path into the resources folder. Within the migration script I used the entire path, written as shown below.
LOAD DATA LOCAL INFILE './src/main/resources/<FOLDER>/<FILE>.csv' INTO TABLE <TABLE_NAME>
FIELDS TERMINATED BY ','
optionally enclosed by '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;
The other clauses depend on your file structure; I just wanted to include the entire example.
Instead of writing an SQL script you can use a Java-based migration to read the CSV and insert the data into the table. You can use the "flyway.locations" property in your application.properties to specify the location of Java-based migrations; by default Flyway searches "db/migration" on the classpath.
For further details check https://flywaydb.org/documentation/migrations#java-based-migrations
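A minimal sketch of such a Java-based migration, assuming Flyway 5+ (BaseJavaMigration) and a two-column subject table; the column names here are made up for illustration. Because the CSV is read with getResourceAsStream, the path is resolved on the classpath, so it keeps working once the WAR is deployed on Tomcat:
package db.migration;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.sql.PreparedStatement;
import org.flywaydb.core.api.migration.BaseJavaMigration;
import org.flywaydb.core.api.migration.Context;

public class V2__Insert_subject_data extends BaseJavaMigration {
    @Override
    public void migrate(Context context) throws Exception {
        // read the CSV from the classpath (src/main/resources/insertData/subject.csv)
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                getClass().getResourceAsStream("/insertData/subject.csv"),
                StandardCharsets.UTF_8));
             PreparedStatement ps = context.getConnection().prepareStatement(
                     "INSERT INTO subject (id, name) VALUES (?, ?)")) {
            String line = reader.readLine(); // skip the header row
            while ((line = reader.readLine()) != null) {
                String[] fields = line.split(",");
                ps.setString(1, fields[0]);
                ps.setString(2, fields[1]);
                ps.addBatch();
            }
            ps.executeBatch();
        }
    }
}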

Oracle Data Integrator SQL to HDFS IKM returns error

I am using ODI (12.1.3.0.0). I created a topology for the Oracle DB, which is OK, and I created a topology for HDFS using the File technology, which is where I think the problem lies.
In the DataServer for HDFS I left the JDBC driver empty and filled the JDBC URL with hdfs://remotehostname:port
In the Physical Schema for HDFS I filled both Schema and Work Schema with /my/path
Then I created a Logical Schema and a Model. After that I created a Datastore under the model with these definitions:
Name: TestName
Resource Name: TESTFILE.txt
File Format: Fixed
After all that, I created a project and a mapping under the project.
Finally, when I run the mapping I see these errors:
ODI-1217: Session Oracle2HDFSMapping_Physical_SESS (15) fails with return code ODI-1298.
ODI-1226: Step Physical_STEP fails after 1 attempt(s).
ODI-1240: Flow Physical_STEP fails while performing a Add execute to Sqoop script-IKM SQL to HDFS File (Sqoop)- operation. This flow loads target table null.
ODI-1298: Serial task "SERIAL-MAP_MAIN- (10)" failed because child task "SERIAL-EU-GGUSER_UNIT (20)" is in error.
ODI-1298: Serial task "SERIAL-EU-GGUSER_UNIT (20)" failed because child task "Add execute to Sqoop script-IKM SQL to HDFS File (Sqoop)- (40)" is in error.
Caused By: java.io.IOException: Cannot run program "chmod": CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
at java.lang.Runtime.exec(Runtime.java:617)
at java.lang.Runtime.exec(Runtime.java:450)
at java.lang.Runtime.exec(Runtime.java:347)
at oracle.odi.runtime.agent.execution.cmd.OSCommandExecutor.execute(OSCommandExecutor.java:54)
at oracle.odi.runtime.agent.execution.cmd.OSCommandExecutor.execute(OSCommandExecutor.java:29)
at oracle.odi.runtime.agent.execution.TaskExecutionHandler.handleTask(TaskExecutionHandler.java:52)
at oracle.odi.runtime.agent.execution.SessionTask.processTask(SessionTask.java:203)
at oracle.odi.runtime.agent.execution.SessionTask.doExecuteTask(SessionTask.java:114)
at oracle.odi.runtime.agent.execution.AbstractSessionTask.execute(AbstractSessionTask.java:886)
at oracle.odi.runtime.agent.execution.SessionExecutor$SerialTrain.runTasks(SessionExecutor.java:2198)
at oracle.odi.runtime.agent.execution.SessionExecutor.executeSession(SessionExecutor.java:591)
at oracle.odi.runtime.agent.processor.TaskExecutorAgentRequestProcessor$1.doAction(TaskExecutorAgentRequestProcessor.java:718)
at oracle.odi.runtime.agent.processor.TaskExecutorAgentRequestProcessor$1.doAction(TaskExecutorAgentRequestProcessor.java:611)
at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:203)
at oracle.odi.runtime.agent.processor.TaskExecutorAgentRequestProcessor.doProcessStartAgentTask(TaskExecutorAgentRequestProcessor.java:800)
at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$1400(StartSessRequestProcessor.java:74)
at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:702)
at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:180)
at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessImpl.create(Native Method)
at java.lang.ProcessImpl.<init>(ProcessImpl.java:385)
at java.lang.ProcessImpl.start(ProcessImpl.java:136)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
... 20 more
I wonder where I went wrong?
For a file Datastore, you need to define the attributes (columns) by opening the Datastore and going to the Attributes tab. If the file already exists, you can reverse-engineer the attributes, then rename them and change the datatypes if needed.
The error message you received for the second task says that the file (generated in the first task) does not exist. So there might be a problem with the first task, probably due to the missing attributes in your Datastore.
Here is a detailed article about the SQL to HDFS File (Sqoop) KM written by the ODI A-Team: http://www.ateam-oracle.com/importing-data-from-sql-databases-into-hadoop-with-sqoop-and-oracle-data-integrator-odi/

Error in metadata: MetaException(message:java.lang.IllegalStateException: Can't overwrite cause)

I have created an external table in Hive, and when I provide the location of the data for this table I get the following error:
FAILED: Error in metadata: MetaException(message:java.lang.IllegalStateException: Can't overwrite cause)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
I am, however, able to load the same file with a Pig script using the PigStorage() loader function.
I have the following permissions on the file: rw-rw-r-
and on the folder where this file resides (whose path I give as the LOCATION in the query): drwxrwxr-x
What could be the cause of this, and how can I correct the error?
The solution is to have write permission on the file.
Another possible cause of this issue is having the wrong LOCATION for your Hive table (in case someone else hits this and can't figure out what is going wrong).
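A sketch covering both suggestions (the HDFS path, table and column names below are placeholders, not values from the question): give the Hive user write access to the data directory, and make sure LOCATION points at an existing directory rather than at the file itself.
# give the Hive user write access to the data directory
hadoop fs -chmod -R 775 /user/data/mytable

-- and point LOCATION at the directory that contains the file
CREATE EXTERNAL TABLE mytable (id INT, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/data/mytable';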
