Get the database name and creation date when the database is unmounted (Oracle 11g)

Is it possible to get the database name and creation date when the instance is started but the database is NOT mounted?
When the database is mounted we can use this statement:
select NAME, CREATED from V$DATABASE;
Any help please?

No. When the database is not mounted, the only file that has been opened is the parameter file (init.ora or spfile). The only information there is what is needed to properly allocate memory for the instance, where to write critical logs, and where to find the control file(s).
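To illustrate the distinction (a sketch, assuming a SYSDBA connection; exact behaviour may vary by version): views backed by instance memory are available in NOMOUNT, while V$DATABASE is not, because it reads the control file.

```sql
-- Start the instance without mounting the database
STARTUP NOMOUNT;

-- Works in NOMOUNT: V$INSTANCE is populated from instance memory
SELECT instance_name, status FROM v$instance;

-- Fails in NOMOUNT: V$DATABASE needs the control file
SELECT name, created FROM v$database;   -- raises ORA-01507: database not mounted
```

So the name and creation date simply are not available to the instance until the control file has been read at mount time.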

Related

Problem in accessing name of database in SQL Server Profiler

I want to find which database a query is running against. Here is my output:
EXEC sp_example #stat = N'SELECT stat FROM [dbo].[statsUSers] AS [UserStats];
What I want is like this:
EXEC sp_example #stat=N'SELECT stat FROM [MyOwnDataBase].[statsUSers] AS [UserStats];
I've already tried this tip:
SQL Server Profiler - how do I find which database is being connected?
but it still shows [dbo] instead of the name of the database.
Question
How can I access the name of the database?
I don't want [dbo] changed to something meaningless - I want the actual name of the database.
When creating the trace, you can select Show all columns, which will then display the DatabaseID and DatabaseName columns.
Note that dbo is the schema name, not the database name. There is no option to capture the default schema of the user; this is the schema they would refer to when accessing a table without qualification, like SELECT * FROM table. To capture the default schema you would instead have to capture the username and then work out what that user's default schema is.
I would advise you to move away from the essentially deprecated Profiler to Extended Events, which provides far more information and puts far less load on the server.
In Extended Events, you can add the database_name column as well.
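A minimal sketch of such a session (the session name and file name are illustrative; adjust the events captured to your workload):

```sql
-- Create an Extended Events session that captures completed statements
-- together with the database name and the statement text
CREATE EVENT SESSION [capture_db_name] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.database_name, sqlserver.sql_text)
)
ADD TARGET package0.event_file
(
    SET filename = N'capture_db_name.xel'  -- written to the default log directory
);

ALTER EVENT SESSION [capture_db_name] ON SERVER STATE = START;
```

The captured events can then be viewed in SSMS (Management > Extended Events > Sessions) or read with sys.fn_xe_file_target_read_file, and each event carries the database_name action.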

How to resolve "could not write block of temporary file: No space left on device" in postgresql?

I have a local database in Postgres, in which a single table contains 74,980,435 rows.
When I try to execute a SELECT query it throws this error:
"could not write block 657567 of temporary file: No space left on device".
I am trying to execute the SELECT query from Laravel.
Can anyone help me?
Your query (which you didn't show) is probably missing a join condition or two, or it tries to sort an enormous number of rows or cache an enormous function result or materialize node.
When data don't fit in work_mem, PostgreSQL starts writing them to temporary disk files. Your query created enough of those to fill the file system.
You can set the temp_file_limit parameter as a defense, but you should find the bug in your query.
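A sketch of that defense (the 5 GB cap is an arbitrary example; pick a value that fits your disk):

```sql
-- Cap how much temporary file space this session may use;
-- a query exceeding the limit fails with an error instead of filling the disk
SET temp_file_limit = '5GB';

-- Then inspect the plan of the offending query for a missing join
-- condition or an unexpectedly huge sort (query text is hypothetical)
EXPLAIN SELECT * FROM big_table ORDER BY some_column;
```

Setting it in postgresql.conf (or via ALTER SYSTEM) applies the limit to all sessions rather than just the current one.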

oracle dbf file is normal, but cannot mount

The Oracle processes were not closed cleanly when I shut down the system; after I ran STARTUP, errors ORA-01157 and ORA-01110 appeared.
I am quite sure the dbf files exist, and I checked them with dbv; everything looks normal.
As a last resort I tried to OFFLINE DROP those dbf files, but I cannot recover them.
Please give me some help, thank you very much!
Mount your database:
SQL> startup mount;
Provided your database is in NOARCHIVELOG mode, issue the following queries:
SQL> select min(first_change#) min_first_change
from v$log l inner join v$logfile f on (l.group# = f.group#);
SQL> select change# ch_number from v$recover_file;
If the ch_number is greater than the min_first_change of your logs, the datafile can be recovered.
If the ch_number is less than the min_first_change of your logs, the file cannot be recovered.
In that case, restore the most recent full backup (and thus lose all changes to the database since) or recreate the tablespace.
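The recreate-the-tablespace path can be sketched like this (tablespace and file names are hypothetical, and this discards all data in the tablespace):

```sql
-- Destructive: drop the damaged tablespace together with its datafiles
DROP TABLESPACE users_ts INCLUDING CONTENTS AND DATAFILES;

-- Recreate it empty, then reload its objects from a backup or export
CREATE TABLESPACE users_ts
  DATAFILE '/opt/oracle/resource/users01.dbf' SIZE 100M AUTOEXTEND ON;
```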
Recover the datafile (if the unrecoverable case above does not apply):
SQL> recover datafile '/opt/oracle/resource/undotbs02.dbf';
Confirm each of the logs that you are prompted for until you receive the message Media Recovery Complete. If you are prompted for a non-existent archived log, Oracle probably needs one or more of the online logs to proceed with the recovery. Compare the sequence number referenced in the ORA-00280 message with the sequence numbers of your online logs. Then enter the full path name of one of the members of the redo group whose sequence number matches the one you are being asked for. Keep entering online logs as requested until you receive the message Media Recovery Complete.
If the database is still in MOUNT state, open it:
SQL> alter database open;
If the DBF file fails to mount, check the source of the DBF file: whether it was imported from another database or converted with some other tool. Generally, if the DBF file is not in the expected form it cannot be mounted. Troubleshoot the Oracle DBF file by following these steps:
https://docs.cloud.oracle.com/iaas/Content/File/Troubleshooting/exportpaths.htm
If the database is still causing problems, there could be issues with other components; before mounting, fix them with a professional database recovery tool such as https://www.filerepairtools.com/oracle-database-recovery.html

oracle: unable to open log file error

So, I have a file A.txt:
ENG,England,English
SCO,Scotland,English
IRE,Ireland,English
WAL,Wales,Welsh
I wish to load it into an Oracle external table. This is everything I have done up till now:
CREATE DIRECTORY LOCAL_DIR AS 'C:/Directory/'
GRANT ALL ON DIRECTORY LOCAL_DIR TO ruser;
I then pasted A.txt into C:/Directory/
Then I executed the following query:
CREATE TABLE countries_ext (
  country_code VARCHAR2(5),
  country_name VARCHAR2(50),
  country_language VARCHAR2(50)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY LOCAL_DIR
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
    (
      country_code, country_name, country_language
    )
  )
  LOCATION ('Countries1.txt')
)
PARALLEL 5
REJECT LIMIT UNLIMITED;
It showed Table created.
But when I try to execute the query:
SELECT * FROM countries_ext;
I get the following exception:
Unable to open file countries_ext_5244.log. The location or file does not exist.
Could someone tell me what I'm doing wrong here?
You have created your directory like this:
CREATE DIRECTORY LOCAL_DIR AS 'C:/Directory/'
But according to the Oracle documentation,
the path must not include a trailing slash.
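A sketch of the fix under that assumption (directory name and grantee are taken from the question):

```sql
-- Recreate the directory object without the trailing slash
CREATE OR REPLACE DIRECTORY LOCAL_DIR AS 'C:\Directory';
GRANT READ, WRITE ON DIRECTORY LOCAL_DIR TO ruser;

-- The external table should now be able to write its log file
-- and read the data file
SELECT * FROM countries_ext;
```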
" can external tables be also used with files that are in remote machines?"
We can create directory objects against any OS directory that is addressable from the database server. So the remote machine's share must be mapped or mounted on the database server.
There are several issues with this. The first is that in my experience DBAs are reluctant to have their database dependent on other servers, especially ones outside their control. The second is performance: external tables cannot be tuned like heap tables, because they are just files on the local OS. Performance against a remote file will definitely be worse, because of network latency. Which brings us to the third issue, the network admin who will be concerned about the impact on the network of queries executing against a remote file. I'll be honest, I have never seen external tables being used against remote files.
Which isn't to say it can't work. But whether it's the best solution depends on how large the file is, how often you're going to query it, the capacity of your network and what your reasons are for not hosting the file on the database server.
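For completeness, a sketch of pointing a directory object at a network share (the UNC path is hypothetical, and the OS account running the Oracle service must have rights to it):

```sql
-- Directory object backed by a share on a remote machine
CREATE OR REPLACE DIRECTORY REMOTE_DIR AS '\\fileserver\exports\data';
GRANT READ ON DIRECTORY REMOTE_DIR TO ruser;
```

All the caveats above still apply: every query against the external table re-reads the file over the network.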

Solve problems with external table

I am having problems with an Oracle external table:
create table myExternalTable (
  field1, field2, ...
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY myDirectory
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    NOLOGFILE
    NOBADFILE
    NODISCARDFILE
    FIELDS TERMINATED BY '$'
  )
  LOCATION ('data.dsv')
);
commit;
alter table myExternalTable reject limit unlimited; --solve reject limit reached problem
select * from myExternalTable;
When I select from the table I get this error:
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04040: file data.dsv in myDirectory not found
It seems the error description is wrong, because normally the table is already loaded from data.dsv when it is created.
Plus, data.dsv exists in myDirectory.
What is going on? Can somebody help?
Note :
Instead of the select, this is what I normally do :
merge into myDatabaseTable
using
(select field1, field2,.... from myExternalTable) temp
on (temp.field1= myDatabaseTable.field1)
when matched then update
set myDatabaseTable.field1 = temp.field1,
myDatabaseTable.field2 = temp.field2,
......;
This works fine in my development environment, but in some other environment I get the error I mentioned before:
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04040: file data.dsv in myDirectory not found
At first I thought that, in the environment where it does not work, the directory did not point where it should; but selecting from the dba_directories view, I could see that the directory path is correct.
The problem is related to the access rights of the user on the operating-system side. This is described in Oracle Support note Create Database Directory on Remote Share/Server (Doc ID 739772.1).
In my case, I created the directory as SYSDBA, and then, to allow other users to access that external table, I created another table with a CREATE TABLE AS SELECT statement over the external table. Otherwise, I would need to map the Windows Oracle service owner user to the exact Oracle user defined in the note.
So it is more of a well-fitting workaround for my case.
To sum up the steps in a nutshell:
1- Create the external table T2
2- Create a table named T1 with CTAS from the external table
3- Grant SELECT on T1
Hope this helps for your case.
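The steps above can be sketched as follows (table and user names are illustrative):

```sql
-- Step 1: T2 is the external table, created as SYSDBA (definition elided)

-- Step 2: materialize it into an ordinary heap table
CREATE TABLE t1 AS SELECT * FROM t2;

-- Step 3: other users read the copy, never touching the external file
GRANT SELECT ON t1 TO some_user;
```

Note that T1 is a snapshot: it must be refreshed (dropped and recreated, or reloaded) whenever data.dsv changes.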
