How to switch to the only PDB in Oracle 12+?

Using Oracle's Vagrant boxes, you can easily add scripts that are run post-installation by putting them in the userscripts directory. I want to create my standard users, which is easy (CREATE USER etc.). However, those users need to be created in the PDB, not in CDB$ROOT.
So, how do I switch from sys / as sysdba, which is connected to CDB$ROOT, to the one and only PDB in the database? The name of the PDB should not be hardcoded, as it is controlled by a parameter in the Vagrantfile. The script should run successfully without intervention.
This is what I have so far; it works, but it's butt-ugly:
COLUMN pdb_name NEW_VALUE mypdb
SELECT pdb_name
FROM (
       SELECT pdb_name,
              RANK() OVER (ORDER BY CREATION_SCN) r
       FROM   dba_pdbs p1
       WHERE  pdb_name <> 'PDB$SEED'
     )
WHERE r = 1;
ALTER SESSION SET CONTAINER = &mypdb;
There must be an easier way...

If it is true that this is the "one and only" PDB, why all the ordering? Don't you just need:
COLUMN pdb_name NEW_VALUE mypdb
SELECT pdb_name
FROM   dba_pdbs p1
WHERE  pdb_name <> 'PDB$SEED';
But since you are using the Vagrantfile, you could have your scripts do
grep ORACLE_PDB Vagrantfile | awk ...
to get the name of the PDB and then set TWO_TASK or similar to that.
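Putting that together, a complete userscripts entry might look like this minimal sketch (the user name, password, and grants are illustrative, not from the original post):
-- create_users.sql: run as SYS in CDB$ROOT; assumes exactly one user-created PDB
COLUMN pdb_name NEW_VALUE mypdb
SELECT pdb_name FROM dba_pdbs WHERE pdb_name <> 'PDB$SEED';
ALTER SESSION SET CONTAINER = &mypdb;
CREATE USER appuser IDENTIFIED BY changeme QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE TO appuser;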

Related

Creating a directory directly from Oracle

How do you create a physical directory on the OS from within PL/SQL? I looked at the CREATE OR REPLACE DIRECTORY command but that doesn't do it. Neither does UTL_FILE appear to be capable.
In the end I did find an easier solution. Use
select os_command.exec('mkdir /home/oracle/mydir') from dual;
or simply
x := os_command.exec('mkdir /home/oracle/mydir');
UTL_FILE still lacks this capability - probably a holdover from the pre-DIRECTORY-object days, when you had to explicitly define the OS file directories you could access in a startup parameter, so there was no need to create directories dynamically anyway.
I think the easiest way to do this is with an Oracle Java stored procedure that uses:
File f = new File(dirname);
return (f.mkdir()) ? 1 : 0;
If you go this route, make sure that you use dbms_java.grant_permission to grant java.io.FilePermission to the user that owns the executing code.
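For completeness, a minimal sketch of that route (the class name, function name, grantee, and path are illustrative, not a fixed recipe):
-- Java source holding the mkdir helper
CREATE OR REPLACE AND COMPILE JAVA SOURCE NAMED "DirUtil" AS
import java.io.File;
public class DirUtil {
    public static int mkdir(String dirname) {
        // 1 if the directory was created, 0 otherwise
        return new File(dirname).mkdir() ? 1 : 0;
    }
}
/
-- PL/SQL call specification
CREATE OR REPLACE FUNCTION mkdir(p_dir IN VARCHAR2) RETURN NUMBER
AS LANGUAGE JAVA NAME 'DirUtil.mkdir(java.lang.String) return int';
/
-- grant the owning schema permission on the target path (run as SYS)
BEGIN
  dbms_java.grant_permission('SCOTT', 'SYS:java.io.FilePermission', '/home/oracle/-', 'read,write');
END;
/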
I believe the only way to do this is to use an external procedure (C or Java) and call it through PL/SQL. PL/SQL itself does not have the means to create the physical OS directory.
PL/SQL Tips provides a good example of how to create a C external procedure that executes shell commands. Note that I would not consider it best practice to allow this for security reasons.
If you can create the directory first, then you can use:
create or replace directory myDir as '<path-to-dir>/myDir';
Note that you will need to have the CREATE ANY DIRECTORY privilege assigned to the user executing the command. After the directory is created with the command above, be sure to assign any needed privileges on the directory to other users.
grant read, write on directory myDir to testUsers;
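Once the directory object exists and is granted, it can be used anywhere a DIRECTORY is expected; for example, a quick UTL_FILE smoke test (the file name is illustrative):
DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  -- the first argument is the directory OBJECT name, uppercase by default
  f := UTL_FILE.FOPEN('MYDIR', 'test.txt', 'w');
  UTL_FILE.PUT_LINE(f, 'hello from PL/SQL');
  UTL_FILE.FCLOSE(f);
END;
/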
I just checked the new docs for database version 11.2, and there's still no routine I can find to create a directory. So, like the other respondents, I recommend using a Java or C routine.
You can execute OS commands from within Oracle using DBMS_SCHEDULER (a sketch follows the example output below) or an internal Java procedure, for example using my XT_SHELL package:
Install it using install.sql, then execute OS commands with xt_shell.shell_exec(pCommand in varchar2, timeout in number) in SQL or PL/SQL:
SQL> select * from table(xt_shell.shell_exec('/bin/mkdir /tmp/test-dir',1000));
COLUMN_VALUE
--------------------------------------------------------------------------------
SQL> select * from table(xt_shell.shell_exec('/bin/mkdir /tmp/test-dir/test-dir2',1000));
COLUMN_VALUE
--------------------------------------------------------------------------------
SQL> select * from table(xt_shell.shell_exec('/bin/ls -l /tmp/test-dir',1000));
COLUMN_VALUE
--------------------------------------------------------------------------------
total 4
drwxr-xr-x 2 oracle oinstall 4096 Apr 19 12:14 test-dir2
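For the DBMS_SCHEDULER route mentioned above, a minimal sketch might be (the job name and path are illustrative; the caller needs the CREATE JOB and CREATE EXTERNAL JOB privileges):
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name            => 'MKDIR_JOB',
    job_type            => 'EXECUTABLE',
    job_action          => '/bin/mkdir',
    number_of_arguments => 1,
    enabled             => FALSE);
  DBMS_SCHEDULER.set_job_argument_value('MKDIR_JOB', 1, '/home/oracle/mydir');
  DBMS_SCHEDULER.enable('MKDIR_JOB');  -- runs once, then can be dropped
END;
/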

SQLPLUS connection to different dbs

Hello, I want to connect to the following DBs in a loop and execute statements on each:
conn support/support@sp0666to
conn support/support@sp0667to
conn support/support@sp0668to
Is there any way to do this in sqlplus?
Thank you for your answers in advance!
Create one script (doWork.sql) that contains the majority of what you want to do:
conn &1/&2@&3
select EMPLOYEE, AUTHORIZED, TIME, DAT, WORKSTATION
from EMPLOYEE
where status = 25;
In a separate script (goToWork.sql):
set lines 1500 pages 10000
set colsep ';'
set sqlprompt ''
set heading on
set headsep off
set newpage none
column tm new_value file_time noprint
select to_char(sysdate, 'DDMMYYYY_HH24.MI') tm from dual;
accept user
accept pass
spool C:\Users\NANCHEV\Desktop\parked.csv
@@doWork &user &pass sp0666to
@@doWork &user &pass sp0667to
@@doWork &user &pass sp0668to
spool off;
exit
If you want separate files, then move the two spool commands to the doWork.sql file.
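For instance, doWork.sql might become something like this sketch (the double dot in &3..csv terminates the substitution variable before the extension; the path is from the question):
conn &1/&2@&3
spool C:\Users\NANCHEV\Desktop\parked_&3..csv
select EMPLOYEE, AUTHORIZED, TIME, DAT, WORKSTATION
from EMPLOYEE
where status = 25;
spool off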
Assuming you want to run the same set of queries for each database, I'd create a script file (e.g. main_statements.sql) containing those statements.
Then, if the list of databases was static, I'd create a second script file (e.g. run_me.sql) in the same directory, with contents along the lines of:
connect &&user/&&password@db1
@@main_statements.sql
connect &&user/&&password@db2
@@main_statements.sql
connect &&user/&&password@db3
@@main_statements.sql
...
If, however, the databases are static but the list of them is contained in a database somewhere, then I'd write a script (e.g. run_me.sql) that generates a script, something like:
set echo off
set feedback off
set verify off
spool databases_to_run_through.sql
select 'connect '||username||'/'||password||'@'||database_name||chr(10)||
       '@@main_statements.sql'
from list_of_databases_to_query;
spool off;
@databases_to_run_through.sql
N.B. untested. Also, I have assumed that your table contains the usernames and passwords for each DB that needs to be connected to; if that's not the case, you'll have to work out how to handle them. Maybe they're all the same, in which case you can hardcode them - or better yet, use substitution variables (e.g. &&username) to avoid having to store them in a plain file; you'd then have to enter them at runtime.
You'll also need to run the script from the same directory, otherwise you could end up with the generated script not being created in the same directory as your main_statements.sql equivalent script.
Yes, it's possible: you can use an Oracle database link to connect to different DBs, just like in your example.

How do I check the NLS_LANG of the client?

I'm working on the Windows OS and I know that this setting is stored in the registry. The problem is that the registry path changes from version to version; browsing through that bunch of registry keys is definitely not a good idea.
I can get the NLS_LANG of the server with SELECT USERENV ('language') FROM DUAL.
I'd like to compare that with the client setting and show a warning when they don't match, just like Pl/Sql Developer does.
This is what I do when I troubleshoot encoding issues (the NLS_LANG value read by sqlplus):
SQL>/* It's a hack. I don't know why it works. But it does!*/
SQL>@[%NLS_LANG%]
SP2-0310: unable to open file "[NORWEGIAN_NORWAY.WE8MSWIN1252]"
You will have to extract the NLS_LANG value for the current ORACLE_HOME from the registry.
All client-side tools (sqlplus, sqlldr, exp, imp, oci, etc.) read this value from the registry
and determine whether any character transcoding should occur.
ORACLE_HOME and registry section:
C:\>dir /s/b oracle.key
C:\Oracle10\BIN\oracle.key
C:\>type C:\Oracle10\BIN\oracle.key
SOFTWARE\ORACLE\KEY_OraClient10204_Home
In times like these I turn to IPython to demonstrate an idea:
A couple of lookups and you are there!
In [36]: OHOMES_INSTALLED = !where oci.dll
In [37]: OHOMES_INSTALLED
Out[37]:
['C:\\Oracle10\\BIN\\oci.dll',
'C:\\oraclexe\\app\\oracle\\product\\11.2.0\\server\\bin\\oci.dll']
In [38]: ORACLE_HOME = os.path.dirname(OHOMES_INSTALLED[0])
In [39]: ORACLE_HOME
Out[39]: 'C:\\Oracle10\\BIN'
In [40]: f = open(os.path.join(ORACLE_HOME, "oracle.key"))
In [41]: SECTION = f.read()
In [42]: SECTION
Out[42]: 'SOFTWARE\\ORACLE\\KEY_OraClient10204_Home\n'
In [43]: from _winreg import *
In [44]: aReg = ConnectRegistry(None,HKEY_LOCAL_MACHINE)
In [46]: aKey = OpenKey(aReg,SECTION.strip())
In [47]: val = QueryValueEx(aKey, "NLS_LANG")
In [48]: print val
(u'NORWEGIAN_NORWAY.WE8MSWIN1252', 1)
According to Jocke's answer (thanks Jocke), I tested the following query:
SELECT DISTINCT client_charset FROM v$session_connect_info
WHERE sid = sys_context('USERENV','SID');
It does the job perfectly, but I'm unsure whether every user will have the necessary rights.
I am not sure if this works every time, but for me in SQL*Plus:
variable n varchar2(200)
execute sys.dbms_system.get_env('NLS_LANG', :n )
print n
AMERICAN_AMERICA.WE8ISO8859P1
Just build a function wrapper, grant execute to the users who need it, and there you go.
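Such a wrapper might look like this minimal sketch, created as SYS or another user with EXECUTE on dbms_system (the function and grantee names are illustrative):
CREATE OR REPLACE FUNCTION get_client_nls_lang RETURN VARCHAR2 IS
  v VARCHAR2(200);
BEGIN
  sys.dbms_system.get_env('NLS_LANG', v);  -- same call as above
  RETURN v;
END;
/
GRANT EXECUTE ON get_client_nls_lang TO some_user;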

Oracle DB won't allow users to query tables

I am on Windows XP running Oracle 10g XE.
After running a defrag and cleanup process, I have not been able to access any of the objects in the database.
A quick check
set lines 110
col strtd hea 'STARTED'
col instance_name for a8 hea 'INSTANCE'
col host_name for a15 hea 'HOSTNAME'
col version for a10
select instance_name, version, host_name, status
, database_status, to_char(startup_time,'DD-MON-YYYY HH:MI:SS') strtd
from v$instance;
returns this
INSTANCE VERSION    HOSTNAME        STATUS       DATABASE_STATUS   STARTED
-------- ---------- --------------- ------------ ----------------- --------------------
xe       10.2.0.1.0 DT8775C         MOUNTED      ACTIVE            03-DEC-2010 11:38:00
If I use this command, it throws the following error.
SQL> ALTER DATABASE OPEN;
ALTER DATABASE OPEN
*
ERROR at line 1:
ORA-16014: log 2 sequence# 679 not archived, no available destinations
ORA-00312: online log 2 thread 1:
'D:\ORACLEEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_4JD5RZC0_.LOG'
How can I fix this situation?
There are zero files in the
"D:\ORACLEEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\" folder.
I'm pretty sure this belongs on SERVERFAULT, but to get you going for now:
It appears the database is in ARCHIVELOG mode and you have not supplied a location to store the archived log files. A quick fix, assuming you don't need the recovery protection that archive logging gives you, is to try this:
sqlplus / as sysdba
SQL> shutdown immediate;
SQL> startup mount;
SQL> ALTER DATABASE NOARCHIVELOG;
SQL> ALTER DATABASE OPEN;
If you do want to keep your archived redo logs, then you'll need entries like this in your database parameters:
alter system set log_archive_dest_1='location=d:\oraclexe\app\oracle\...';
alter system set log_archive_dest_state_1=enable;
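Either way, you can confirm the resulting state afterwards (a quick check, not part of the original fix):
SQL> SELECT log_mode FROM v$database;
SQL> ARCHIVE LOG LIST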
Sounds like in your cleanup process you may have deleted the .LOG files. I assume you've emptied the trash and can't restore them?

How to determine the Schemas inside an Oracle Data Pump Export file

I have an Oracle database backup file (.dmp) that was created with expdp.
The .dmp file was an export of an entire database.
I need to restore 1 of the schemas from within this dump file.
I don't know the names of the schemas inside this dump file.
To use impdp to import the data I need the name of the schema to load.
So I need to inspect the .dmp file and list all of the schemas in it. How do I do that?
Update (2008-09-18 13:02) - More detailed information:
The impdp command I'm currently using is:
impdp user/password@database directory=DPUMP_DIR
dumpfile=EXPORT.DMP logfile=IMPORT.LOG
And the DPUMP_DIR is correctly configured.
SQL> SELECT directory_path
2 FROM dba_directories
3 WHERE directory_name = 'DPUMP_DIR';
DIRECTORY_PATH
-------------------------
D:\directory_path\dpump_dir\
And yes, the EXPORT.DMP file is in fact in that folder.
The error message I get when I run the impdp command is:
Connected to: Oracle Database 10g Enterprise Edition ...
ORA-31655: no data or metadata objects selected for the job
ORA-39154: Objects from foreign schemas have been removed from import
This error message is mostly expected. I need the impdp command to be:
impdp user/password@database directory=DPUMP_DIR dumpfile=EXPORT.DMP
SCHEMAS=SOURCE_SCHEMA REMAP_SCHEMA=SOURCE_SCHEMA:MY_SCHEMA
But to do that, I need the source schema.
impdp exports the DDL of a .dmp backup to a file if you use the SQLFILE parameter. For example, put this into a text file and run it:
impdp '/ as sysdba' dumpfile=<your .dmp file> logfile=import_log.txt sqlfile=ddl_dump.txt
Then check ddl_dump.txt for the tablespaces, users, and schemas in the backup.
According to the documentation, this does not actually modify the database:
The SQL is not actually executed, and the target system remains unchanged.
If you open the DMP file with an editor that can handle big files, you might be able to locate the areas where the schema names are mentioned. Just be sure not to change anything. It would be better if you opened a copy of the original dump.
Update (2008-09-19 10:05) - Solution:
My Solution: Social engineering. I dug real hard and found someone who knew the schema name.
Technical Solution: Searching the .dmp file did yield the schema name.
Once I knew the schema name, I searched the dump file and learned where to find it.
Places the schema name was seen in the .dmp file:
<OWNER_NAME>SOURCE_SCHEMA</OWNER_NAME>
This was seen before each table name/definition.
SCHEMA_LIST 'SOURCE_SCHEMA'
This was seen near the end of the .dmp.
Interestingly enough, around the SCHEMA_LIST 'SOURCE_SCHEMA' section, the dump also contained the command line used to create it, the directories and par files used, the Windows version it was run on, and the export session settings (language, date formats).
So, problem solved :)
Assuming that you do not have the log file from the expdp job that generated the file in the first place, the easiest option would probably be to use the SQLFILE parameter to have impdp generate a file of DDL (based on a full import). Then you can grab the schema names from that file. Not ideal, of course, since impdp has to read the entire dump file to extract the DDL and then again to get to the schema you're interested in, and you have to do a bit of text file searching for the various CREATE USER statements, but it should be doable.
To run the impdp command to produce a SQL file, you will need to run it as a user which has the DATAPUMP_IMP_FULL_DATABASE role.
Or... run it as a low-privileged user and use the MASTER_ONLY=YES option, then inspect the master table.
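A sketch of loading just the master table (MASTER_ONLY is an impdp parameter; the connect string and file names follow the question, and the master table name, such as SYS_IMPORT_TABLE_01, depends on the job):
impdp user/password@database directory=DPUMP_DIR dumpfile=EXPORT.DMP master_only=yes
With the master table in place, you can pull out the original export command line: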
select value_t
from SYS_IMPORT_TABLE_01
where name = 'CLIENT_COMMAND'
and process_order = -59;
col object_name for a30
col processing_status head STATUS for a6
col processing_state head STATE for a5
select distinct
object_schema,
object_name,
object_type,
object_tablespace,
process_order,
duplicate,
processing_status,
processing_state
from sys_import_table_01
where process_order > 0
and object_name is not null
order by object_schema, object_name
/
http://download.oracle.com/otndocs/products/database/enterprise_edition/utilities/pdf/oow2011_dp_mastering.pdf
Here is one simple example.
Step 1: Create a SQL file from the dump file using the SQLFILE option.
Step 2: Grep for CREATE USER in the generated SQL file (here tables.sql).
Example here:
$ impdp directory=exp_dir dumpfile=exp_user1_all_tab.dmp logfile=imp_exp_user1_tab sqlfile=tables.sql
Import: Release 11.2.0.3.0 - Production on Fri Apr 26 08:29:06 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Username: / as sysdba
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at 08:29:12
$ grep "CREATE USER" tables.sql
CREATE USER "USER1" IDENTIFIED BY VALUES 'S:270D559F9B97C05EA50F78507CD6EAC6AD63969E5E;BBE7786A5F9103'
Lots of Data Pump options are explained here: http://www.acehints.com/p/site-map.html
You need to search for OWNER_NAME.
cat -v dumpfile.dmp | grep -o '<OWNER_NAME>.*</OWNER_NAME>' | uniq -u
cat -v turns the dump file into visible text.
grep -o shows only the match, so we don't see really long lines.
uniq -u removes duplicate lines so you see less output.
This works pretty well, even on large dump files, and could be tweaked for usage in a script.
My solution (similar to KyleLanser's answer), on a Unix box:
strings dumpfile.dmp | grep SCHEMA_LIST
In my case, based on Aldur's and slafs' answers, I came up with this expression that should tell you just the name of the original schema:
cat -v file.dmp | grep 'SCHEMA_LIST' | uniq -u | grep -o -P '(?<=SCHEMAS\=).*(?=content)'
Tested for a DMP file from Oracle 19.8 version.
