How do I check the NLS_LANG of the client?

I'm working on Windows; I know that this setting is stored in the registry. The problem is that the registry path changes from version to version, and browsing through that bunch of registry keys is definitely not a good idea.
I can get the NLS_LANG of the server with SELECT USERENV('LANGUAGE') FROM DUAL.
I'd like to compare that with the client setting and show a warning when they don't match, just like PL/SQL Developer does.

This is what I do when I troubleshoot encoding issues. (It shows the NLS_LANG value read by sqlplus):
SQL> /* It's a hack. I don't know why it works. But it does! */
SQL> @[%NLS_LANG%]
SP2-0310: unable to open file "[NORWEGIAN_NORWAY.WE8MSWIN1252]"
(Apparently sqlplus expands the %NLS_LANG% environment variable before trying to open a script of that name, and the error message echoes the expanded value.)
You will have to extract the NLS_LANG value for the current ORACLE_HOME from the registry.
All client-side tools (sqlplus, sqlldr, exp, imp, oci, etc.) read this value from the registry
and use it to determine whether any character transcoding should occur.
ORACLE_HOME and registry section:
C:\>dir /s/b oracle.key
C:\Oracle10\BIN\oracle.key
C:\>type C:\Oracle10\BIN\oracle.key
SOFTWARE\ORACLE\KEY_OraClient10204_Home
In times like these I turn to IPython to demonstrate an idea:
A couple of lookups and you are there!
In [36]: OHOMES_INSTALLED = !where oci.dll
In [37]: OHOMES_INSTALLED
Out[37]:
['C:\\Oracle10\\BIN\\oci.dll',
'C:\\oraclexe\\app\\oracle\\product\\11.2.0\\server\\bin\\oci.dll']
In [38]: ORACLE_HOME = os.path.dirname(OHOMES_INSTALLED[0])
In [39]: ORACLE_HOME
Out[39]: 'C:\\Oracle10\\BIN'
In [40]: f = open(os.path.join(ORACLE_HOME, "oracle.key"))
In [41]: SECTION = f.read()
In [42]: SECTION
Out[42]: 'SOFTWARE\\ORACLE\\KEY_OraClient10204_Home\n'
In [43]: from _winreg import *
In [44]: aReg = ConnectRegistry(None,HKEY_LOCAL_MACHINE)
In [46]: aKey = OpenKey(aReg,SECTION.strip())
In [47]: val = QueryValueEx(aKey, "NLS_LANG")
In [48]: print val
(u'NORWEGIAN_NORWAY.WE8MSWIN1252', 1)

According to Jocke's answer (thanks Jocke), I tested the following query:
SELECT DISTINCT client_charset FROM v$session_connect_info
WHERE sid = sys_context('USERENV','SID');
It does the job perfectly, but I'm unsure whether every user will have the necessary rights to query it.
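If rights are the problem, a DBA can grant access on the underlying view with GRANT SELECT ON sys.v_$session_connect_info TO <user>; (the v$ name is just a public synonym). Building on that query, here is a minimal sketch of the comparison the question asks for; the aliases and the warning text are my own:
SELECT s.client_charset,
       d.value AS db_charset,
       CASE WHEN s.client_charset <> d.value
            THEN 'WARNING: client and database character sets differ'
            ELSE 'OK'
       END AS status
FROM  (SELECT DISTINCT client_charset
       FROM v$session_connect_info
       WHERE sid = sys_context('USERENV','SID')) s,
      nls_database_parameters d
WHERE d.parameter = 'NLS_CHARACTERSET';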

I am not sure if this works every time, but for me in SQL*Plus:
variable n varchar2(200)
execute sys.dbms_system.get_env('NLS_LANG', :n )
print n
AMERICAN_AMERICA.WE8ISO8859P1
Just build a function wrapper, grant execute to the users who need it, and there you go.
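A minimal sketch of such a wrapper, assuming it is created by a user with a direct EXECUTE grant on sys.dbms_system (the function name get_nls_lang and the grantee some_user are hypothetical):
CREATE OR REPLACE FUNCTION get_nls_lang RETURN VARCHAR2 IS
  v_val VARCHAR2(200);
BEGIN
  sys.dbms_system.get_env('NLS_LANG', v_val);  -- same call as above
  RETURN v_val;
END;
/
GRANT EXECUTE ON get_nls_lang TO some_user;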

Related

How to switch to the only PDB in Oracle 12+?

Using Oracle's Vagrant boxes, you can easily add scripts that are run post-installation by putting them in the userscripts directory. I want to create my standard users, which is easy (CREATE USER etc...). However, those users need to be created in the PDB, not in CDB$ROOT.
So, how do I switch from sys / as sysdba, which is connected to CDB$ROOT, to the one and only PDB in the database? The name of the PDB should not be hardcoded, as it is controlled by a parameter in the Vagrantfile. The script should run successfully without intervention.
I got this far; the code works, but it's butt-ugly:
COLUMN pdb_name NEW_VALUE mypdb
SELECT pdb_name
FROM (
SELECT pdb_name,
RANK() OVER (ORDER BY CREATION_SCN) r
FROM dba_pdbs p1
WHERE pdb_name <> 'PDB$SEED'
)
WHERE r = 1;
ALTER SESSION SET CONTAINER=&mypdb;
There must be an easier way...
If it is true that this is the "one and only" pdb, why all the ordering? Don't you just need
COLUMN pdb_name NEW_VALUE mypdb
SELECT pdb_name
FROM dba_pdbs p1
WHERE pdb_name <> 'PDB$SEED';
But since you are using the Vagrantfile, you could have your scripts do
grep ORACLE_PDB Vagrantfile | awk ...
to get the name of the PDB and then set TWO_TASK or similar to that.

SQLPLUS connection to different dbs

Hello, I want to connect to the following DBs in a loop and execute statements on each:
conn support/support@sp0666to
conn support/support@sp0667to
conn support/support@sp0668to
Is there any way to do this in sqlplus?
Thank you for your answers in advance!
Create one script (doWork.sql) that contains the majority of what you want to do:
conn &1/&2@&3
select EMPLOYEE, AUTHORIZED, TIME, DAT, WORKSTATION
from EMPLOYEE
where status = 25;
In a separate script (goToWork.sql):
set lines 1500 pages 10000
set colsep ';'
set sqlprompt ''
set heading on
set headsep off
set newpage none
column tm new_value file_time noprint
select to_char(sysdate, 'DDMMYYYY_HH24.MI') tm from dual;
accept user
accept pass
spool C:\Users\NANCHEV\Desktop\parked.csv
@@doWork &user &pass sp0666to
@@doWork &user &pass sp0667to
@@doWork &user &pass sp0668to
spool off;
exit
If you want separate files, then move the two spool commands into the doWork.sql file.
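A sketch of doWork.sql reworked that way; using the connect alias &3 to name each spool file is my own choice (the first dot ends the substitution variable, the second one is literal):
conn &1/&2@&3
spool C:\Users\NANCHEV\Desktop\parked_&3..csv
select EMPLOYEE, AUTHORIZED, TIME, DAT, WORKSTATION
from EMPLOYEE
where status = 25;
spool off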
Assuming you want to run the same set of queries for each database, I'd create a script file (e.g. main_statements.sql) containing those statements.
Then, if the list of databases was static, I'd create a second script file (e.g. run_me.sql) in the same directory, with contents along the lines of:
connect &&user/&&password@db1
@@main_statements.sql
connect &&user/&&password@db2
@@main_statements.sql
connect &&user/&&password@db3
@@main_statements.sql
...
If, however, the databases are static but the list is contained in a database somewhere, then I'd write a script (e.g. run_me.sql) that generates a script, something like:
set echo off
set feedback off
set verify off
spool databases_to_run_through.sql
select 'connect '||username||'/'||password||'@'||database_name||chr(10)||
       '@@main_statements.sql'
from list_of_databases_to_query;
spool off;
@databases_to_run_through.sql
N.B. untested. Also, I have assumed that your table contains the usernames and passwords for each DB that needs to be connected to; if that's not the case, you'll have to work out how to handle them. Maybe they're all the same, in which case you can hardcode them - or, better yet, use substitution variables (e.g. &&username) to avoid having to store them in a plain file; you'd then have to enter them at runtime.
You'll also need to run the script from the same directory, otherwise you could end up with the generated script not being created in the same directory as your main_statements.sql equivalent script.
Yes, it's possible: you can use an Oracle database link to reach each of the DBs from your example from a single session.
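A sketch of that approach, reusing the credentials and alias from the question and the EMPLOYEE query from the answer above (the link name is my own; you need the CREATE DATABASE LINK privilege):
CREATE DATABASE LINK sp0666to_link
  CONNECT TO support IDENTIFIED BY support
  USING 'sp0666to';
SELECT employee, authorized, time, dat, workstation
FROM employee@sp0666to_link
WHERE status = 25;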

Import / export Oracle schema with correct character set

I have exported a schema successfully. On the import, however, the log says that the character sets don't match. The strange thing is that on the server where the export was done, the character set is the same as on the target database.
This is from the source:
SQL> select * from v$NLS_PARAMETERS;
NLS_CHARACTERSET
WE8MSWIN1252
NLS_NCHAR_CHARACTERSET
AL16UTF16
And this is from the log of the import:
Import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Export client uses US7ASCII character set (possible charset conversion)
Why is the dump recognized as US7ASCII? The source and target are both non-US machines.
Thank you
Yes, it looks like an issue with the character set of the client session. Set it to the globally supported and recommended UTF8 format.
Please take the export again and try importing. (Do the following before the export.)
In Windows
set NLS_LANG=AMERICAN_AMERICA.UTF8
In Unix
export NLS_LANG=AMERICAN_AMERICA.UTF8
These days the database character set is also recommended to be AL32UTF8.
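Note that v$nls_parameters mixes session-level and database-level values; to see just the database character sets on either side, this standard dictionary query works:
SELECT parameter, value
FROM nls_database_parameters
WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');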

How to determine the Schemas inside an Oracle Data Pump Export file

I have an Oracle database backup file (.dmp) that was created with expdp.
The .dmp file was an export of an entire database.
I need to restore 1 of the schemas from within this dump file.
I don't know the names of the schemas inside this dump file.
To use impdp to import the data I need the name of the schema to load.
So, I need to inspect the .dmp file and list all of the schemas in it, how do I do that?
Update (2008-09-18 13:02) - More detailed information:
The impdp command I'm currently using is:
impdp user/password@database directory=DPUMP_DIR
dumpfile=EXPORT.DMP logfile=IMPORT.LOG
And the DPUMP_DIR is correctly configured.
SQL> SELECT directory_path
2 FROM dba_directories
3 WHERE directory_name = 'DPUMP_DIR';
DIRECTORY_PATH
-------------------------
D:\directory_path\dpump_dir\
And yes, the EXPORT.DMP file is in fact in that folder.
The error message I get when I run the impdp command is:
Connected to: Oracle Database 10g Enterprise Edition ...
ORA-31655: no data or metadata objects selected for the job
ORA-39154: Objects from foreign schemas have been removed from import
This error message is mostly expected. I need the impdp command to be:
impdp user/password@database directory=DPUMP_DIR dumpfile=EXPORT.DMP
SCHEMAS=SOURCE_SCHEMA REMAP_SCHEMA=SOURCE_SCHEMA:MY_SCHEMA
But to do that, I need the source schema.
impdp writes the DDL of a .dmp backup to a file if you use the SQLFILE parameter. For example, put this into a script and run it:
impdp '/ as sysdba' dumpfile=<your .dmp file> logfile=import_log.txt sqlfile=ddl_dump.txt
Then check ddl_dump.txt for the tablespaces, users, and schemas in the backup.
According to the documentation, this does not actually modify the database:
The SQL is not actually executed, and the target system remains unchanged.
If you open the DMP file with an editor that can handle big files, you might be able to locate the areas where the schema names are mentioned. Just be sure not to change anything. It would be better if you opened a copy of the original dump.
Update (2008-09-19 10:05) - Solution:
My Solution: Social engineering, I dug real hard and found someone who knew the schema name.
Technical Solution: Searching the .dmp file did yield the schema name.
Once I knew the schema name, I searched the dump file and learned where to find it.
Places the schema name was seen in the .dmp file:
<OWNER_NAME>SOURCE_SCHEMA</OWNER_NAME>
This was seen before each table name/definition.
SCHEMA_LIST 'SOURCE_SCHEMA'
This was seen near the end of the .dmp.
Interestingly enough, around the SCHEMA_LIST 'SOURCE_SCHEMA' section, it also had the command line used to create the dump, directories used, par files used, the Windows version it was run on, and export session settings (language, date formats).
So, problem solved :)
Assuming that you do not have the log file from the expdp job that generated the file in the first place, the easiest option would probably be to use the SQLFILE parameter to have impdp generate a file of DDL (based on a full import). Then you can grab the schema names from that file. Not ideal, of course, since impdp has to read the entire dump file to extract the DDL and then again to get to the schema you're interested in, and you have to do a bit of text file searching for the various CREATE USER statements, but it should be doable.
To run the impdp command to produce a SQLFILE, you will need to run it as a user which has the DATAPUMP_IMP_FULL_DATABASE role.
Or... run it as a low-privileged user, use the MASTER_ONLY=YES option, and then inspect the master table. E.g.
select value_t
from SYS_IMPORT_TABLE_01
where name = 'CLIENT_COMMAND'
and process_order = -59;
col object_name for a30
col processing_status head STATUS for a6
col processing_state head STATE for a5
select distinct
object_schema,
object_name,
object_type,
object_tablespace,
process_order,
duplicate,
processing_status,
processing_state
from sys_import_table_01
where process_order > 0
and object_name is not null
order by object_schema, object_name
/
http://download.oracle.com/otndocs/products/database/enterprise_edition/utilities/pdf/oow2011_dp_mastering.pdf
Step 1: Here is one simple example. You have to create a SQL file from the dump file using the SQLFILE option.
Step 2: Grep for CREATE USER in the generated SQL file (here tables.sql)
Example here:
$ impdp directory=exp_dir dumpfile=exp_user1_all_tab.dmp logfile=imp_exp_user1_tab sqlfile=tables.sql
Import: Release 11.2.0.3.0 - Production on Fri Apr 26 08:29:06 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Username: / as sysdba
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at 08:29:12
$ grep "CREATE USER" tables.sql
CREATE USER "USER1" IDENTIFIED BY VALUES 'S:270D559F9B97C05EA50F78507CD6EAC6AD63969E5E;BBE7786A5F9103'
Lots of Data Pump options are explained here: http://www.acehints.com/p/site-map.html
You need to search for OWNER_NAME.
cat -v dumpfile.dmp | grep -o '<OWNER_NAME>.*</OWNER_NAME>' | uniq -u
cat -v turns the dump file into visible text.
grep -o shows only the match, so we don't see really long lines.
uniq -u removes duplicate lines, so you see less output.
This works pretty well, even on large dump files, and could be tweaked for usage in a script.
My solution (similar to KyleLanser's answer) (on a Unix box):
strings dumpfile.dmp | grep SCHEMA_LIST
In my case, based on Aldur's and slafs' answers, I came up with this expression that should tell you just the name of the original schema:
cat -v file.dmp | grep 'SCHEMA_LIST' | uniq -u | grep -o -P '(?<=SCHEMAS\=).*(?=content)'
Tested for a DMP file from Oracle 19.8 version.

How to import an Oracle dump into a different tablespace

I want to import an oracle dump into a different tablespace.
I have a tablespace A used by user A. I've revoked DBA on this user and given him the CONNECT and RESOURCE grants. Then I've dumped everything with the command
exp a/*** owner=a file=oracledump.DMP log=log.log compress=y
Now I want to import the dump into tablespace B, used by user B. So I've given him the CONNECT and RESOURCE grants (no DBA). Then I've executed the following import:
imp b/*** file=oracledump.DMP log=import.log fromuser=a touser=b
The result is a log with lots of errors:
IMP-00017: following statement failed with ORACLE error 20001: "BEGIN DBMS_STATS.SET_TABLE_STATS
IMP-00003: ORACLE error 20001 encountered
ORA-20001: Invalid or inconsistent input values
After that, I've tried the same import command but with the option statistics=none. This resulted in the following errors:
ORA-00959: tablespace 'A_TBLSPACE' does not exist
How should this be done?
Note: a lot of columns are of type CLOB. It looks like the problems have something to do with that.
Note 2: The Oracle versions are a mixture of 9.2, 10.1, and 10.1 XE. But I don't think it has to do with the versions.
You've got a couple of issues here.
Firstly, the different versions of Oracle you're using are the reason for the table statistics error - I had the same issue when some of our Oracle 10g databases got upgraded to Release 2 while some were still on Release 1 and I was swapping .DMP files between them.
The solution that worked for me was to use the same version of exp and imp tools to do the exporting and importing on the different Database instances. This was easiest to do by using the same PC (or Oracle Server) to issue all of the exporting and importing commands.
Secondly, I suspect you're getting the ORA-00959: tablespace 'A_TBLSPACE' does not exist because you're trying to import a .DMP file from a full-blown Oracle Database into the 10g Express Edition (XE) Database, which, by default, creates a single, predefined tablespace called USERS for you.
If that's the case, then you'll need to do the following..
With your .DMP file, create a SQL file containing the structure (Tables):
imp <xe_username>/<password>@XE file=<filename.dmp> indexfile=index.sql full=y
Open the indexfile (index.sql) in a text editor that can do find and replace over an entire file, and issue the following find and replace statements IN ORDER (ignore the single quotes.. '):
Find: 'REM<space>' Replace: <nothing>
Find: '"<source_tablespace>"' Replace: '"USERS"'
Find: '...' Replace: 'REM ...'
Find: 'CONNECT' Replace: 'REM CONNECT'
Save the indexfile, then run it against your Oracle Express Edition account (I find it's best to create a new, blank XE user account - or drop and recreate if I'm refreshing):
sqlplus <xe_username>/<password>@XE @index.sql
Finally run the same .DMP file you created the indexfile with against the same account to import the data, stored procedures, views etc:
imp <xe_username>/<password>@XE file=<filename.dmp> fromuser=<original_username> touser=<xe_username> ignore=y
You may get pages of Oracle errors when trying to create certain objects such as Database Jobs as Oracle will try to use the same Database Identifier, which will most likely fail as you're on a different Database.
If you're using Oracle 10g and datapump, you can use the REMAP_TABLESPACE clause. example:
REMAP_TABLESPACE=A_TBLSPACE:NEW_TABLESPACE_GOES_HERE
For me this worked OK (Oracle Database 10g Express Edition Release 10.2.0.1.0):
impdp B/B full=Y dumpfile=DUMP.dmp REMAP_TABLESPACE=OLD_TABLESPACE:USERS
But for a new restore you need a new tablespace.
P.S. Maybe useful: http://www.oracle-base.com/articles/10g/OracleDataPump10g.php
What version of Oracle are you using? If it's 10g or greater, you should look at using Data Pump instead of import/export anyway. I'm not 100% sure if it can handle this scenario, but I would expect it could.
Data Pump is the replacement for exp/imp for 10g and above. It works very similarly to exp/imp, except it's (supposedly - I don't use it, since I'm stuck in 9i land) better.
Here are the Data Pump docs.
The problem has to do with the CLOB columns. It seems that the imp tool cannot rewrite the create statement to use another tablespace.
Source: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:66890284723848
The solution is:
Create the schema by hand in the correct tablespace. If you do not have a script to create the schema, you can generate one by using the indexfile= option of the imp tool.
You do have to disable all constraints yourself; the Oracle imp tool will not disable them (see the sketch after these steps).
After that you can import the data with the following command:
imp b/*** file=oracledump.dmp log=import.log fromuser=a touser=b statistics=none ignore=y
Note: I still needed the statistics=none due to other errors.
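The constraint-disabling step can be scripted. A sketch that generates the ALTER TABLE ... DISABLE CONSTRAINT statements for the target schema from the data dictionary (the file name disable_constraints.sql is my own; it must be run by a user who can see dba_constraints, or switch to user_constraints as user B):
set pagesize 0 linesize 200 feedback off
spool disable_constraints.sql
select 'ALTER TABLE "' || owner || '"."' || table_name ||
       '" DISABLE CONSTRAINT "' || constraint_name || '";'
from dba_constraints
where owner = 'B'
and constraint_type = 'R';  -- foreign keys; widen the filter to disable more
spool off
@disable_constraints.sql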
Extra info about Data Pump:
As of Oracle 10 the import/export is improved with the Data Pump tool (http://www.oracle-base.com/articles/10g/OracleDataPump10g.php).
Using this to re-import the data into a new tablespace:
First create a directory for the temporary dump:
CREATE OR REPLACE DIRECTORY tempdump AS '/temp/tempdump/';
GRANT READ, WRITE ON DIRECTORY tempdump TO a;
Export:
expdp a/* schemas=a directory=tempdump dumpfile=adump.dmp logfile=adump.log
Import:
impdp b/* directory=tempdump dumpfile=adump.dmp logfile=bdump.log REMAP_SCHEMA=a:b
Note: the dump files are stored and read from the server disk, not from the local (client) disk
My solution is to use the GSAR utility to replace the tablespace name in the dump file. When you do the replace, make sure that the size of the dump file stays unchanged by padding with spaces.
E.g.
gsar -f -s"TSDAT_OV101" -r"USERS " rm_schema.dump rm_schema.n.dump
gsar -f -s"TABLESPACE """USERS """ ENABLE STORAGE IN ROW CHUNK 8192 RETENTION" -r" " rm_schema.n1.dump rm_schema.n.dump
gsar -f -s"TABLESPACE """USERS """ LOGGING" -r" " rm_schema.n1.dump rm_schema.n.dump
gsar -f -s"TABLESPACE """USERS """ " -r" " rm_schema.n.dump rm_schema.n1.dump
I want to improve on this for two users, each in a different tablespace, on different servers (databases).
1.
First create directories for the temporary dump on both servers (databases):
server #1:
CREATE OR REPLACE DIRECTORY tempdump AS '/temp/old_datapump/';
GRANT READ, WRITE ON DIRECTORY tempdump TO old_user;
server #2:
CREATE OR REPLACE DIRECTORY tempdump AS '/temp/new_datapump/';
GRANT READ, WRITE ON DIRECTORY tempdump TO new_user;
2.
Export (server #1):
expdp tables=old_user.table directory=tempdump dumpfile=adump.dmp logfile=adump.log
3.
Import (server #2):
impdp directory=tempdump dumpfile=adump_table.dmp logfile=bdump_table.log
REMAP_TABLESPACE=old_tablespace:new_tablespace REMAP_SCHEMA=old_user:new_user
The answer is difficult, but doable.
The situation is: user A and tablespace X.
1. Import your dump file into a different database (this is only necessary if you need to keep a copy of the original one).
2. Rename the tablespace:
alter tablespace X rename to Y
3. Create a directory for the expdp command and grant rights.
4. Create a dump with expdp.
5. Remove the old user and the old tablespace (Y).
6. Create the new tablespace (Y).
7. Create the new user (with a new name) - in this case B - and grant rights (also to the directory created in step 3).
8. Import the dump with impdp:
impdp B/B directory=DIR dumpfile=DUMPFILE.dmp logfile=LOGFILE.log REMAP_SCHEMA=A:B
and that's it...
Because I wanted to import (into Oracle 12.1|2) a dump that was exported from a local development database (18c XE), and I knew that all my target databases would have an accessible tablespace called DATABASE_TABLESPACE, I just created my schema/user to use a new tablespace of that name instead of the default USERS (to which I have no access on the target databases):
-- don't care about the details
CREATE TABLESPACE DATABASE_TABLESPACE
DATAFILE 'DATABASE_TABLESPACE.dat'
SIZE 10M
REUSE
AUTOEXTEND ON NEXT 10M MAXSIZE 200M;
ALTER DATABASE DEFAULT TABLESPACE DATABASE_TABLESPACE;
CREATE USER username
IDENTIFIED BY userpassword
CONTAINER=all;
GRANT create session TO username;
GRANT create table TO username;
GRANT create view TO username;
GRANT create any trigger TO username;
GRANT create any procedure TO username;
GRANT create sequence TO username;
GRANT create synonym TO username;
GRANT UNLIMITED TABLESPACE TO username;
An exp created from this makes imp happy on my target.
---Create new tablespace:
CREATE TABLESPACE TABLESPACENAME DATAFILE
'D:\ORACL\ORADATA\XE\TABLESPACEFILENAME.DBF' SIZE 350M AUTOEXTEND ON NEXT 2500M MAXSIZE UNLIMITED
LOGGING
PERMANENT
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT MANUAL
FLASHBACK ON;
---and then, before importing, create the user with the new default tablespace:
CREATE USER BVUSER IDENTIFIED BY VALUES 'bvuser' DEFAULT TABLESPACE TABLESPACENAME;
-- where D:\ORACL is the path of the Oracle installation
