I will be deploying an application database on an existing Oracle server. I need to store English characters in this database, but my client asked me if this would be a problem, since his Oracle database is using a “KS5601” character map with “ANSI” encoding. My question is: if I create a new database on an existing Oracle server instance, would this database have its own encoding, or would it have to follow the current encoding of the server?
If I have to use the KS5601 (Korean Character Map), would I be able to store English alphanumeric characters?
KS5601 is not a database character set. What is the existing database's character set?
SELECT *
FROM v$nls_parameters
WHERE parameter LIKE '%CHARACTERSET'
What do you mean when you say "create a new database on an existing Oracle Server Instance"? If you mean that you are creating a new database and a new instance on the same physical server (so you'll have a completely separate SGA & PGA and completely separate set of background processes), you can create the new database with whatever character set you'd like. If you mean that you are creating a new schema in an existing database, you would need to use whatever character set the database uses.
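If it is the latter, a minimal sketch of creating a new schema in the existing database might look like this (the user name, password, and tablespace are hypothetical); any tables created under it will use the existing database's character set:
CREATE USER app_owner IDENTIFIED BY some_password
  DEFAULT TABLESPACE users
  QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE TO app_owner;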
The current character set of the Oracle instance is WE8ISO8859P1 and it needs to be moved to UTF-8. I have found some challenges in my database instance due to its current setup and the requirements of my business.
The instance in question has 100+ schema users, with a number of tables created under each schema. Logically, each schema exists for an application or a specific system within the enterprise. The requirement is to move only certain schemas and their table objects to the new UTF-8 character set.
Also, remember that the reason for this migration right now is compliance with RESTful POST calls that will perform CRUD operations in UTF-8.
NLS_NCHAR_CHARACTERSET = AL16UTF16
NLS_CHARACTERSET = WE8ISO8859P1
I did do some research earlier and here are my findings.
• Oracle doesn’t support character encoding at the tablespace, table, or column level, so the options for any character set migration strategy narrow to the schema level only.
• We have CRUD calls to this schema from other enterprise legacy systems, so the migration should not have any wider impact. Of course, the other schemas in this instance don't have any UTF-8 requirements.
The only solutions I see are:
OPTION 1 - Move the target schemas and their objects into a new database instance whose database character set (NLS_CHARACTERSET) is UTF-8.
OPTION 2 - Convert all the relevant columns to NCHAR and NVARCHAR2, accepting possible length limits and truncation (a rough sketch of this follows below).
Both approaches have a big impact, and I am not able to conclude which is best. Any suggestions are welcome that solve my character set migration without impact on, or changes to, the other schemas in the instance.
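For illustration, OPTION 2 would amount to something like the following per affected column. This is only a sketch: the table CUSTOMER and column NAME are hypothetical, and since a direct ALTER TABLE ... MODIFY to NVARCHAR2 is generally not allowed on a column that already holds data, it uses an add/copy/drop/rename pattern:
-- add a national character set column, copy the data across, then swap names
ALTER TABLE customer ADD (name_n NVARCHAR2(100));
UPDATE customer SET name_n = name;
COMMIT;
ALTER TABLE customer DROP COLUMN name;
ALTER TABLE customer RENAME COLUMN name_n TO name;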
I'm using Oracle Database 12c on a RHEL system, and I want to set the parameters NLS_CHARACTERSET=AL32UTF8 and NLS_NCHAR_CHARACTERSET=AL16UTF16 by default for all created databases, so that I don't have to add them every time I issue the CREATE statement.
I have tried to add these parameters to initSID.ora but it seems they are not valid as initialization parameters.
Thanks.
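As far as I know, the character sets are not initialization parameters at all; they are clauses of the CREATE DATABASE statement itself (or options chosen in DBCA), so they have to be given at creation time. A minimal sketch of where they go (the database name is hypothetical and the other CREATE DATABASE clauses are omitted):
CREATE DATABASE mydb
  CHARACTER SET AL32UTF8
  NATIONAL CHARACTER SET AL16UTF16;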
I installed Oracle on my system, so now orcl is the SID, which is the unique identifier of my database instance.
A starter database was created as part of the installation. I created 2 users, user1 and user2, using the system account.
Using SQL Developer I am accessing the users; this shows me 2 different connections, each with all the database objects like tables, stored procedures, views, etc.
So:
1) When using these 2 users, am I accessing the same database? I am issuing all the DDL commands by logging in as user1 or user2; does all this data go into the same .dbf file?
2) A database instance can be connected to only one database, so does this essentially mean that every time I create a new database, I need to make a configuration change so that an instance points to it?
In my experience with Oracle, the typical unit of division is a schema. Schemas in Oracle are used more like you would use databases in SQL Server or PostgreSQL. They represent both users and a logical separation of objects. Physical separation would usually be done using tablespaces. Tablespaces are a group of physical files where data is stored. Schemas can share or use different tablespaces. Having one tablespace per schema is uncommon; they usually share a few tablespaces or often even just one.
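As a rough sketch of that arrangement (the datafile path, tablespace name, and schema names are hypothetical), one tablespace shared by two schemas might be set up like this:
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/ORCL/app_data01.dbf' SIZE 100M;
ALTER USER schema_a DEFAULT TABLESPACE app_data QUOTA UNLIMITED ON app_data;
ALTER USER schema_b DEFAULT TABLESPACE app_data QUOTA UNLIMITED ON app_data;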
With that in mind, to answer your questions more directly,
1) Like in any other database, you can specify the schema the object belongs to:
CREATE TABLE MY_SCHEMA.TABLE_X ( X NUMBER )
If the schemas on two CREATE statements are different, then it will create different objects. What's different in Oracle is that the default schema changes for every user. The default schema is always the currently connected schema/user. So if you omit the schema like so:
CREATE TABLE TABLE_X ( X NUMBER )
then the implied schema is the currently connected schema/user. So if I'm logged in as MY_SCHEMA, then the above is equivalent to the first example. When connecting as two different users, then the implied schema will be different and the DDL is not equivalent between the two users. So running the same statement would create two different objects if you do not specify a schema.
The two objects may be stored in the same physical file if they are in the same tablespace. (They are most likely in the USERS tablespace if you did not create one explicitly and did not specify a different default tablespace when creating the schemas.) Regardless, they are still two completely separate objects.
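If you want to see where each copy actually lives, a query along these lines shows the tablespace and datafile behind each one (this assumes you can read the DBA views; TABLE_X is the hypothetical table from the examples above):
SELECT s.owner, s.segment_name, s.tablespace_name, f.file_name
FROM dba_segments s
JOIN dba_data_files f ON f.tablespace_name = s.tablespace_name
WHERE s.segment_name = 'TABLE_X';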
If you specify the schema explicitly like in the first example, then the DDL is equivalent regardless of who executes it (although permissions may prevent some users from executing it). So it would result in creating the object once, and attempting to create it a second time would result in an error unless you're using CREATE OR REPLACE or something similar.
2) I don't know the answer to this question, but as I said, in Oracle, the basic unit of separation is usually the schema, not a database. I believe the question you're asking is a large part of the reason why the schemas are used in the way they are. Having multiple actual databases on the same machine/instance is far more difficult in Oracle than in other databases (if not impossible), so it's much simpler to have a single database with many schemas.
I'm trying to figure out where our project went wrong.
A long time ago, our database administrator created a user and a schema for the project we were working on.
We gave that user to a contractor who created the tables and installed the application.
Today I discovered that our database doesn't support UTF-8 characters and we need it to.
select value from nls_database_parameters
where parameter='NLS_CHARACTERSET'
The result is: WE8ISO8859P1
My question is, was the mistake made when the user was created, or was the mistake done by the contractor who created the tables?
Thanks
The character set is an attribute of the database, so whoever created the database presumably chose the wrong character set. There are no character set related settings when you create a user or create a table, other than choosing between the database character set data types (CHAR/VARCHAR2) and the national character set data types (NCHAR/NVARCHAR2).
Changing the character set of an existing database may take a bit of effort. The Globalization Guide has a section on character set migration. Depending on the Oracle version (the procedure is different in 10g and 11g) and what data already exists, doing an export & import to a new database may be the easiest option.
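As a very rough sketch of the export & import route using the Data Pump command line utilities (the schema name, directory object, and file names are hypothetical, and the exact procedure should be checked against the Globalization Guide for your version):
# run expdp against the old database, impdp against the new AL32UTF8 database;
# Data Pump converts character data to the target database character set on import
expdp system SCHEMAS=my_schema DIRECTORY=DATA_PUMP_DIR DUMPFILE=my_schema.dmp LOGFILE=exp_my_schema.log
impdp system SCHEMAS=my_schema DIRECTORY=DATA_PUMP_DIR DUMPFILE=my_schema.dmp LOGFILE=imp_my_schema.log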
I should add that the order of operations you specified in your post doesn't make sense. The database has to be created before the user or the schema can be created. So it doesn't make sense that the DBA could have created the user and the schema a long time ago and the contractor created the database more recently. Are you possibly using the terms "database" and "schema" in a non-Oracle context?
I have an Oracle database with the following settings:
NLS_CHARACTERSET EE8MSWIN1250
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_LANGUAGE AMERICAN
I've created a test table with one column of type NVARCHAR2, where I'm going to store Cyrillic text.
I use SQL Developer to connect to the DB.
The problem is that when I put a Cyrillic string into the DB by editing a cell in SQL Developer, the data is stored correctly. But when I use an INSERT query with the same data, with or without the N'' prefix, the data is stored as question marks.
The interesting thing is that the query generated by SQL Developer and the one written by me are identical.
I solved this problem by changing NLS_CHARACTERSET to UTF8, but on the production server I can't do such a thing.
IMO there must be some way to store Cyrillic in that DB properly using a query, if SQL Developer can do it.
Regards
Depending on the ODBC/JDBC driver in use, localization settings on your computer may override any config values in the database. Try using ALTER SESSION to set the proper NLS parameters before executing your query, and see if that helps. SQL Developer might do this behind the scenes when you edit the data cell.
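A minimal sketch of that idea (which parameter and value actually matter will depend on your client and driver; RUSSIAN is only an illustrative value):
-- compare what the session is actually using with the database defaults
SELECT * FROM nls_session_parameters;
SELECT * FROM nls_database_parameters;
-- then adjust the session before running the INSERT
ALTER SESSION SET NLS_LANGUAGE = 'RUSSIAN';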