How to prevent DataGrip from converting Georgian characters to question marks? - datagrip

I added a new row in a table. The table contains a column named NAME_IN_GEORGIAN, where we insert product names translated to our native language (Georgian characters: "აბგდევზთიკლმნოპჟრსტუფქღყშჩცძწჭხჯჰ").
I've been using PL/SQL and had no problems at all, but when I tried to insert a new word in DataGrip, it was saved as question marks after the commit.
Is there anything I can do to insert Georgian characters?
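For reference, a first thing that can be checked (assuming an Oracle backend, as the PL/SQL comparison suggests) is whether the database character sets can hold Georgian at all. If they can, which the working PL/SQL inserts suggest, the conversion to question marks is happening on the client (DataGrip/JDBC encoding) side rather than in the database:
SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');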

Related

How to create Oracle Spatial Index?

I am trying to create an Oracle Spatial index but seeing strange behavior.
I have a table in my schema as follows:
CREATE TABLE "Event" (
"EventID" NUMBER(32,0) GENERATED ALWAYS AS IDENTITY INCREMENT BY 1 START WITH 1 NOT NULL,
"Name" NVARCHAR2(30),
"Location" "SDO_GEOMETRY" NOT NULL,
CONSTRAINT "PK_EVENT" PRIMARY KEY ("EventID")
) ;
This works fine and I know I have to create an entry in user_sdo_geom_metadata, that works as you would expect with the following:
insert into user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
values ('Event', 'Location',
        sdo_dim_array(sdo_dim_element('X', -180.0, 180.0, 0.005),
                      sdo_dim_element('Y', -90.0, 90.0, 0.005)),
        4326);
This reports success and when I do a select on user_sdo_geom_metadata I see the row. However, when I try to create the spatial index with:
CREATE INDEX "EVINDEX" ON "Event" ("Location") INDEXTYPE IS MDSYS.SPATIAL_INDEX_V2
I get the following error:
SQL Error [29855] [99999]: ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-13203: failed to read USER_SDO_GEOM_METADATA view
ORA-13203: failed to read USER_SDO_GEOM_METADATA view
ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 10
The weird thing is the Index looks like it's been created.
select * from all_indexes where table_name='Event';
Shows the index??? The other odd thing is that when I do a select * on ALL_SDO_GEOM_METADATA, no rows are returned??? I'm connecting as a user with almost every privilege and role, but not as SYSDBA. I can't get my head around this one.
UPDATE
Incredibly, this seems to be a case sensitivity issue. If you change the table and column names to all UPPERCASE it works. It seems my never-ending disappointment in Oracle has a whole new chapter. Going to try to struggle through this somehow, but like most things with Oracle, it's one unrelenting slog to get anything done :(
The documentation says:
The table name cannot contain spaces or mixed-case letters in a quoted string when inserted into the USER_SDO_GEOM_METADATA view, and it cannot be in a quoted string when used in a query (unless it is in all uppercase characters).
and
The column name cannot contain spaces or mixed-case letters in a quoted string when inserted into the USER_SDO_GEOM_METADATA view, and it cannot be in a quoted string when used in a query (unless it is in all uppercase characters).
However, it also says:
All letters in the names are converted to uppercase before the names are stored in geometry metadata views or before the tables are accessed. This conversion also applies to any schema name specified with the table name.
which you can see if you query the user_sdo_geom_metadata view after your insert; the mixed-case names have become uppercase EVENT and LOCATION.
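For example, querying the view right after the unquoted insert above:
select table_name, column_name from user_sdo_geom_metadata;
-- TABLE_NAME   COLUMN_NAME
-- EVENT        LOCATION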
But then:
Note: Letter case conversion does not apply if you use mixed case (“CamelCase”) names enclosed in quotation marks. However, be aware that many experts recommend against using mixed-case names.
And indeed, rather unintuitively, it seems to work if you include the quotes in the user_sdo_geom_metadata insert:
insert into user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
values (
  '"Event"',
  '"Location"',
  sdo_dim_array(sdo_dim_element('X', -180.0, 180.0, 0.005),
                sdo_dim_element('Y', -90.0, 90.0, 0.005)),
  4326
);
db<>fiddle
So it appears that the values from the view are at some point concatenated into a dynamic SQL statement, which would explain some of the behaviour.
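If that is what happens, the quoted insert would work because the stored names carry their own double quotes, so the concatenated statement resolves "Event"."Location" again. A sketch of the check (based on the db&lt;&gt;fiddle above, not on the Oracle sources): after the quoted insert, look at what the view holds and retry the index creation.
select table_name, column_name from user_sdo_geom_metadata;

CREATE INDEX "EVINDEX" ON "Event" ("Location") INDEXTYPE IS MDSYS.SPATIAL_INDEX_V2;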

Special character conversion issue in Datastage

In DataStage, our source system is Oracle and our target system is Netezza. In Oracle the column datatype is varchar, whereas in Netezza it is nvarchar. Most of the characters are Latin and Dutch.
We are getting a character in our table rows that is the exact mirror of the one shown in brackets (`), i.e. it leans towards the right and slants to the left (mostly in the Dutch text). We believe it is the Dutch character that represents an apostrophe. The table consists of millions of records and many values in the table contain this special character. We want to process the value as it is, but we are getting a garbage value. Can anyone tell us which conversion function we should try?
I tried ISO-8859-1 and ISO-8859-15.
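One thing that may help narrow it down (a sketch on the Oracle side; src_table and src_col are placeholders): dump the raw code points of a few affected values to confirm whether the character is U+2019 (RIGHT SINGLE QUOTATION MARK, the curly apostrophe) rather than a plain apostrophe. ISO-8859-1 and ISO-8859-15 cannot represent U+2019, which would explain the garbage; a target encoding that can (for example UTF-8 or Windows-1252) would be needed.
select src_col,
       asciistr(src_col)   as unicode_escaped,
       dump(src_col, 1016) as raw_bytes
from   src_table
where  rownum <= 10;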

national characters in Oracle

We are using Oracle 19c.
These are the settings in nls_database_parameters:
nls_nchar_characterset is UTF8
nls_characterset is WE8ISO8859P15
I have a table with one column of VARCHAR2 and another column of NVARCHAR2.
I try to insert the same non-English letter, for example ş, into both columns and it is not working, but if I try another non-English letter from my language, like ž, then it works in both columns. Other colleagues of mine cannot insert any letter correctly using the same database user. I don't understand this behavior; what defines which national characters you can insert?
We receive a big list of different cities in different languages. What is the best way to insert all of them correctly?
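For what it's worth: WE8ISO8859P15 is ISO-8859-15 (Latin-9), which contains ž but not ş, so a VARCHAR2 column in this database can never hold ş, no matter which client inserts it. What each colleague can insert on top of that also depends on their client encoding (NLS_LANG, the IDE's encoding). For characters outside the database character set, the NVARCHAR2 column plus a client-independent literal is the safer route. A sketch, with a made-up cities table:
-- ş is U+015F; UNISTR builds the character from its code point, so the client
-- encoding and the database character set are taken out of the picture.
insert into cities (name_varchar2, name_nvarchar2)
values ('ž', unistr('\015F'));

select name_varchar2,  dump(name_varchar2, 1016)  as stored_bytes,
       name_nvarchar2, dump(name_nvarchar2, 1016) as stored_nchar_bytes
from   cities;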

How did the unicode characters endup in the database table column?

Recently I came across a unicode character (\u2019) in a database table column while parsing using Python.
Question: What are the reasons that can result in unicode characters showing up in the database table? Is it data entry issue?
Appreciate any input.
When you set up your Oracle database, you choose a character set which will be used for the SQL char datatypes (CHAR, VARCHAR2, etc.).
Suppose you chose your character set and you have a table with a column of VARCHAR2 type. Suddenly you need to store some string with non-ASCII symbols not supported by your database (its chosen character set). You may convert this string into an ASCII string, for example by calling the ASCIISTR function, and store it in your VARCHAR2 column (but it's not a good idea, because many SQL built-in functions don't understand '\u2019'; they treat it as just six characters). That's how Unicode escapes may appear in your table column (ASCIISTR converts non-ASCII symbols into a Unicode representation such as '\2019').
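For example (the escape Oracle produces is \XXXX; the \u2019 form in the question is typically Python's spelling of the same code point):
SELECT ASCIISTR('it’s') FROM dual;
-- it\2019s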
Another option is the special Oracle NCHAR datatypes, which were designed to store Unicode without altering global database settings.
Here is the link to the Oracle documentation: https://docs.oracle.com/cd/B19306_01/server.102/b14225/ch6unicode.htm

Insert Unicode string to DB using Linq

When I try to execute this:
INSERT INTO [DB_NAME].[dbo].[Table]
([Column])
VALUES('some_hebrew_characters')
I get only question marks in the column. If I change it to N'some_hebrew_characters', then it's OK. Why is this happening? How can I translate it to Linq?
How can I make this table treat all data as Unicode by default? My column collation is Hebrew_CS_AI, and the server is SQL Server 2008 R2.
Thanks!
---EDIT----
Something I just noticed:
even if I run this:
SELECT 'some_hebrew_characters'
I'm getting question marks in my results grid.
Did you forget to declare your column as NVARCHAR as well?
Probably your editor's default encoding is not Unicode.
To be sure, save your query as a Unicode file in SQL Server Management Studio and re-run it.
I think if you get the results through Linq, they would be right.
You need to prefix the '' with the letter N.
When inserting a value that contains Unicode characters, you need to do this:
insert into table_name(unicode_field) values (N'会意字')
Without the N prefix, they'll be passed as ASCII characters.
Also, be sure that the column you're inserting into supports Unicode characters, i.e. NCHAR, NVARCHAR, or NTEXT.
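A quick way to see the difference the prefix makes (assuming the default collation's code page cannot represent 会意字, as in the Hebrew_CS_AI setup above):
SELECT '会意字'  AS without_n,   -- varchar literal: squeezed through the code page, comes back as ???
       N'会意字' AS with_n;      -- nvarchar literal: kept as Unicode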
