Does FlameRobin have problems with the insertion of umlauts? - utf-8

I have a table with a field
VALCONTENT BLOB SUB_TYPE TEXT SEGMENT SIZE 80
When I browse the table, right-click on an entry and select "Edit blob", the content is shown.
If I enter "normal" text ("Hello world"), I can click "Save" and things work.
If I use umlauts ("Hällö Wörld"), I get an error message:
IBPP::SQLException, Context: Statement: Execute ("Update MyTable set
foo = ? where ...") Message: isc_dsql_execute2 failed, -303, incompatible
column, malformed string
Am I doing something wrong or is FlameRobin not able to handle UTF8?
I am using Firebird 4.0 64bit, FlameRobin 0.9.3 Unicode x64 (all just downloaded).
Extracting the DDL with "iSQL -o" shows in the first line
/* CREATE DATABASE 'E:\foo.fdb' PAGE_SIZE 16384 DEFAULT CHARACTER SET
UTF8; */

I can reproduce the issue (with blob character set UTF8 and connection character set UTF8), which suggests this is a bug in FlameRobin. I recommend reporting it on https://github.com/mariuz/flamerobin/issues. I'm not sure what the problem is. Updating does seem to work fine when using connection character set WIN1252.
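To confirm the database and column themselves are fine and the failure is on the client side, one quick check (a sketch only; credentials and the WHERE clause are placeholders, the column name is taken from the error message) is to run the same update from Firebird's isql with an explicit connection character set:

-- connect from a shell with an explicit client character set, e.g.:
--   isql -ch UTF8 -user SYSDBA -password ******** E:\foo.fdb
-- then run the statement FlameRobin was generating; the WHERE clause
-- below is a placeholder for the row being edited:
UPDATE MyTable SET foo = 'Hällö Wörld' WHERE ...;

If this succeeds from isql with -ch UTF8 but fails from FlameRobin, the data and DDL are fine and the problem is in how FlameRobin sends the string.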
Consider using a different tool, maybe DBeaver, or IBExpert, etc.

Related

Autocommit ODBC api not working through IBM iAccess to unixODBC to ruby-odbc

I am currently using ODBC to access an IBM AS/400 machine through Rails -> (small AS/400 ODBC adapter) -> odbc_adapter (gem 4.2.4) -> ruby-odbc (gem 0.999991) -> unixODBC (2.3.4, Ubuntu 18.04) -> IBMiAccess (latest). By some miracle this all works pretty well, except that recently we were having problems with strings containing specific characters causing an error to be raised in ruby-odbc.
Retrieving a record with the special character '¬' fails with:
ActiveRecord::StatementInvalid (ArgumentError: negative string size (or size too big): SELECT * from data WHERE id = 4220130.0)
It seems the special character ends up taking two bytes, and whatever conversions are happening don't handle this correctly.
Strings without special characters are being returned with encoding Encoding:ASCII-8BIT.
There is a utf8 version of ruby-odbc, which I was able to load by requiring it in our iSeries adapter, and then requiring odbc_adapter:
require 'odbc_utf8' # force odbc_adapter to use the utf8 version
require 'odbc_adapter'
This allows the UTF8 version of ruby-odbc to occupy the ODBC module name, which odbc_adapter will just use. However, there was a problem:
odbc_adapter calls .get_info for a number of fields on raw_connection (ruby-odbc), and these strings come back wrong; for example the string "DB2/400 SQL", which comes from ODBC::SQL_DBMS_NAME, looks like "D\x00B\x002\x00/\x004\x000\x000\x00 \x00S\x00Q\x00L\x00", with an encoding of Encoding:ASCII-8BIT. odbc_adapter uses a regex to map the DBMS to our adapter, which doesn't match: /DB2\/400 SQL/ =~ (this_string) => nil.
I'm not super familiar with string encodings, but I was able to hack in a .gsub("\0", "") here to fix this detection. After this, I can return records with special characters in their strings. They are returned without error, with visible special characters in Encoding:UTF-8.
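For what it's worth, those bytes look like UTF-16LE that has been mislabeled as ASCII-8BIT, so an alternative to stripping the NULs would be to transcode the string; a rough sketch (not tested against odbc_adapter's internals):

# Sketch: reinterpret the mislabeled get_info bytes as UTF-16LE, then
# convert to UTF-8 so the adapter's DBMS-detection regex can match.
raw  = "D\x00B\x002\x00/\x004\x000\x000\x00 \x00S\x00Q\x00L\x00"
name = raw.force_encoding(Encoding::UTF_16LE).encode(Encoding::UTF_8)
puts name.inspect              # => "DB2/400 SQL"
puts(/DB2\/400 SQL/ =~ name)   # => 0, i.e. the regex now matches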
Of course, now querying on special characters fails:
ActiveRecord::StatementInvalid (ODBC::Error: HY001 (30027) [IBM][System i Access ODBC Driver]Memory allocation error.: SELECT * from data WHERE (mystring like '%¬%'))
but I'm not too concerned with that. The problem now is that the UTF8 version of ruby-odbc seems to set the ODBC version to 3, whereas the non-UTF8 version reported 2:
Base.connection.raw_connection.odbc_version => 3
And this seems to prevent autocommit from working (works on version 2):
Base.connection.raw_connection.autocommit => true
Base.connection.raw_connection.autocommit = false
ODBC::Error (IM001 (0) [unixODBC][Driver Manager]Driver does not support this function)
This function is used to start/end transactions in the odbc_adapter, and seems to be a standard feature of odbc:
https://github.com/localytics/odbc_adapter/blob/master/lib/odbc_adapter/database_statements.rb#L51
I poked around in the IBMiAccess documentation, and found something about transaction levels and a new "trueautocommit" option, but I can't seem to figure out if this trueautocommit replaces autocommit, or even if autocommit is no longer supported in this way.
https://www.ibm.com/support/pages/ibm-i-access-odbc-commit-mode-data-source-setting-isolation-level-and-autocommit
Of course I have no idea how to set this new 'trueautocommit' connection setting via the ruby-odbc gem. It does support .set_option on the raw_connection, so I can call something like .set_option(ODBC::SQL_AUTOCOMMIT, false), which fails in exactly the same way. ODBC::SQL_AUTOCOMMIT is just a constant for 102, which I've found referenced in a few places regarding ODBC, so if I could find the constant for TRUEAUTOCOMMIT, I might be able to set it this way, but I can't find any documentation for it.
Any advice for getting .autocommit working in this configuration?
Edit: Apparently you can use a DSN for odbc, so I've also tried creating one in /etc/odbc.ini, along with the option for "TrueAutoCommit = 1" but this hasn't changed anything as far as getting .autocommit to work.
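For reference, such a DSN entry would look roughly like this (a sketch only; the driver name has to match whatever the IBM i Access package registered in odbcinst.ini, and the host is a placeholder):

# /etc/odbc.ini (sketch; driver name and host are placeholders)
[AS400]
Description    = IBM i via IBM i Access ODBC
Driver         = IBM i Access ODBC Driver
System         = my.ibmi.example.com
TrueAutoCommit = 1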

Password parameters Oracle Apex

I need to create a validation on my password item that checks whether the password meets the minimum requirements for a strong password. I usually require:
a minimum of 8 characters (at least 1 letter, 1 number and 1 special character).
I tried a plug-in that does this check for me, but I am having trouble with it: I use a computation to apply an MD5 hash to the entered password, so the raw password is never stored in the database, but when I submit the page and the computation hashes the value, the plug-in runs its validation again and no longer recognizes the password, in particular the special characters.
This is the plug-in I use:
(http://apex-plugin.com/oracle-apex-plugins/item-plugin/password-item_204.html)
After that I found this strength bar, which scores the password and changes a display item based on its strength using JavaScript and HTML. In both cases I tried to extract the logic and apply it to a validation (PL/SQL returning an error text), but I am not making any progress. Is there another way I can do this?
https://patelkartik.blogspot.com/2010/10/password-strength-meter-in-apex.html (site of the strength bar)
Thanks for reading this far.
My APEX version is 5.1 and I am running Oracle Database 11g XE on Windows.
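One plug-in-free way to do the check is a PL/SQL validation returning error text on the password item; a rough sketch (P1_PASSWORD is a placeholder for the real item name, and the validation needs to see the raw value, so the MD5 computation may have to run only after validations succeed):

-- Sketch of a PL/SQL validation returning error text.
-- :P1_PASSWORD is a placeholder item name; the check must run against
-- the raw password, not the MD5-hashed value.
BEGIN
  IF LENGTH(:P1_PASSWORD) >= 8
     AND REGEXP_LIKE(:P1_PASSWORD, '[[:alpha:]]')
     AND REGEXP_LIKE(:P1_PASSWORD, '[[:digit:]]')
     AND REGEXP_LIKE(:P1_PASSWORD, '[^[:alnum:]]') THEN
    RETURN NULL;  -- strong enough, no error
  END IF;
  RETURN 'Password must have at least 8 characters, including a letter, a number and a special character.';
END;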

SAS session encoding LATIN1 failing for unicode characters

My SAS session details are:-
ENCODING=LATIN1
LOCALE=EN_US
I am reading encrypted customer names and decrypting them using Java code that is
called from the SAS code via a javaobj.
data outlib.X1 ( ENCODING = 'asciiany' );
  set inlib.INP1 ( ENCODING = 'asciiany' ) end=eof;
  length decryptedvalue $100;
  declare javaobj jObj("<jarname>");
  jObj.callstringmethod("<method>", FIRST_NAME, decryptedvalue);
run;
The jObj.callstringmethod returns the decrypted first_name value in the string decryptedvalue. I do a proc export at the end of my SAS code and store all the decrypted names as csv files.
In the last run some of the names have special characters e.g. RÉAL.
This is causing SAS execution to fail with the following error:
ERROR: Transcoding failure at line 30 column 2.
ERROR: DATA STEP Component Object failure. Aborted during the EXECUTION phase.
Is there some way to make the SAS session (LATIN1) accept these Unicode characters? Can I set
the encoding of the decryptedvalue variable? I don't want to run my entire SAS session in Unicode using sas_u8; I only want to accept these characters.
Even reading these problematic values as blank would be OK.
I have already tried the following:
Set inencoding= utf8 for the libname
Make ENCODING = 'asciiany' for the input dataset.
Any inputs would be useful.
Thanks in advance!
SAS Tech Support suggested this:
Adding export EASYSOFT_UNICODE=YES in the sasenv_local file.
We are now able to run the entire SAS code (including SQL queries) in the SAS u8 session.
Thanks to all for your support.
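For reference, that change is a one-line addition to the SAS environment script; the exact location depends on the installation (!SASROOT/bin/sasenv_local is a typical Unix location):

# sasenv_local (location varies by install; !SASROOT/bin/sasenv_local is typical)
export EASYSOFT_UNICODE=YES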

SQL native client data type compatibility - incompatible with SQLOLEDB

We have run into a problem with our legacy application when switching to SQL Native Client (SQLNCLI) as the provider for ADO.
Our original connection string was:
Provider=SQLOLEDB; Server=host; Database=bob; Integrated Security=SSPI;
we changed this to:
Provider=SQLNCLI11; Server=host; Database=bob; Integrated Security=SSPI; DataTypeCompatibility=80;
What we've found is that when calling a stored procedure, using a parameter of adDBTimeStamp, Native Client appears to treat the timestamp as a smalldatetime, rather than a datetime. This is causing us problems as we use 31 Dec 9999 as a "top-end" date in some comparisons, and Native Client fails with an "Invalid date format" error where SQLOLEDB had no issues.
Now it looks like we may just be able to change from adDBTimeStamp to adDate as the datatype when creating the parameter, however I was wondering if there was something we were missing in the connection string before we go ahead and make the code change.
VBScript code to reproduce below. For the avoidance of doubt, date format is UK (dd/mm/yyyy) before someone suggests we should be using 12/31/9999 :-) But also to confirm, the CDate doesn't fail.
Set db = CreateObject("ADODB.Command")
' If Provider=SQLOLEDB then there is no error.
db.ActiveConnection = "Provider=SQLNCLI11; Server=host; Database=bob; Integrated Security=SSPI; DataTypeCompatibility=80;"
db.CommandText = "usp_FetchData"
db.CommandType = &H0004
' 135 is adDBTimeStamp
db.Parameters.Append db.CreateParameter("#screenDate", 135, 1, 20, CDate("31/12/9999"))
Set rs = CreateObject("ADODB.RecordSet")
' Then this line fails with "Invalid date format" from SQLNCLI
rs.Open db
WScript.Echo rs.RecordCount
Winding the datetime back to 2078 (within the date range of smalldatetime) makes the error go away.
As mentioned, if a non-code-change fix can be found, that's what we'd prefer before we go and have to change adDBTimeStamp to adDate. I'd have expected DataTypeCompatibility=80 to behave like SQLOLEDB; unfortunately my Google-fu has failed me in finding out exactly what type mapping SQLNCLI uses.
A solution may have finally been found: via the MSDN page Data Type Support for OLE DB Date and Time Improvements, there is this snippet towards the end:
Data Type Mapping in ITableDefinition::CreateTable
The following type mapping is used with DBCOLUMNDESC structures used
by ITableDefinition::CreateTable:
[...table of conversions...]
When an application specifies DBTYPE_DBTIMESTAMP in wType, it can
override the mapping to datetime2 by supplying a type name in
pwszTypeName. If datetime is specified, bScale must be 3. If
smalldatetime is specified, bScale must be 0. If bScale is not
consistent with wType and pwszTypeName, DB_E_BADSCALE is returned.
In testing, it appears this need to set the scale applies to parameters for SELECTs and other commands as well as to CreateTable. Therefore, if we change the script above to:
Set param = db.CreateParameter("#screenDate", 135, 1, 20, CDate("31/12/9999"))
param.NumericScale = 3
db.Parameters.Append param
...then the error goes away and the stored procedure is executed. We're in the early stages of testing, however welcome feedback from others if this has caused issues.

MSSQL-Server/ruby-gem sequel: How to read UTF-8 values?

I use the ruby-gem sequel to read utf-8-encoded data from a MSSQL-Server table.
The fields of the table are defined as nvarchar, and they look correct in SQL Server Management Studio (Cyrillic looks Cyrillic, Chinese looks Chinese).
I connect my database with
db = Sequel.connect(
  :adapter  => 'ado',
  :host     => connectiondata[:server],
  :database => connectiondata[:dsn],
  # Login via SSO
)
sel = db[:TEXTE].filter(:language => 'EN')
sel.each { |data|
  data.each { |key, val|
    puts "#{val.encoding}: #{val.inspect}"  # -> CP850: ....
    puts val.encode('utf-8')
  }
}
This works fine for English; German also returns a usable result:
CP850: "(2 St\x81ck) f\x81r
(2 Stück) für ...
But the result has been converted to CP850; it is not the original UTF-8.
Cyrillic languages (I tested with Bulgarian) and Chinese produce only '?'
(reasonable, because CP850 doesn't include Chinese or Bulgarian characters).
I also connected via an ODBC connection:
db = Sequel.odbc(odbckey,
  :db_type => 'mssql',     # necessary
  # :encoding => 'utf-8',  # only the MySQL adapter supports this
)
The result is ASCII-8BIT; I have to convert the data with force_encoding to CP1252 (not CP850!).
But Cyrillic and Chinese are still not possible.
What I tried already:
The MySQL adapter seems to have an encoding option; with MSSQL I saw no effect.
I did similar tests with SQLite and Sequel and had no problem with Unicode.
I installed SQLNCLI10.dll and used it as the provider, but I get an "Invalid connection string attribute" error (same with sqlncli).
So my closing question: How can I read UTF-8 data in MS-SQL via ruby and sequel?
My environment:
Client:
Windows 7
Ruby 1.9.2
sequel-3.33.0
Database:
SQL Server 2005
Database has collation Latin1_General_CI_AS
After preparing my question I found a solution. I will post it as an answer.
But I still hope there is a better way.
If you can avoid it, you really don't want to use the ado adapter (it's OK for read-only workloads, but I wouldn't recommend it for other workloads). I would try the tinytds adapter, as I believe that will handle encodings properly, and it defaults to UTF-8.
Sequel itself does not do any transcoding, it leaves the handling of encodings to the lower level driver.
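A minimal sketch of that route (untested here; host, database and credentials are placeholders, the table and filter are the ones from the question):

require 'sequel'

# Sketch: connect through the tinytds adapter (needs the tiny_tds gem /
# FreeTDS) instead of ado. Host, database and credentials are placeholders.
db = Sequel.connect(
  :adapter  => 'tinytds',
  :host     => 'myserver',
  :database => 'mydb',
  :user     => 'myuser',
  :password => 'secret'
)

db[:TEXTE].filter(:language => 'EN').each do |data|
  data.each do |key, val|
    puts "#{val.encoding}: #{val.inspect}" if val.is_a?(String)  # expect UTF-8
  end
end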
After preparing my question I found a solution on my own.
When I add a
Encoding.default_external='utf-8'
to my code, I get the correct results.
As a side effect, each File.open now also expects UTF-8-encoded files (this can be overridden by additional parameters to File.open).
As an alternative, this also works:
Encoding.default_internal='utf-8'
As I mentioned in my question, I don't like changing global settings just to change the behaviour of one interface.
So I still hope for a better solution.
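One narrower thing that might be worth trying with the ado adapter (an untested sketch): ADO is driven through WIN32OLE, and WIN32OLE's string conversion is controlled by its own codepage setting, so switching just that to UTF-8 may avoid touching the global Encoding defaults.

require 'win32ole'

# Sketch: change only the OLE string conversion, not Encoding.default_external.
WIN32OLE.codepage = WIN32OLE::CP_UTF8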
