Wrong chars using PDO ODBC connection to DB2 on Windows

I’m setting up a new server and updating some old scripts (PHP 5) to PHP 7.
I'm connecting to a DB2 database via PDO ODBC, reading a CHAR field with CCSID 870, and saving it into a MySQL MEDIUMTEXT field in a table with CHARSET=utf8. But I get wrong characters in the MySQL database and even in the PHP console.
I tried switching to odbc_connect() like the old script, but the results were the same.
Even when saving the field to a txt file, the result is the same.
utf8_encode() and utf8_decode() don't help.
Here is an example of the code:
$as = new PDO("odbc:MYODBC",$user, $psw);
$as->setAttribute(PDO::ATTR_DEFAULT_FETCH_MODE, PDO::FETCH_ASSOC);
$res = $as->query("SELECT FIELD FROM MYTABLE");
$rows = $res->fetchAll();
$mysql = new PDO("mysql:host=srvip;dbname=mydbname;charset=utf8",$user, $psw);
$mysql->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$mysql->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
$mysql->setAttribute(PDO::ATTR_DEFAULT_FETCH_MODE, PDO::FETCH_ASSOC);
$ins = $mysql->prepare("INSERT INTO my_MySQL_TABLE (FIELD) VALUES (?)");
$ins->execute(array(trim($rows[0]["FIELD"])));
I expect the result in MySQL to be Wąż, but the actual output is W?? or WÈØ.
Edit on 2019-06-06
| Source | String | HEX |
|------------------|--------|------------|
| DB2 | Wąż | E6A0B2 |
| MySQL | W?? | 573F3F |
| MySQL C/P Insert | Wąż | 57C485C5BC |
The last row is a simple copy-paste into MySQL using a GUI client.
Edit on 2019-06-07
C:\Users\ME\>echo %DB2CODEPAGE%
1208
C:\Users\ME\>acs.exe /PLUGIN=cldownload /system=MYSYS /sql="SELECT FIELD as char,HEX(FIELD) as hex FROM TABLE" /display=1
CHAR HEX
W?? E6A0B2
If I use /clientfile=test.txt instead of /display=1, Notepad++ shows the file as UTF-8.
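If the driver really is handing back the raw CCSID 870 (EBCDIC Latin-2) bytes, as the matching HEX values above suggest, one possible workaround is to convert the bytes in PHP before the INSERT. A minimal sketch, assuming the server's iconv build knows the IBM870 alias (not verified on this setup):
// Hypothetical workaround: convert raw CCSID 870 (EBCDIC Latin-2) bytes to UTF-8.
// Assumes the value arrives untranslated and that iconv supports the IBM870 alias.
$raw = trim($rows[0]["FIELD"]);
$utf8 = iconv('IBM870', 'UTF-8//TRANSLIT', $raw);
if ($utf8 === false) {
    throw new RuntimeException('iconv could not convert from IBM870');
}
$ins->execute(array($utf8));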

Related

Laravel connection with postgres doesn't work properly

I'm just setting up a new Laravel installation with a PostgreSQL server, using a role with NO superuser privileges (test). I'm using Manjaro to test; I've installed PHP 8 and enabled/installed php-pgsql, with the pgsql and pdo_pgsql extensions uncommented in /etc/php/php.ini.
Laravel does seem to detect the table but it can't run migrations. These are the commands I'm using:
php artisan migrate:install (this one works)
php artisan migrate:status (this one doesn't work, it can't find the migrations table)
Also this is my .env (the relevant piece):
DB_CONNECTION=pgsql
DB_HOST=127.0.0.1
DB_PORT=5432
DB_DATABASE=test
DB_USERNAME=test
DB_PASSWORD=
This is what I get from the test user in psql:
test=> \l
postgres | postgres | UTF8 | es_CL.utf8 | es_CL.utf8 |
template0 | postgres | UTF8 | es_CL.utf8 | es_CL.utf8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | es_CL.utf8 | es_CL.utf8 | =c/postgres +
| | | | | postgres=CTc/postgres
test | test | UTF8 | es_CL.utf8 | es_CL.utf8 |
For the migrate:status command, Laravel calls this method of the internal class PostgresGrammar in the Illuminate\Database\Schema\Grammars namespace:
/**
 * Compile the query to determine if a table exists.
 *
 * @return string
 */
public function compileTableExists()
{
    return "select * from information_schema.tables where table_schema = ? and table_name = ? and table_type = 'BASE TABLE'";
}
As you can see, it selects records from information_schema, which needs proper permissions to work. The user (test in your case) needs to be the owner of the schema (also test in your case) to get any rows from information_schema.tables related to that schema.
Run this command to change the owner of the schema:
alter schema test owner to test;
Or the user needs to be a member of a group that owns the schema.
Note:
A USAGE grant is not sufficient. The user does not need to be a superuser. The user can own a table in the schema, but that is not sufficient either.
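To verify the fix, the same query Laravel runs can be executed by hand as the test user; once the ownership is changed (and the migrations table exists), it should return a row. Schema and table names below follow the question's setup, so adjust them if your configured schema differs:
-- run as the test user; this mirrors the query from compileTableExists()
select table_schema, table_name
from information_schema.tables
where table_schema = 'test'
  and table_name = 'migrations'
  and table_type = 'BASE TABLE';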
Refs:
What permissions are required to return rows from information_schema.schemata?
PostgresGrammar Implementation

Snowflake Not Accepting File Format In Bulk Load

I am creating some new ETL tasks for our data pipeline. We currently have several hundred that load data from various S3 buckets.
So it would go like this:
create or replace stage ETL_STAGE url='s3://bucketname/'
file_format = csv_etl;
create or replace file format csv_etl
type = 'CSV'
field_delimiter = ','
skip_header = 1
FIELD_OPTIONALLY_ENCLOSED_BY='"';
copy into db.schema.table
from @ETL_STAGE/Usage
pattern='/.*[.]csv'
on_error = 'continue';
However, whenever I use this, the file format is not only not escaping the enclosed double quotes, it is not even skipping the header, so I get this:
Pretty perplexed by this as I am 99% certain the formatting options are correct here.
+-------------------+----------+----------------+---------------------+-------------------+
| "Usage Task Name" | "Value" | "etl_uuid" | "etl_deviceServer" | "etl_timestamp" |
| "taskname" | "0" | "adfasdfasdf" | "hostserverip" | "2020-04-06 2124" |
+-------------------+----------+----------------+---------------------+-------------------+
Run the below command, including the file_format option. This applies the file format while loading the file:
copy into db.schema.table
from @ETL_STAGE/Usage
pattern='/.*[.]csv'
on_error = 'continue'
file_format = csv_etl;
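For completeness, a sketch of the whole sequence with the file format created before the stage that references it, and the format also applied in the COPY. Object names are taken from the question; the fully parenthesized FORMAT_NAME syntax is used here:
-- define the file format first, then the stage that references it
create or replace file format csv_etl
  type = 'CSV'
  field_delimiter = ','
  skip_header = 1
  field_optionally_enclosed_by = '"';

create or replace stage ETL_STAGE
  url = 's3://bucketname/'
  file_format = (format_name = 'csv_etl');

-- apply the format explicitly in the COPY as well
copy into db.schema.table
  from @ETL_STAGE/Usage
  pattern = '/.*[.]csv'
  on_error = 'continue'
  file_format = (format_name = 'csv_etl');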

PostgreSQL on Windows 10. Locale is broken

I have installed PostgreSQL version 11 on a Windows 10 PC twice. The first time it was from the official installer for Windows, the second time it was a set of packages for Cygwin.
The problem is, I can't get any database locale settings to work correctly. In both cases the Postgres cluster was initialized with the initdb command.
With the Cygwin install, the command had -E UTF8, --locale=uk_UA.utf8 and the same for collation and ctype. Cygwin seemed to recognize the command and the cluster was created. Then I created a database with the appropriate settings and some tables in it.
The output of a simple query was plainly wrong for my locale: a $ sign instead of грн for monetary, . instead of , for fractions, and so on. The official installer gave the same results, with the locale set up and displayed correctly.
The same initdb and create database give me correct results on Linux.
initdb
initdb --pgdata=... \
  --locale=uk_UA.utf8 \
  --lc-collate=... \
  --lc-ctype=... \
  --lc-monetary=... \
  --lc-numeric=... \
  --lc-time=... \
  --encoding=UTF-8
Here, basically I’ve repeated the uk_UA.utf8 locale.
I also tried with the "uk-x-icu" locale, since the Windows version seems to be compiled with the ICU library.
The queries
create database db
template = template0
encoding = 'UTF8'
lc_collate = 'uk_UA.utf8'
... = 'uk_UA.utf8'
lc_ctype = 'uk_UA.utf8'
connection limit = -1
is_template = false
;
create table c_types (
id serial,
c_date date,
c_text text,
c_time time,
c_timestamp timestamp,
c_money money,
c_float float
);
insert into c_types(c_date,c_text,c_time,c_timestamp,c_money,c_float) values
('2019-09-01', 'text0', '00:00:01', timestamp '2019-09-01 20:00:00', 1000.0001, 1000.0001),
('2019-09-01', 'text1', '00:00:02', timestamp '2019-09-01 21:00:00', 2000.0001, 2000.0001)
;
select * from c_types;
Correct output (Linux)
# id | c_date | c_text | c_time | c_timestamp | c_money | c_float
#----+------------+--------+----------+---------------------+---------------+-----------
# 1 | 2019-09-01 | text0 | 00:00:01 | 2019-09-01 20:00:00 | 1 000,00грн. | 1000.0001
# 2 | 2019-09-01 | text1 | 00:00:02 | 2019-09-01 21:00:00 | 2 000,00грн. | 2000.0001
This post shows that lc_numeric does not influence the separator in plain numeric output:
https://stackoverflow.com/a/41759744/8339821
The influenced functions are to_number, to_char, etc.:
https://stackoverflow.com/a/8935028/8339821
The question is, how can I set up Postgres for my locale?
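For example, lc_monetary and lc_numeric can be inspected and changed per session, which makes it easy to see which formatting actually applies; the Windows-style locale name below ('Ukrainian_Ukraine.1251') is only an assumption about how the locale is spelled on that platform:
-- check what the session is currently using
SHOW lc_monetary;
SHOW lc_numeric;

-- both settings can be changed per session
SET lc_monetary = 'uk_UA.utf8';   -- on Windows, something like 'Ukrainian_Ukraine.1251' (assumption)
SET lc_numeric  = 'uk_UA.utf8';

-- money output follows lc_monetary; to_char grouping/decimal marks follow lc_numeric
SELECT 1000.0001::money;
SELECT to_char(1000.0001, '9G999D9999');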

use smo to clone azure SQL database?

I'm writing a program to test update scripts for Azure SQL.
The idea is to
- first clone a database (or fill a clone with the source schema and content)
- then run the update script on the clone
Locally I have this working, but for Azure I have the problem that I don't see any file names. If I restore one database to another on the same Azure "server", don't I have to rename the data files during the restore too?
For local restore I do this:
restore.Devices.AddDevice(settings.BackupFileName, DeviceType.File);
restore.RelocateFiles.Add(new RelocateFile("<db>", Path.Combine(settings.DataFileDirectory, settings.TestDatabaseName + ".mdf")));
restore.RelocateFiles.Add(new RelocateFile("<db>_log", Path.Combine(settings.DataFileDirectory, settings.TestDatabaseName + "_1.ldf")));
restore.SqlRestore(srv);
Is something similar required for cloning a database on azure?
Lots of Greetings!
Volker
You can create a database as a copy of [source]:
CREATE DATABASE database_name [ COLLATE collation_name ]
| AS COPY OF [source_server_name].source_database_name
{
(<edition_options> [, ...n])
}
<edition_options> ::=
{
MAXSIZE = { 100 MB | 500 MB | 1 | 5 | 10 | 20 | 30 … 150…500 } GB
| EDITION = { 'web' | 'business' | 'basic' | 'standard' | 'premium' }
| SERVICE_OBJECTIVE =
{ 'basic' | 'S0' | 'S1' | 'S2' | 'S3'
| 'P1' | 'P2' | 'P3' | 'P4'| 'P6' | 'P11'
| { ELASTIC_POOL(name = <elastic_pool_name>) } }
}
[;]
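For example, a concrete copy on the same logical server, or from another server, could look like this (database and server names are placeholders):
-- run against the master database of the destination server
CREATE DATABASE MyDb_test AS COPY OF MyDb;

-- copying from another Azure SQL logical server
CREATE DATABASE MyDb_test AS COPY OF source_server.MyDb;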

While importing mysqldump file ERROR 1064 (42000) near ' ■/ ' at line 1

Cannot import the dump file below, created by mysqldump.exe on the Windows command line.
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `attachment_types` (
`ID` int(11) NOT NULL AUTO_INCREMENT,
`DESCRIPTION` varchar(50) DEFAULT NULL,
`COMMENTS` varchar(256) DEFAULT NULL,
PRIMARY KEY (`ID`),
UNIQUE KEY `UK_ATTACHMENT_TYPES___DESCRIPTION` (`DESCRIPTION`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=latin1;
While importing the file on the command line:
mysql --user=root --password=root < mysqldumpfile.sql
it throws the error
ERROR 1064 (42000) near ' ■/ ' at line 1
Somebody please help me.
Finally I got a solution.
We need two options:
--default-character-set=utf8: this ensures UTF-8 is used for each field.
--result-file=file.sql: this option prevents the dump data from passing through the operating system, which likely does not use UTF-8; instead it writes the dump data directly to the specified file.
Using these new options your dump command would look something like this:
mysqldump -u root -p --default-character-set=utf8 --result-file=database1.backup.sql database1
While importing you can optionally use:
mysql --user=root --password=root --default-character-set=utf8 < database1.backup.sql
Source: http://nathan.rambeck.org/blog/1-preventing-encoding-issues-mysqldump
It seems that the input file (mysqldumpfile.sql) was created in UTF-8 encoding, so the first 3 bytes "at line 1", invisible to you in the .sql file, are the byte order mark (BOM) sequence (EF BB BF).
So try changing the default character set to UTF-8:
mysql --user=root --password=root --default-character-set=utf8 < mysqldumpfile.sql
If you need to import into a specific database, this is the import command required on Windows:
mysql --user=root --password=root --default-character-set=utf8 database2 < database1.backup.sql
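After the import, a quick sanity check from the mysql client (not from the original post) can confirm the session and table character sets:
-- confirm the client/connection/database character sets
SHOW VARIABLES LIKE 'character_set%';

-- confirm the table was created with the expected charset
SHOW CREATE TABLE attachment_types;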
