PostgreSQL on Windows 10: locale is broken

I have installed PostgreSQL 11 on a Windows 10 PC twice: first from the official Windows installer, and then as a set of Cygwin packages.
The problem is that I can't get any database locale settings to work correctly. In both cases the Postgres cluster was initialized with the initdb command.
With the Cygwin install, the command had -E UTF8, --locale=uk_UA.utf8 and the same for collation and ctype. Cygwin seemed to recognize the command and the cluster was created. Then I created a database with the appropriate settings and some tables in it.
The output of a simple query was plainly wrong for my locale: a $ sign instead of грн for monetary values, . instead of , for the decimal separator, and so on. The official installer gave the same results, with the locale set up and displayed correctly.
The same initdb and create database give me correct results on Linux.
initdb
initdb --pgdata=... \
  --locale=uk_UA.utf8 \
  --lc-collate=... \
  --lc-ctype=... \
  --lc-monetary=... \
  --lc-numeric=... \
  --lc-time=... \
  --encoding=UTF-8
Here I have basically repeated the uk_UA.utf8 locale for every category.
I also tried the "uk-x-icu" locale, since the Windows version seems to be compiled with the ICU library.
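Possibly relevant (I have not verified this, so it is only a guess based on how Windows names libc locales): Windows does not accept glibc-style names like uk_UA.utf8; its C runtime expects names of the form Language_Country.Codepage, and PostgreSQL on Windows allows a UTF-8 database even with such a locale. So an initdb along these lines might behave differently:
initdb --pgdata=... ^
  --encoding=UTF-8 ^
  --locale="Ukrainian_Ukraine.1251"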
The queries
create database db
template = template0
encoding = 'UTF8'
lc_collate = 'uk_UA.utf8'
... = 'uk_UA.utf8'
lc_ctype = 'uk_UA.utf8'
connection_limit = -1
is_template = false
;
create table c_types (
id serial,
c_date date,
c_text text,
c_time time,
c_timestamp timestamp,
c_money money,
c_float float
);
insert into c_types(c_date,c_text,c_time,c_timestamp,c_money,c_float) values
('2019-09-01', 'text0', '00:00:01', timestamp '2019-09-01 20:00:00', 1000.0001, 1000.0001),
('2019-09-01', 'text1', '00:00:02', timestamp '2019-09-01 21:00:00', 2000.0001, 2000.0001)
;
select * from c_types;
Correct output (Linux):
# id | c_date | c_text | c_time | c_timestamp | c_money | c_float
#----+------------+--------+----------+---------------------+---------------+-----------
# 1 | 2019-09-01 | text0 | 00:00:01 | 2019-09-01 20:00:00 | 1 000,00грн. | 1000.0001
# 2 | 2019-09-01 | text1 | 00:00:02 | 2019-09-01 21:00:00 | 2 000,00грн. | 2000.0001
This post shows that lc_numeric does not influence the separator in plain numeric output:
https://stackoverflow.com/a/41759744/8339821
The functions it does influence are to_number, to_char etc.:
https://stackoverflow.com/a/8935028/8339821
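To illustrate the difference, a minimal sketch (the SET calls and the format string are mine, not from the linked answers):
set lc_numeric = 'uk_UA.utf8';   -- session level; the name must be valid for the OS
set lc_monetary = 'uk_UA.utf8';
select 1000.0001;                          -- plain numeric output: always 1000.0001
select 1000.0001::money;                   -- rendered according to lc_monetary
select to_char(1000.0001, '9G999D9999');   -- G and D pick up the lc_numeric separators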
The question is, how can I set up Postgres for my locale?

Related

Laravel connection with postgres doesn't work properly

I'm setting up a new Laravel installation with a PostgreSQL server, using a role with NO superuser privileges (test). I'm using Manjaro to test; I've installed PHP 8 and enabled/installed php-pgsql, with the pgsql and pdo_pgsql extensions uncommented in /etc/php/php.ini.
Laravel does seem to detect the table, but it can't run migrations. These are the commands I'm using:
php artisan migrate:install (this one works)
php artisan migrate:status (this one doesn't work; it can't find the migrations table)
Also this is my .env (the relevant piece):
DB_CONNECTION=pgsql
DB_HOST=127.0.0.1
DB_PORT=5432
DB_DATABASE=test
DB_USERNAME=test
DB_PASSWORD=
This is what I get from the test user within laravel:
test=> \l
                                List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | es_CL.utf8 | es_CL.utf8 |
 template0 | postgres | UTF8     | es_CL.utf8 | es_CL.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | es_CL.utf8 | es_CL.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 test      | test     | UTF8     | es_CL.utf8 | es_CL.utf8 |
For the migrate:status command, Laravel calls this method of the internal class PostgresGrammar in the namespace Illuminate\Database\Schema\Grammars.
/**
 * Compile the query to determine if a table exists.
 *
 * @return string
 */
public function compileTableExists()
{
    return "select * from information_schema.tables where table_schema = ? and table_name = ? and table_type = 'BASE TABLE'";
}
As you can see, it selects records from information_schema, which needs the proper permissions to return anything. The user, in your case test, needs to be the owner of the schema, which is also test in your case, to get any rows from information_schema.tables related to your schema.
Run this command to change the owner of the schema:
alter schema test owner to test;
Or the user needs to be a member of a group that owns the schema.
Note:
A USAGE grant is not sufficient. The user does not need to be a superuser. The user can own a table in the schema, but that is not sufficient either.
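To verify the owner before and after the change, you can query the system catalog (a quick sketch; the schema name test is taken from your setup):
select nspname as schema_name, nspowner::regrole as owner
from pg_namespace
where nspname = 'test';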
Refs:
What permissions are required to return rows from information_schema.schemata?
PostgresGrammar Implementation

Snowflake Not Accepting File Format In Bulk Load

I am creating some new ETL tasks for our data pipeline. We currently have several hundred tasks loading data from various S3 buckets.
So it would go like this:
create or replace stage ETL_STAGE url='s3://bucketname/'
  file_format = csv_etl;

create or replace file format csv_etl
  type = 'CSV'
  field_delimiter = ','
  skip_header = 1
  field_optionally_enclosed_by = '"';

copy into db.schema.table
  from @ETL_STAGE/Usage
  pattern='/.*[.]csv'
  on_error = 'continue';
However, whenever I use this, the file format is not only not escaping the enclosed double quotes, it is not even skipping the header, so I get this:
Pretty perplexed by this, as I am 99% certain the formatting options are correct here.
+-------------------+----------+----------------+---------------------+-------------------+
| "Usage Task Name" | "Value" | "etl_uuid" | "etl_deviceServer" | "etl_timestamp" |
| "taskname" | "0" | "adfasdfasdf" | "hostserverip" | "2020-04-06 2124" |
+-------------------+----------+----------------+---------------------+-------------------+
Run the command below, including the file_format option; this applies the file format while loading the file:
copy into db.schema.table
  from @ETL_STAGE/Usage
  pattern='/.*[.]csv'
  on_error = 'continue'
  file_format = csv_etl;
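Alternatively (a sketch using Snowflake's FORMAT_NAME binding, not something from your original statements), you can attach the named file format to the stage itself, so every COPY from that stage inherits it; note the format must exist before the stage references it:
create or replace file format csv_etl
  type = 'CSV'
  field_delimiter = ','
  skip_header = 1
  field_optionally_enclosed_by = '"';

create or replace stage ETL_STAGE
  url = 's3://bucketname/'
  file_format = (format_name = 'csv_etl');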

Wrong chars using PDO ODBC connection to DB2 on Windows

I'm setting up a new server and updating some old scripts (PHP 5+) to PHP 7.
I'm connecting to a DB2 database via PDO ODBC, reading a CHAR field with CCSID 870, and saving it to a MySQL mediumtext field in a table with CHARSET=utf8. But I get wrong characters in the MySQL database and even in the PHP console.
I tried to switch to odbc_connect() like the old script, but the results were the same.
Even saving the field to a txt file, the result is the same.
utf8_encode & utf8_decode don't help.
Here is an example of the code:
// Read the CHAR field from DB2 via PDO ODBC
$as = new PDO("odbc:MYODBC", $user, $psw);
$as->setAttribute(PDO::ATTR_DEFAULT_FETCH_MODE, PDO::FETCH_ASSOC);
$res = $as->query("SELECT FIELD FROM MYTABLE");
$rows = $res->fetchAll();

// Write it to MySQL over a utf8 connection
$mysql = new PDO("mysql:host=srvip;dbname=mydbname;charset=utf8", $user, $psw);
$mysql->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$mysql->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
$mysql->setAttribute(PDO::ATTR_DEFAULT_FETCH_MODE, PDO::FETCH_ASSOC);
$ins = $mysql->prepare("INSERT INTO my_MySQL_TABLE (FIELD) VALUES (?)");
$ins->execute(array(trim($rows[0]["FIELD"])));
I expect the result in MySQL to be Wąż, but the actual output is W?? or WÈØ.
Edit on 2019-06-06
| Source | String | HEX |
|------------------|--------|------------|
| DB2 | Wąż | E6A0B2 |
| MySQL | W?? | 573F3F |
| MySQL C/P Insert | Wąż | 57C485C5BC |
The last row is a simple copy-paste into MySQL using a GUI.
Edit on 2019-06-07
C:\Users\ME\>echo %DB2CODEPAGE%
1208
C:\Users\ME\>acs.exe /PLUGIN=cldownload /system=MYSYS /sql="SELECT FIELD as char,HEX(FIELD) as hex FROM TABLE" /display=1
CHAR HEX
W?? E6A0B2
If I use /clientfile=test.txt instead of /display=1, Notepad++ shows the file as UTF-8.
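One thing I still plan to test (not a confirmed fix, just a guess from the hex dumps above: E6A0B2 looks like the raw CCSID 870 bytes of Wąż) is converting the fetched bytes explicitly, assuming this iconv build supports the IBM870 charset:
// Hypothetical workaround: treat the fetched value as raw CCSID 870
// (EBCDIC) bytes and convert them to UTF-8 before inserting into MySQL.
$raw = trim($rows[0]["FIELD"]);
$utf8 = iconv('IBM870', 'UTF-8', $raw);   // requires IBM870 support in iconv
$ins->execute(array($utf8));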

monetdb remote table: cannot register

I have two nodes and am attempting to create a remote table.  To set up I do the following:
on each host:
$ monetdbd create /opt/mdbdata/dbfarm
$ monetdbd set listenaddr=0.0.0.0 /opt/mdbdata/dbfarm
$ monetdbd start /opt/mdbdata/dbfarm
On the first host:
$ monetdb create w0
$ monetdb release w0
On second:
$ monetdb create mst
$ monetdb release mst
$ mclient -u monetdb -d mst
password:
Welcome to mclient, the MonetDB/SQL interactive terminal (Dec2016-SP4)
Database: MonetDB v11.25.21 (Dec2016-SP4), 'mapi:monetdb://nkcdev11:50000/mst'
Type \q to quit, \? for a list of available commands
auto commit mode: on
sql>create table usr ( id integer not null, name text not null );
operation successful (0.895ms)
sql>insert into usr values(1,'abc'),(2,'def');
2 affected rows (0.845ms)
sql>select * from usr;
+------+------+
| id   | name |
+======+======+
|    1 | abc  |
|    2 | def  |
+------+------+
2 tuples (0.652ms)
sql>
On first:
$ mclient -u monetdb -d w0
password:
Welcome to mclient, the MonetDB/SQL interactive terminal (Dec2016-SP4)
Database: MonetDB v11.25.21 (Dec2016-SP4), 'mapi:monetdb://nkcdev10:50000/w0'
Type \q to quit, \? for a list of available commands
auto commit mode: on
sql>create remote table usr_rmt ( id integer not null, name text not null ) on 'mapi:monetdb://nkcdev11:50000/mst';
operation successful (1.222ms)
sql>select * from usr_rmt;
(mapi:monetdb://monetdb#nkcdev11/mst) Cannot register  
project (
table(sys.usr_rmt) [ usr_rmt.id NOT NULL, usr_rmt.name NOT NULL ] COUNT 
) [ usr_rmt.id NOT NULL, usr_rmt.name NOT NULL ] REMOTE mapi:monetdb://nkcdev11:50000/mst
sql>
$
$ monetdb discover
             location
mapi:monetdb://nkcdev10:50000/w0
mapi:monetdb://nkcdev11:50000/mst
Can anyone nudge me in the right direction?
[EDIT - Solved]
The problem was self-inflicted: the remote table name must be exactly the same as the local table name on the remote node, and I had usr_rmt as the remote table name.
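In other words (same columns and MAPI URL as above), the declaration that works is:
sql>create remote table usr ( id integer not null, name text not null ) on 'mapi:monetdb://nkcdev11:50000/mst';
sql>select * from usr;
The remote table must be declared as usr so that it matches the table name on mst.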
At first sight, what you are trying to do ought to work.
Recently I had similar problems with remote table access, though that was with the non-released version; see bug 6289. (The MonetDB version number mentioned in that bug report is incorrect.) What you are experiencing may or may not be the same underlying issue.
After the weekend I will check whether I can reproduce your example on -SP4 and on the development version.
Joeri

Use SMO to clone an Azure SQL database?

I'm writing a program to test update scripts for Azure SQL.
The idea is to
- first clone a database (or fill a clone with the source schema and content)
- then run the update script on the clone
Locally I have this working, but for Azure I have the problem that I don't see any file names. If I restore one database to another on the same Azure "server", don't I have to rename the data files during the restore too?
For local restore I do this:
// Restore the backup to the test database, relocating the data and log
// files to names derived from the test database name.
restore.Devices.AddDevice(settings.BackupFileName, DeviceType.File);
restore.RelocateFiles.Add(new RelocateFile("<db>", Path.Combine(settings.DataFileDirectory, settings.TestDatabaseName + ".mdf")));
restore.RelocateFiles.Add(new RelocateFile("<db>_log", Path.Combine(settings.DataFileDirectory, settings.TestDatabaseName + "_1.ldf")));
restore.SqlRestore(srv);
Is something similar required for cloning a database on azure?
Lots of Greetings!
Volker
You can create a database as a copy of [source]:
CREATE DATABASE database_name [ COLLATE collation_name ]
  | AS COPY OF [source_server_name].source_database_name
  {
    (<edition_options> [, ...n])
  }

<edition_options> ::=
{
    MAXSIZE = { 100 MB | 500 MB | 1 | 5 | 10 | 20 | 30 … 150…500 } GB
  | EDITION = { 'web' | 'business' | 'basic' | 'standard' | 'premium' }
  | SERVICE_OBJECTIVE =
      { 'basic' | 'S0' | 'S1' | 'S2' | 'S3'
      | 'P1' | 'P2' | 'P3' | 'P4' | 'P6' | 'P11'
      | { ELASTIC_POOL(name = <elastic_pool_name>) } }
}
[;]
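For example (a hypothetical invocation; the database names are placeholders, and the statement must be run in the master database of the target server):
-- Copy sourcedb into a new database testdb on the same logical server.
-- The copy is asynchronous; progress can be monitored via sys.dm_database_copies.
CREATE DATABASE testdb AS COPY OF sourcedb;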
