While importing mysqldump file ERROR 1064 (42000) near ' ■/ ' at line 1 - Windows

I cannot import the dump file below, which was created by mysqldump.exe on the Windows command line:
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `attachment_types` (
`ID` int(11) NOT NULL AUTO_INCREMENT,
`DESCRIPTION` varchar(50) DEFAULT NULL,
`COMMENTS` varchar(256) DEFAULT NULL,
PRIMARY KEY (`ID`),
UNIQUE KEY `UK_ATTACHMENT_TYPES___DESCRIPTION` (`DESCRIPTION`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=latin1;
When importing the file from the command line:
mysql --user=root --password=root < mysqldumpfile.sql
it throws the following error:
ERROR 1064 (42000) near ' ■/ ' at line 1
Somebody please help me.

Finally I found a solution.
We need two options:
--default-character-set=utf8: this ensures UTF-8 is used for each field.
--result-file=file.sql: this option prevents the dump data from passing through the operating system's console redirection, which likely does not use UTF-8, and instead writes the dump directly to the specified file.
Using these new options, your dump command would look something like this:
mysqldump -u root -p --default-character-set=utf8 --result-file=database1.backup.sql database1
While importing, you can optionally use:
mysql --user=root --password=root --default-character-set=utf8 < database1.backup.sql
Source: http://nathan.rambeck.org/blog/1-preventing-encoding-issues-mysqldump

It seems that the input file (mysqldumpfile.sql) was created with UTF-8 encoding, so the first 3 bytes of the .sql file, invisible to you in an editor, are the byte order mark (BOM) sequence, and that is what the error "at line 1" is complaining about.
So try changing the default character set to UTF-8:
mysql --user=root --password=root --default-character-set=utf8 < mysqldumpfile.sql
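To confirm this, a minimal sketch for inspecting and stripping the BOM (assuming a shell with GNU coreutils and GNU sed is available, e.g. Git Bash on Windows; the file name comes from the question):
# Show the first three bytes; EF BB BF is the UTF-8 byte order mark.
head -c 3 mysqldumpfile.sql | xxd
# Write a copy with the BOM stripped, then import that copy instead.
sed '1s/^\xEF\xBB\xBF//' mysqldumpfile.sql > mysqldumpfile.nobom.sql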

If you need to import into a specific database, this is the import command required on Windows:
mysql --user=root --password=root --default-character-set=utf8 database2 < database1.backup.sql

Related

MySQL import error in GitHub Actions with Laravel

I am trying to import an SQL file with the following command in a GitHub Actions workflow. The .sql file contains some test data which will be used to run unit test cases in Laravel.
I am unable to tell whether the file path is wrong or the import command is wrong. I have saved the SQL file in the root directory of the Laravel project.
I am facing the following error:
Error: Process completed with exit code 1.
Below is the command used in the .yml workflow file for Laravel.
- name: Importing MYSQL file
  env:
    DB_HOST: 127.0.0.1
    DB_CONNECTION: mysql
    DB_DATABASE: test
    DB_PORT: ${{ job.services.mysql.ports[3306] }}
    DB_USER: root
    DB_PASSWORD: password
  run: mysql -u root -p password -h localhost --port=3306 test < request_data.sql
Implementing database seeds would be a bit time-consuming, which is why I'm importing the .sql data this way. Also, I am a bit new to this workflow thing, so let me know if there is some issue in running the command or if it is not possible to import an existing .sql file.
Please note that the migrations I run using the workflow file are running successfully.
The following code is used to create MySQL service:
jobs:
  phpunit:
    runs-on: ubuntu-latest
    services:
      mysql:
        image: mysql:5.7
        env:
          MYSQL_ROOT_PASSWORD: password
          MYSQL_DATABASE: test
        ports:
          - 33306:3306
        options: --health-cmd="mysqladmin ping" --health-interval=10s --health-timeout=5s --health-retries=3
Thanks in advance!
I tried to reproduce your scenario with a small .sql file, and it works fine with the mysql:5.7 Docker image.
Here's the complete workflow:
name: MySQL Import Test
on:
  workflow_dispatch:
jobs:
  import:
    runs-on: ubuntu-latest
    services:
      mysql:
        # https://hub.docker.com/_/mysql
        image: mysql:5.7
        env:
          MYSQL_ROOT_PASSWORD: password
          MYSQL_DATABASE: test
        ports:
          - 33306:3306
        options: --health-cmd="mysqladmin ping" --health-interval=10s --health-timeout=5s --health-retries=3
    steps:
      - name: Import MySQL file
        env:
          SQL: |
            SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO";
            START TRANSACTION;
            SET time_zone = "+00:00";
            CREATE TABLE `person` (
              `id` int(11) NOT NULL,
              `name` varchar(255) NOT NULL,
              `email` varchar(255) NOT NULL
            );
            INSERT INTO `person` (`id`, `name`, `email`)
            VALUES
            (111, 'abc', 'abc@email.com'),
            (222, 'def', 'def@email.com'),
            (333, 'ghi', 'ghi@email.com'),
            (444, 'jkl', 'jkl@email.com');
            ALTER TABLE `person` ADD PRIMARY KEY (`id`);
            COMMIT;
        run: |
          mysql --host 127.0.0.1 --port 33306 -uroot -ppassword -e "SHOW DATABASES LIKE 'test';" 2>/dev/null
          echo "$SQL" > person.sql
          echo "--- SQL ---"
          cat person.sql
          echo "--- --- ---"
          echo "Importing from person.sql file"
          mysql --host 127.0.0.1 --port 33306 -uroot -ppassword test < person.sql 2>/dev/null
          echo "Checking the imported data"
          mysql --host 127.0.0.1 --port 33306 -uroot -ppassword test <<< 'SELECT id,name,email FROM person;' 2>/dev/null
Output
Database (test)
test
--- SQL ---
SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO";
START TRANSACTION;
SET time_zone = "+00:00";
CREATE TABLE `person` (
`id` int(11) NOT NULL,
`name` varchar(255) NOT NULL,
`email` varchar(255) NOT NULL
);
INSERT INTO `person` (`id`, `name`, `email`)
VALUES
(111, 'abc', 'abc@email.com'),
(222, 'def', 'def@email.com'),
(333, 'ghi', 'ghi@email.com'),
(444, 'jkl', 'jkl@email.com');
ALTER TABLE `person` ADD PRIMARY KEY (`id`);
COMMIT;
--- --- ---
Importing from person.sql file
Checking the imported data
id name email
111 abc abc@email.com
222 def def@email.com
333 ghi ghi@email.com
444 jkl jkl@email.com
Apart from that, the command sudo /etc/init.d/mysql start (and its service and systemctl variants) controls the preinstalled MySQL. Here's the relevant issue: https://github.com/actions/runner-images/issues/576
In your scenario, since the requirement is to test against a Docker container, issuing commands for the locally installed MySQL (which is disabled by default) is not required at all.
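Applied to the original step, a minimal sketch of the import command, assuming the 33306:3306 port mapping from the question's service definition and the request_data.sql file from the question:
mysql --host 127.0.0.1 --port 33306 -uroot -ppassword test < request_data.sql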

Wrong chars using PDO ODBC connection to DB2 on Windows

I'm setting up a new server and updating some old scripts (PHP 5) to PHP 7.
I'm connecting to a DB2 database via PDO ODBC, reading a CHAR field with CCSID 870, and saving it to a MySQL mediumtext field in a table with CHARSET=utf8. But I get wrong characters in the MySQL database and even in the PHP console.
I tried switching to odbc_connect() like the old script, but the results were the same.
Even when saving the field to a .txt file, the result is the same.
utf8_encode and utf8_decode don't help.
Here is an example of the code:
$as = new PDO("odbc:MYODBC",$user, $psw);
$as->setAttribute(PDO::ATTR_DEFAULT_FETCH_MODE, PDO::FETCH_ASSOC);
$res = $as->query("SELECT FIELD FROM MYTABLE");
$rows = $res->fetchAll();
$mysql = new PDO("mysql:host=srvip;dbname=mydbname;charset=utf8",$user, $psw);
$mysql->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$mysql->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
$mysql->setAttribute(PDO::ATTR_DEFAULT_FETCH_MODE, PDO::FETCH_ASSOC);
$ins = $mysql->prepare("INSERT INTO my_MySQL_TABLE (FIELD) VALUES (?)");
$ins->execute(array(trim($rows[0]["FIELD"])));
I expect the results on MySQL to be Wąż, but the actual output is W?? or WÈØ.
Edit on 2019-06-06
| Source | String | HEX |
|------------------|--------|------------|
| DB2 | Wąż | E6A0B2 |
| MySQL | W?? | 573F3F |
| MySQL C/P Insert | Wąż | 57C485C5BC |
The last row is a simple copy-paste into MySQL using a GUI.
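For reference, the expected UTF-8 bytes of Wąż can be double-checked from a shell with a UTF-8 locale (a quick sanity check, not part of the original scripts):
printf 'Wąż' | xxd
# prints 57 c4 85 c5 bc, matching the 57C485C5BC of the copy-paste row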
Edit on 2019-06-07
C:\Users\ME\>echo %DB2CODEPAGE%
1208
C:\Users\ME\>acs.exe /PLUGIN=cldownload /system=MYSYS /sql="SELECT FIELD as char,HEX(FIELD) as hex FROM TABLE" /display=1
CHAR HEX
W?? E6A0B2
If I use /clientfile=test.txt instead of /display=1, Notepad++ shows me the file as UTF-8.

syntax error near unexpected token 'fi'

if [ $pass = "123456" ] ; then mysql -u root -p$pass "create database newdb"; use newdb; CREAT TABLE user ( name char (30) , lname char (40) );
echo "creat table succesful" fi
When I run it, it outputs start and then says Syntax error:
"(" unexpected (expecting "fi")
How could I fix this?
Change it to
if [ $pass = "123456" ] ; then mysql -u root -p$pass "create database newdb"; use newdb; CREAT TABLE user ( name char (30) , lname char (40) );
echo "creat table succesful"; fi
that is, add a semicolon before the fi.
Also, the entire command sequence passed to mysql probably has to be in double quotes.
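For illustration, a minimal sketch of a corrected script, assuming the SQL is meant to be passed to mysql with -e and that CREAT is a typo for CREATE (the password check and table layout come from the question):
#!/bin/bash
pass="123456"   # assumed to be set earlier in the real script

if [ "$pass" = "123456" ]; then
    # Pass the whole statement sequence to mysql as one quoted -e argument.
    mysql -u root -p"$pass" -e "
        CREATE DATABASE newdb;
        USE newdb;
        CREATE TABLE user ( name CHAR(30), lname CHAR(40) );
    "
    echo "create table successful"
fi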

monetdb remote table: cannot register

I have two nodes and am attempting to create a remote table. To set things up, I do the following:
on each host:
$ monetdbd create /opt/mdbdata/dbfarm
$ monetdbd set listenaddr=0.0.0.0 /opt/mdbdata/dbfarm
$ monetdbd start /opt/mdbdata/dbfarm
On the first host:
$ monetdb create w0
$ monetdb release w0
On second:
$ monetdb create mst
$ monetdb release mst
$ mclient -u monetdb -d mst
password:
Welcome to mclient, the MonetDB/SQL interactive terminal (Dec2016-SP4)
Database: MonetDB v11.25.21 (Dec2016-SP4), 'mapi:monetdb://nkcdev11:50000/mst'
Type \q to quit, \? for a list of available commands
auto commit mode: on
sql>create table usr ( id integer not null, name text not null );
operation successful (0.895ms)
sql>insert into usr values(1,'abc'),(2,'def');
2 affected rows (0.845ms)
sql>select * from usr;
+------+------+
| id   | name |
+======+======+
|    1 | abc  |
|    2 | def  |
+------+------+
2 tuples (0.652ms)
sql>
On first:
$ mclient -u monetdb -d w0
password:
Welcome to mclient, the MonetDB/SQL interactive terminal (Dec2016-SP4)
Database: MonetDB v11.25.21 (Dec2016-SP4), 'mapi:monetdb://nkcdev10:50000/w0'
Type \q to quit, \? for a list of available commands
auto commit mode: on
sql>create remote table usr_rmt ( id integer not null, name text not null ) on 'mapi:monetdb://nkcdev11:50000/mst';
operation successful (1.222ms)
sql>select * from usr_rmt;
(mapi:monetdb://monetdb@nkcdev11/mst) Cannot register
project (
table(sys.usr_rmt) [ usr_rmt.id NOT NULL, usr_rmt.name NOT NULL ] COUNT 
) [ usr_rmt.id NOT NULL, usr_rmt.name NOT NULL ] REMOTE mapi:monetdb://nkcdev11:50000/mst
sql>
$
$ monetdb discover
             location
mapi:monetdb://nkcdev10:50000/w0
mapi:monetdb://nkcdev11:50000/mst
Can anyone nudge me in the right direction?
[EDIT - Solved]
The problem was self-inflicted: the remote table name must be exactly the same as the local table name, but I had used usr_rmt as the remote table name.
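For illustration, a minimal sketch of the corrected setup on the first host (assuming the table on mst is named usr, the remote table on w0 must also be declared as usr, not usr_rmt):
$ mclient -u monetdb -d w0
password:
sql>create remote table usr ( id integer not null, name text not null ) on 'mapi:monetdb://nkcdev11:50000/mst';
sql>select * from usr;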
At first sight, what you are trying to do ought to work.
Recently, I had similar problems with remote table access, though that was with the non-released version, see bug 6289. (The MonetDB version number mentioned in that bug report is incorrect.) What you are experiencing may or may not be the same underlying issue.
After the weekend I will check whether I can reproduce your example, both on -SP4 and on the development version.
Joeri

mysqlimport with Error 13

I use the following script to import data from a CSV file:
#!/bin/bash
# show commands being executed, per debug
set -x
# define database connectivity
_db="xxx"
_db_user="xxx"
_db_password="xxx"
_table="movie"
# define directory containing CSV files
_csv_directory="/tmp"
_csv_file='xxxxxx.csv'
_header_columns_string='link,description,duration,thumbnaillink,iframe,tags,category'
# import csv into mysql
mysqlimport --fields-terminated-by=';' --lines-terminated-by="\n" --columns=$_header_columns_string -u $_db_user -p$_db_password $_db $_table $_csv_directory/$_csv_file
exit
When I execute the script as root via bash import.sh I get the following error message:
+ _db=mydatabase
+ _db_user=xxx
+ _db_password=xxx
+ _table=movie
+ _csv_directory=/tmp
+ _csv_file=xxxxxx.csv
+ _header_columns_string=link,description,duration,thumbnaillink,iframe,tags,category
+ mysqlimport --local '--fields-terminated-by=;' '--lines-terminated-by=\n' --columns=link,description,duration,thumbnaillink,iframe,tags,category -u xxx -pxxx mydatabase movie /tmp/xxxxxx.csv
mysqlimport: Error: 13, Can't get stat of '/var/lib/mysql/mydatabase/movie' (Errcode: 2), when using table: movie
+ exit
but the database and the table exist.
The CSV file exists and can be read, the table can be selected, and I can manually insert data rows into the database.
What am I doing wrong?
Try doing it from inside SQL:
load data local infile 'FILENAME.CSV' into table TABLENAME
fields terminated by ',' optionally enclosed by '"' ignore 1 lines;
You can run this from the command line or from a shell script like this:
db='mysql -hX -uX -pX --database=X'
cat MYSCRIPT.SQL | $db
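Adapted to the original script, a minimal sketch (credentials, file name, and column list are the placeholders from the question; --local-infile=1 is assumed to be enabled on both client and server):
#!/bin/bash
_db="xxx"
_db_user="xxx"
_db_password="xxx"
_table="movie"
_csv_file="/tmp/xxxxxx.csv"

# Feed a LOAD DATA LOCAL INFILE statement to the mysql client, matching the
# ;-separated fields and the column list of the original mysqlimport call.
mysql --local-infile=1 -u "$_db_user" -p"$_db_password" "$_db" <<EOF
LOAD DATA LOCAL INFILE '$_csv_file'
INTO TABLE $_table
FIELDS TERMINATED BY ';'
LINES TERMINATED BY '\n'
(link, description, duration, thumbnaillink, iframe, tags, category);
EOF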
