I am trying to push my local database to Heroku, but it looks like it fails along the way. Any idea what I could be doing wrong? Here is the console output:
>> heroku pg:push my_db_name my_app_name::BROWN
Password:
pg_dump: reading schemas
pg_dump: reading user-defined tables
pg_dump: reading extensions
...
pg_dump: saving encoding = UTF8
pg_dump: saving standard_conforming_strings = on
pg_dump: saving database definition
pg_dump: [custom archiver] WARNING: ftell mismatch with expected position -- ftell used
pg_dump: dumping contents of table XXX
pg_dump: [custom archiver] WARNING: ftell mismatch with expected position -- ftell used
pg_restore: [archiver] did not find magic string in file header
pg_dump: dumping contents of table XXX
pg_dump: [custom archiver] WARNING: ftell mismatch with expected position -- ftell used
pg_dump: dumping contents of table ZZZ
pg_dump: [custom archiver] could not write to output file: Invalid argument
Password:
This is a known problem on Windows 7. Unfortunately there is no fix, but there is a workaround:
https://devcenter.heroku.com/articles/heroku-postgres-import-export
My guess would be that you are out of free disk space to store your backup set.
I have an old source database in which a custom collation, UTF8_CI_AI_NUMERIC_SORT, was apparently created. I'm running it in Docker via the image jacobalberty/firebird:2.5-ss. The database was originally created on a Windows machine.
When I try to do a query on the table where this collation was used, I get the error:
SQL> select * from "InvoiceService";
Statement failed, SQLSTATE = 22021
COLLATION UTF8_CI_AI_NUMERIC_SORT for CHARACTER SET UTF8 is not installed
Show collations returns the following:
SQL> show collations;
UTF8_CI_AI_NUMERIC_SORT, CHARACTER SET UTF8, FROM EXTERNAL ('UNICODE'), CASE INSENSITIVE, ACCENT INSENSITIVE, 'NUMERIC-SORT=1'
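For reference, that catalog entry corresponds to a collation that would have been declared with DDL roughly like the following (a sketch only; the original DDL from the source database is not known):
CREATE COLLATION UTF8_CI_AI_NUMERIC_SORT FOR UTF8
    FROM EXTERNAL ('UNICODE')
    CASE INSENSITIVE ACCENT INSENSITIVE
    'NUMERIC-SORT=1';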
I tried the following fixes:
Adding an entry to fbintl.conf:
<charset UTF8>
intl_module fbintl
collation UTF8_CI_AI_NUMERIC_SORT
</charset>
Then I ran the sp_register_character_set("UTF8", 4) procedure and received an error about duplicate collations (because UTF8_CI_AI_NUMERIC_SORT is already defined in the DB).
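(The call itself was along these lines; sp_register_character_set comes from the misc/intl.sql script shipped with Firebird, and the arguments are the character set name and its maximum bytes per character:)
-- registers/refreshes the character set metadata in the current database
EXECUTE PROCEDURE sp_register_character_set('UTF8', 4);
COMMIT;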
Dropping the collation:
SQL> drop collation UTF8_CI_AI_NUMERIC_SORT;
Statement failed, SQLSTATE = 42000
unsuccessful metadata update
-Collation UTF8_CI_AI_NUMERIC_SORT is used in table InvoiceService (field name NAME) and cannot be dropped
Adding a new column that would use a different collation, but I can't even add it:
SQL> ALTER TABLE "InvoiceService" ADD NAME2 VARCHAR(600) CHARACTER SET UTF8;
Statement failed, SQLSTATE = 22021
unsuccessful metadata update
-InvoiceService
-COLLATION UTF8_CI_AI_NUMERIC_SORT for CHARACTER SET UTF8 is not installed
Using gbak to restore only the metadata, fixing the schema, and then inserting only the data, but gbak does not support restoring only the data.
...
I'm out of ideas now. What else could I try?
So, I finally managed to solve the problem. What I did was to create a DB backup with
gbak -v -t -user SYSDBA /path/to/source.fdb /path/to/backup.fbk
Then I used the 3.0 version of the Firebird Docker image (jacobalberty/firebird:3.0) and restored from the backup with
gbak -create /path/to/backup.fbk /path/to/restored3.fdb
Note that the same backup-restore procedure without switching the Docker image did not work.
I didn't have to do anything else. There's only a slight difference in the SHOW COLLATIONS; output:
// originally:
UTF8_CI_AI_NUMERIC_SORT, CHARACTER SET UTF8, FROM EXTERNAL ('UNICODE'), CASE INSENSITIVE, ACCENT INSENSITIVE, 'NUMERIC-SORT=1'
// restored DB
UTF8_CI_AI_NUMERIC_SORT, CHARACTER SET UTF8, FROM EXTERNAL ('UNICODE'), CASE INSENSITIVE, ACCENT INSENSITIVE, 'COLL-VERSION=58.0.6.50;NUMERIC-SORT=1'
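With the restored database, the query that originally failed should run without the collation error; sorting on NAME (the column tied to the collation, per the drop-collation error above) exercises it directly:
SQL> select * from "InvoiceService" order by NAME;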
I am trying to load a text file into MySQL, but I get the error below.
Error Code: 1064
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'Rank=#Rank' at line 7
LOAD DATA LOCAL INFILE 'F:/keyword/Key_2018-10-06_06-44-09.txt'
INTO TABLE table
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\r\n'
IGNORE 0 LINES
(#dump_date,#Rank)
SET dump_date=#dump_date,Rank=#Rank;
The above query works on a Windows server, but at the same time it does not work on a Linux server.
I am going to suggest that you try executing that command from the command line as a single line:
LOAD DATA LOCAL INFILE 'F:/keyword/Key_2018-10-06_06-44-09.txt' INTO TABLE
table FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\r\n' IGNORE 0 LINES
(#dump_date,#Rank) SET dump_date=#dump_date,Rank=#Rank;
For formatting reasons I have added newlines above, but don't do that when you run it from the Linux prompt; just use a single line. The text will wrap around nicely as you type it.
After executing a pg_dump from PostgreSQL, I attempted to import the .sql file into CockroachDB, but received the following errors:
ERROR: unknown variable: "STATEMENT_TIMEOUT"
ERROR: unknown variable: "LOCK_TIMEOUT"
ERROR: unknown variable: "IDLE_IN_TRANSACTION_SESSION_TIMEOUT"
SET
SET
ERROR: unknown variable: "CHECK_FUNCTION_BODIES"
SET
ERROR: unknown variable: "ROW_SECURITY"
SET
ERROR: unknown variable: "DEFAULT_TABLESPACE"
ERROR: unknown variable: "DEFAULT_WITH_OIDS"
CREATE TABLE
ERROR: syntax error at or near "OWNER"
Any guidance?
CockroachDB has special support for psql, which supports the COPY command (faster than batched INSERT statements).
You'll need to do two things:
Clean up the SQL file
Import it into CockroachDB (which it sounds like you tried, but I'll include the steps here for anyone else who needs them):
Clean up the SQL File
After generating the .sql file, you need to perform a few editing steps before importing it:
Remove all statements from the file besides the CREATE TABLE and COPY statements.
Manually add the table's PRIMARY KEY constraint to the CREATE TABLE statement.
This has to be done manually because PostgreSQL attempts to add the primary key after creating the table, but CockroachDB requires the primary key to be defined when the table is created (see the sketch after this list).
Review any other constraints to ensure they're properly listed on the table.
Remove any unsupported elements, such as arrays.
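As an illustration of the primary-key edit, here is a sketch using a hypothetical customers table (your table, column, and constraint names will differ). pg_dump usually emits the constraint as a separate ALTER TABLE statement; fold it into the CREATE TABLE instead:
-- What pg_dump typically produces:
CREATE TABLE customers (
    id INT NOT NULL,
    name TEXT
);
ALTER TABLE ONLY customers
    ADD CONSTRAINT customers_pkey PRIMARY KEY (id);

-- What the cleaned-up file should contain for CockroachDB:
CREATE TABLE customers (
    id INT NOT NULL,
    name TEXT,
    CONSTRAINT customers_pkey PRIMARY KEY (id)
);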
Import Data
After reformatting the file, you can import it through psql:
$ psql -p [port] -h [node host] -d [database] -U [user] < [file name].sql
For reference, CockroachDB uses these defaults:
[port]: 26257
[user]: root
I was getting an error while trying to insert the data from a backup table.
SQL Error: ORA-30036: unable to extend segment by 8 in undo tablespace 'UND_TBS'
30036. 00000 - "unable to extend segment by %s in undo tablespace '%s'"
To solve this issue, I created a new empty datafile at the respective path.
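(For reference, adding a datafile to an undo tablespace is normally done with an ALTER TABLESPACE statement along these lines; the size values here are placeholders:)
ALTER TABLESPACE UND_TBS
    ADD DATAFILE '/pathOfDataFile/my_newly_created_datafile.dbf'
    SIZE 1G AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED;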
After that, I get the error below when trying to do any select/delete operation.
Error at Command Line:1 Column:0
Error report:
SQL Error: ORA-01115: IO error reading block from file (block # )
ORA-01110: data file 82: '/pathOfDataFile/my_newly_created_datafile.dbf'
ORA-27072: File I/O error
Additional information: 7
Additional information: 16578
01115. 00000 - "IO error reading block from file %s (block # %s)"
*Cause: Device on which the file resides is probably offline
*Action: Restore access to the device
I checked the status of the file, which is ONLINE.
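(The online/offline status of a datafile can be checked with a query along these lines; file number 82 is taken from the error above:)
SELECT file#, name, status
FROM   v$datafile
WHERE  file# = 82;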
Any idea how I can fix this error?
I'm trying to create an external table in Hive, but keep getting the following error:
create external table foobar (a STRING, b STRING) row format delimited fields terminated by "\t" stored as textfile location "/tmp/hive_test_1375711405.45852.txt";
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask (state=08S01,code=1)
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask (state=08S01,code=1)
Aborting command set because "force" is false and command failed: "create external table foobar (a STRING, b STRING) row format delimited fields terminated by "\t" stored as textfile location "/tmp/hive_test_1375711405.45852.txt";"
The contents of /tmp/hive_test_1375711405.45852.txt are:
abc\tdef
I'm connecting via the beeline command line interface, which uses Thrift HiveServer2.
System:
Hadoop 2.0.0-cdh4.3.0
Hive 0.10.0-cdh4.3.0
Beeline 0.10.0-cdh4.3.0
Client OS - Red Hat Enterprise Linux Server release 6.4 (Santiago)
The issue was that I was pointing the external table at a file in HDFS instead of a directory. The cryptic Hive error message really threw me off.
The solution is to create a directory and put the data file in there. To fix this for the above example, you'd create the directory /tmp/foobar and place hive_test_1375711405.45852.txt in it. Then create the table like so:
create external table foobar (a STRING, b STRING) row format delimited fields terminated by "\t" stored as textfile location "/tmp/foobar";
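Once the file is inside that directory, a quick check should show the row from the data file (assuming the \t in the file is an actual tab character):
-- expected output: a = abc, b = def
select * from foobar;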
We faced a similar problem at our company (a Sentry, Hive, and Kerberos combination). We solved it by removing all privileges granted on non-fully-qualified HDFS URIs. For example, we changed GRANT ALL ON URI '/user/test' TO ROLE test; to GRANT ALL ON URI 'hdfs-ha-name:///user/test' TO ROLE test;.
You can find the privileges for a specific URI in the Hive database (MySQL in our case).
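In practice the cleanup looks something like the following (the role name and path are placeholders taken from the example above; the statements are issued through HiveServer2 by a user with Sentry admin rights):
-- drop the privilege that was granted on the non-fully-qualified URI ...
REVOKE ALL ON URI '/user/test' FROM ROLE test;
-- ... and re-grant it with the fully qualified HDFS URI
GRANT ALL ON URI 'hdfs-ha-name:///user/test' TO ROLE test;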