I am trying to upload my local database to production and I keep running into the following error while processing:
$ heroku pg:transfer --from postgres://username:password@localhost/blog_development --to olive --confirm appname
Source database: blog_development on localhost:5432
Target database: HEROKU_POSTGRESQL_OLIVE_URL on afternoon-taiga-2766.herokuapp.com
pg_dump: reading schemas
pg_dump: reading user-defined tables
pg_dump: reading extensions
pg_dump: reading user-defined functions
pg_dump: reading user-defined types
pg_dump: reading procedural languages
pg_dump: reading user-defined aggregate functions
pg_dump: reading user-defined operators
pg_dump: reading user-defined operator classes
pg_dump: reading user-defined operator families
pg_dump: reading user-defined text search parsers
pg_dump: reading user-defined text search templates
pg_dump: reading user-defined text search dictionaries
pg_dump: reading user-defined text search configurations
pg_dump: reading user-defined foreign-data wrappers
pg_dump: reading user-defined foreign servers
pg_dump: reading default privileges
pg_dump: reading user-defined collations
pg_dump: reading user-defined conversions
pg_dump: reading type casts
pg_dump: reading table inheritance information
pg_dump: reading rewrite rules
pg_dump: finding extension members
pg_dump: finding inheritance relationships
pg_dump: reading column info for interesting tables
pg_dump: finding the columns and types of table "schema_migrations"
pg_dump: finding the columns and types of table "articles"
pg_dump: finding default expressions of table "articles"
pg_dump: flagging inherited columns in subtables
pg_dump: reading indexes
pg_dump: reading indexes for table "schema_migrations"
pg_dump: reading indexes for table "articles"
pg_dump: reading constraints
pg_dump: reading triggers
pg_dump: reading large objects
pg_dump: reading dependency data
pg_dump: saving encoding = WIN1252
pg_dump: saving standard_conforming_strings = on
pg_dump: saving database definition
pg_dump: [custom archiver] WARNING: ftell mismatch with expected position -- ftell used
pg_dump: [custom archiver] WARNING: ftell mismatch with expected position -- ftell used
pg_dump: dumping contents of table articles
pg_dump: [custom archiver] WARNING: ftell mismatch with expected position -- ftell used
pg_dump: dumping contents of table schema_migrations
pg_restore: [archiver] did not find magic string in file header
This is a very simple app that I've just created in order to practice using a PostgreSQL database (it only has two tables, articles and schema_migrations). Has anyone seen this error before? I'm trying to use pg:transfer to upload to the database. Thanks for the help.
EDIT
Database.yml
development:
  adapter: postgresql
  encoding: unicode
  database: blog_development
  pool: 5
  username: benjaminw
  password:

test:
  adapter: postgresql
  encoding: unicode
  database: blog_test
  pool: 5
  username: benjaminw
  password:

production:
  adapter: postgresql
  encoding: unicode
  database: blog_production
  pool: 5
  username: blog
  password:
And here is the Gemfile.
source 'https://rubygems.org'
gem 'rails', '3.2.13'
# Bundle edge Rails instead:
# gem 'rails', :git => 'git://github.com/rails/rails.git'
gem 'pg'
#gem 'activerecord-postgresql-adapter'
#gem 'sequel'
# Gems used only for assets and not required
# in production environments by default.
group :assets do
  gem 'sass-rails', '~> 3.2.3'
  gem 'coffee-rails', '~> 3.2.1'
  # See https://github.com/sstephenson/execjs#readme for more supported runtimes
  # gem 'therubyracer', :platforms => :ruby
  gem 'uglifier', '>= 1.0.3'
end
gem 'jquery-rails'
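As far as I understand it, pg:transfer essentially pipes a custom-format pg_dump of the source database into pg_restore against the target, so the "did not find magic string in file header" message means pg_restore never received a valid custom-format archive. A rough manual equivalent, as a sketch only (the dump file name is made up; the credentials and app name are the same placeholders as above):

# Dump the local database in pg_dump's custom format (-Fc), without ACLs/owners
pg_dump -Fc --no-acl --no-owner -h localhost -U benjaminw blog_development > blog_development.dump

# Restore the archive into the Heroku database resolved from the app's config
pg_restore --verbose --clean --no-acl --no-owner \
  -d "$(heroku config:get DATABASE_URL -a appname)" blog_development.dump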
I want to dump a DB schema so I can later use it with DBIx::Class.
The connection itself is apparently OK; however, there are complaints about moniker clashes that I've tried to resolve by using naming => {ALL=>'v8', force_ascii=>1}:
perl -MDBIx::Class::Schema::Loader=make_schema_at,dump_to_dir:./lib -E "$|++;make_schema_at('EC::Schema', { debug => 1, naming => {ALL=>'v8', force_ascii=>1} }, [ 'dbi:Oracle:', 'XX/PP@TNS' ])"
Output (which ends up producing no content in ./lib):
Bad table or view 'V_TRANSACTIONS', ignoring: DBIx::Class::Schema::Loader::DBI::_sth_for(): DBI Exception: DBD::Oracle::db prepare failed: ORA-04063: view "ss.vv" has errors (DBD ERROR: error possibly near <*> indicator at char 22 in 'SELECT * FROM ..
Unable to load schema - chosen moniker/class naming style results in moniker clashes. Change the naming style, or supply an explicit moniker_map: tables "ss"."AQ$_PRARAC_ASY_QUEUE_TABLE_S", "ss"."AQ$PRARAC_ASY_QUEUE_TABLE_S" reduced to the same source moniker 'AqDollarSignPraracAsyQueueTableS';
Any suggestions on how to resolve the clashes, or on another way to dump the schema, are welcome.
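In case it helps, Schema::Loader also accepts an explicit moniker_map (and an exclude pattern) alongside the naming option, which is what the error message suggests. Here is a sketch of the same make_schema_at call written as a small script instead of a one-liner; the monikers chosen for the clashing AQ$ tables are purely illustrative:

# Sketch: resolve the clash explicitly instead of relying on the naming style.
use DBIx::Class::Schema::Loader qw(make_schema_at);

make_schema_at(
    'EC::Schema',
    {
        debug          => 1,
        dump_directory => './lib',
        naming         => { ALL => 'v8', force_ascii => 1 },
        # Option 1: skip the Oracle AQ queue tables entirely
        exclude        => qr/^AQ\$/,
        # Option 2: map the clashing table names to distinct monikers by hand
        moniker_map    => {
            'AQ$_PRARAC_ASY_QUEUE_TABLE_S' => 'AqPraracAsyQueueTableShadow',
            'AQ$PRARAC_ASY_QUEUE_TABLE_S'  => 'AqPraracAsyQueueTable',
        },
    },
    [ 'dbi:Oracle:', 'XX/PP@TNS' ],
);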
I have a Parquet file which has a column of type "FIXED_LEN_BYTE_ARRAY / UUID". When I feed it to the parquet-mr library, I get this exception:
Exception - caused by: org.apache.parquet.io.ParquetDecodingException: The requested schema is not compatible with the file schema. incompatible types: required binary
Identity (STRING) != required fixed_len_byte_array(16) Identity (UUID)
at org.apache.parquet.io.ColumnIOFactory$ColumnIOCreatorVisitor.incompatibleSchema(ColumnIOFactory.java:101)
at org.apache.parquet.io.ColumnIOFactory$ColumnIOCreatorVisitor.visit(ColumnIOFactory.java:93)
at org.apache.parquet.schema.PrimitiveType.accept(PrimitiveType.java:602)
at org.apache.parquet.io.ColumnIOFactory$ColumnIOCreatorVisitor.visitChildren(ColumnIOFactory.java:83)
at org.apache.parquet.io.ColumnIOFactory$ColumnIOCreatorVisitor.visit(ColumnIOFactory.java:57)
at org.apache.parquet.schema.MessageType.accept(MessageType.java:55)
at org.apache.parquet.io.ColumnIOFactory.getColumnIO(ColumnIOFactory.java:162)
at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:135)
***
Btw, I am using the latest parquet-mr library, i.e. 1.12.0.
When I feed the same file to the parquet-cpp library, it is able to decode it. So I just want to find out: is there any known issue in the parquet-mr library w.r.t. UUID?
-DevD
I am using Hive version 3.1.0 in my project. I have created one external table using the below command.
CREATE EXTERNAL TABLE IF NOT EXISTS testing(ID int,DEPT int,NAME string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE;
I am trying to create an index for the same external table using the below command.
CREATE INDEX index_test ON TABLE testing(ID)
AS 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'
WITH DEFERRED REBUILD ;
But I am getting the below error:
Error: Error while compiling statement: FAILED: ParseException line 1:7 cannot recognize input near 'create' 'index' 'user_id_user' in ddl statement (state=42000,code=40000)
According to the Hive documentation, indexing has been removed since version 3.0:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Indexing#LanguageManualIndexing-IndexingIsRemovedsince3.0
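Since the index DDL is gone, the usual substitutes in Hive 3 are columnar formats with built-in min/max statistics and bloom filters, or materialized views. Here is a sketch of the bloom-filter route, copying the external text table into an ORC table (the table name testing_orc is illustrative):

-- Sketch: ORC keeps per-stripe min/max statistics, and a bloom filter on id
-- (Hive stores column names in lower case) can speed up point lookups.
CREATE TABLE testing_orc
STORED AS ORC
TBLPROPERTIES ('orc.bloom.filter.columns' = 'id')
AS SELECT * FROM testing;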
I'm using pg_dump to back up and recreate a database's structure. To that end I'm calling pg_dump -scf backup.sql for the backup, but it fails with the following error:
pg_dump: [archiver (db)] query failed: ERROR: Cannot support lock statement yet
pg_dump: [archiver (db)] query was: LOCK TABLE my_schema.my_table IN ACCESS SHARE MODE
I couldn't find a reference to this particular error anywhere. Is it possible to get around this?
edit: for a little more context, I ran it in verbose mode, and this is what is displayed before the errors:
pg_dump: last built-in OID is 16383
pg_dump: reading extensions
pg_dump: identifying extension members
pg_dump: reading schemas
pg_dump: reading user-defined tables
I had some Ruby code in a script that connected to a MySQL DB. When I connected, I did a MySQL2.query("BEGIN") to start a transaction; if I wanted to roll back I did MySQL2.query("ROLLBACK"), or if all was well and I wanted to commit I did MySQL2.query("COMMIT").
I have now moved to a Postgres database, and while PG.exec("BEGIN"), PG.exec("ROLLBACK") and PG.exec("COMMIT") do not seem to error, I do get the warning 'there is no transaction in progress' when I commit, so it seems it is doing autocommit (i.e. committing each SQL INSERT/UPDATE as it is executed). Basically, I want to be able to manually roll back or commit.
I think maybe I need to turn autocommit off but can't work out how. I tried @dbase.exec('SET AUTOCOMMIT TO OFF') but got the error 'lib/database.rb:28:in `exec': ERROR: unrecognized configuration parameter "autocommit" (PG::UndefinedObject)'.
I've done a fair amount of googling without any luck.
I am using Postgres 9.5 and Ruby 2.4.1.
PostgreSQL does not have a setting to disable autocommit:
https://stackoverflow.com/a/17936997/3323777
You should just use BEGIN/COMMIT/ROLLBACK.
BTW, which adapter gem do you use? The PG.exec syntax seems strange.
Consider the following snippet (pg 0.20 used):
conn = PGconn.open(:dbname => 'database', user: "user", password: "...")
conn.exec("DELETE FROM setting_entries")
conn.exec("INSERT INTO setting_entries(name) VALUES ('1')")
conn.exec("BEGIN")
conn.exec("DELETE FROM setting_entries")
conn.exec("INSERT INTO setting_entries(name) VALUES ('1')")
conn.exec("ROLLBACK")
And this is the Postgres log:
(0-22/70) 2017-08-31 12:37:12 MSK [19945-1] user@database LOG: statement: DELETE FROM setting_entries
(0-22/71) 2017-08-31 12:37:12 MSK [19945-2] user@database LOG: statement: INSERT INTO setting_entries(name) VALUES ('1')
(0-22/72) 2017-08-31 12:37:12 MSK [19945-3] user@database LOG: statement: DELETE FROM setting_entries
(5948637-22/72) 2017-08-31 12:37:12 MSK [19945-4] user@database LOG: statement: INSERT INTO setting_entries(name) VALUES ('1')
As you can see, the transaction ids are the same (/72) for the last two lines.
To be sure, you could write a unit test where you make two updates within a transaction, roll them back, and check that both are rolled back.
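If you stay on the plain pg gem, it also has a block helper that does the BEGIN/COMMIT/ROLLBACK bookkeeping for you; a minimal sketch, assuming the same setting_entries table as above (connection details are placeholders):

require 'pg'

conn = PG.connect(dbname: 'database', user: 'user', password: '...')

# PG::Connection#transaction wraps the block in BEGIN/COMMIT and issues
# ROLLBACK automatically if an exception is raised inside the block.
conn.transaction do |c|
  c.exec("DELETE FROM setting_entries")
  c.exec_params("INSERT INTO setting_entries(name) VALUES ($1)", ['1'])
  # raising here (e.g. raise "abort") would roll back both statements
end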