H2: dump table content only

I am trying to back up only the table content of an H2 database.
I run: SCRIPT TO '/opt/data/2019-10-10_tr.sql' TABLE EVENEMENT, PASSAGE, COURSE, LIGNE but the generated file contains statements like:
SET DB_CLOSE_DELAY -1;
;
CREATE USER IF NOT EXISTS SA SALT '7ab09337026fac20' HASH 'c...fa387' ADMIN;
CREATE SEQUENCE PUBLIC.HIBERNATE_SEQUENCE START WITH 5664;
CREATE MEMORY TABLE PUBLIC.COURSE(
...
These statements are what I don't want (that's why I wanted to dump only the tables). I don't want them because when I run RUNSCRIPT FROM '/opt/data/2019-10-10_tr.sql' I get an exception:
CREATE SEQUENCE PUBLIC.HIBERNATE_SEQUENCE START WITH 5664 [90035-197]: org.h2.jdbc.JdbcSQLException: Sequence "HIBERNATE_SEQUENCE" already exists; SQL statement:
CREATE SEQUENCE PUBLIC.HIBERNATE_SEQUENCE START WITH 5664 [90035-197]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
And I get this exception because the database is initialized by DDL: <property name="hibernate.hbm2ddl.auto" value="create-drop" />
I don't wish to change this. Basically, by saving only the database content and restoring it into an existing database, it should work, shouldn't it? So the question is: what is wrong with my SCRIPT syntax, given that it doesn't save only the table content?

The SCRIPT command is not designed to export only the data; it is designed to export the schema, with or without the data. Currently there is no built-in command for that purpose.
You can try to add the DROP clause to this command to generate commands for dropping existing tables, but you may still have a problem with sequences, and your tables will be redefined, so all changes in the auto-generated schema will be lost.
You can filter out all non-INSERT commands from the script with your own code.
You can export the complete script, execute DROP ALL OBJECTS before RUNSCRIPT, and overwrite everything with it, as sketched below.
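For that last option, a minimal sketch (reusing the path from the question; adjust to your environment) could be:

SCRIPT TO '/opt/data/2019-10-10_tr.sql';
-- later, against the target database:
DROP ALL OBJECTS;
RUNSCRIPT FROM '/opt/data/2019-10-10_tr.sql';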

Related

Laravel Execute sql queries from an sql file

Because there is no separate testing database, I use a manually generated SQL script to clean a database clone of my production database. Assume that the legacy database is the following:
ohimesama:
  id: PK
  namae: varchar(200)
oujisama:
  id: PK
  namae: varchar(200)
ohimesamagasuki:
  id: PK
  ohimesama_id: FK ohimesama
  oujisama_id: FK oujisama
And the test database cleanup SQL script (cleanup.sql) is:
DELETE FROM ohimesama WHERE namae NOT IN ('Gardinelia', 'Jasmine');
DELETE FROM oujisama WHERE namae NOT IN ('Gaouron', 'Sasuke', 'Aladin');
DELETE FROM ohimesamagasuki WHERE ohimesama_id NOT IN (SELECT id FROM ohimesama)
  AND oujisama_id NOT IN (SELECT id FROM oujisama);
And because I want to be able to execute all these commands in one transaction, I want to be able to read the cleanup.sql file and execute the SQL commands using the Laravel database layer without having to write them out manually.
How can I do that?
As seen in this Medium article, you can use this single one-liner:
DB::unprepared(file_get_contents('cleanup.sql'));
The only issue is that the SQL commands are not chunked, so large SQL files may cause a slowdown. Also, file_get_contents has a read limit.
For large SQL files it is recommended to read the file manually and split it into separate SQL commands.
Also, if a single command fails to execute, execution does not proceed to the next one, as it would if you ran the file via the mysql or psql commands in a shell environment.
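If a single transaction is really the goal, one option (a sketch, assuming MySQL/InnoDB and that the whole file is sent as-is through DB::unprepared) is to put the transaction boundaries into cleanup.sql itself:

-- cleanup.sql wrapped in an explicit transaction
START TRANSACTION;
DELETE FROM ohimesama WHERE namae NOT IN ('Gardinelia', 'Jasmine');
DELETE FROM oujisama WHERE namae NOT IN ('Gaouron', 'Sasuke', 'Aladin');
DELETE FROM ohimesamagasuki
  WHERE ohimesama_id NOT IN (SELECT id FROM ohimesama)
    AND oujisama_id NOT IN (SELECT id FROM oujisama);
COMMIT;

A failing statement still stops the script; the explicit transaction only keeps the earlier deletes from being committed on their own.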

Read data from a flat file (Datastore) in an ODI procedure

I'm trying to read a file from a PL/SQL procedure but I am getting an ORA-00942 table or view does not exist error.
Caused by: Error : 942, Position : 21, Sql =
SELECT UBIC_ID FROM LIST_UBICS
, Error Msg = ORA-00942: table or view does not exist
I have a file with one id per line. This file is called list_ubics.csv. I have a File model and a datastore pointing to the file, called LIST_UBICS, with a UBIC_ID field.
I created a Task in a new Procedure with this SQL:
SELECT UBIC_ID From LIST_UBICS
LIST_UBICS is my datastore; I don't have any table with that name.
I want to read this file and do some processing for each line, but I don't see anything in the docs about reading a text file that works for me.
How can I read this file?
Thanks in advance for any help.
An ODI procedure written in PL/SQL (Oracle technology) is pushed down to the database. The executing database doesn't know about the File datastore and cannot execute SQL statements against it.
If the goal is to load the file with ODI, it can be done using an interface (11g) or a mapping (12c) with the LKM File to SQL. That will copy the content of the file into a table in the database, and any SQL statement can then be executed against it.
Alternatively, it is possible to create a directory in the database, land the file there and create an external table on top of it, as sketched below. Queries can be run against external tables, but not DML operations. More information here: https://oracle-base.com/articles/9i/external-tables-9i
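A minimal sketch of that external table approach (directory path, table and column names are assumptions based on the question) could be:

CREATE OR REPLACE DIRECTORY ubics_dir AS '/path/to/files';

CREATE TABLE list_ubics_ext (
  ubic_id VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ubics_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('list_ubics.csv')
);

-- the procedure can then query the file through the external table:
SELECT ubic_id FROM list_ubics_ext;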
I just found out last week that there is one more solution, and probably the best one!
The solution is written here-part-1 and here-part-2, and if you follow the steps exactly, it will work (I implemented it).
Anyway, I will summarize the main idea and steps.
We can use a variable inside a package. There is a piece of code (see the view code at the end) that reads a column from the given file. A loop in the package lets us read every row by replacing the value of the CRFILE_FIRST_ROW variable in the code below with a sequential number starting at 1.
So, everything is as easy as that. Besides CRFILE_FIRST_ROW, there are more variables that can be changed, such as CRFILE_FORMAT=D (delimited format), CRFILE_SEP_FIELD=0x0009 (field separator in hexadecimal), and so on.
Also, as you can see in the original posts (links above), you can generate the view code; you don't need to copy and paste it from below.
View code:
select TES.C1 C1
from location_of_file/objects_to_import.txt TES
/*$$SNPS_START_KEYSNP$CRDWG_TABLESNP$CRTABLE_NAME=TESTSNP$CRLOAD_FILE=location_of_file/objects_to_import.txtSNP$CRFILE_FORMAT=DSNP$CRFILE_SEP_FIELD=0x0009SNP$CRFILE_SEP_LINE=0x000ASNP$CRFILE_FIRST_ROW=#UTILS.IMPORT_OBJ_READ_INCRSNP$CRFILE_ENC_FIELD=SNP$CRFILE_DEC_SEP=SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=C1SNP$CRTYPE_NAME=STRINGSNP$CRORDER=1SNP$CRLENGTH=50SNP$CRPRECISION=50SNP$CRACTION_ON_ERROR=NULLSNP$CR$$SNPS_END_KEY*/

How can I convert an Oracle impdp sqlfile to a PostgreSQL script to import data into PostgreSQL? [duplicate]

I am trying to turn the dump file into a .sql file using the SQLFILE parameter.
I used the command "impdp username/password DIRECTORY=dir DUMPFILE=sample.dmp SQLFILE=sample.sql LOGFILE=sample.log"
I expected this to produce a SQL file with the table contents inside, but it created a SQL file with only DDL statements.
For the export I used "expdp username/password DIRECTORY=dir DUMPFILE=sample.dmp LOGFILE=sample.log FULL=y"
The dump file size is 130 GB, so I believe the dump has been exported correctly.
Am I missing something in the import command? Is there any other parameter I should use to get the contents?
Thanks in advance!
Your expectation was wrong, I'm afraid. You're asking it to do something it isn't designed for.
The documentation for SQLFILE says:
Purpose
Specifies a file into which all of the SQL DDL that Import would have executed, based on other parameters, is written.
So it will only ever contain DDL.
There isn't a mechanism to turn a .dmp file into a .sql containing insert statements. If you need to put the data into a table, just use the native import.
Individual insert statements - if you could generate them, which SQL Developer will do as a separate task unrelated to your Data Pump export - would be slower, would have problems with LOBs, and would have to be run in a careful order unless integrity constraints were disabled. Data Pump takes care of all of that for you.
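For reference, a data-only native import of the same dump might look like this (a sketch reusing the names from the question; CONTENT=DATA_ONLY skips the DDL and TABLE_EXISTS_ACTION=APPEND loads into tables that already exist):

impdp username/password DIRECTORY=dir DUMPFILE=sample.dmp LOGFILE=sample.log CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=APPEND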

external tables: how to make sure i don't load same file/data

I want to use an external table to load a CSV file as it's very convenient, but how do I make sure I don't load the same file twice in a row? I can't validate the loaded data because it can be the same information as before; I need to find a way to make sure the user doesn't load the same file as, say, two hours ago.
I thought about uploading the file with a different name each time and issuing an ALTER TABLE command to change the name of the file in the definition of the external table, but that sounds kind of risky.
I also thought about marking each row in the file with a sequence to help differentiate files, but I doubt the client would accept it, as they would have to do this manually (the file is exported from somewhere else).
Is there any better way to make sure I don't load the same file into the external table, other than changing the file's name and executing an ALTER on the table?
Thank you
When you bring the data from the external table into your database, you can use the MERGE command instead of INSERT; that way you don't have to worry about duplicate data.
See this blog post about the Oracle MERGE command:
What's more, we can wrap up the whole transformation process into this one Oracle MERGE command, referencing the external table and the table function in the one command as the source for the MERGED Oracle data.
alter session enable parallel dml;
merge /*+ parallel(contract_dim,10) append */
into contract_dim d
using TABLE(trx.go(
CURSOR(select /*+ parallel(contracts_file,10) full (contracts_file) */ *
from contracts_file ))) f
on d.contract_id = f.contract_id
when matched then
update set desc = f.desc,
init_val_loc_curr = f.init_val_loc_curr,
init_val_adj_amt = f.init_val_adj_amt
when not matched then
insert values ( f.contract_id,
f.desc,
f.init_val_loc_curr,
f.init_val_adj_amt);
So there we have it - our complex ETL function all contained within a single Oracle MERGE statement. No separate SQL*Loader phase, no staging tables, and all piped through and loaded in parallel.
I can only think of a solution somewhat like this (the control-table check is sketched after this list):
Have a timestamp encoded in the data file name (like YYYYMMDDHHMISS-file.csv, where YYYYMMDDHHMISS is the timestamp).
Create a table with a timestamp field (as above).
Create a shell script that:
extracts the timestamp from the data file name;
calls a SQL script with the timestamp as the parameter, which returns 0 if that timestamp does not exist and <>0 if it already exists, and in that case exits the script with the error: File: YYYYMMDDHHMISS-file.csv already loaded;
copies the YYYYMMDDHHMISS-file.csv to input-file.csv;
runs the SQL loader script that loads the input-file.csv file;
on success: runs a second SQL script with the timestamp as parameter that inserts a record in the database to indicate that the file is loaded, and moves the original file to a backup folder;
on failure: reports the failure of the load script.
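The control table and the timestamp check could be sketched like this (table and column names are made up for illustration; &1 is the timestamp passed in by the shell script):

-- one row per successfully loaded file
CREATE TABLE loaded_files (
  load_ts   VARCHAR2(14) PRIMARY KEY,  -- YYYYMMDDHHMISS taken from the file name
  loaded_at DATE DEFAULT SYSDATE
);

-- check script: COUNT(*) is 0 when this file has not been loaded yet
SELECT COUNT(*) FROM loaded_files WHERE load_ts = '&1';

-- second script, run after a successful load
INSERT INTO loaded_files (load_ts) VALUES ('&1');
COMMIT;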

How to copy data from one database/table to another database/table in oracle using toad

I am trying to copy table data from the dev box DB to the UAT DB, which are two different databases. I am trying this in Toad. All the connection details are correct but it's not working and throws the following error.
[Error] Execution (12: 1): ORA-00900: invalid SQL statement
This is what I am trying:
copy from abc/cde#//abc.abc.com:1521/devbox to abc/cde#//abc.abc.com/uatbox
INSERT TOOL_SERVICE_MAPPING (*)
USING (SELECT * FROM TOOL_SERVICE_MAPPING)
If your table doesn't have a huge number of rows you can use Toad's Export function: it creates an insert statement for each row. You can then run these statements in the destination DB to re-create your table's data.
Here are the steps:
A. Create a copy of the table in destination DB
in the source DB, in a Schema Browser window, click on the table you want to copy and select the "Script" tab in the right part of the window: you will find the script to re-create your table; copy this script
paste the script into a new SQL editor window in the destination DB and run it; this should create the new table
B. Copy data in new table
in a Schema Browser window, right-click on the table name in the source DB
select "Export Data" from the context menu
write the "where" clause of your export query (leave it blank if you want to copy the entire table)
select destination: clipboard
click "OK" (the insert statements are now stored in your clipboard; an example of their shape follows these steps)
paste the insert statements into a new SQL editor window in the destination DB
run the statements as a script (shortcut F5)
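The statements placed on the clipboard are plain INSERTs, roughly of this shape (the column names here are only an illustration; Toad generates them from the actual table definition):

INSERT INTO TOOL_SERVICE_MAPPING (TOOL_ID, SERVICE_ID) VALUES (1, 100);
INSERT INTO TOOL_SERVICE_MAPPING (TOOL_ID, SERVICE_ID) VALUES (2, 101);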
copy is a SQL*Plus command, not a SQL statement. I would be surprised if Toad had implemented that particular SQL*Plus command (it does implement many of the simpler commands). If you want to use the copy command, you would need to use SQL*Plus, not Toad.
If you want to use Toad, you would need to use a SQL statement to copy the data. You could create a database link in the destination database that points to the source database and then
INSERT INTO tool_service_mapping
SELECT *
FROM tool_service_mapping@<<db link to source database>>
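Creating that link and copying the data could look roughly like this (the link name is made up; the credentials and connect string are the placeholders from the question):

CREATE DATABASE LINK devbox_link
  CONNECT TO abc IDENTIFIED BY cde
  USING '//abc.abc.com:1521/devbox';

INSERT INTO tool_service_mapping
SELECT * FROM tool_service_mapping@devbox_link;

COMMIT;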
The easiest and most error-free way I have experienced so far is: Database -> Compare -> Schemas.
It's not as complicated as it looks (lots of checkboxes): you tick boxes for the objects you need to be created in an empty database, and at the end of the comparison you end up with a SQL script including all the objects (triggers, views, sequences, packages) that you selected.
I clearly see all tables, triggers, data, etc. in the generated SQL script and can even untick the ones I don't wish to create (if any)... Before executing the script, Toad asks you to confirm against which database you are running it - this has saved me a few times... As awkward as it looks, it works perfectly.
I have around 200 tables; I don't know if this is suitable for huge databases.
