Because there is no dedicated testing database, I use a manually generated SQL script to clean a clone of my production database for testing. Assuming that my legacy database schema is the following:
ohimesama:
  id: PK
  namae: varchar(200)
oujisama:
  id: PK
  namae: varchar(200)
ohimesamagasuki:
  id: PK
  ohimesama_id: FK ohimesama
  oujisama_id: FK oujisama
And the test database cleanup SQL script (cleanup.sql) is:
DELETE FROM ohimesama WHERE namae NOT IN ('Gardinelia', 'Jasmine');
DELETE FROM oujisama WHERE namae NOT IN ('Gaouron', 'Sasuke', 'Aladin');
DELETE FROM ohimesamagasuki WHERE ohimesama_id NOT IN (SELECT id FROM ohimesama)
  AND oujisama_id NOT IN (SELECT id FROM oujisama);
Because I want to be able to execute all these commands in one transaction, I want to read the cleanup.sql file and execute its SQL commands through the Laravel database layer, without writing them out manually.
How can I do that?
As seen in this Medium article, you can use this single one-liner:
DB::unprepared(file_get_contents('cleanup.sql'));
The only issue is that the SQL commands are not chunked, so large SQL files may cause a slowdown. file_get_contents also reads the whole file into memory, so it is limited by PHP's memory limit.
For large SQL files it is therefore recommended to read the file manually and split it into separate SQL commands.
Also, if a single command fails to execute, it does not proceed to the next one, as it would if you ran the file via the mysql or psql command-line clients.
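If the single-transaction behaviour is the main concern, one option that keeps the one-liner above is to open and commit the transaction inside cleanup.sql itself. This is only a sketch, assuming a MySQL/InnoDB connection whose driver accepts several statements in one call (which DB::unprepared relies on anyway), and reusing the statements from the question:
-- cleanup.sql: run the whole cleanup atomically
START TRANSACTION;
DELETE FROM ohimesama WHERE namae NOT IN ('Gardinelia', 'Jasmine');
DELETE FROM oujisama WHERE namae NOT IN ('Gaouron', 'Sasuke', 'Aladin');
DELETE FROM ohimesamagasuki
 WHERE ohimesama_id NOT IN (SELECT id FROM ohimesama)
   AND oujisama_id NOT IN (SELECT id FROM oujisama);
COMMIT;
Keep in mind this only works for plain DML like these DELETEs: any DDL statement in the file would trigger an implicit commit in MySQL. If a statement fails and the COMMIT is never reached, the open transaction is rolled back when the connection closes, so the clone is not left half-cleaned.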
Related
I want to create a backup of all SQL scripts (views) that are saved in Snowflake (not the data). How can I do it? Obviously manual copy-and-pasting is not an answer.
Expected result: I have all views (SQL scripts) that are in the Snowflake database on my local machine, one file per view.
Expected result, perfect version: I have all views (SQL scripts) that are in Snowflake on my local machine, where folders correspond to schemas in Snowflake and files correspond to views in Snowflake (files are also placed in the correct folders).
SHOW VIEWS includes the DDL in the text column. To get all views in the database:
show views in database my_database;
select "text" from table(result_scan(-1));
You can invoke it from the CLI with SNOWSQL.
You can run:
SELECT GET_DDL('schema', 'my_schema_name');
This will get you the DDL of all objects in the schema, which you can then save to a file in a folder.
You can just download the full database DDL.
Option 1
Run select get_ddl('database', 'databasename') and copy and save the output to your machine.
Option 2
Write a Python script to get a list of schemas, views, tables, stored procedures etc. and save each one to its corresponding folder on your local machine. Something like the following; you just have to extend it to get your "perfect version" output. Just install the Snowflake Python connector to run the code.
import snowflake.connector

con = snowflake.connector.connect(
    user='YourUsername',
    password='YourPassword',
    account='your_snowflake_account',
    database='databasename',
    warehouse='datawarehousename',
    role='dbrole'
)
cur = con.cursor()
try:
    cur.execute("SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE FROM information_schema.tables")
    for (TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE) in cur:
        print('{0}, {1}, {2}'.format(TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE))
        # Have another loop here to get the DDL for each object and save it to a
        # file/folder structure, something like:
        # cur2.execute("SELECT GET_DDL('<object type from previous query>', '<object name>')")
finally:
    cur.close()
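The inner query hinted at in the comments above is just a GET_DDL call per object; for a view, for example, it would look something like this (the schema and view names here are purely illustrative):
select get_ddl('view', 'my_schema.my_view_name');
You can then write each result to a file named after the view, inside a folder named after its schema, to get the folder-per-schema, file-per-view layout described in the question.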
Great answers above, but you could take it a step further and manage all your DDL through a version-controlled tool like dbt.
That way, you would have a mechanism not only to store your DDL in text files, but also to run that DDL (instead of relying on error-prone manual processes).
Otherwise, how would you know whether or not your text files are up-to-date?
I have a large table (3.5MM records) that I need to copy from one schema/database to another schema/database. I tried TOAD's copy data from table feature, but got errors and it never fully copied, in part because the connection keeps getting dropped. I'm trying the object copy feature of SQLDeveloper, and after 11 minutes, it's still copying. I tried the SQLPlus COPY statement but got a syntax error (help needed). I'm still open to extracting the data as INSERT statements that I can just run directly.
1) SQL*Plus COPY, as follows:
copy from report_new/mypassword@(DESCRIPTION= (ADDRESS=(PROTOCOL=TCP)(HOST=10.15.15.20)(PORT=1541))(CONNECT_DATA=(SERVICE_NAME=STAGE))) to report/mypassword@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.18.22.25)(PORT=1550))(CONNECT_DATA=(SERVICE_NAME=DEV))) CREATE USER_USAGE_COUNT USING SELECT * FROM _USER_USAGE_COUNT
The above gives me
SQL> start copy_user_count_table.sql
SP2-0758: FROM clause missing username
2) I tried TOAD
The TOAD "Copy data to another schema" fails due to the connection getting
dropped. I set the commit threshold first to 5000 then to 500.
3) I'm trying SQL Developer's copy function, but I think it's not going to finish anytime soon, and it gives me no real progress indication. For all I know, it could be hung and just doesn't want to tell me.
4) I thought about creating a database link, but I don't have the authority to create one, and it's a corporate environment in which the DBAs don't respond in under 3 days.
Todo: Should I write my own Java code to just do this one record at a time? I shouldn't have to, but somehow it's easier to send a man to the moon than to copy data from one schema to another.
You can use the copy command of SQLcl, which is part of newer SQL Developer releases. SQLcl is found in the sqldeveloper\bin directory and is named sql.exe (Windows) or sql (Unix/Linux/Mac). The steps to follow are:
Connect to the destination database with SQLcl:
sql username/password@destinationdb
Use the copy command:
copy from username@sourcedatabase create newtablename using select * from sourcetable;
I created a temporary table in Oracle SQL Developer, but I forgot to save the script, and now I want to reuse the query but don't remember the code I used. Is there a way to get the query that was used to create the temp table?
You can use dbms_metadata.get_ddl()
select dbms_metadata.get_ddl('TABLE', 'YOUR_TABLE_NAME_HERE')
from dual;
The result is a CLOB with the complete DDL. You might need to adjust the display in SQL Developer to make the content of that value fully visible (I don't use SQL Developer, so I don't know if that is necessary and, if so, what you would need to do).
Edit:
It seems SQL Developer can't display the result of this query properly unless you use the "Run Script" option, and with that you need to run SET LONG 60000 (or some other big number) before the query to see the complete source code:
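For example, the Run Script worksheet could contain something like this minimal sketch, reusing the query from above (replace the table name with your own):
-- allow the full CLOB returned by get_ddl to be displayed instead of truncated
set long 60000
select dbms_metadata.get_ddl('TABLE', 'YOUR_TABLE_NAME_HERE') from dual;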
I am using Oracle client 11.2.0.
DLL version 4.112.3.0.
We have a page in our application where people can enter a SQL statement and retrieve results; basically it does an OracleCommand.ExecuteReader.
Recently one of my team members entered an UPDATE statement as a test, and it actually performed an update on a record!
Has anyone encountered this?
Regards
Sid.
It is normal (albeit a bit unsettling) behavior. ExecuteReader is expected to execute the SQL command provided as CommandText and build a DbDataReader that you use to loop over the results.
Whether the command returns any rows to read is not something the reader should prevent in any case, so it is not expected to check whether your command is really a SELECT statement.
Think, for example, of passing a stored procedure name, or of having multiple SQL batches to execute (an INSERT followed by a SELECT).
I think the biggest problem here is that you allow an arbitrary SQL command typed by your users to reach the database engine. That is a very big security hole. You should, at least, perform some analysis on the query text before submitting it to the database engine.
I agree with Steve. Your reader will execute any command, and might get a bit confused if it's not a select and doesn't return a result set.
To prevent people from modifying anything, create a new user and grant select only (no update, no delete, no insert) on your tables to that user (grant select on tablename to seconduser). Then, log in as seconduser and create synonyms for your tables (create synonym tablename for realowner.tablename). Have your application connect to the DB as seconduser. This should prevent people from "hacking" your site. If you want to be on the safe side, grant no permissions except create session to the second user, to prevent him from creating tables, dropping your views, and similar stuff (I'd guess your ExecuteReader won't allow DDL, but test it to make sure).
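A rough sketch of that setup (the user, password, and table names are placeholders for illustration; note that CREATE SYNONYM is also needed if seconduser is to create its own synonyms):
-- as a DBA: a second account that can connect but owns no data
create user seconduser identified by some_password;
grant create session to seconduser;
grant create synonym to seconduser;  -- needed so seconduser can create the synonyms below
-- as the schema owner (realowner), once per table
grant select on tablename to seconduser;
-- as seconduser, once per table
create synonym tablename for realowner.tablename;
With only CREATE SESSION and CREATE SYNONYM, the account can log in and resolve the synonyms, but it cannot create tables or modify your data.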
I am trying to copy table data from a dev box DB to a UAT DB, which are two different databases. I am trying this in Toad. All the connection details are correct, but it's not working and throws the following error:
[Error] Execution (12: 1): ORA-00900: invalid SQL statement
This is what I am trying:
copy from abc/cde@//abc.abc.com:1521/devbox to abc/cde@//abc.abc.com/uatbox
INSERT TOOL_SERVICE_MAPPING (*)
USING (SELECT * FROM TOOL_SERVICE_MAPPING)
If your table doesn't have a huge number of rows, you can use Toad's Export function: it creates an insert statement for each row. You can then run these statements in the destination DB to re-create your table's data.
Here are the steps:
A. Create a copy of the table in the destination DB
In the source DB, in a Schema Browser window, click on the table you want to copy and select the "Script" tab in the right part of the window: you will find the script to re-create your table; copy this script.
Paste the script into a new SQL Editor window in the destination DB and run it. This should create the new table.
B. Copy the data into the new table
In a Schema Browser window, right-click on the table name in the source DB.
Select "Export Data" from the context menu.
Write the "where" clause of your export query (leave it blank if you want to copy the entire table).
Select destination: clipboard.
Click "OK" (the insert statements are now stored in your clipboard).
Paste the insert statements into a new SQL Editor window in the destination DB.
Run the statements as a script (shortcut F5).
copy is a SQL*Plus command, not a SQL statement. I would be surprised if Toad had implemented that particular SQL*Plus command (it does implement many of the simpler commands). If you want to use the copy command, you would need to use SQL*Plus, not Toad.
If you want to use Toad, you would need to use a SQL statement to copy the data. You could create a database link in the destination database that points to the source database and then
INSERT INTO tool_service_mapping
SELECT *
FROM tool_service_mapping@<<db link to source database>>
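For completeness, the database link part could look roughly like this (a sketch only: the link name, credentials, and TNS alias are placeholders, and you need the CREATE DATABASE LINK privilege in the destination database):
-- run in the destination database
create database link source_db_link
  connect to source_username identified by source_password
  using 'SOURCE_TNS_ALIAS';
insert into tool_service_mapping
select * from tool_service_mapping@source_db_link;
commit;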
The easiest and most error-free way I have experienced so far is: Database -> Compare -> Schemas.
It's not as complicated as it looks (lots of checkboxes): you tick the boxes for the objects you need created in the empty database, and at the end of the comparison you end up with a SQL script including all the objects (triggers, views, sequences, packages) that you selected.
I can clearly see all tables, triggers, data, etc. in the generated SQL script and can even untick the ones I don't wish to create (if any)... Before executing the script, TOAD asks you to confirm which database you are running it against - this has saved me a few times... As awkward as it looks, it works perfectly.
I have around 200 tables; I don't know if this is suitable for huge databases.