What is the command to get the list of SQLite databases created so far in the terminal?
I googled it, but all I could find are the .schema and .tables commands, which do not work after the terminal is reopened.
.databases should do what you want.
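For example (a sketch; mydb.db and other.db are hypothetical file names, and the exact .databases output format varies between sqlite3 versions):
$ sqlite3 mydb.db
sqlite> ATTACH DATABASE 'other.db' AS other;
sqlite> .databases
main: /home/user/mydb.db
other: /home/user/other.db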
I guess there is no way to know how many databases you have in some directory...
You can use .databases in an opened database to list all databases attached to your connection, plus the main one.
So you get just those 2 DBs, not everything on disk.
Regards
I have a requirement to set the read-only setting of 300+ IG columns in my application to null. I am able to query the columns from the APEX metadata views. I am wondering whether it is OK to update the underlying APEX tables directly?
Or is it OK to update the application export file and import it back again?
Will it have any negative implications, or be considered malicious?
Or is it not recommended at all?
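For context, the querying part mentioned above might look like this (a sketch, assuming the apex_appl_page_ig_columns dictionary view; verify the exact view and column names in your APEX version, and :app_id is a placeholder bind):
select application_id, page_id, name
from apex_appl_page_ig_columns
where application_id = :app_id;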
Personally, I wouldn't touch Oracle metadata; that would be the last option, if nothing else works and I'm very desperate.
I've edited the export file quite a few times (in older APEX versions), as the export used to create an invalid file. For example, a closing single quote was moved onto a new line and the import complained about it, e.g. the second line here - see that lonely single quote?
p_button_redirect_url => 'javascript...tree.collapse_all(''tree124124124124');
',
p_button_execute_validations => 'Y', ...
So, there was nothing to do but edit the file and move the quote to the end of the first line.
As the export is a pure SQL text file, there's no problem editing it. Just make sure to save the original so that you can revert to it if necessary.
You can do it, and I do sometimes. However, it is very much "at your own risk". If you get it wrong, you could update data that belongs to the APEX Builder itself and stop it working. Good luck contacting Oracle support when you do that!
Have you considered doing the operation at the database level?
Is it possible to script the schema of an entire database (SQL Server or Postgres) using DataGrip?
I know I can get the DDL for each table and view, and the source for each stored procedure / function on its own.
Can I get one script for all objects in the database at once?
Alternatively, is there a way to search through the code of all routines at once, say I need to find which ones use a #table temp table?
Since 2018.2 there is a feature called SQL Generator. It will generate the whole DDL for the database/schema, with several available options, and the result is a single script with all object definitions.
But if you just want to understand where a table is used, use the dedicated functionality called Find Usages (Alt+F7, or the context menu on a table name).
I was looking for this today and just found it. If you right-click the schema you want to copy and choose "Copy DDL", the create script is copied to the clipboard.
To answer the second part of the question: a quick and easy way to search for #table in all of your procedures is the following query:
SELECT *
FROM information_schema.routines
WHERE routine_definition LIKE '%#table%'
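Note that in SQL Server, routine_definition holds only the first 4000 characters of the source, so long procedures can slip through. A sketch of an alternative that searches the full source (sys.sql_modules is SQL Server only; on Postgres the information_schema query above already works):
SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
       OBJECT_NAME(object_id) AS routine_name
FROM sys.sql_modules
WHERE definition LIKE '%#table%';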
For now only dumping tables works. In the 2016.3 EAP, which will be available at the end of August, there will be integration with mysqldump and pg_dump.
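For reference, the native tools that this integration wraps can already produce a schema-only script on their own (a sketch; mydb is a hypothetical database name):
$ pg_dump --schema-only mydb > schema.sql
$ mysqldump --no-data mydb > schema.sql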
My Hive query has been throwing an error:
syntax error near unexpected token `('
I am not sure where the error occurs in the query below.
Can you help me out?
select
    A.dataA, B.dataB, count(A.nid), count(B.nid)
from
    (select nid, sum(dataA_count) as dataA
     from table_view
     group by nid) A
LEFT JOIN
    (select nid, sum(dataB_count) as dataB
     from table_others
     group by nid) B
    ON A.nid = B.nid
group by A.dataA, B.dataB;
I think you did not close the ) at the end.
Thanks
Sometimes people forget to start the metastore service, and also to enter the Hive shell, and start passing commands the way they would to Sqoop; when I was a newbie I faced these things too.
So, to overcome this issue:
go to the Hive directory and run bin/hive --service metastore, which will start the Hive metastore server for you; and later
open another terminal or CLI and run bin/hive, which will let you enter the Hive shell.
When you forget to do these steps, you get silly issues like the one we are discussing in this thread.
Hope it will help others, thanks.
I went through many posts, but I didn't realize that my Beeline session had logged off and I was typing the query into a normal terminal.
I faced an issue exactly like this:
First, you have six opening parentheses and six closing ones, so unbalanced parentheses are not your issue.
Last but not least, you are getting this error because your command is being interpreted word by word by the shell. A statement such as a SQL query is only understood by databases, and if you use the dialect of a specific database, only that particular database can parse it.
Your "before '(' ..." error means you are using something before the ( that isn't known to the terminal or the environment you are running it in.
All you have to do to fix it is:
1- Wrap the query in single or double quotes (see the sketch after this list).
2- Use a WHERE clause even if you don't need one (for example, Apache Sqoop requires it no matter what). Check the documentation for the exact way to do so; usually you can use something like WHERE 1=1 when you don't need it (for Sqoop it was WHERE $CONDITIONS).
3- Make sure your command runs in a designated database first, before asking any third-party app to run it.
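A minimal sketch of point 1, with the query from this question quoted so the shell hands it to Hive instead of parsing the parentheses itself:
$ hive -e "
select A.dataA, B.dataB, count(A.nid), count(B.nid)
from (select nid, sum(dataA_count) as dataA from table_view group by nid) A
LEFT JOIN (select nid, sum(dataB_count) as dataB from table_others group by nid) B
ON A.nid = B.nid
group by A.dataA, B.dataB;"
Or save the statement to a file (say query.hql, a hypothetical name) and run hive -f query.hql.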
If this answer was helpful, you can give it the "chosen answer" mark for better reach in the community.
I use Oracle 11 on my local server, and want to export my data using the Oracle exp tool:
http://docs.oracle.com/cd/B28359_01/server.111/b28319/exp_imp.htm#i1023725
I don't have any views, triggers or stored procedures, just ordinary tables and some image blobs in one table. It should be really simple to export this.
But I really didn't understand how to do it.
First of all, it says I should run catexp.sql or catalog.sql before I run the exp tool. OK, but where the heck are these scripts? I searched my computer, but no such thing exists.
Second, it is still not clear what needs to be done, or which .exe exactly I need to run. And then it says:
exp PARAMETER=value
What the heck is PARAMETER, and what the heck is value? Is there any better documentation, or can anyone explain in simple terms the steps I need to take?
You only need to run catexp/catalog if they haven't been run already for some reason; they would normally exist and be run as part of the database creation, so you probably don't need to worry about those.
PARAMETER is a placeholder for any of the supported parameters, as shown under 'invoking export and import'.
You need to specify an export (dump) file; the default is to create a file called EXPDAT.DMP in the current directory. If you don't have permission to write to that directory, you need to specify the full path to where you want the file to be created, including its name.
There are several export examples in the documentation, including table mode and user mode. When you run interactively and don't specify OWNER or TABLES on the command line or in a parameter file, you're prompted to choose the mode, which is the 'users or tables' prompt you saw. You might want something like this example:
exp blake/paper FILE=blake.dmp TABLES=(dept, manager) ROWS=y COMPRESS=y
... but with your own user/password, file name (and path), and table names.
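If the command line gets unwieldy, the same parameters can go into a parameter file (a sketch; params.txt is a hypothetical name):
FILE=blake.dmp
TABLES=(dept, manager)
ROWS=y
COMPRESS=y
exp blake/paper PARFILE=params.txt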
I find it hard to generate the DB scripts from TOAD. I get errors when executing the scripts, things like a looping chain of synonyms, or certain statements not being able to execute, etc.
Is there any seamless way to connect to a remote Oracle schema and just duplicate it to my local environment?
And also do synchronization along the way?
Syncing an entire schema, data and all, is fairly easily done with exp and imp:
$ exp username/password@source-sid CONSISTENT=Y DIRECT=Y OWNER=schema FILE=schema.exp
$ ⋮ # some command(s) to nuke objects, see below
$ imp username/password@dest-sid FROMUSER=schema FILE=schema.exp
You can import into a different schema if you want by using TOUSER in the imp command.
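For example (a sketch; newschema is a hypothetical target user):
$ imp username/password@dest-sid FROMUSER=schema TOUSER=newschema FILE=schema.exp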
You'll need to get rid of all the objects if they already exist before running imp. You can either write a quick script to drop them all (look at the user_objects view), or just drop the user with cascade and re-create the user.
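A minimal sketch of such a drop script, run while connected as the target schema (review the generated statements before executing them, and repeat until user_objects is empty, as dependencies may need more than one pass):
select 'drop ' || object_type || ' ' || object_name
       || case when object_type = 'TABLE' then ' cascade constraints' end || ';'
from user_objects
where object_type in ('TABLE', 'VIEW', 'SEQUENCE', 'PROCEDURE', 'FUNCTION', 'PACKAGE', 'SYNONYM');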
There is probably a better way to do this, but this is quick to implement and it works.
If you are doing a one-off copy, exp/imp (or expdp/impdp in newer versions) is best.
If you are progressing changes from dev to test to prod, then you should be using formal source control, with SQL or SQL*Plus scripts.
Schema Compare for Oracle should be able to achieve this, as it's a tool dedicated specifically to this task.
If you want it to happen in the background, there's a command-line interface that lets you achieve this.