DataGrip: How to get database and table size for a MySQL database?

After using phpMyAdmin for over a decade, I am now looking into DataGrip. One thing that I could not find is statistics on database and table size: total size, row count (an overview), index size in bytes, and so on.
Where can I find data statistics in DataGrip?

It seems like there is no such feature yet.
For those who came here from Google, you can try using an SQL query.
MySQL:
SELECT table_schema AS "Database Name",
       sum(data_length + index_length) / (1024 * 1024) AS "Database Size in MB"
FROM information_schema.tables
GROUP BY table_schema;

SELECT table_name,
       round((data_length + index_length) / (1024 * 1024), 2) AS "size in megs"
FROM information_schema.tables
WHERE table_schema = 'named_db';
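Since the question also asked for row counts and index size in bytes, the same information_schema view exposes those as well; a minimal per-table sketch (note that table_rows is only an estimate for InnoDB tables):

SELECT table_name,
       table_rows   AS "approx. rows",
       data_length  AS "data bytes",
       index_length AS "index bytes"
FROM information_schema.tables
WHERE table_schema = 'named_db'
ORDER BY data_length + index_length DESC;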
PostgreSQL:
SELECT pg_size_pretty( pg_database_size('dbname') );
SELECT pg_size_pretty( pg_total_relation_size('tablename') );
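To list every table in the current PostgreSQL database at once, a sketch against the standard system catalogs:

SELECT c.relname AS table_name,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'        -- ordinary tables only
  AND n.nspname = 'public'
ORDER BY pg_total_relation_size(c.oid) DESC;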
Sources:
https://jonathanstreet.com/blog/disk-space-database-table-mysql-postgresql/
How to get size of mysql database?

For the table, it is not possible right now; please follow and upvote: https://youtrack.jetbrains.com/issue/DBE-4281
For the database, there is not even a feature request in DataGrip's tracker yet. Please add a new one.

Related

QlikView Data Load from Oracle Script

I have a small problem with loading data from an Oracle database into QlikView 11 using the following script:
SET ThousandSep='.';
SET DecimalSep=',';
SET MoneyThousandSep='.';
SET MoneyDecimalSep=',';
SET MoneyFormat='#.##0,00 €;-#.##0,00 €';
SET TimeFormat='hh:mm:ss';
SET DateFormat='DD.MM.YYYY';
SET TimestampFormat='DD.MM.YYYY hh:mm:ss[.fff]';
SET MonthNames='Jan;Feb;Mrz;Apr;Mai;Jun;Jul;Aug;Sep;Okt;Nov;Dez';
SET DayNames='Mo;Di;Mi;Do;Fr;Sa;So';
ODBC CONNECT TO [Oracle X;DBQ=db1.dc.man.lan] (XUserId is X, XPassword is Y);
SQL SELECT *
FROM UC140017."TABLE_1";
SQL SELECT *
FROM UC140017."TABLE_2";
SQL SELECT *
FROM UC140017."TABLE_3";
SQL SELECT *
FROM UC140017."TABLE_4";
SQL SELECT *
FROM UC140017."TABLE_5";
This results in the following output:
Connecting to Oracle X;DBQ=db1.dc.man.lan
Connected
TABLE_1 2.421 lines fetched
TABLE_2 1 lines fetched
TABLE_2 << TABLE_3 2 lines fetched
TABLE_2 << TABLE_4 22 lines fetched
TABLE_2 << TABLE_5 22 lines fetched
There is no reason why TABLE_3, TABLE_4 and TABLE_5 should be joined to TABLE_2. This relationship doesn't exist in the database, and I don't see an option to change this in QlikView. Does anyone know where this is coming from and have a suggestion on how to fix it? Thanks!
Best,
Christoph
If Table_2, Table_3, Table_4 and Table_5 have the same number of columns with the same names, QlikView will automatically concatenate them into one table. To avoid this you can use the NoConcatenate prefix:
SQL SELECT *
FROM UC140017."TABLE_1";
NoConcatenate
SQL SELECT *
FROM UC140017."TABLE_2";
NoConcatenate
SQL SELECT *
FROM UC140017."TABLE_3";
NoConcatenate
SQL SELECT *
FROM UC140017."TABLE_4";
NoConcatenate
SQL SELECT *
FROM UC140017."TABLE_5";
This will force QV to treat the tables as separate tables. Be aware that, if the field names really do match, after the reload you will end up with a massive synthetic key; the usual way out is to rename the shared fields (or use QUALIFY) so they no longer collide.

Oracle SELECT * FROM LARGE_TABLE - takes minutes to respond

So I have a simple table with 5 or so columns, one of which is a clob containing some JSON data.
I am running
1. SELECT * FROM BIG_TABLE
2. SELECT * FROM BIG_TABLE WHERE ROWNUM < 2
3. SELECT * FROM BIG_TABLE WHERE ROWNUM = 1
4. SELECT * FROM BIG_TABLE WHERE ID=x
I expect that any fractionally intelligent relational database would return the data immediately. We are not imposing order by/group by clauses, so why not return the data as and when you find it?
Of the four SELECT statements above, only 4 returned in a sub-second manner. This is unexpected for 1-3, which take between 1 and 10 minutes before the query shows any response in SQL Developer. SQL Developer has the standard SQL array fetch size of 50 (a JDBC fetch size of 50 rows), so at a minimum it is taking 1-10 minutes to return 50 rows from a simple table with no joins, on a super high-performance RAC cluster backed by a fancy 4-tiered EMC disk subsystem.
Explain plans show a table scan. Fine, but why should I wait 1-10 minutes for the results with rownum in the WHERE clause?
What is going on here?
OK - I found the issue. ROWNUM does not operate like I thought it did and in the code above it never stops the full table scan.
This is because:
ROWNUM is assigned during the predicate operation (WHERE clause evaluation) and incremented afterwards, i.e. your row makes it into the result set and then gets its ROWNUM assigned.
In order to filter by ROWNUM you need it to already exist, something like...
SELECT * FROM (SELECT * FROM BIG_TABLE) WHERE ROWNUM <= 5
In effect this means that there is no way to select just the top 5 rows of a table without having first selected the entire table, if no other filter criteria are involved.
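Note that without an ORDER BY inside the inline view those 5 rows are arbitrary; to make the result deterministic you sort first and filter outside, a sketch reusing the DATE_COL column from the fix below:

SELECT *
FROM (SELECT * FROM BIG_TABLE ORDER BY DATE_COL DESC)
WHERE ROWNUM <= 5;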
I solved my problem like this...
SELECT * FROM (SELECT * FROM BIG_TABLE WHERE
DATE_COL BETWEEN :Date1 AND :Date2) WHERE ROWNUM < :x;

How can you get the RAM usage of processes in an Oracle 11g database?

I want to measure the RAM usage of an SQL statement (e.g. a simple CREATE or INSERT statement) in an Oracle 11g database environment.
I tried to get it by using dbms_space, but that seems to report only disk space.
I also found this site:
http://www.dba-oracle.com/m_memory_usage_percent.htm
But the statement
select *
from v$sql
where sql_text like {my table}
doesn't return the CREATE statement.
See comment above:
select operation,
       options,
       object_name name,
       trunc(bytes/1024/1024) "input(MB)",
       trunc(last_memory_used/1024) last_mem,
       trunc(estimated_optimal_size/1024) opt_mem,
       trunc(estimated_onepass_size/1024) onepass_mem,
       decode(optimal_executions, null, null,
              optimal_executions||'/'||onepass_executions||'/'||
              multipasses_executions) "O/1/M"
from   v$sql_plan p,
       v$sql_workarea w
-- outer joins: not every plan step has a memory work area
where  p.address = w.address(+)
and    p.hash_value = w.hash_value(+)
and    p.id = w.operation_id(+)
-- pick the cursor for the statement of interest
and    p.address = (select address
                    from   v$sql
                    where  sql_text like '%my_table%')
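If you are after the total RAM a session holds rather than per-operation work areas, v$process exposes PGA counters that can be joined to v$session; a minimal sketch (column names as in 11g):

select s.sid,
       s.username,
       p.pga_used_mem,  -- PGA bytes currently in use
       p.pga_alloc_mem, -- PGA bytes currently allocated
       p.pga_max_mem    -- PGA high-water mark in bytes
from   v$session s
join   v$process p on p.addr = s.paddr;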

How do I get the number of inserts/updates occurring in an Oracle database?

How do I get the total number of inserts/updates that have occurred in an Oracle database over a period of time?
Assuming that you've configured AWR to retain data for all SQL statements (the default is to only retain the top 30 by CPU, elapsed time, etc. if the STATISTICS_LEVEL is 'TYPICAL' and the top 100 if the STATISTICS_LEVEL is 'ALL') via something like
BEGIN
dbms_workload_repository.modify_snapshot_settings (
topnsql => 'MAXIMUM'
);
END;
and assuming that SQL statements don't age out of the cache before a snapshot captures them, you can use the AWR tables for some of this.
You can gather the number of times that an INSERT statement was executed and the number of times that an UPDATE statement was executed (in the AWR views, command_type 2 is INSERT and command_type 6 is UPDATE):
SELECT sum( stat.executions_delta ) insert_executions
FROM dba_hist_sqlstat stat
JOIN dba_hist_sqltext txt ON (stat.sql_id = txt.sql_id )
JOIN dba_hist_snapshot snap ON (stat.snap_id = snap.snap_id)
WHERE snap.begin_interval_time BETWEEN <<start time>> AND <<end time>>
AND txt.command_type = 2;
SELECT sum( stat.executions_delta ) update_executions
FROM dba_hist_sqlstat stat
JOIN dba_hist_sqltext txt ON (stat.sql_id = txt.sql_id )
JOIN dba_hist_snapshot snap ON (stat.snap_id = snap.snap_id)
WHERE snap.begin_interval_time BETWEEN <<start time>> AND <<end time>>
AND txt.command_type = 6;
Note that these queries include both statements that your application issues and statements that Oracle issues in the background. You could add additional criteria if you want to filter out certain SQL statements.
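For example, dba_hist_sqlstat records the parsing schema, so to count only your application's statements you could add a predicate like this (the schema name here is made up):

AND stat.parsing_schema_name = 'MY_APP_SCHEMA'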
Similarly, you could get the total number of distinct INSERT and UPDATE statements:
SELECT count( distinct stat.sql_id ) distinct_insert_stmts
FROM dba_hist_sqlstat stat
JOIN dba_hist_sqltext txt ON (stat.sql_id = txt.sql_id )
JOIN dba_hist_snapshot snap ON (stat.snap_id = snap.snap_id)
WHERE snap.begin_interval_time BETWEEN <<start time>> AND <<end time>>
AND txt.command_type = 2;
SELECT count( distinct stat.sql_id ) distinct_update_stmts
FROM dba_hist_sqlstat stat
JOIN dba_hist_sqltext txt ON (stat.sql_id = txt.sql_id )
JOIN dba_hist_snapshot snap ON (stat.snap_id = snap.snap_id)
WHERE snap.begin_interval_time BETWEEN <<start time>> AND <<end time>>
AND txt.command_type = 6;
Oracle does not, however, track the number of rows that were inserted or updated in a given interval. So you won't be able to get that information from AWR. The closest you could get would be to try to leverage the monitoring Oracle does to determine if statistics are stale. Assuming MONITORING is enabled for each table (it is by default in 11g and I believe it is by default in 10g), i.e.
ALTER TABLE table_name
MONITORING;
Oracle will periodically flush the approximate number of rows that are inserted, updated, and deleted for each table to the SYS.DBA_TAB_MODIFICATIONS table. But this will only show the activity since statistics were gathered on a table, not the activity in a particular interval. You could, however, try to write a process that periodically captured this data to your own table and report off that.
If you instruct Oracle to flush the monitoring information from memory to disk (otherwise there is a lag of up to several hours)
BEGIN
dbms_stats.flush_database_monitoring_info;
END;
you can get an approximate count of the number of rows that have changed in each table since statistics were last gathered
SELECT table_owner,
table_name,
inserts,
updates,
deletes
FROM sys.dba_tab_modifications
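A minimal sketch of the capture process mentioned above (the history table name is made up): flush the monitoring info, then snapshot the view on whatever schedule you need.

CREATE TABLE tab_mod_history AS
SELECT SYSDATE AS snap_time, m.*
FROM sys.dba_tab_modifications m
WHERE 1 = 2;

-- run periodically, e.g. from a DBMS_SCHEDULER job
BEGIN
  dbms_stats.flush_database_monitoring_info;
  INSERT INTO tab_mod_history
  SELECT SYSDATE, m.* FROM sys.dba_tab_modifications m;
  COMMIT;
END;
/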

Improving the performance of a query keeping the three most recent records of each account

I have a table in an Oracle (10g XE) database, and I'm going to clean it up and keep only the three most recent records for each account. Here is what I'm doing right now:
CREATE TABLE ACCOUNT_TRANSACTION_TMP NOLOGGING AS SELECT * FROM ACCOUNT_TRANSACTION WHERE 1=2;
DECLARE
  CURSOR mbsacc_cur (account_id_var account_transaction.account_id%TYPE) IS
    SELECT * FROM account_transaction WHERE account_id = account_id_var ORDER BY transaction_time DESC;
  account_transaction_rec account_transaction%ROWTYPE;
BEGIN
  FOR i IN (SELECT DISTINCT account_id FROM account_transaction) LOOP
    OPEN mbsacc_cur(i.account_id);
    LOOP
      FETCH mbsacc_cur INTO account_transaction_rec;
      EXIT WHEN mbsacc_cur%NOTFOUND OR mbsacc_cur%ROWCOUNT > 3;
      INSERT /*+ append */ INTO account_transaction_tmp VALUES account_transaction_rec;
    END LOOP;
    CLOSE mbsacc_cur;
  END LOOP;
END;
/
And then I'll drop the old table, rename this new one to old one and add constraints.
But the problem is that the above code runs forever (~3-4 hours) for about 1 million records, approximately half of which should be removed.
Is there any way to improve the performance of this?
Instead of creating an empty table and populating it in an RBAR (row-by-agonizing-row) fashion, create a table with just the rows you want....
CREATE TABLE ACCOUNT_TRANSACTION_TMP NOLOGGING AS
SELECT account_id, col1, col2, col3, transaction_time from
( select at.*
, row_number()
over (partition by at.account_id
order by at.transaction_time desc) as to_keep
FROM ACCOUNT_TRANSACTION at)
where to_keep <= 3
/
Then skip straight to the renaming part of your plan.
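That is, once the CTAS above completes, something along these lines (constraint and index re-creation omitted):

ALTER TABLE account_transaction RENAME TO account_transaction_old;
ALTER TABLE account_transaction_tmp RENAME TO account_transaction;
-- recreate constraints/indexes on the new table, verify, then:
-- DROP TABLE account_transaction_old;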
You can do that with analytics (although I am not at all well versed in it myself). Take a look at this question, which seems to address a situation similar to yours:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1212501913138
