prevent autovacuum on specific table postgres 9.6

I tried to execute the statement:
ALTER TABLE [table] SET (autovacuum_enabled = false, toast.autovacuum_enabled = false);
The autovacuum is still running on the table that I want to disable the autovacuum for.
Any suggestions?
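As a first check (a minimal sketch; your_table is a placeholder), you could confirm that the storage parameters were actually recorded and see when autovacuum last touched the table:
SELECT relname, reloptions FROM pg_class WHERE relname = 'your_table';  -- should show autovacuum_enabled=false
SELECT relname, last_autovacuum, last_autoanalyze FROM pg_stat_user_tables WHERE relname = 'your_table';
Note that even with autovacuum_enabled = false, PostgreSQL will still launch autovacuum on the table when needed to prevent transaction ID wraparound, so occasional runs are expected.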

Related

Spark: How to alter an Oracle session (nls_date_format) from Spark?

I have a dataframe that contains a Timestamp column (e.g. 2021-01-19 13:00:30).
When I write this dataframe to an existing table in Oracle (19c) using Spark (2) Scala JDBC, it inserts 2021-01-19 13:00:30.000000 even though the column in Oracle is TIMESTAMP(0).
Example:
df.write.mode(SaveMode.Append).jdbc(url, tableName, connectionProperties)
I tried to alter the session from Spark before sending the data, by creating a connection and then executing this code:
connection.setAutoCommit(true)
val statement: Statement = connection.createStatement()
statement.executeQuery("alter session set nls_date_format='YYYY-MM-DD HH24:MI:SS'")
But it doesn't really alter the session (I get no error).
Should I specify the pattern of my Timestamp in df.write ...? Or am I not altering the session correctly from Spark?
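As a quick sanity check (a sketch; run it over the same connection used for the ALTER SESSION), you could query what the session actually uses, keeping in mind that TIMESTAMP values are governed by NLS_TIMESTAMP_FORMAT rather than NLS_DATE_FORMAT:
SELECT parameter, value FROM nls_session_parameters WHERE parameter IN ('NLS_DATE_FORMAT', 'NLS_TIMESTAMP_FORMAT');
Also note that the Spark JDBC writer opens its own connections, so a session altered on a separately created connection will not affect them; the sessionInitStatement JDBC option (if your Spark version supports it) is the usual hook for per-connection setup.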

Dropping an index while DML operations in progress

I am working on a script to create an index online on one of the tables that is constantly being accessed by the application. I would like to know if there is a way for me to drop the index online as well, in case a back-out is required.
I am using Oracle Database 11g 11.2.0.4.0
The reason I am asking is that if I try to drop the index without being able to take an exclusive lock, it gives ORA-00054: resource busy. The Oracle docs say I can use ONLINE in 12c: DROP INDEX [ schema. ] index [ ONLINE ] [ FORCE ]; Is there a way to achieve this in 11g as well?
Any suggestions?
You should try ddl_lock_timeout (I guess the table won't be locked forever):
DDL_LOCK_TIMEOUT specifies a time limit for how long DDL statements will wait in a DML lock queue.
alter session set ddl_lock_timeout = 1000000;
drop index idxName;
Maybe you should consider changing it to INVISIBLE first:
ALTER INDEX idxName INVISIBLE;
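Put together, a possible back-out sequence might look like this (a sketch; idxName is a placeholder and the timeout value is arbitrary):
ALTER SESSION SET ddl_lock_timeout = 1000000;
ALTER INDEX idxName INVISIBLE;  -- the optimizer stops using it, but DML still maintains it
-- observe the application; once the back-out is confirmed, remove the index for good
DROP INDEX idxName;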

What happened to my table in my oracle database?

I have a situation where yesterday my code was working OK, but today I find that it fails because a SQL query fails on my Oracle database. The query fails because the table used in the query does not exist. I am no Oracle expert, so I am reaching out to you Oracle experts out there. Is there a way to see, in a log file or log table, when my table disappeared and who dropped it?
Thanks
Depending on the previous configuration, one would hope that a production database has auditing turned on. Try:
select * from sys.AUD$
The audit table can log almost every user action, including dropping tables or revoking grants, but it has to be configured.
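For reference, a minimal sketch of how classic (pre-unified) auditing could be enabled and then queried, assuming the AUDIT_TRAIL initialization parameter is set to DB; it only records drops that happen after auditing is switched on:
AUDIT TABLE;  -- audits CREATE TABLE, DROP TABLE and TRUNCATE TABLE statements from now on
SELECT username, action_name, obj_name, timestamp FROM dba_audit_trail WHERE action_name = 'DROP TABLE' ORDER BY timestamp DESC;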
Assuming you have the recyclebin turned on in your database, you might be able to restore the dropped table. As the user who owns the table, you can run this query:
select * from USER_RECYCLEBIN
or, if you have SYS access, you can run:
SELECT * from DBA_RECYCLEBIN;
Then, as the user who owns the table, run this FLASHBACK command to restore it:
FLASHBACK TABLE <your table name> TO BEFORE DROP;
If you get ORA-38305 you might have a tablespace issue - either run it as a different user or make sure it is using a locally managed tablespace.
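If you first want to see what the recycle bin holds, or the original name is already taken by a replacement table, something like this might help (a sketch; the names are placeholders):
SELECT object_name, original_name, droptime FROM user_recyclebin WHERE type = 'TABLE';
FLASHBACK TABLE <your table name> TO BEFORE DROP RENAME TO <new table name>;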

Hive table locks

I have Hive tables which are queried through queries in a file.
I invoked an Oozie workflow which ran a Hive action for the mentioned file.
The job did not succeed and I killed the workflow.
But the tables are still shown as locked in the Hive CLI. I am looking for a command/process that will release the locks on the Hive tables.
We can use the following to release the lock:
set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager;
unlock table tablename;
If you use MySQL as the metastore, it stores table lock info in the table HIVE_LOCKS; truncate it.
mysql> select * from HIVE_LOCKS;
Empty set (0.00 sec)
mysql>
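A sketch of that manual cleanup, assuming the standard metastore schema where lock rows carry the database and table name in HL_DB/HL_TABLE (back up the metastore first):
DELETE FROM HIVE_LOCKS WHERE HL_DB = 'dbname' AND HL_TABLE = 'tablename';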
To check the locks on a table (run in Hive):
show locks tablename extended;
To find the application id for a long-running query (you need to pass the User from the query above, and you can match the Agent Info from the first query against the application name in the query below). Run outside Hive:
yarn application -list | grep User
To kill that application:
yarn application -kill <application-id>
I also met a similar problem in Hive 3. I read the source code in org.apache.hadoop.hive.metastore.txn.TxnHandler and found that there is a function called performTimeOuts(), which is scheduled periodically by a daemon thread called org.apache.hadoop.hive.metastore.txn.AcidHouseKeeperService.
That daemon thread automatically cleans outdated lock information in the MySQL table hive.hive_locks, but it is not enabled by default, so we just need to configure it in hive-site.xml, like this:
<property>
<name>metastore.task.threads.always</name>
<value>org.apache.hadoop.hive.metastore.events.EventCleanerTask,org.apache.hadoop.hive.metastore.RuntimeStatsCleanerTask,org.apache.hadoop.hive.metastore.repl.DumpDirCleanerTask,org.apache.hadoop.hive.metastore.txn.AcidHouseKeeperService</value>
</property>

Oracle: Is it necessary to gather table stats for a new table with a new index?

I just created a new table on 11gR2 and loaded it with data, with no index. After the loading was completed, I created several indexes on the new table, including the primary key constraint.
CREATE TABLE xxxx (col1 varchar2(20), ...., coln varchar2(10));
INSERT INTO xxxx SELECT * FROM another_table;
ALTER TABLE xxxx ADD CONSTRAINT xxxc PRIMARY KEY(col_list);
CREATE INDEX xxxx_idx1 ON xxxx (col3,col4);
At this point, do I still need to use DBMS_STATS.GATHER_TABLE_STATS(v_owner,'XXXX') to gather table stats?
If yes, why? Oracle says in the docs that "Oracle Database now automatically collects statistics during index creation and rebuild".
I don't want to wait for the automatic overnight stats gathering because I need to report the actual size of the table and its indexes immediately after the above operations. I think running DBMS_STATS.GATHER_TABLE_STATS may give me more accurate usage data. I could be wrong though.
Thanks in advance,
In Oracle 11gR2 you still need to gather table statistics. I guess you read the documentation for Oracle 12c, which automatically collects statistics, but only for direct-path inserts; that is not your case, since your insert is conventional. Also, if you gather statistics (with default options) on a brand-new table that hasn't been used in queries, no histograms will be generated.
Index statistics are gathered when the index is built, so there is no need to gather them explicitly. When you later gather table statistics, you should use the DBMS_STATS.GATHER_TABLE_STATS option cascade => false so that index statistics aren't gathered twice.
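For example, a manual gather along the lines described above might look like this (the schema name is a placeholder):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'MY_SCHEMA',
    tabname => 'XXXX',
    cascade => FALSE  -- index stats were already collected when the indexes were created
  );
END;
/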
You can simply check the statistics using
SELECT * FROM ALL_TAB_COL_STATISTICS WHERE TABLE_NAME = 'XXXX';
