I got this error from a Vertica query this morning:
Error: [Vertica][VJDBC](5517) ERROR: Your Vertica license is invalid or has expired
SQLState: V2001
ErrorCode: 5517
After running the command: select GET_COMPLIANCE_STATUS();
I got these results:
Raw Data Size: 1.26TB +/- 0.09TB
License Size : 1.00TB
Utilization : 126%
Audit Time : 2015-11-05 06:42:23.380593-05
Compliance Status : ***** NOTICE OF LICENSE NON-COMPLIANCE *****
Continued use of this database is in violation of the current license agreement.
Maximum licensed raw data size: 1.00TB
Current raw data size: 1.26TB
License utilization: 126%
IMMEDIATE ACTION IS REQUIRED, PLEASE CONTACT VERTICA
No expiration date for a Perpetual license
(1 row)
I tried to:
1. Delete records
2. Drop redundant schemas
3. Purge db
And none of them got me back to a working, licensed Vertica.
How can I update the license or free up more data somehow?
Do you have some advice?
You need to run SELECT AUDIT_LICENSE_SIZE() to recalculate your database size for compliance purposes. This normally runs on a schedule, but the next scheduled audit hasn't happened yet; the Audit Time in your compliance report shows when the last one ran, so the data you deleted isn't reflected yet.
After running this, you can rerun SELECT GET_COMPLIANCE_STATUS() to see where you stand.
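As a rough sketch (both functions are built-in Vertica meta-functions mentioned above; run them from vsql or your JDBC client):
-- Trigger an immediate audit instead of waiting for the scheduled one,
-- then re-check compliance against the freshly audited size.
SELECT AUDIT_LICENSE_SIZE();
SELECT GET_COMPLIANCE_STATUS();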
We have an Oracle 19c database (19.0.0.0.ru-2021-04.rur-2021-04.r1) on AWS RDS, hosted on a 4 CPU / 32 GB RAM instance. The database is not big (35 GB), the PGA Aggregate Limit is 8 GB, and the Target is 4 GB. Whenever the scheduled internal Oracle Auto Optimizer Stats Collection job (ORA$AT_OS_OPT_SY_nnn) runs, it consumes substantially high PGA memory (approx. 7 GB); sometimes this makes the database unstable, AWS loses communication with the RDS instance, and it restarts the database.
We thought this might be linked to the existing Oracle bug 30846782 (19C+: Fast/Excessive PGA growth when using DBMS_STATS.GATHER_TABLE_STATS), but Oracle and AWS had fixed it in the 19c version we are using. There are no application-level operations that consume this much PGA, and the database restarts have always happened while the Auto Optimizer Stats Collection job was running. There are a couple more databases on the same version where the same pattern was observed and the database was restarted by AWS. We have disabled the job on those databases for now to avoid further occurrences of this issue, but we want to run this job, since disabling it will leave stale statistics in the database.
Any pointers on how to tackle this issue?
I found the same issue on my AWS RDS Oracle 18c and 19c instances, even though I am not on the same patch level as you.
In my case, I applied this workaround and it worked.
SQL> alter system set "_fix_control"='20424684:OFF' scope=both;
However, before applying this change, I strongly suggest that you test it on your non production environments, and if you can, try to consult with Oracle Support. Dealing with hidden parameters might lead to unexpected side effects, so apply it at your own risk.
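Not part of the original workaround, but as a hedged follow-up you can confirm that the fix control actually took effect after the ALTER SYSTEM (a value of 0 means the fix is disabled):
SQL> select bugno, value, description
     from v$system_fix_control
     where bugno = 20424684;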
Instead of completely abandoning automatic statistics gathering, try to find the specific objects that are causing the problem. If only a small number of tables are responsible for most of the statistics gathering, you can analyze those tables manually or change their preferences.
First, use the SQL below to see which objects are causing the most statistics gathering. According to the test case in bug 30846782, the problem seems to be related only to the number of times DBMS_STATS is called.
select *
from dba_optstat_operations
order by start_time desc;
In addition, you may be able to find specific SQL statements or sessions that generate a lot of PGA memory with the below query. (However, if the database restarts, it's possible that AWR won't save the recorded values.)
select u.username, h.event, h.sql_id,
       h.pga_allocated/1024/1024/1024 as pga_allocated_gb,
       h.*
from gv$active_session_history h
join dba_users u on h.user_id = u.user_id
where h.pga_allocated/1024/1024/1024 >= 1
order by h.sample_time desc;
If the problem is only related to a small number of tables with a large number of partitions, you can manually gather the stats on just that table in a separate session. Once the stats are gathered, the table won't be analyzed again until about 10% of the data is changed.
begin
dbms_stats.gather_table_stats(user, 'PGA_STATS_TEST');
end;
/
It's not uncommon for a database to spend a long time gathering statistics, but it is uncommon for a database to constantly analyze thousands of objects. Running into this bug implies there is something unusual about your database - are you constantly dropping and creating objects, or do you have a large number of objects that have 10% of their data modified every day? You may need to add a manual gather step to a few of your processes.
Turning off the automatic statistics job entirely will eventually cause many performance problems. Even if you can't add manual gathering steps, you may still want to keep the job enabled. For example, if tables are being analyzed too frequently, you may want to increase the table preference for the "STALE_PERCENT" threshold from 10% to 20%:
begin
dbms_stats.set_table_prefs
(
ownname => user,
tabname => 'PGA_STATS_TEST',
pname => 'STALE_PERCENT',
pvalue => '20'
);
end;
/
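As a hedged sanity check (the table name PGA_STATS_TEST is just the example used above), you can read the preference back to confirm it was stored:
select dbms_stats.get_prefs('STALE_PERCENT', user, 'PGA_STATS_TEST') as stale_percent
from dual;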
Oracle does not close its processes when I shut down the system, and after I run STARTUP I get errors ORA-01157 and ORA-01110.
I am quite sure the .dbf files exist, and when I check them with dbv everything looks normal.
Finally, I tried to offline drop those .dbf files, but I cannot recover them.
Please give me some help, thank you very much!
Mount your database:
SQL> startup mount;
Provided your database is in NOARCHIVELOG mode, issue the following queries:
SQL> select min(l.first_change#) min_first_change
     from v$log l inner join v$logfile f on (l.group# = f.group#);
SQL> select change# ch_number from v$recover_file;
If the ch_number is greater than the min_first_change of your logs, the datafile can be recovered.
If the ch_number is less than the min_first_change of your logs, the file cannot be recovered. In that case, restore the most recent full backup (and thus lose all changes made to the database since then) or recreate the tablespace.
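If it helps, the two queries above can be combined into a single hedged check that lists each file needing recovery next to the oldest change still available in the online redo logs:
SQL> select f.file#, f.change# as ch_number,
            (select min(l.first_change#) from v$log l) as min_first_change
     from v$recover_file f;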
Recover the datafile (if the unrecoverable case above does not apply):
SQL> recover datafile '/opt/oracle/resource/undotbs02.dbf';
Confirm each of the logs that you are prompted for until you receive the message Media Recovery Complete. If you are prompted for a non-existing archived log, Oracle probably needs one or more of the online logs to proceed with the recovery. Compare the sequence number referenced in the ORA-00280 message with the sequence numbers of your online logs, then enter the full path name of one of the members of the redo group whose sequence number matches the one you are being asked for. Keep entering online logs as requested until you receive the message Media Recovery Complete.
If the database is still in MOUNT state, open it:
SQL> alter database open;
If the DBF file fails to mount, check the source of the DBF file: whether it was imported from another database or converted with another tool. Generally, if the DBF file does not have the expected format it cannot be mounted; troubleshoot the Oracle DBF file by following these steps:
https://docs.cloud.oracle.com/iaas/Content/File/Troubleshooting/exportpaths.htm
If the database is still causing problems, there could be issues with other components; before mounting, fix them with a professional database recovery tool like https://www.filerepairtools.com/oracle-database-recovery.html
I am trying to copy data from a source (MS SQL Server) to a target (Greenplum database) using the Talend ETL server.
Description: When executing an UPDATE statement against Greenplum, the error mentioned below is thrown.
GIVEN
Number of records being fetched to the target is ~0.3 million
The update fails with the error:
ERROR: CANNOT PARALLELIZE AN UPDATE STATEMENT THAT UPDATES THE DISTRIBUTION COLUMNS current transaction is aborted, commands ignored until end of transaction block
Any help on it would be much appreciated
Solution I tried:
When ON_ERROR_ROLLBACK is enabled, psql will issue a SAVEPOINT before every command you send to Greenplum:
gpadmin=# \set ON_ERROR_ROLLBACK interactive
But after that we ran the same job again and it did not solve the problem.
1) UPDATE is not supported in HAWQ.
2) In GPDB, UPDATE is supported only on heap tables, not append-optimized (AO) tables (see the check sketched below).
GPDB/HAWQ are used for data warehouse/BI and data exploration purposes.
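Not stated in the answer above, but as a hedged check for point 2 (the table name is a placeholder): in GPDB you can see whether the target table is heap or append-optimized from pg_class.relstorage ('h' = heap, 'a'/'c' = append-optimized row/column):
select relname, relstorage
from pg_class
where relname = 'my_target_table';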
When executing a select that returns a large number of columns over several tables, the error "Vendor code 17002" is received. The query returns only one row. When the number of columns returned is less than 635 the query works; when another column is added, the error is seen.
The following was seen in a dump file:
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0x45] [PC:0x35797B4, _kkqstcrf()+1342]
DDE: Problem Key 'ORA 7445 [kkqstcrf()+1342]' was flood controlled (0x6) (incident: 10825)
ORA-07445: exception encountered: core dump [kkqstcrf()+1342] [ACCESS_VIOLATION] [ADDR:0x45] [PC:0x35797B4] [UNABLE_TO_READ] []
Dump file c:\app\7609179\diag\rdbms\orcl\orcl\trace\orcl_s001_9928.trc
Thu Feb 07 15:10:56 2013
ORACLE V11.2.0.1.0 - Production vsnsta=0
vsnsql=16 vsnxtr=3
Dumping diagnostics for abrupt exit from ksedmp
Windows 7, Oracle 11.2.0.1.0 Enterprise Edition, SQL Developer; the same result occurs from a Java application.
ORA-07445 is a generic error which Oracle uses to signal an unexpected OS-level exception, i.e. a bug.
There should be some additional information in that trace file:
c:\app\7609179\diag\rdbms\orcl\orcl\trace\orcl_s001_9928.trc
Have you looked in it?
Unfortunately the nature of ORA-07445 means that the solution to the underlying problem usually depends on the specific combination of platform, OS, and database versions. Oracle has published some advice on diagnosis, but most routes lead to calling Oracle Support. Find out more.
At least you know the immediate cause. So if you don't have a Support contract, there is a workaround: change your application so you don't have to select that 635th column. That is an awful lot of columns to have in a single query.
There isn't an actual limit to the number of columns permitted in a query's projection, but it's possible that the total length of the statement exceeds a limit. This limit varies according to several factors and isn't specified in the docs. How long (how many characters) is the statement with and without that pesky additional column? Perhaps shortening some column names will do the trick.
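As a hedged way to answer that question (the sql_id values below are placeholders; look up the real ones in v$sql first), you can compare the full statement lengths directly from the shared pool:
SQL> select sql_id, dbms_lob.getlength(sql_fulltext) as stmt_length_chars
     from v$sql
     where sql_id in ('placeholder_634_cols', 'placeholder_635_cols');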
I was trying to create Oracle tables from SAS datasets. I have been successful in many cases, but am stuck on one particular dataset. The log file is provided below. I am working with SAS 9 and Oracle 11.2.0.1.0 on Linux.
Any suggestions?
1 libname dibsdata "/data2/dibyendu/Jan_9/on_demand/";
NOTE: Libref DIBSDATA was successfully assigned as follows:
Engine: V9
Physical Name: /data2/dibyendu/Jan_9/on_demand
2 libname myora oracle user=sasuser password=XXXXXXXXXX path=CIOEDATA ;
NOTE: Libref MYORA was successfully assigned as follows:
Engine: ORACLE
Physical Name: CIOEDATA
3 data myora.on_demand;
4 set dibsdata.on_demand;
5 run;
NOTE: SAS variable labels, formats, and lengths are not written to DBMS tables.
ERROR: Error attempting to CREATE a DBMS table. ERROR: ORACLE execute error: ORA-00904: : invalid identifier..
NOTE: The DATA step has been abnormally terminated.
NOTE: The SAS System stopped processing this step because of errors.
NOTE: SAS set option OBS=0 and will continue to check statements. This might cause NOTE: No observations in data set.
WARNING: The data set MYORA.ON_DEMAND may be incomplete. When this step was stopped there were 0 observations and 48 variables.
ERROR: ROLLBACK issued due to errors for data set MYORA.ON_DEMAND.DATA.
NOTE: DATA statement used (Total process time):
real time 0.06 seconds
cpu time 0.00 seconds
ERROR: Errors printed on page 1.
2 The SAS System 17:00 Wednesday, January 9, 2013
NOTE: SAS Institute Inc., SAS Campus Drive, Cary, NC USA 27513-2414
NOTE: The SAS System used:
real time 1.24 seconds
cpu time 0.04 seconds
Oracle error ORA-00904 means you are trying to create a table with an invalid column name. Most likely you have a SAS variable whose name is longer than 30 characters or is an Oracle reserved word. For example, the two variables in this SAS dataset are illegal in Oracle:
data a;
column_name_too_long_for_oracle = 1;
date = today(); /* This is a reserved word */
run;
Here is the Oracle 11g Reserved Words list. Check the variable names in your SAS dataset and rename them to something legal in Oracle. For example, if the offender is a SAS variable named DATE, you might try this:
data myora.on_demand;
set dibsdata.on_demand(rename=(DATE=PROJ_DATE));
run;
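If you are not sure which variable is the offender, a hedged PROC SQL check against the SAS dictionary tables (libref and member name taken from the log above) will list any names longer than Oracle's 30-character limit; reserved words still need to be checked against the list linked above:
proc sql;
  /* variables whose names Oracle will reject for length alone */
  select name, length(name) as name_length
  from dictionary.columns
  where libname = 'DIBSDATA' and memname = 'ON_DEMAND'
    and length(name) > 30;
quit;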