Insufficient memory error in proc sort - Oracle

My data is stored in the Oracle table MY_DATA. This table contains only 2 rows with 7 columns. But when I execute this step:
proc sort data=oraclelib.MY_DATA nodupkey out=SORTED_DATA;
by client_number;
run;
the following error appears:
ERROR: The SAS System stopped processing this step because of insufficient memory.
If I comment out the nodupkey option, the error disappears. If I copy the dataset into the work library and run proc sort on it, everything is OK too.
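For reference, a minimal sketch of that workaround (the libref, dataset, and by-variable come from the question; work.my_data is just an assumed name for the local copy):
/* Pull the Oracle table into WORK first, then sort locally with NODUPKEY. */
data work.my_data;
set oraclelib.MY_DATA;
run;
proc sort data=work.my_data nodupkey out=SORTED_DATA;
by client_number;
run;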
My memory options:
SORTSIZE=1073741824
SUMSIZE=0
MAXMEMQUERY=268435456
LOADMEMSIZE=0
MEMSIZE=31565617920
REALMEMSIZE=0
What could be the root of the problem, and how can I fix it?

My Oracle password was in its grace period; once I changed it, the issue disappeared.
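For reference, a DBA can confirm that state with a standard query against DBA_USERS; a minimal sketch (the username is a placeholder):
-- ACCOUNT_STATUS shows EXPIRED(GRACE) while the password is in its grace period.
-- 'MY_SAS_USER' is a placeholder for the account behind the oraclelib libref.
SELECT username, account_status, expiry_date
FROM dba_users
WHERE username = 'MY_SAS_USER';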

Related

ODI-1228: Task Load data-LKM SQL to Oracle- fails on the target connection

I'm working with Oracle Data Integrator, inserting information from the original source into a temp table (BI_DSA.TMP_TABLE).
ODI-1228: Task Load data-LKM SQL to Oracle- fails on the target
connection BI_DSA. Caused By: java.sql.BatchUpdateException:
ORA-12899: value too large for column
"BI_DSA"."C$_0DELTA_TABLE"."FIELD" (actual: 11, maximum: 10)
I tried changing the length of 'FIELD' to more than 10 and reverse engineering, but it didn't work.
Is this error coming from the original source? I'm doing a replica, so I only have view privileges on it, and I believe so because the error comes from the C$ table.
Thanks for the help!
Solution: I had tried the length option before, as the answers suggested, but it didn't work. Then I noticed the original source had modified their field length, so I reverse engineered the source table and the problem was solved.
Greetings!
As Bobby mentioned in the comment, it might come from the byte/char semantics.
The C$ tables created by the LKMs usually copy the structure of the source data. So a workaround would be to go into the model and manually increase the size of the FIELD column in the source datastore (even if it doesn't represent what is in the database). The C$ table will be created with that size on the next run.
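For illustration, a minimal sketch of the byte/char semantics mentioned above (the table and column names are made up): in a multi-byte character set, 10 characters can take more than 10 bytes, which is consistent with the "actual: 11, maximum: 10" message.
-- Made-up table showing BYTE vs CHAR length semantics in Oracle.
CREATE TABLE semantics_demo (
  field_byte VARCHAR2(10 BYTE), -- at most 10 bytes; multi-byte characters can overflow it
  field_char VARCHAR2(10 CHAR)  -- at most 10 characters, regardless of byte size
);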

Duplicate Indexes error using Informatica 10.4

While running an informatica mapping in v10.4 I'm getting the following error.
The mapping essentially calls a complex stored procedure in Oracle to "swap out" a temporary file to a partitioned fact table.
CMN_1022 [
ORA-20014: FINISH_SP: ORA-20010: Duplicate Indexes: ORA-12801: error signaled in parallel query server P00I
ORA-06512: at "DIMDW.FACT_EXCHANGE_PARTITION_PKG", line 1650
ORA-20010: Duplicate Indexes: ORA-12801: error signaled in parallel query server P00I
ORA-06512: at "DIMDW.FACT_EXCHANGE_PARTITION_PKG", line 1292
ORA-12801: error signaled in parallel query server P00I
ORA-28604: table too fragmented to build bitmap index (172073921,57,56)
ORA-06512: at "DIMDW.FACT_EXCHANGE_PARTITION_PKG", line 1277
ORA-06512: at "DIMDW.FACT_EXCHANGE_PARTITION_PKG", line 1277
ORA-06512: at "DIMDW.FACT_EXCHANGE_PARTITION_PKG", line 1593
I do not know what this error means to Informatica.
Can anyone help me decipher it SPECIFIC TO INFORMATICA?
The problem is specific to Oracle, so I'm not sure how to make the answer specific to Informatica, especially without being able to see the details of what the workflow is trying to do.
The ORA-20014: FINISH_SP: ORA-20010: Duplicate Indexes: error is a custom message from the application code. The real key appears to be here: "ORA-28604: table too fragmented to build bitmap index (172073921,57,56)"
It looks like Informatica is attempting to build an index - indirectly through the DIMDW.FACT_EXCHANGE_PARTITION_PKG package - and the process is throwing an error. A simple Google search on ORA-28604 yields the following:
ORA-28604: table too fragmented to build bitmap index (%s,%s,%s)
*Cause: The table has one or more blocks that exceed the maximum number
of rows expected when creating a bitmap index. This is probably
due to deleted rows. The values in the message are:
(data block address, slot number found, maximum slot allowed)
*Action: Defragment the table or block(s). Use the values in the message
to determine the FIRST block affected. (There may be others).
Since this involves the physical fragmentation of the data in the Oracle database, you will almost certainly need to get the DBA involved to troubleshoot this further. Your Informatica workflow likely isn't going anywhere until this is corrected in the database.
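For reference, one common approach the DBA might take is to move the table (or the affected partition) and rebuild its indexes; a minimal sketch with placeholder object names, since the actual table is not visible in the thread:
-- Moving the segment rewrites its blocks and clears the fragmentation left by deleted rows.
ALTER TABLE dimdw.some_fact_table MOVE;
-- A move invalidates existing indexes, so rebuild them afterwards.
ALTER INDEX dimdw.some_bitmap_index REBUILD;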

What is the fastest way to take dump of a table in Oracle?

I'm trying to take a dump of a table onto a remote disk mounted on my server, and below is the command I've used.
The export of the dump started, and after 6 hours the ORA errors below were thrown.
Looking for a better way:
ORA-02354: error in exporting/importing data
ORA-01555: snapshot too old: rollback segment number 17 with name "_SYSSMU17$" too small
Command used:
expdp user/password TABLES=TABLE_NAME DIRECTORY=TEST_DIR DUMPFILE=DUMP.dmp LOGFILE=LOG.log
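For comparison, a parallel export is usually faster for a large table and shortens the window in which ORA-01555 can occur; a sketch that keeps the original parameters and only adds an assumed PARALLEL setting with %U dump file naming:
expdp user/password TABLES=TABLE_NAME DIRECTORY=TEST_DIR DUMPFILE=DUMP_%U.dmp LOGFILE=LOG.log PARALLEL=4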

ERROR: CANNOT PARALLELIZE AN UPDATE STATEMENT THAT UPDATES THE DISTRIBUTION COLUMNS

This happens when trying to copy data from the source (MSSQLSERVER) to the target (Greenplum database) using the Talend ETL server.
Description: When executing an UPDATE statement against Greenplum, the mentioned error is thrown.
GIVEN
Number of records being fetched into the target is ~0.3 million.
The update is failing with the error:
ERROR: CANNOT PARALLELIZE AN UPDATE STATEMENT THAT UPDATES THE DISTRIBUTION COLUMNS current transaction is aborted, commands ignored until end of transaction block
Any help on it would be much appreciated
Solution I tried:
When ON_ERROR_ROLLBACK is enabled, psql will issue a SAVEPOINT before every command you send to Greenplum:
gpadmin=# \set ON_ERROR_ROLLBACK interactive
But after that we tried running the same job again, and it did not solve the problem.
1) Update is not supported in Hawq.
2) Update is only supported on heap tables, not AO (append-optimized) tables, in GPDB (see the sketch below).
GPDB/HAWQ are used for data warehouse/BI and data exploration purposes.
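To illustrate point 2, a minimal sketch with made-up table and column names: UPDATE works on a heap (appendonly=false) table in GPDB, provided the statement does not touch the DISTRIBUTED BY column(s).
-- Made-up heap table: UPDATE is allowed because storage is heap, not append-optimized.
CREATE TABLE target_heap (
  id bigint,
  payload text
)
WITH (appendonly = false)
DISTRIBUTED BY (id); -- do not UPDATE this distribution column
-- Updating a non-distribution column succeeds:
UPDATE target_heap SET payload = 'x' WHERE id = 1;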

Creating an index in Hive 0.9

I am trying to create an index on tables in Hive 0.9. One table has 1 billion rows, another has 30 million rows. The commands I used are (other than creating the table and so on):
CREATE INDEX DEAL_IDX_1 ON TABLE DEAL (ID) AS
'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'
WITH DEFERRED REBUILD;
alter index DEAL_IDX_1 ON DEAL rebuild;
set hive.optimize.autoindex=true;
set hive.optimize.index.filter=true;
For the 30 million-row table, the rebuilding process looks alright (mapper and reducer both finished) until at the end it prints:
Invalid alter operation: Unable to alter index.
FAILED: Execution Error, return code 1
from org.apache.hadoop.hive.ql.exec.DDLTask
Checking the log, it had the error:
java.lang.ClassNotFoundException: org.apache.derby.jdbc.EmbeddedDriver"
Not sure why this error was encountered, but anyway, I added the derby-version.jar:
add jar /path/derby-version.jar
The reported error was resolved, but I still got another error:
org.apache.hadoop.hive.ql.exec.FileSinkOperator:
StatsPublishing error: cannot connect to database
Not sure how to solve the problem. I do see the created index table under hive/warehouse though.
For the 1 billion-row table, it is another story. The mapper just got stuck at 2% or so, and the error showed:
FATAL org.apache.hadoop.mapred.Child: Error running child :
java.lang.OutOfMemoryError: Java heap space
I attempted to raise the max heap size, as well as the max mapred memory (these settings are mentioned elsewhere; they are not Hive configuration settings):
set mapred.child.java.opts=-Xmx6024m;
set mapred.job.map.memory.mb=6000;
set mapred.job.reduce.memory.mb=4000;
However, this did not help. The mapper still got stuck at 2% with the same error.
I had a similar problem where the index was created and showed up under hive/warehouse, but the process as a whole failed. My index_name was TypeTarget (yours is DEAL_IDX_1), and after many days of trying different approaches, making the index_name all lowercase (typetarget) fixed the issue. My problem was in Hive 0.10.0.
Also, the ClassNotFoundException and StatsPublishing issues occur because, by default, hive.stats.autogather is turned on. Turning that off (false) in hive-site.xml should get rid of those issues.
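Putting both suggestions together, a minimal sketch against the DEAL table from the question; the lowercase index name and the stats setting are the only changes, and whether they behave the same in Hive 0.9 as in 0.10.0 is an assumption:
-- Disable automatic stats gathering to avoid the StatsPublishing / Derby driver errors.
set hive.stats.autogather=false;
-- Recreate the index with an all-lowercase name, as in the fix above.
CREATE INDEX deal_idx_1 ON TABLE DEAL (ID) AS
'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'
WITH DEFERRED REBUILD;
ALTER INDEX deal_idx_1 ON DEAL REBUILD;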
Hopefully this helps anyone looking for a quick fix.
