MonetDB database was killed by signal SIGSEGV - monetdb

I have MonetDB Database Server Toolkit v1.1 (Jun2016-SP2) running on RHEL 6.7.
I ran into an unexpected shutdown, with the message "database was killed by signal SIGSEGV" in the log, while executing a batch of 'SELECT COUNT(1) FROM ...' queries over a single connection. The tables are huge: 8 million rows and 4000 columns.
I also can't populate the statistics table for such tables.

You are using a really old version of MonetDB; I would first upgrade to the latest version. If the problem remains, file a bug report with enough detail to independently reconstruct the case, or send a stack trace from a debug build.
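On the statistics point: MonetDB fills sys.statistics through the ANALYZE statement, and with 4000 columns it may help to analyze only a subset of columns or use a sample. A minimal sketch, where the schema, table, and column names are placeholders:

ANALYZE sys."my_wide_table" ("col1", "col2") SAMPLE 100000;
SELECT * FROM sys.statistics;

Whether this avoids the crash on a table this wide is another question; the upgrade advice above still applies.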

Related

ORA-01036: illegal variable name/number with Oracle 18 and Oracle 19 Windows client

After upgrading from Oracle 11/12 to 18/19 I get this error: ORA-01036: illegal variable name/number.
The error occurred in a query like this:
SELECT * FROM (SELECT * FROM TABLE) MY_TABLE WHERE ROWNUM <= :P_ROWNUM
(Subquery + binding parameters)
The identical query works properly with the Oracle 11.2.0.4 or 12.1.0.2 client, but fails with the Oracle 18c or 19c client.
PS: the Oracle server is version 18c x64 for Windows.
I use Delphi 10.1.2 with ADO components (dbGO). I also tried with Delphi 13.3.3, but the behavior is the same.
It seems to be a problem in the Oracle OLE DB provider (OraOLEDB).
If I use DevArt UniDAC instead of ADO, everything works as expected.
Can someone help me?
Max
Your query is fine. We ran into a similar issue when migrating from 12.1 to 19. In our case, we have a custom OLE DB provider that interfaces with OraOLEDB (and others) using the Microsoft OLE DB Provider Templates (ATL). When attempting to upgrade from 12.1.x to 19c, we started seeing the strange and confusing "ORA-01036: illegal variable name/number" error for parameterized SELECT queries.
The queries would succeed the first time they were executed, but subsequent executions would fail when all we did was change the parameter value (the queries were executed sequentially). I went on a wild goose chase trying to diagnose the issue and finally decided it had to be an Oracle caching bug of some kind.
Yesterday, I experimented with the cache-related connection string attributes and found that adding the MetaDataCacheSize attribute and setting its value to 0 (zero) resolved the issue for us. None of the current Oracle patches appear to address this issue, at least none of those that mention the ORA-01036 error.
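For reference, a minimal sketch of an ADO/OLE DB connection string with that attribute applied; the data source, user ID, and password are placeholders:

Provider=OraOLEDB.Oracle;Data Source=MYTNSALIAS;User ID=myuser;Password=mypassword;MetaDataCacheSize=0;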

PowerBI Desktop: intermittent ORA-03113 errors

I am connecting to Oracle 12 in Oracle Cloud from Power BI Desktop on Windows Server 2016.
The Oracle client is installed and the TNS file is configured.
Oracle is hosted by a vendor, so my only access is to query the database directly.
In Power BI, when using an Oracle connection, I get ORA-03113 errors about 50% of the time when refreshing data. There is no discernible pattern to when the error appears.
If I connect via a system ODBC connection set up in Windows, I don't get any issues or errors, although the data load is a bit slower.
I would appreciate ideas on what may be causing this issue or what to check to help get more information.
I'm afraid your issue needs some deeper analysis, as ORA-03113 can have various causes, but typically it means that the 'oracle' executable terminated unexpectedly while there was an existing connection.
You should try to isolate the SQL command that is executing when the error occurs. This can be done either by checking the trace files on the server or by using SQL*Net trace if you don't have access to the server. If a statement can be isolated that consistently raises ORA-03113, it can be analysed further (execution plan, triggers, etc.), or perhaps the best option is to raise an SR so Oracle Support can work on the issue.
If you have access to Oracle Support, you can find more information about ORA-03113 troubleshooting in MOS Doc ID 1506805.1. Let me know if I can help you any further.
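If you go the SQL*Net trace route, a minimal client-side sqlnet.ora sketch would look something like this (the trace directory is a placeholder, and level 16 is the verbose 'support' level):

TRACE_LEVEL_CLIENT = 16
TRACE_DIRECTORY_CLIENT = C:\oratrace
TRACE_FILE_CLIENT = cli
TRACE_TIMESTAMP_CLIENT = ON

Trace files at this level grow quickly, so enable it only while reproducing the error.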

Oracle Outer Join causes ORA-12537: Network Session: End of file

I have Oracle 11.2.0 (x64) installed in my personal development environment. When running SELECT statements that LEFT JOIN more than 10 tables, I can easily get Oracle to crash with ORA-12537: Network Session: End of file. If I use INNER JOIN there is no such problem. For the same particular query this happens every time, no matter how much data is in my database (in fact there is not much data, no more than a few hundred rows).
I work with around 100 people who installed the same Oracle version with roughly the same settings, and as far as I know this has not been seen anywhere else.
The error can be triggered whether I run the query from SQL Developer or through C#/.NET data access.
Any ideas please?
My alert log (log.xml) has these entries:
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0x45] [PC:0x35797B4, _kkqstcrf()+1342]
and
Errors in file c:\oracle\diag\rdbms\orcl1\orcl1\trace\orcl1_ora_1192.trc (incident=41028):
ORA-07445: exception encountered: core dump [kkqstcrf()+1342] [ACCESS_VIOLATION] [ADDR:0x45] [PC:0x35797B4] [UNABLE_TO_READ] []
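Since ORA-07445 is an internal error, the usual next step is to package the incident for Oracle Support. A minimal sketch with ADRCI, reusing the incident number from the log above:

adrci> show incident
adrci> show incident -mode detail -p "incident_id=41028"
adrci> ips create package incident 41028

The generated package bundles the trace and incident files referenced in the alert log.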

SSIS - Data flow stuck at Execution Phase while using Attunity Oracle Source

I am using the Attunity Oracle drivers to connect to an Oracle database on a remote server, retrieve data, and dump it into an Excel file.
Everything works fine in Visual Studio (BIDS). From VS I can connect directly to the remote Oracle server and retrieve the data.
But when I deploy this ETL to my production server (64-bit Windows Server 2008 & SQL Server 2012), the ETL always gets stuck in the execution phase. After running for some time (20-30 minutes), it gives the following warning and keeps running without raising any errors -
[SSIS.Pipeline] Information: The buffer manager detected that the system was low on virtual memory, but was unable to swap out any buffers. 0 buffers were considered and 0 were locked.
Either not enough memory is available to the pipeline because not enough is installed, other processes are using it, or too many buffers are locked.
Some more info -
I have checked server memory: only 3 GB of the total 12 GB is in use.
I have already set SQL Server to use a maximum of 8 GB.
I am using a SQL Server Agent job to run the ETL every 15 minutes.
I have tried stopping all other ETLs on the server and running this ETL through the Execute Package Utility, but the result is still the same.
I am using a date range in the Oracle query to retrieve the data; when the query for a particular date range returns no data, the ETL execution always succeeds.
Progress log (Execute Package Utility) -
Any pointers/suggestions?
I hope I have described the issue properly.
Update (5/Mar/2014) -
I tried reducing the amount of data I am retrieving, and the ETL was successful.
I have also set the DefaultBufferSize to 10 MB (max size).
But if the query data exceeds the DefaultBufferSize, why does the package succeed on my development machine but not on the server?
Thanks,
Prateek
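For what it's worth, a rough back-of-the-envelope on buffer sizing may explain the asymmetry (the 5 KB row width below is an assumption for illustration, not from the post):

rows per buffer ≈ DefaultBufferSize / row width ≈ 10,485,760 B / 5,120 B ≈ 2,048 rows

The data flow streams many such buffers through the pipeline, so the result set never has to fit into a single buffer. The warning in the log points at overall virtual-memory pressure on the server (buffers that could not be swapped out), not at the size of any one buffer, which is consistent with the same package behaving differently on machines with different memory profiles.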

Inserting rows to Oracle from Ruby

I have Oracle Instant Client 11.2.0.3 installed on a Mac mini running Mountain Lion (10.8.3). I am able to create, select from, and insert into tables from SQL*Plus. However, using Ruby 1.9.3p327 and ruby-oci8-2.1.5, I am able to select but not insert. The insert operation returns 1 (I'm assuming that means success) and an immediate select returns the row (is it cached on the client?), but the row is not actually persisted in the database: a subsequent select from Ruby or SQL*Plus returns no rows.
I've checked with Wireshark that there is data going to and coming back from the server box (Windows 7 running Oracle Server Personal Edition 11g Release 2).
Any ideas? All help will be greatly appreciated.
Best regards, Adolfo
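The symptoms described, an insert that reports one affected row and is visible to an immediate select in the same session but gone afterwards, are exactly what an uncommitted transaction looks like, and ruby-oci8 does not autocommit by default. A minimal sketch, where the credentials, connect string, and table are placeholders:

require 'oci8'

# Connect; user, password, and connect string are placeholders.
conn = OCI8.new('myuser', 'mypassword', '//dbhost:1521/orcl')

# For DML, exec returns the number of affected rows (the 1 seen in the question).
rows = conn.exec('INSERT INTO my_table (id, name) VALUES (:1, :2)', 1, 'foo')

# Without an explicit commit the insert is rolled back when the session ends,
# which would explain why later selects from Ruby or SQL*Plus find nothing.
conn.commit

conn.logoff

Alternatively, set conn.autocommit = true right after connecting.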
