Inserting rows into Oracle from Ruby

I have Oracle Instant Client 11.2.0.3 installed on a Mac mini running Mountain Lion (10.8.3). I can create, select from, and insert into tables from SQL*Plus. However, using Ruby 1.9.3p327 and ruby-oci8 2.1.5, I can select but not insert. The insert operation returns 1 (I'm assuming that means success), and an immediate select returns the row (is it cached on the client?), but the row is never actually persisted in the database: a subsequent select from Ruby or SQL*Plus returns no rows.
I've checked with Wireshark that there is data going to and coming back from the server box (Windows 7 running Oracle Server Personal Edition 11g Release 2).
Any ideas? All help will be greatly appreciated.
Best regards, Adolfo
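The symptoms above (the insert reports one affected row and is visible to an immediate select on the same connection, but the row is gone afterwards) are exactly what an uncommitted transaction looks like: ruby-oci8 does not autocommit by default, and a session always sees its own uncommitted changes. A minimal sketch, assuming hypothetical connection details and a table my_table:

```ruby
require 'oci8'

conn = OCI8.new('user', 'password', '//dbhost:1521/orcl')
# exec returns the number of affected rows (the 1 you are seeing)
conn.exec('INSERT INTO my_table (id, name) VALUES (:1, :2)', 1, 'adolfo')
# A SELECT on this same connection sees the uncommitted row
# (read-your-own-writes), which is why the immediate select "works";
# the row is not cached on the client.
conn.commit   # without this, the insert is rolled back when the session ends
conn.logout
```

If this is the cause, SQL*Plus behaves differently only because it issues a commit on a clean exit by default.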

Related

ORA-01036: illegal variable name/number with Oracle 18 and Oracle 19 Windows client

After upgrading from Oracle 11/12 to 18/19 I get this error: ORA-01036: illegal variable name/number.
The error occurred in a query like this:
SELECT * FROM (SELECT * FROM TABLE) MY_TABLE WHERE ROWNUM <= :P_ROWNUM
(Subquery + binding parameters)
The identical query works properly with the Oracle 11.2.0.4 or 12.1.0.2 client. It fails with the Oracle Client 18c or 19c.
PS: The Oracle Server is version 18c, 64-bit, for Windows.
I use Delphi 10.1.2 with ADO components (dbGO). I also tried with Delphi 13.3.3 but the behavior is the same.
It seems to be a problem in the Oracle OLE DB provider (ORAOLEDB).
If I don't use ADO but DevArt UniDAC instead, everything works as expected.
Can someone help me?
Max
Your query is fine. We ran into a similar issue when migrating from 12.1 to 19. In our case, we have a custom OLE DB provider that interfaces with OraOLEDB (and others) using the Microsoft OLE DB Provider Templates (ATL). When attempting to upgrade from 12.1.x to 19c, we started seeing the strange and confusing "ORA-01036: illegal variable name/number" error for parameterized SELECT queries. The queries would succeed the first time they were executed, but subsequent executions would fail when all we did was change the parameter value (the queries were executed sequentially).

I went on a wild goose chase trying to diagnose the issue and finally decided it had to be an Oracle caching bug of some kind. Yesterday, I experimented with the cache-related connection string attributes and found that adding the MetaDataCacheSize attribute and setting its value to 0 (zero) resolved the issue for us.

None of the current Oracle patches appear to address this issue, at least none of those that mention the ORA-01036 error.
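For anyone wanting to try the same fix from ADO/dbGo, the attribute goes into the connection string like any other OraOLEDB provider attribute; a sketch, with a hypothetical TNS alias and credentials:

```
Provider=OraOLEDB.Oracle;Data Source=MYDB;User ID=scott;Password=tiger;MetaDataCacheSize=0;
```

Setting the value to 0 disables the metadata cache entirely, which trades a small amount of round-trip overhead for correctness.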

Monetdb database was killed by signal SIGSEGV

I have MonetDB Database Server Toolkit v1.1 (Jun2016-SP2) running on RHEL 6.7.
I ran into an unexpected shutdown, with this message in the log: "database was killed by signal SIGSEGV", when trying to execute a bunch of 'SELECT COUNT(1) FROM ' queries in a single connection. The tables are huge: 8 million rows and 4,000 columns.
I also can't populate the statistics table for such tables.
You are using a really old version of MonetDB; I would first upgrade to the latest version. If the problem remains, then file a bug report with enough detail to independently reconstruct the case, or send a stack trace from a debug build.
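To capture that stack trace, one option (a sketch, assuming the server binary is mserver5, debug symbols are installed, and exactly one server process is running) is to attach gdb to the live process before reproducing the crash:

```shell
# Attach to the running MonetDB server and dump a backtrace of every thread.
gdb --batch -ex "thread apply all bt" -p "$(pgrep mserver5)" > monetdb_backtrace.txt
```

Alternatively, enabling core dumps (ulimit -c unlimited) before starting the server lets gdb inspect the crash after the fact.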

SQL data extracts works in Oracle 11g but not Oracle 12c

When I execute the following SQL using TOAD against an Oracle 11g database, the fully formed XML is returned successfully:
With T As (
  SELECT dbms_xmlgen.getxml(
           'SELECT m.trans_message
              FROM xml_nodes_ams_in a, message m
             WHERE a.id = m.msg_id
               AND a.UPN IN (''A30971016528VE8K'', ''A30971016529VE84'')
             ORDER BY a.upn ASC'
         ) As output_xml
    FROM dual
)
select dbms_xmlgen.Convert(output_xml, 1) from T
However, when I execute the exact same SQL against our newly installed Oracle 12c database, some of the XML data appears to be missing (around 5000 characters).
I have discussed this with the DBA, who reckons it's a client issue rather than a database issue, as he says there is no setting on the database that would cause this.
Has anyone got any advice on how I can progress this issue?
I raised a service request with Oracle and they came back to me and advised that there is a bug with the dbms_xmlgen.Convert function within Oracle 12.1 that was fixed in Oracle 12.2. Basically the function fails with XML greater than 120 KB.

update privilege for user accessing through dblink

We have a setup where we access Oracle 7 from Oracle 10g and update its records. However, since 10g can't access 7 through a database link, we had to use Oracle 9i as a bridge between 10g and 7. Picture it as below:
ORACLE 10g -> dblink -> Oracle 9i -> dblink -> Oracle 7
My issue is that the user we are using on 10g gets an insufficient-privileges error when trying to update the records in Oracle 7.
I have tried updating the records from Oracle 9i to 7 and there was no error, so I assume it is a privilege issue between 10g and 9i. How do I check whether my 10g user can update records in Oracle 7 via Oracle 9i?
My guess is that this is not possible.
https://docs.oracle.com/cd/B28359_01/server.111/b28310/ds_txns003.htm
Distributed transactions require a global coordinator and a negotiation between it, as the master, and the other nodes. Your architecture would require the 9i node to be a coordinator and a coordinated node at the same time. This is just a bet; reading the doc carefully may explain better why it cannot be possible. Making it work would demonstrate the opposite, but I'm pessimistic about that chance.
My opinion is that you should try to do it asynchronously, not in a transaction (which will involve more work, for sure...).
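One pattern sometimes suggested for chained links (an assumption on my part, not something the answer above endorses) is to hide the second hop behind a view on the 9i bridge, so that the 10g side only ever sees one link; whether the distributed transaction then commits cleanly across both hops is exactly the open question. Object and link names below are hypothetical:

```sql
-- On the 9i bridge: wrap the Oracle 7 table behind a local view
CREATE VIEW remote7_emp AS SELECT * FROM emp@link_to_7;

-- On the 10g side: update through the single visible link
UPDATE remote7_emp@link_to_9 SET sal = sal * 1.1 WHERE deptno = 10;
COMMIT;
```

Even if this works, grants on the 9i view (and on the underlying Oracle 7 table for the 9i link's user) still have to be in place.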

ADO showing "hidden columns" with SQL Native Client

I'm working on a legacy application using VB6 and Classic ASP. We're using disconnected ADO recordsets to pass data back and forth. Normally this works. But what has started happening recently is that for any inner/outer join, ADO is including the joined tables' key columns in the available fields to choose from. So when we're specifying a column to update (in the cases where it errors out, the primary key column), it in turn updates the wrong column (one with the same name). I know it's normal for ADO to pull the primary keys of any joined tables, but the default is for ADO to hide them. In our case ADO isn't hiding them.
What I've narrowed it down to is that the SQL Native Client driver is not working correctly. I can go back to the SQL Server driver (SQL 2000) and it works great, but as soon as I switch back to SQL Native Client, it exhibits the behavior above. I've checked the properties on the open connection and the properties of the recordsets themselves; they match in every instance except one: the count of how many hidden columns there are, which makes sense, as SQL Native isn't hiding them.
I've tried everything from deleting the MSADC folder from IIS and re-adding it to uninstalling SQL Native and reinstalling it (and subsequently upgrading it to the newest version). I've also recreated the ODBC connection several times in the process of troubleshooting. At this point I'm at a loss.
One thing to add: SQL Native Client appears to work fine on our other servers, and no one else is having this issue. Does anyone have an idea of what could be happening? Thanks!
Edit: Example of what's happening (this occurs for any query, stored procedures included, with >= 1 joins of any kind)
select temp_id, temp_value from temp_test
inner join another_table on another_table.temp_id = temp_test.temp_id
inner join yet_another_table on yet_another_table.another_id = another_table.another_id
This produces the following fields in the ADO recordset:
SQL Native Client
(0) temp_id
(1) temp_value
(2) temp_id (primary key of another_table)
(3) another_id (primary key of yet_another_table)
SQL Server driver
(0) temp_id
(1) temp_value
SQL Server 2005 itself shows it as it should be: temp_id, temp_value
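For comparing the two drivers, one way to make the difference visible is to dump the recordset's field list together with the rowset's hidden-column count from the ADO dynamic properties. This is only a sketch: "Hidden Columns" is the ADO name corresponding to the OLE DB DBPROP_HIDDENCOLUMNS property, and not every provider exposes it.

```vb
' Sketch (VB6): list the fields ADO sees and the provider's hidden-column count
Dim fld As ADODB.Field
For Each fld In rs.Fields          ' rs is an open Recordset from the join query
    Debug.Print fld.Name
Next
' With the SQL Server driver the joined key columns are counted here instead
' of appearing in rs.Fields; with SQL Native Client they show up as fields.
On Error Resume Next               ' the property may be absent on some providers
Debug.Print rs.Properties("Hidden Columns").Value
```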
It's not the issue described here, is it?
If a change in the connection string changes the behavior, I would suppose that you have two different schemas, and therefore two versions of the same stored procedure, and the one that is executed with SQL Native Client is the incorrect one.
I have exactly the same scenario, and have had it for over a year on our servers and servers at our client. I never found a solution and as a result we simply have to use the SQL Server driver, which is a shame as SQL Native seems to connect significantly faster.
It's nothing to do with different schemas or different versions of the same stored procedure as suggested above. I use a file DSN, and simply changing the driver name toggles the behaviour described above. It seems to happen with all views (probably stored procedures too, as indicated).
If anyone does find a solution I'd be keen to hear about it.
Warwick
