I'm using the Microsoft ODBC for Oracle driver to access an Oracle database from MS Access, but on about half of my linked tables I get a [Function Sequence Error] whenever I try to open the Datasheet view. I've looked around for alternative drivers, but no luck.
Does anyone know how to stop getting these function sequence errors? And if I need a new driver, could you provide a link to a download site if possible? Thanks
I figured it out. The problem was that the Microsoft ODBC for Oracle driver mistakenly converted Oracle's CLOB (Character Large Object) fields into plain Text (255-character) fields in Access. And then Access freaked out whenever it tried to render one of these CLOBs holding more than 255 characters.
So I just excluded those CLOB fields from all my queries and migration tables (a sketch of what that looks like is below). It does mean that I can't migrate fields like "Description" or "Notes," but at least I can migrate the primary keys and relationships. That's good enough for me, for now.
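For anyone hitting the same thing, here is a rough sketch of the workaround. The table and column names are made up for illustration, not from my actual schema:

-- Hypothetical table ORDERS with a CLOB column NOTES.
-- Instead of SELECT *, list only the non-CLOB columns so the
-- linked table never asks Access to render the CLOB.
SELECT ORDER_ID, CUSTOMER_ID, ORDER_DATE
FROM ORDERS;

-- If a short preview of the long text is still useful, Oracle's
-- DBMS_LOB.SUBSTR can truncate the CLOB server-side into a plain
-- VARCHAR2 that the driver maps safely to an Access Text field:
SELECT ORDER_ID, DBMS_LOB.SUBSTR(NOTES, 255, 1) AS NOTES_PREVIEW
FROM ORDERS;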
Related
We've had some success with removing LOB fields and avoiding row-by-row processing, but with Hadoop we can't seem to get around this. In some cases the fields in question are less than 10 characters, yet SSIS sees them as LOBs. Is this an issue with Hadoop, the ODBC driver, or SSIS? What steps can we take to make a determination? Help me, Obi-Wan Kenobi. You're our last hope.
Hi, if you can identify the columns that you need to convert, add a Data Conversion step for those columns, point the converted outputs at the destination columns, and set ValidateExternalMetadata to False.
If that still doesn't work, go into the advanced properties of the ODBC source and change the column types back to something like DT_WSTR.
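If neither of those works, one more thing to try, assuming your ODBC source lets you supply a SQL command instead of a table name, and that the Hadoop SQL dialect supports CAST to VARCHAR: cast the offending columns to a fixed width in the source query itself, so the driver never reports them as LOBs. A sketch with made-up names:

-- Hypothetical source table; STATUS_CODE is under 10 characters
-- but the driver reports it as a LOB. Casting it to a fixed-width
-- type in the source query lets SSIS map it as DT_STR/DT_WSTR
-- instead of DT_TEXT/DT_NTEXT.
SELECT ID,
       CAST(STATUS_CODE AS VARCHAR(10)) AS STATUS_CODE
FROM SOURCE_TABLE;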
I am currently working on a stupid system where I have been given no direct DB access, only a weird SQL Workbench that cannot do much beyond some basic stuff. For some reason I need to do a SELECT * on one of the tables, which has 174 columns. And whenever I try that it gives me the following error:
"ERROR: Error -27 was encountered whilst running the SQL command. (-3)
Error -3 running SQL : ORACLE Driver Error [-27]: Selected data too
large for SQL Workbench"
Quick googling gave me nothing apart from this (in one of the Oracle documents):
In the SQL Editor, the maximum length of one row of the formatted
result is 8190 bytes. When this length is exceeded, the ORA connector
generates the above error
Now, I was wondering if anyone could give me a solution; that would be a great help. One solution I am considering is to increase the Maximum Length for the Ora Connector/Driver, but I am a novice in Oracle and do not know anything beyond querying, so I haven't been able to change the Maximum Length yet.
So, please if anybody could help me out with this, that would be great.
Thanks a lot guys
Being asked to do database work through the Uniface SQL Workbench is not a good situation. It is only a very simple tool that you should use in an emergency, when nothing else is available.
You could run a couple of queries, each selecting the primary key plus a bunch of the fields, and stitch the results together in Excel; a sketch is below.
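Something like this, assuming a hypothetical wide table T with primary key ID and columns C1 through C174, with each slice kept small enough that a formatted row stays under the 8190-byte limit:

-- First batch: the key plus one slice of columns.
SELECT ID, C1, C2, C3 FROM T ORDER BY ID;
-- Next batch: the key plus the following slice.
SELECT ID, C4, C5, C6 FROM T ORDER BY ID;
-- Repeat until every column is covered, then match the result
-- sets on ID in Excel (e.g. with VLOOKUP) to rebuild full rows.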
If you have access to the Uniface Development Environment you can use it to convert your Oracle data to, for example, XML. Instructions are in the Uniface help file ulibrary.chm; see the command line switch /cpy.
You cannot change the maximum record length of the Uniface Oracle Connector.
Our web application stores UTF-8 encoded data in VARCHAR fields. Recently, we have provided our customers access to this data via ODBC using DataDirect's OpenAccess ODBC driver. This is achieved using DataDirect's OpenAccess SDK, writing a C# .Net class to interface with the service. We only allow customers to perform SELECT queries. We also limit results to 100K rows at this time.
This solution really works great, except querying fields with some of this encoded data appeared as gibberish to the users, understandably. In the next release of our service, I would like to offer users the ability to query using unencoded strings, and subsequently see the unencoded results.
I have solved this by UTF-8 encoding the incoming query, then returning the unencoded results by flagging VARCHAR fields as WVARCHAR. This is actually working really well. It does mean, though, that every VARCHAR column comes back as a WVARCHAR regardless of the presence (or lack) of Unicode characters.
Many of our customers at this time have adopted the method of creating a linked server in SSMS on their own SQL Server instance, and it is my preferred method of connecting. Since our service limits results to 100K rows, I encourage everyone to use OPENQUERY to perform queries. It seems, though, that there is something I'm missing in the configuration of my Linked Server in SSMS. When string functions (LEFT, RIGHT, SUBSTRING for example) are performed against these WVARCHAR columns, SSMS returns the following error:
OLE DB provider "MSDASQL" for linked server "LOCAL" returned message
"Requested conversion is not supported.". Msg 7341, Level 16, State 2,
Line 1 Cannot get the current row value of column "[MSDASQL].ColName"
from OLE DB provider "MSDASQL" for linked server "LOCAL".
This would be returned for a query such as this:
SELECT *
FROM OPENQUERY([LOCAL], '
SELECT LEFT(FirstName, 2) AS ColName
FROM dbo.User
')
If I were to remove the LEFT function from this query, the FirstName column would be returned, properly decoded, without error.
This problem does not affect queries in, say, MS Excel for example. And on the surface the strings seem to be properly affected by their respective functions as I debug my way through the .Net class which interfaces to the DataDirect product. I've attempted to change all the Server Options on the Linked Server properties, but I've not had any luck finding the right combination. I'm just looking for the right tree to bark up here. Is it my treatment of the results, changing them to WVARCHAR? Or is it some attribute of my SSMS linked server that I need to change that I'm missing?
On the linked server properties, setting COLLATION COMPATIBLE to TRUE resolved the problem.
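For anyone who prefers to script this rather than click through the dialog, the same option can be set with sp_serveroption (using the question's [LOCAL] linked server name):

EXEC sp_serveroption @server = N'LOCAL',
    @optname = 'collation compatible',
    @optvalue = 'true';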
I ported a Delphi 6 application to Delphi 2007, and it uses the BDE to connect to an Oracle 9i database. I am getting an
ORA-01426: numeric overflow
exception when I execute a stored procedure. This happens randomly, and if I re-run the stored procedure through the application with the same parameters, the exception does not occur.
The old Delphi 6 application works just fine.
Ideas anybody?
Showing a code example could make this easier, but here are a couple of hunches:
Are the data coming from another source (like Excel) that does not have explicit data types? Mixed or ambiguous data may be causing BDE to assign the wrong data type to a field that then is incompatible with the database field.
It could be a numeric formatting issue (some U.S.-centric components do not handle localization properly). Is your localization something other than English (U.S.)? If so, does changing it to English (U.S.) fix the problem?
If these completely miss, more details might help.
Does the D6 version of the app use the same version of BDE, Oracle, and the database? If so, then it's probably something about the data being passed (either content or mechanism).
Not knowing what those data are, nor how they are passed, makes it pretty hard to diagnose.
When you create a Microsoft Access 2003 link to an Oracle table using Oracle's ODBC driver, you are sometimes asked to state which columns are the primary key(s).
I would like to know how to change that initial assignment, or even how to get Access/ODBC to forget the assignment. In my limited testing I wonder if the assignment isn't cached by the ODBC driver itself.
The columns I initially chose are not correct.
Update: I never did get a full answer on this one, deleting the links then restoring them didn't work. I think it's an obscure bug. I've moved on and haven't had to worry about this oddity since.
You must delete the link to the table and create a new one. When a table is linked all the connection info about the table's path, structure (including primary key), permissions, passwords and statistics are stored in the Access db. If any of those items change in the linked table, refreshing links won't automatically update it on the Access side because Access continues to use the previously stored info. You must delete or drop the linked table and recreate the link, storing the current connection information.
Don't know for sure if this next bit also applies to ODBC linked tables, but I suspect it does. For Jet tables, it's a good idea to periodically delete all links and recreate them to improve query performance: if a linked table's statistics were gathered when the table held few records, then once the table is filled with many more records, refreshed statistics will let Jet's optimizer correctly decide whether indexes or a full table scan is the better course of action when running a query.
Is it not possible to delete the link and then relink?