My Toad for Oracle recently updated to version 14, and since then I have been having an issue with exporting data. This applies to result sets of a few thousand records as well as those with a few million records.
For example, I'm currently attempting to export a data set of around 50k rows, but when I try to export it I get the following error message:
Qry: Cannot perform this operation on a closed dataset
I also get the above error when trying to skip to the last row of the results in Toad.
I never had this issue in the past and was able to export as many rows as I wanted. Is there some setting I need to enable in Toad to allow this? A coworker is able to export the data from the same query results, so I'm not sure why it is specifically me having this issue.
Yes, it's 2021 and I am still using DataSets! But it still mostly works ...
However, I am having an issue that is slowly driving me crazy. I have several projects using a PostgreSQL database and DataSets. What's causing me grief is that with MS SQL Server databases, tables with identity keys have no trouble updating the DataTable and returning the generated keys after the update. With Postgres, we need to add a RETURNING id clause at the end of the INSERT query, or use a SELECT currval(). Which by itself is fine. But if you then edit the DataTable in any fashion (add a query to it, for example), it overwrites the insert query and discards the parameter provided to handle the return value.
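For reference, this is the kind of statement I mean; a rough sketch against a made-up orders table with a serial id column:

    -- Postgres returns the generated key from the INSERT itself:
    INSERT INTO orders (customer_name, total)
    VALUES ('ACME', 100.00)
    RETURNING id;

    -- or, right after the insert in the same session:
    SELECT currval(pg_get_serial_sequence('orders', 'id'));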
Is there any known way around this, or am I missing something simple? For the past 20 years I worked almost exclusively with MS SQL Server and it just works ...
Thanks in advance!
I thought for sure this would be an easy issue, but I haven't been able to find anything. In SQL Server SSMS, if I run a SQL statement, I get back all the records of that query, but in Oracle SQL Developer I apparently can get back at most 200 records, so I cannot really test the speed or look at the data. How can I increase this limit to be as much as I need, to match how SSMS works in that regard?
I thought this would be a quick Google search, but it seems very difficult to find, if it is even possible. I found one article on Stack Overflow that states:
You can also edit the preferences file by hand to set the Array Fetch Size to any value.
Mine is found at C:\Users\<user>\AppData\Roaming\SQL Developer\system4.0.2.15.21\o.sqldeveloper.12.2.0.15.21\product-preferences.xml on Win 7 (x64).
The value is on line 372 for me and reads
I have changed it to 2000 and it works for me.
But I cannot find that location. I can find the SQL Developer folder, but my system folder is 19.xxxx and there is no corresponding file in that location. I did a search for "product-preferences.xml" and couldn't find it anywhere in the SQL Developer folder. Not sure if Windows 10 uses a different location.
As such, is there any way I can edit a config file of some sort to change this setting, or some other way to do it?
If you're testing execution times you're already good. Adding more rows to the result screen is just adding fetch time.
If you want to add fetch time to your testing, execute the query as a script (F5). However, this still has a max number of rows you can print to the screen, also set in preferences.
Your best bet, I think, is the AutoTrace feature. You can tell it to fetch all the rows, and you'll also get a ton of performance metrics plus the actual execution plan.
In the AutoTrace preferences, check the last box (fetch all rows), then use the AutoTrace button to run the scenario.
I am having issues with getting the correct number of rows. I ran this code a while back and according to the data, this is the number of rows I should be getting.
Original
But I just ran it again after probably a few weeks, and I am sure the data has not been tampered with because the tables have the same data as before:
Ran Again
I am currently working on a stupid system where I have been given no direct DB access, just a weird SQL Workbench that cannot do much beyond some basic stuff. For some reason I need to do a SELECT * on one of the tables, which has 174 columns, and whenever I try that it gives me the following error:
"ERROR: Error -27 was encountered whilst running the SQL command. (-3)
Error -3 running SQL : ORACLE Driver Error [-27]: Selected data too
large for SQL Workbench"
Quick googling gave me nothing apart from this (in one of the Oracle documents):
In the SQL Editor, the maximum length of one row of the formatted result is 8190 bytes. When this length is exceeded, the ORA connector generates the above error.
Now, I was wondering if anyone could give me a solution; that would be a great help. One solution I am considering is to increase the maximum length for the ORA connector/driver, but I am a novice in Oracle and do not know much beyond querying, so I haven't been able to change the maximum length yet.
So please, if anybody could help me out with this, that would be great.
Thanks a lot guys
Being asked to do database work through the Uniface SQL Workbench is not a good situation. It is only a very simple tool that you can use in an emergency if nothing else is available.
You could run a couple of queries, each selecting the primary key plus a subset of the fields, and stitch the results together in Excel.
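For example, roughly like this (table and column names are made up):

    -- Split the 174 columns into a few batches, always repeating the primary key,
    -- so each formatted result row stays under the 8190-byte limit:
    SELECT id, col_001, col_002 /* ... first batch of columns ... */ FROM wide_table ORDER BY id;
    SELECT id, col_060, col_061 /* ... next batch ... */ FROM wide_table ORDER BY id;
    SELECT id, col_120, col_121 /* ... last batch ... */ FROM wide_table ORDER BY id;
    -- Paste the result grids side by side in Excel, matching rows on id.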
If you have access to the Uniface Development Environment, you can use it to convert your Oracle data to, for example, XML. Instructions are in the Uniface help file ulibrary.chm; see the command line switch /cpy.
You cannot change the maximum record length of the Uniface Oracle Connector.
I'm importing data from a text file using Bulk Insert in a script component in an SSIS package.
The package ran successfully and the data was imported into SQL.
Now how do I validate the completeness of the data?
1. I can get the row count from source and destination and compare.
But my manager wants to know how we can verify that all the data has come across as-is, without any issues.
If we were comparing two tables, we could probably join them on all fields and see if anything is missing.
I'm not sure how to compare a text file and a SQL table, though. One way I could do it is to write code that reads the file line by line, queries the database for that record, and compares each and every field. We have millions of records, so this is not going to be a simple task.
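The only other approach I can think of is to first bulk-load the file into a staging table and then compare the two tables set-wise, roughly like this (table names are made up, and it assumes the staging table has the same columns as the destination):

    -- Rows that are in the file (staged) but never made it into the destination:
    SELECT * FROM dbo.Staging_Orders
    EXCEPT
    SELECT * FROM dbo.Orders;

    -- Rows in the destination that are not in the file:
    SELECT * FROM dbo.Orders
    EXCEPT
    SELECT * FROM dbo.Staging_Orders;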
Is there any other way to validate all of the data?
Any suggestions would be much appreciated
Thanks
Ned
Well, you could take the same file and do a Lookup against the SQL table; any rows whose columns don't match can be routed to a Row Count.
Here's a generic example of how you can do this.