I have a problem running the same query on Oracle from different client platforms: the output of the query differs depending on the client.
First I used the "Toad for Oracle Database" application, and the results looked like this:
The results were perfect, exactly as I wanted.
But when I run the same query on other platforms, namely PHP CodeIgniter and Navicat (it may also apply to other platforms), the results are different, as in this picture:
Here is the query that I am trying to run, which does not work on those other platforms:
select STANDARD_HASH(sysdate) from dual;
Try adding RAWTOHEX. This will convert the returned value from a RAW to a VARCHAR2, which should be more easily understood by all SQL clients. (As Justin pointed out, this is probably a client issue, not a problem with STANDARD_HASH.)
select rawtohex(standard_hash(sysdate)) the_hash from dual;
THE_HASH
----------------------------------------
FBC14021D134F922420086D291906B0B0D783421
I recently wrote a tool that extracts certain data from our DBs. It runs as a PL/SQL script in SQL Developer (either in a worksheet or as an extension plugin) and writes its output to the SQL Developer log window.
This all worked fine on my system, but now I have encountered an issue when users are on systems with a different language, or more specifically, with different default time/date/timestamp formats than the machine on which I had been developing and testing.
Now I am not sure: is the format of dates, times and timestamps controlled by the DB or by SQL Developer? In my understanding these PL/SQL scripts are sent to the DB for execution and their output is sent back to SQL Developer. That would mean that the format of the output depends solely on the DB (or the system on which the DB executes). Or are the NLS settings of the client (SQL Developer) somehow involved here?
To make my tool auto-adjust to these settings I need to be able to query these formats, either from the DB in use (Oracle 12.2 or Oracle XE 18/19 in our case) or from SQL Developer.
Assuming it's the DB: is there a table or view that contains the default format strings that are used for SELECT results?
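Something along these lines is what I have in mind; the NLS_SESSION_PARAMETERS view looks like a candidate, if that's the right place to look:

select parameter, value
  from nls_session_parameters
 where parameter in ('NLS_DATE_FORMAT', 'NLS_TIMESTAMP_FORMAT', 'NLS_TIMESTAMP_TZ_FORMAT');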
Note: The point is NOT how to format dates etc. as strings, but the other way round:
I get the query results as strings in the log window. These contain dates and timestamps, and I need a hint from the DB system to figure out how to interpret them. E.g. when I get a date such as '10-11-12', is this meant to be Nov. 10th, 2012, or is it meant to be Nov. 12th, 2010?
Hope I could make myself clear...
I am using the google-cloud-bigquery gem (version 0.20.2; I can't upgrade at the moment).
I have an audit dataset which contains many tables of the following format:
audit.some_table20170101
audit.some_table20170102
audit.some_table20170103
etc.
I am trying to run a query which will scan all of these tables, and give me the last value of field some_field.
What I was going for was to use the table wildcard:
FROM audit.some_table*
and then, hopefully:
SELECT LAST(some_field) AS last_some_field
In the BigQuery web console I was able to do so by using backticks (FROM `audit.some_table*`), but doing the same programmatically with the gem causes a Google::Cloud::InvalidArgumentError: invalid: Invalid table name: `audit.some_table*`
Even in the web console, when I try to use the LAST function it requires legacy SQL, which then gives an error due to the backticks of the previous section. If I disable legacy SQL, LAST is not available anymore (unrecognized function) and I have to order by a timestamp column descending and limit to 1.
Any ideas how to solve these problems so that I can run this query using the above-mentioned gem and version?
LAST is only meaningful when there is an order. Tables in BigQuery do not have an inherent ordering; if you run SELECT * FROM table you may get results in a different order every time. Therefore the right thing to do is to use an ORDER BY some_value DESC LIMIT 1 construct.
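For example, in standard SQL (some_timestamp is a stand-in for whatever column defines "last" in your tables):

SELECT some_field
FROM `audit.some_table*`
ORDER BY some_timestamp DESC
LIMIT 1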
Wildcard tables are indeed only available in standard SQL; to get similar functionality with legacy SQL you can use the TABLE_DATE_RANGE function in the FROM clause.
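A legacy SQL equivalent might look like this (again, some_timestamp is a placeholder and the date range is just an example):

SELECT some_field
FROM TABLE_DATE_RANGE([audit.some_table],
                      TIMESTAMP('2017-01-01'),
                      TIMESTAMP('2017-01-03'))
ORDER BY some_timestamp DESC
LIMIT 1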
I have an Oracle bind query that is extremely slow (about 2 minutes) when it executes from my C# program but runs very quickly in SQL Developer. It has two parameters that hit the table's index:
select t.Field1, t.Field2
from theTable t
where t.key1=:key1
and t.key2=:key2
Also, if I remove the bind variables and create dynamic sql, it runs just like it does in SQL Developer.
Any suggestions?
BTW, I'm using ODP.NET.
If you are replacing the bind variables with static values in SQL Developer, then you're not really running the same test. Make sure you use the bind variables, and if it's also slow then you're just getting bitten by a bad cached execution plan. Updating the stats on that table should resolve it.
However, if you are actually using bind variables in SQL Developer, then keep reading. The TL;DR version is that the settings ODP.NET runs under sometimes cause the optimizer to take a slightly more pessimistic approach. Start with updating the stats, but have your DBA capture the execution plan under both scenarios and compare them to confirm.
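Updating the statistics is a quick first step; a minimal sketch with DBMS_STATS (schema and table names are placeholders):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'YOUR_SCHEMA', tabname => 'THETABLE');
END;
/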
I'm reposting my answer from here: https://stackoverflow.com/a/14712992/852208
I considered flagging yours as a duplicate, but your title is a little more concise since it identifies that the query runs fast in SQL Developer. I'll welcome advice on handling this in another manner.
Adding the following to your config will send ODP.NET tracing info to a log file. For unmanaged ODP.NET the section looks something like this (file name and trace level are examples):
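<oracle.dataaccess.client>
  <settings>
    <!-- path and level are examples; TraceLevel is a bit mask, see the ODP.NET docs for values -->
    <add name="TraceFileName" value="C:\temp\odpnet.trc"/>
    <add name="TraceLevel" value="7"/>
  </settings>
</oracle.dataaccess.client>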
This will probably only be helpful if you can find a large gap in time. Chances are rows are actually coming in, just at a slower pace.
Try adding "enlist=false" to your connection string. I don't consider this a solution since it effecitively disables distributed transactions but it should help you isolate the issue. You can get a little bit more information from an oracle forumns post:
From an ODP perspective, all we can really point out is that the
behavior occurs when OCI_ATR_EXTERNAL_NAME and OCI_ATR_INTERNAL_NAME
are set on the underlying OCI connection (which is what happens when
distrib tx support is enabled).
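For reference, enlist=false is just another attribute in the connection string (the data source and credentials here are placeholders):

Data Source=MYDB;User Id=myuser;Password=mypassword;Enlist=false;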
I'd guess what you're not seeing is that the execution plan is actually different (meaning the actual performance hit is occurring on the server) between the ODP.NET call and the SQL Developer call. Have your DBA trace the connection and obtain execution plans from both the ODP.NET call and the call straight from SQL Developer (or with the enlist=false parameter).
If you confirm different execution plans, or if you want to take a preemptive shot in the dark, update the statistics on the related tables. In my case this corrected the issue, indicating that execution plan generation doesn't really follow different rules for the different types of connections, but that the cost analysis is just slightly more pessimistic when a distributed transaction might be involved. Query hints to force an execution plan are also an option, but only as a last resort.
Finally, it could be a network issue. If your ODP.NET install is using a fresh Oracle home (which I would expect unless you did some post-install configuring) then the tnsnames.ora could be different. Host names in tnsnames.ora might not be fully qualified, creating more delays resolving the server. I'd only expect the first attempt (and not subsequent attempts) to be slow in this case, so I don't think it's the issue, but I thought it should be mentioned.
Are the parameters bound to the correct data types in C#? Are the columns key1 and key2 numbers, while the parameters :key1 and :key2 are strings? If so, the query may return the correct results but will require implicit conversion. That implicit conversion is like applying a function, to_char(key1), to the column, which prevents an index from being used.
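To make that concrete: when the conversion lands on the column side, the predicate effectively becomes a function call on the column, along the lines of this sketch (mirroring the to_char example above):

select t.Field1, t.Field2
from theTable t
where to_char(t.key1) = :key1   -- function on the column: the index on key1 is ignored
  and to_char(t.key2) = :key2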
Please also check the number of rows returned by the query. If the number is big, then possibly C# is fetching all rows while the other tool fetches only the first batch. Fetching all rows may require many more disk reads, which is slower. To check this, try running the following in SQL Developer:
SELECT COUNT(*) FROM (
select t.Field1, t.Field2
from theTable t
where t.key1=:key1
and t.key2=:key2
)
The above query forces all rows to be read, so it touches the maximum number of database blocks.
A nice tool in such cases is the tkprof utility, which shows the SQL execution plan; the plans may differ between the cases above (although they should not).
It is also possible that you have accidentally connected to two different databases. In such cases it is useful to compare the results of the queries.
Since you are reporting "bind is slow", I assume you have checked the SQL without binds and it was fast. In 99% of cases, using binds makes things better. Please check whether the query with constants runs fast. If yes, then the problem may be implicit conversion of the key1 or key2 column (e.g. t.key1 is a number and :key1 is a string).
SELECT LASTNAME, STATE, COUNT(*)
FROM TEST
WHERE STATE IN ('NY','NJ','CA')
GROUP BY STATE,LASTNAME
HAVING COUNT(*)>1;
A similar query in MS SQL Server and Sybase used to be processed internally as follows:
The WHERE clause is applied to the TEST table and a result set (an internal table) is built; then GROUP BY is applied and another internal result set is made; finally HAVING is applied and the final result set is shown to the user.
Does Oracle use this result set approach as well, or does it do something different?
By the way, I tried Googling around and checked the Oracle documentation, but couldn't find the detail I was looking for. The Sybase documentation is pretty clear on such things.
You can count on Oracle to materialize the result set in this case. To be sure, check the plan. I'm betting you will find a hash group by and then a filter operation at the top.
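To check, you can run the query through EXPLAIN PLAN and display the plan with DBMS_XPLAN:

EXPLAIN PLAN FOR
SELECT LASTNAME, STATE, COUNT(*)
FROM TEST
WHERE STATE IN ('NY','NJ','CA')
GROUP BY STATE, LASTNAME
HAVING COUNT(*) > 1;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);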
There may be some rare variations of your case that could be solved by walking a suitable index in order to avoid the intermediate result set, but I have never come across it in an execution plan.
We are planning to migrate our DB to Oracle. We need to manually check that each piece of embedded SQL works in Oracle, as a few may follow different SQL rules. Now my need is very simple.
I need to browse through a file which may contain queries like this.
String sql = "select * from test where name="+test+"and age="+age;
There are nearly 1000 files, and each file has different kinds of queries like this, from which I have to pluck out the query alone; that part I have done with a Unix script. But I need to convert these Java-based queries to Oracle-compatible queries, i.e.:
select * from test where name="name" and age="age"
Basically I need to check the syntax of the queries this way. I have seen something like this in TOAD, but I have more than 1000 files and can't manually change each one. Is there a way?
I will explain more if the question is not clear.
For performance and security reasons you should use PreparedStatement with bind parameters rather than string concatenation to build your SQL strings.
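For example, the query from the question becomes something like this (a sketch; conn is assumed to be an open java.sql.Connection and age is assumed to be numeric):

String sql = "select * from test where name = ? and age = ?";
try (PreparedStatement stmt = conn.prepareStatement(sql)) {
    stmt.setString(1, test);  // values are bound, not concatenated, so no quoting issues
    stmt.setInt(2, age);
    try (ResultSet rs = stmt.executeQuery()) {
        while (rs.next()) {
            // process each row
        }
    }
}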
I don't know of a way to tackle this problem other than fixing the code that needs to be fixed. If you can find common patterns then you can automate some of the editing using find/replace or sed or some other tool, as long as you diff the result before checking it in.
If there are thousands of files I guess that there is a reasonable sized team that built the code this way. It seems fair to share the workload out amongst the people that built the system, rather than dump it all on one person. Otherwise you will end up as the "SQL fixing guy" and nobody else on the team will have any incentive to write SQL code in a more portable way.
Does your current application execute SQL through a common class? Could you add some logging to print out the raw SQL in this common class? From that output you could write a small script to run each statement against Oracle.
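A rough sketch of such a script (the connection details, file name, and one-statement-per-line assumption are all hypothetical):

import java.nio.file.*;
import java.sql.*;

public class SyntaxCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; adjust for your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "myuser", "mypassword")) {
            // Assumes the Unix script wrote one statement per line to queries.txt.
            for (String sql : Files.readAllLines(Paths.get("queries.txt"))) {
                try (Statement st = conn.createStatement()) {
                    // EXPLAIN PLAN makes Oracle parse the statement without executing it.
                    st.execute("EXPLAIN PLAN FOR " + sql);
                    System.out.println("OK:     " + sql);
                } catch (SQLException e) {
                    System.out.println("FAILED: " + sql + " -- " + e.getMessage());
                }
            }
        }
    }
}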