How can I stop Oracle from removing the leading zero from a decimal when it is converted to char?
For example:
select to_char(0.1) from dual;
returns .1, not 0.1.
I know that I can use a format parameter, but I want implicit conversions to work correctly as well.
I have a lot of PL/SQL code that gets data from the database.
In some cases there are conversions from float to varchar2.
For example, using dbms_sql.define_array with a varchar2 table when the column type of the query is NUMBER.
I can try to find all such places and correct them (and I am doing that), but IMHO a better way would be to set up a rule for such conversions.
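A simplified sketch of the kind of code I mean (not my real code; the query and sizes are invented for illustration):

declare
  c      integer := dbms_sql.open_cursor;
  v_vals dbms_sql.varchar2_table;
  v_rows integer;
begin
  dbms_sql.parse(c, 'select 0.1 from dual', dbms_sql.native);
  dbms_sql.define_array(c, 1, v_vals, 10, 1);  -- varchar2 array for a NUMBER column
  v_rows := dbms_sql.execute(c);
  v_rows := dbms_sql.fetch_rows(c);
  dbms_sql.column_value(c, 1, v_vals);
  dbms_output.put_line(v_vals(1));             -- prints .1, not 0.1
  dbms_sql.close_cursor(c);
end;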
Thanks in advance.
AFAIK you cannot alter the default format for implicit conversions.
If you limit yourself to using the conversion in the context of inserts/updates to a given table, you might mimic the intended behaviour using database triggers. I do not see any advantage over an explicit format model in your queries for that use case.
There are some official Oracle docs claiming that you can override built-in functions by re-implementing them in Java. I have never done that, I do not know whether it is a generally viable method, and I would strongly discourage any attempt at tinkering with the database API (therefore, no link is included)!
Assuming that what you aim at is feasible somehow, reconsider why you do not want to state the conversion format explicitly. The few additional keystrokes are hardly a nuisance for the developer, while the idiom contributes to a self-documenting code base.
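For comparison, here is what the explicit format model buys you (the FM mask below is just one reasonable choice):

select to_char(0.1)              as implicit_fmt,  -- .1
       to_char(0.1, 'FM990.999') as explicit_fmt   -- 0.1
from dual;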
Spend your development resources more wisely than on micro-optimizations!
Can, with today's drivers for the most popular DBMSs, the JDBC type always be set to JDBCType.OTHER? And can input/output classes be used interchangeably within the character stream/binary stream/number/time point type categories?
I am asking, first, specifically about the JDBCType/java.sql.Types constants and, second, about the methods used for retrieving values (i.e. the Java class to which a column maps). My experience with several databases has been that, in general, they will happily convert anything to anything else if only it makes sense. Mind you, I did not perform any kind of exhaustive tests; it is just my experience that I could almost always access a ResultSet column as any type I wanted in that specific context, without worrying about the JDBC<=>DBMS SQL mapping. Obviously, I am not advocating timestamp<=>string or integer<=>string conversions and the like; I am also aware, for example, of the subtle differences between accessing a column as ZonedDateTime with getObject and manually converting the result of getTimestamp, but that's a difference between the concepts behind these classes.
Explicitly, applications use JDBCType in practice only in PreparedStatement.setNull and CallableStatement.registerOutParameter, and here my experience has also been that, as long as I stuck to the number/string/time/binary distinction, I could pick the type as I wished.
So, overall, they seem to me like a relic of questionable value from 15(?) years ago, when we didn't have the experience and knowledge about building modern server-side applications and most of Java EE was based on fantasies.
The type constants themselves are also used for metadata introspection. For example, for DatabaseMetaData.getColumns, the result set column DATA_TYPE contains the java.sql.Types code, and ResultSetMetaData.getColumnType returns the java.sql.Types code, and that applies to other metadata methods and objects. Metadata introspection might not be very important in normal programs, but it is used a lot by database tools and data access libraries or frameworks.
Other usage really depends on the driver and database system. Some drivers (and database systems) will always determine the type of columns at prepare time, and when setting values the driver will convert the value to the expected type of the parameter (as long as such a conversion is possible or specified by JDBC). That is, if the parameter is VARCHAR and you set a long, then the driver will convert the long to a string and use the string as the parameter value. For those database systems, java.sql.Types and java.sql.JDBCType don't have a lot of value (beyond metadata introspection) and will usually be ignored.
In other database systems and their drivers, parameters don't necessarily have an expected type at prepare time (or the prepare phase can be skipped if the driver supplies the type information, or the database system allows you to override parameter types), and the type will be determined by explicitly setting the value (so setting a string will determine the type as VARCHAR, while setting a long will determine the type as BIGINT, etc.). In those cases the type constants do have their uses, for example in setNull or in setObject, as they specify the type of the parameter, which can imply specific conversions or behaviour in the driver or in the database. They may also be necessary where a parameter is passed to a polymorphic or overloaded function (that is, the actual type of the parameters determines what the function does and what it returns).
The registerOutParameter in CallableStatement is actually a special case of that. For the first type of driver, it is usually technically unnecessary (as the types are determined by the prepare), while for the second type it can be either necessary, to leave conversion of values to a specific type to the database engine, or useful for executing stored procedures without an explicit prepare: you tell the driver which OUT types to expect, and it can then execute the procedure without having prepared it first. In that last case, on execute it will - for example - send the statement text, the parameters, and a descriptor of the expected OUT types. If there is a mismatch (incompatible types, too few or too many parameters or OUT types, etc.), the database system would then reject execution, or - though I don't know if this exists in practice - the combination of actual parameter types and expected OUT types could select a specific stored procedure implementation (if a database system supports overloaded stored procedures).
I try to save the value 0.1 to the database, but it becomes .1, and when I try to use it as a Java double type it causes an error. Do I need to use a format method in Java to be able to use it?
Sorry for sounding like a nagging teacher, but you really need to learn some basics of computing: when stored as a number (either a float or a variable-length decimal) there is no difference between 0.1 and .1 - it's just a matter of the display format, not the underlying value. E.g. go and read http://en.wikipedia.org/wiki/IEEE_floating_point to see how floats are typically encoded. Also, see Oracle Floats vs Number to understand how Oracle FLOATs are actually NUMERICs, not IEEE floats.
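You can see this for yourself with a quick query:

select case when 0.1 = .1 then 'equal' else 'different' end as comparison,  -- equal
       to_char(.1, 'FM990.999') as formatted                                -- 0.1
from dual;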
Secondly, always post as much source code as possible - with no information, there's no way anyone can even make an educated guess at the problem.
I have some code that is responsible for converting data from an IDataReader into an IronPython.Runtime.List of PythonTuples. This same code is employed for several different kinds of database connections (including Access, Oracle and MySql).
Oracle's OracleDecimal datatype causes an overflow when calling dataReader.GetValues() when the cursor contains a value with a large precision. This issue has been well documented, and the solutions always involve using specific methods on the OracleDataAdapter. I only have an IDataReader interface.
Is there any way around this issue without binding my code specifically to ODP.NET? Surely there must be some way to get at this data in a provider-agnostic way?
The only provider-agnostic method that I am aware of is to round the values in your select statement. I've found that rounding to 15 decimal places usually does the trick.
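Something along these lines (the table and column names are placeholders):

-- rounding trims precision that .NET's decimal type cannot represent
select round(t.some_numeric_col, 15) as some_numeric_col
from some_table t;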
It may not be exactly what you're looking for, but the System.Data.Common.DbDataReader class has a GetProviderSpecificValues method that may do what you want.
Okay, I have reached a sort of impasse.
In my open source project, a .NET-based Oracle database browser, I've implemented a bunch of refactoring tools. So far, so good. The one feature I was really hoping to implement was a big "Global Reformat" that would make the code (scripts, functions, procedures, packages, views, etc.) standards compliant. (I've always been saddened by the lack of decent SQL refactoring tools, and wanted to do something about it.)
Unfortunately, I am discovering, much to my chagrin, that there doesn't seem to be any one widely-used or even "generally accepted" standard for PL/SQL. That kind of puts a crimp in my implementation plans.
My search has been fairly exhaustive. I've found lots of conflicting documents, threads and articles, and the opinions are fairly diverse. (Comma placement, of all things, seems to generate quite a bit of debate.)
So I'm faced with a couple of options:
Add a feature that lets the user customize the standard and then reformat the code according to that standard.
—OR—
Add a feature that lets the user customize the standard and simply generate a violations list like StyleCop does, leaving the SQL untouched.
In my mind, the first option saves the end-users a lot of work, but runs the risk of modifying SQL in potentially unwanted ways. The second option runs the risk of generating lots of warnings and doing no work whatsoever. (It'd just be generally annoying.)
In either scenario, I still have no standard to go by. What I'd need to know from you guys is kind of poll-ish, but kind of not. If you were going to use a tool of this nature, what parts of your SQL code would you want it to warn you about or fix?
Again, I'm just at a loss due to a lack of a cohesive standard. And given that there isn't anything out there that's officially published by Oracle, I think this is something the community could weigh in on. Also, given the way that voting works on SO, the votes would help to establish the popularity of a given "refactoring."
P.S. The engine parses SQL into an expression tree so it can robustly analyze the SQL and reformat it. There should be quite a bit that we can do to correct the format of the SQL. But I am thinking that for the first release of the thing, layout is the primary concern. Though it is worth noting that the thing already has refactorings for converting keywords to upper case, and identifiers to lower case.
PL/SQL is an Ada derivative; however, Ada's style guide is almost as gut-twistingly disgusting as the one most "old-school" DB people prefer. (The one that makes you think their caps lock key got stuck pretty badly.)
Stick with what you already know from .Net, which means sensible identifiers, without encrypting/compressing half the database into 30 chars.
You could use a dictionary and split camel-cased or underscored identifier parts and check if they are real words. Kinda like what FxCop does.
It could be a bit annoying, though, since the average Oracle database has the most atrocious and inconsistent naming guidelines, ones that were obsolete even 30 years ago.
So I don't think you'll reach the goal of getting clean identifiers everywhere in your projects (or your users').
Since PL/SQL is case-insensitive and columns are preferred over identically named local variables, you'll have to make even more tradeoffs. You can borrow parts of the style guides of other Pascal derivatives (Ada, like Modula, descends from Pascal), like Delphi, which feels a bit closer to home for PL/SQL (I use a mixture of .Net & Delphi).
Especially the "aPrefix" for parameters can be a life saver, because you won't collide with column names that way:
subtype TName is SomeTable.Name%type;
subtype TId   is SomeTable.Id%type;

function Test(aName in TName) return TId is
  result TId;
begin
  SELECT t.Id
    INTO result
    FROM SomeTable t
   WHERE t.Name = aName;

  return result;
exception
  when No_Data_Found then
    return null;
end;
Without the prefix, Oracle would always pick the column "Name" and not the parameter "Name". (Which is pretty annoying, since columns can be qualified with an alias...)
I configured my PL/SQL Developer to make all keywords lowercase; however, I made the ones that are used in plain SQL uppercase (SELECT, WHERE, etc.).
As a result, the SQL sticks out of the code, but not all my code has to be brutalized by all-upper keywords. (They are highlighted anyway, so what's with the all-upper fetish? ;-) )
If your tool were capable of identifying plain SQL and giving some visual clue, then even the SQL keywords wouldn't need a different casing.
Btw, I'd love to take a look at it. Can you post a URL, or is it still "under cover"?
Cheers,
Robert
TOAD has a "pretty printer" and uses a ton of options to give the user some say in what is done. (But it has gotten so complicated that I still can't manage to get the results I would like.)
For me, some options look downright horrible, but it seems that some people like them. A sensible default should be okay 80% of the time, but as this is an issue of religious wars, I'm sure you could spend a totally unreasonable amount of time for pretty small results. I'd suggest coding something to handle the 10-year-old SP you mentioned, and including something like a <pre> tag that the pretty printer leaves alone.
I like the "standard" of Tom Kyte (in his books). That means everything in lowercase. Easiest on the eyes.
If all you're doing is rearranging whitespace to make the code look consistently clean, then there's no risk of changing SQL results.
However, as an Oracle/PLSQL developer for the past 8 years, I can almost guarantee I wouldn't use your tool no matter how many options you give it. Bulk reformatting of code sounds great in principle, but then you've totally destroyed its diffability in version control between revisions prior to and after the reformat.
When adding internationalisation capabilities to an Oracle web application (built on mod_plsql), I'd like to interpret the HTTP_ACCEPT_LANGUAGE header and use it to set various NLS_* settings in the Oracle session.
For example:
HTTP_ACCEPT_LANGUAGE=de
alter session set nls_territory=germany;
alter session set nls_language=...
However, you could get something more complicated I suppose...
HTTP_ACCEPT_LANGUAGE=en-us,en;q=0.5
How have folks tackled this sort of thing before?
EDIT - following on from Curt's detailed answer below
Thanks for the clear and detailed reply, Curt. I didn't really make myself clear, though, as I was really asking whether there are any existing Oracle widgets that handle this.
I'm already down the road of manually parsing the HTTP_ACCEPT_LANGUAGE variable and - as Curt indicated in his answer - there are a few subtle areas of complexity. It feels like something that must have been done many times before. As I wrote more and more code I had that sinking "I'm reinventing the wheel" feeling. :)
There must be an existing Oracle approach for this - probably something in iAS??
EDIT - stumbled across the answer
While looking for something else, I stumbled across the UTL_I18N package, which does exactly what I'm after:
Is there an easy way to convert HTTP_ACCEPT_LANGUAGE to Oracle NLS_LANG settings?
Sure, and it's not too tough, if you break up the problem properly and don't get too ambitious at first.
You need, essentially, two functions: one to parse the HTTP_ACCEPT_LANGUAGE and produce a language code, and one to take that and generate the appropriate set commands.
The former can get pretty sophisticated: if you're given only 'en', you probably want to generate 'en-us'; you need to deal with choosing one of multiple choices when nothing matches perfectly; you need to deal with malformed header values; and so on. Don't try to tackle this all at once: just do something very simple at first, and extend it later.
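For instance, a first cut might just take the leading language tag and ignore the quality values entirely (the function name is made up, and regexp_substr assumes a 10g-or-later database):

create or replace function first_language_tag(p_accept_language in varchar2)
  return varchar2
is
begin
  -- grabs e.g. 'en-us' from 'en-us,en;q=0.5'; no q-value ranking, no fallback yet
  return lower(regexp_substr(p_accept_language,
                             '^[[:alpha:]]{1,8}(-[[:alnum:]]{1,8})?'));
end first_language_tag;
/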
The same more or less goes for the other half of it, generating the set commands, but this is pretty simple in and of itself anyway; it's really just a lookup function, though it may get a bit more sophisticated depending on what is provided to it.
What will really make or break your programming experience on something like this is your unit tests. This is an ideal problem for unit testing and test-driven development. Unit tests will make sure that when you change things, old functionality keeps working, and make it easier to add new functionality and fix bugs, because you just add another test and you have that to guide you from that point on. (You'll also find it easier to do a complete rewrite of one of the functions if you find out you've gone terribly wrong at some point, because you can easily confirm that the new version isn't breaking anything.)
How you do unit testing in your environment is probably a bit beyond the scope of this question, but let me add a few hints. First, if there's a unit test framework ("pl-sql-unit?") available for your environment, that's great. If not, don't panic. You don't need anything sophisticated: just a set of inputs and expected outputs, and a way to run them through the function and either say "all OK!" or show any incorrect results. You can probably write a single, simple PL/SQL function that reads the inputs and expected outputs from a table and does this for you.
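A bare-bones harness along those lines could look like this (the test table and the function under test are the made-up ones from the sketch above):

-- test cases live in a table: lang_tests(input varchar2(100), expected varchar2(10))
create or replace procedure run_lang_tests is
  v_failures pls_integer := 0;
  v_actual   varchar2(100);
begin
  for t in (select input, expected from lang_tests) loop
    v_actual := first_language_tag(t.input);
    if v_actual is null or v_actual <> t.expected then
      dbms_output.put_line('FAIL: ' || t.input || ' -> ' ||
                           nvl(v_actual, 'NULL') ||
                           ' (expected ' || t.expected || ')');
      v_failures := v_failures + 1;
    end if;
  end loop;
  if v_failures = 0 then
    dbms_output.put_line('all OK!');
  end if;
end run_lang_tests;
/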
Finally stumbled across the answer. The Oracle package UTL_I18N contains functions to map from the browser language codes to Oracle NLS settings:
utl_i18n.map_language_from_iso;
utl_i18n.map_territory_from_iso;
The mapping doesn't seem to cope very well with multi-language settings, e.g. en-us,en;q=0.5, but as long as you just use the first 5 characters, the functions seem to work OK.
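For example, in an anonymous block (the output matches the samples below):

declare
  v_iso       varchar2(5) := 'ar-lb';  -- first 5 characters of HTTP_ACCEPT_LANGUAGE
  v_language  varchar2(64);
  v_territory varchar2(64);
begin
  v_language  := utl_i18n.map_language_from_iso(v_iso);
  v_territory := utl_i18n.map_territory_from_iso(v_iso);
  dbms_output.put_line('v_language: '  || v_language);
  dbms_output.put_line('v_territory: ' || v_territory);
end;
/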
HTTP_ACCEPT_LANGUAGE: ar-lb,en-gb;q=0.5
v_language:
v_territory:
HTTP_ACCEPT_LANGUAGE: ar-lb
v_language: ARABIC
v_territory: LEBANON