Closed. This question is opinion-based. It is not currently accepting answers. Closed 2 years ago.
Can, in today's drivers of the most popular DBMSs, the JDBC type always be set to JDBCType.OTHER? And can the input/output classes be used interchangeably within the character-stream, binary-stream, number, and time-point type families?
I am asking, first, specifically about the JDBCType/java.sql.Types constants and, second, about the methods used for retrieving values (i.e. the Java class to which a column maps). My experience with several databases has been that, in general, they will happily convert anything to anything else as long as it makes sense. Mind you, I did not perform any kind of exhaustive test; it is just my experience that I could almost always access a ResultSet column as any type I wanted in that context, without worrying about the JDBC<=>DBMS SQL mapping. Obviously, I am not advocating timestamp<=>string or integer<=>string conversions and the like; I am also aware, for example, of the subtle differences between accessing a column as ZonedDateTime with getObject and manually converting the result of getTimestamp, but that is a difference between the concepts behind those classes.
Explicitly, applications use JDBCType in practice only in PreparedStatement.setNull and CallableStatement.registerOutParameter, and here too my experience was that as long as I stuck to the number/string/time/binary distinction, I could pick the class as I wished.
So, overall, they seem to me like a relic of a questionable design from 15(?) years ago, when we did not yet have the experience and knowledge to build modern server-side applications and much of Java EE was based on fantasies.
The type constants themselves are also used for metadata introspection. For example, for DatabaseMetaData.getColumns, the result set column DATA_TYPE contains the java.sql.Types code, and ResultSetMetaData.getColumnType returns the java.sql.Types code; the same applies to other metadata methods and objects. Metadata introspection might not be very important in normal programs, but it is used a lot by database tools and by data access libraries and frameworks.
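As a minimal sketch of that introspection use (column and helper names here are my own, not from the question), the int code that ResultSetMetaData.getColumnType returns can be mapped to a readable name through the java.sql.JDBCType enum, which mirrors the java.sql.Types constants one-to-one:

```java
import java.sql.JDBCType;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Types;

public class ColumnTypes {
    // Turn a java.sql.Types int code (as returned by getColumnType or found in
    // DatabaseMetaData.getColumns' DATA_TYPE column) into a readable type name.
    static String typeName(int typesCode) {
        return JDBCType.valueOf(typesCode).getName();
    }

    // Print "NAME : VARCHAR"-style lines for every column of a result set.
    static void describe(ResultSet rs) throws SQLException {
        ResultSetMetaData md = rs.getMetaData();
        for (int i = 1; i <= md.getColumnCount(); i++) {
            System.out.println(md.getColumnName(i) + " : " + typeName(md.getColumnType(i)));
        }
    }

    public static void main(String[] args) {
        System.out.println(typeName(Types.VARCHAR));   // VARCHAR
        System.out.println(typeName(Types.TIMESTAMP)); // TIMESTAMP
    }
}
```

The describe method needs a live ResultSet, so it is illustrative only; the typeName mapping works without any database connection.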
Other usage really depends on the driver and database system. Some drivers (and database systems) will always determine the type of columns at prepare time, and when setting values the driver will convert the value to the expected type of the parameter (as long as such a conversion is possible and specified by JDBC). That is, if the parameter is VARCHAR and you set a long, then the driver will convert the long to a string and use the string as the parameter value. For those database systems, java.sql.Types and java.sql.JDBCType don't have a lot of value (beyond metadata introspection), and will usually be ignored.
In other database systems and their drivers, parameters don't necessarily have an expected type at prepare time (or the prepare phase can be skipped if the driver supplies the type information, or the database system allows you to override parameter types), and the type will be determined by explicitly setting the value (so, setting a string will determine the type as VARCHAR, while setting a long will determine the type as BIGINT, etc). In those cases the type constants will have use, for example in setNull, or in setObject, as it will specify the type of the parameter, which could infer specific conversions or behaviour on the driver or on the database. It might also be necessary for situations where a parameter is passed to a polymorphic or overloaded function (that is, the actual type of the parameters determines what the function does and what it returns).
The registerOutParameter in CallableStatement is actually a special case of that. For the first type of drivers, this is usually technically unnecessary (as the types would be determined by the prepare), while for the second type it can be either necessary, to leave conversion of values to a specific type to the database engine, or useful to be able to execute stored procedures without an explicit prepare: you tell the driver which OUT types to expect, and it can then execute the procedure without having prepared it first. In that last case, on execute it will, for example, send the statement text, the parameters, and a descriptor of the expected OUT types. If there is a mismatch (incompatible types, too few or too many parameters or OUT types, etc.), the database system would then reject execution, or (though I don't know if this exists in practice) the combination of actual parameter types and expected OUT types could select a specific stored procedure implementation (if a database system supports overloaded stored procedures).
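A minimal sketch of the registerOutParameter usage described above (the procedure name get_greeting and its signature are hypothetical, invented for illustration): the caller declares the OUT type up front so a driver of the second kind can execute without a prior prepare.

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.JDBCType;
import java.sql.SQLException;

public class OutParamExample {
    // Build JDBC escape syntax for a procedure call with n parameters,
    // e.g. callSyntax("get_greeting", 2) -> "{ call get_greeting(?, ?) }".
    static String callSyntax(String proc, int nParams) {
        StringBuilder sb = new StringBuilder("{ call ").append(proc).append('(');
        for (int i = 0; i < nParams; i++) sb.append(i == 0 ? "?" : ", ?");
        return sb.append(") }").toString();
    }

    // Hypothetical procedure get_greeting(IN name, OUT message); the OUT type
    // is declared explicitly so the driver knows what to expect on execute.
    static String callGreeting(Connection con, String name) throws SQLException {
        try (CallableStatement cs = con.prepareCall(callSyntax("get_greeting", 2))) {
            cs.setString(1, name);
            cs.registerOutParameter(2, JDBCType.VARCHAR);
            cs.execute();
            return cs.getString(2);
        }
    }
}
```

callGreeting needs a live Connection and an existing procedure, so it is a sketch of the call pattern, not something runnable in isolation.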
How can I disable removing leading zero for decimal, when it convert to char?
For example:
select to_char(0.1) from dual;
return .1, not 0.1.
I know that I can use a format parameter, but I want implicit conversion to work correctly.
I have a lot of PL/SQL code that gets data from the database.
In some cases there are conversions from float to varchar2.
For example, using dbms_sql.define_array with a varchar2 table when the column type of the query is number.
I could try to find all such places and correct them (and I am doing that), but IMHO it would be better to set up a rule for such conversions.
Thanks in advance.
AFAIK you cannot alter the default format for implicit conversion.
If you limit yourself to using the conversion in the context of inserts/updates to a given table, you might mimic the intended behaviour using database triggers. I do not see any advantage compared to an explicit format model in your queries for that use case.
There are some official Oracle docs claiming that you can override built-in functions by re-implementing them in Java. I have never done that, I do not know whether it is a generally viable method, and I would strongly discourage any attempt at tinkering with the database API (which is why no link is included)!
Assuming that what you aim at is feasible somehow, reconsider why you do not want to expressly indicate the conversion format. The few additional keystrokes are hardly a nuisance for the developer while the idiom contributes to a self-documenting code base.
Spend your development resources more wisely than on micro-optimizations!
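For what it's worth, the explicit format model the answer recommends would look like TO_CHAR(0.1, 'FM990.999') in the query itself, which yields 0.1 rather than .1. If the formatting can instead move to the client, here is a minimal Java-side sketch (the class and method names are mine) that always emits the leading zero that Oracle's default TO_CHAR drops:

```java
import java.math.BigDecimal;
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class LeadingZero {
    // "0" before the decimal point forces at least one integer digit;
    // Locale.ROOT pins the decimal separator to '.' regardless of platform locale.
    private static final DecimalFormat FMT =
            new DecimalFormat("0.############", DecimalFormatSymbols.getInstance(Locale.ROOT));

    static String format(BigDecimal n) {
        return FMT.format(n);
    }

    public static void main(String[] args) {
        System.out.println(format(new BigDecimal("0.1")));   // 0.1
        System.out.println(format(new BigDecimal("-0.25"))); // -0.25
    }
}
```

This only helps where the value reaches the application before being rendered; it does nothing for implicit conversions happening inside PL/SQL, which is what the question is really about.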
I'm having a debate with a friend regarding Generic Objects vs. Strict Data-Type instances access.
If I have a fairly large JSON file to convert to objects and arrays of data in Flash, is it best to then convert those objects to strict AS3 classes dedicated to each object?
Is there a significant loss of performance depending on the quantity of objects?
What's the technical reason behind this? Does a generic Object leave a bigger footprint in memory than strict data-type instances of a custom class?
It's hard to answer this question on a generic scale since in the end "it all depends". What it depends on is what type of objects you are working with, how you expose those objects to the rest of the program and what type of requirements you have on your runtime environment.
Generally speaking, generic objects are bad since you no longer have "type security".
Generally speaking, converting objects to typed objects gives you a bigger memory footprint, since you need that class at runtime, and it also forces you to convert an untyped object "again" into another kind of object, costing some extra CPU cycles.
In the end it kinda boils down to this: if the data that you receive is exposed to the rest of the system, it's generally a good idea to convert it into some kind of typed object.
Converting it to a typed object and then working on that object improves code readability, since you don't have to remember whether the data/key table used "image", "Image", or "MapImage" as the key for retrieving the image info of something.
Also, if you ever change the backend system to provide other/renamed keys, you only have to do the change in one place, instead of scattered all over the system.
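The question is about AS3, but the idea is language-neutral; here is a minimal Java sketch (class and key names invented for illustration) of converting a generic key/value structure into a typed object in one place, so key renames only touch the constructor:

```java
import java.util.Map;

public class MapImage {
    final String url;
    final int width;
    final int height;

    // Convert a generic key/value structure (e.g. parsed JSON) into a typed
    // object once, so the rest of the program never touches raw string keys.
    MapImage(Map<String, Object> raw) {
        this.url = (String) raw.get("image");
        this.width = ((Number) raw.get("width")).intValue();
        this.height = ((Number) raw.get("height")).intValue();
    }

    public static void main(String[] args) {
        MapImage img = new MapImage(Map.of("image", "map.png", "width", 640, "height", 480));
        System.out.println(img.url + " " + img.width + "x" + img.height); // map.png 640x480
    }
}
```

If the backend ever renames "image" to "mapImage", only this constructor changes, which is exactly the single-place-to-change benefit described above.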
Hope this answer helps :)
In situations where we cannot use bind variables, such as when our dynamic queries have to execute ddl statements, is the following list of defenses enough?
Never use anonymous blocks in dynamic queries, so that only one statement can be executed by execute immediate. This stops code-injection attacks.
Escape all single quotes using the replace function. This stops statement-modification attacks.
What characters other than the single quote can be used for quoting, and how can they be escaped?
How can statement modification through AND, UNION, etc. attacks be prevented?
How can function-calling attacks be prevented, so that a user cannot call built-in functions? Every user has the right to call those functions, and calling them can cause denial-of-service and buffer-overflow attacks. How can we protect against that?
I prefer to allow the GUI to accept the single-quote character, i.e. not to reject it in client-side or server-side validation in a web application. This is to allow names like O'Brian. At the database level, just before the execute immediate statement, escape the single quotes. Do you know of any better approach?
Solution to any other vulnerabilities not listed above.
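The question is about PL/SQL, but the two defenses listed above can be sketched language-neutrally; here is a minimal Java version (the whitelist regex is my own conservative assumption modeled on classic Oracle identifier rules, not an official rule) of quote doubling plus a stricter identifier whitelist for values that end up inside DDL:

```java
public class SqlEscape {
    // Defense 2 above: double every single quote so a quoted literal
    // cannot be terminated early by attacker input.
    static String escapeQuotes(String s) {
        return s.replace("'", "''");
    }

    // Stricter defense for identifiers interpolated into DDL: accept only a
    // conservative whitelist (assumed here: letter, then up to 29 of
    // letters/digits/_/$/#) and reject everything else outright.
    static boolean isSafeIdentifier(String s) {
        return s.matches("[A-Za-z][A-Za-z0-9_$#]{0,29}");
    }

    public static void main(String[] args) {
        System.out.println(escapeQuotes("O'Brian"));             // O''Brian
        System.out.println(isSafeIdentifier("EMPLOYEES"));       // true
        System.out.println(isSafeIdentifier("X; DROP TABLE T")); // false
    }
}
```

On the database side, Oracle ships the DBMS_ASSERT package for exactly this whitelisting/quoting job, which is generally preferable to hand-rolled escaping.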
Note: I have already gone through about a dozen questions related to SQL injection on this site. I still posted this question because:
It's specific to Oracle. Most questions I found on this site on the topic relate to MySQL, SQL Server, etc.
It's specific to situations where bind variables cannot be used. If one can use bind variables, then that is enough and no other defense is needed.
It's better to list down all needed methods at one place.
Some advanced SQL injection techniques, like function calling, are not discussed in detail, and I cannot find any solution against them.
Edit:
Following may be a viable solution.
I think I have a solution. It's in addition to the usual defenses such as static statements, bind variables, etc. It's particularly useful in situations where the usual defenses cannot be used. Note that the only situation where bind variables cannot be used is DDL statements. For such statements:
Verify the existence of the database object using static SQL. This solves half of the problem.
The other half is related to the new value we want to put in the database object. For example, when changing the password of a user: the first half is the username, the second half is the password. One should encrypt the new value at the front end and save the encrypted value in the database. The encrypted value, interpreted as SQL code, cannot do any damage to the database (it cannot call any functions, for example).
Never change user input, for various reasons: it may confuse the user (in passwords, for example), and the value may be valid in some contexts (it could be valid HTML, etc.). That means letting ['], [\'], [#] pass through all validations unchanged. It's the static SQL or the encryption that is supposed to handle them.
Is there any situation where we cannot encrypt the new value?
I have some code that is responsible for converting data from an IDataReader into an IronPython.Runtime.List of PythonTuples. This same code is employed for several different kinds of database connections (including Access, Oracle and MySql).
Oracle's OracleDecimal datatype causes an overflow when calling dataReader.GetValues() when the cursor contains a value with a large precision. This issue has been well documented, and the solutions always involve using specific methods on the OracleDataAdapter. I only have an IDataReader interface.
Is there any way around this issue without binding my code specifically to ODP.NET? Surely there must be some way to get at this data in a provider-agnostic way?
The only provider agnostic method that I am aware of is to round the values in your select statement. I've found that rounding to 15 decimal places usually does the trick.
It may not be exactly what you're looking for, but the System.Data.Common.DbDataReader class has a GetProviderSpecificValues method that may do what you want.
Closed. This question is opinion-based. It is not currently accepting answers. Closed 9 years ago.
Does anyone know of the practical reasons for the com.company.project package structure and why it has become the de-facto standard?
Does anyone actually store everything in a folder-structure that directly reflects this package structure? (this is coming from an Actionscript perspective, btw)
Preventing name clashes is a rather practical reason for package structures like this. If you're using real domain names that you own, and everybody else picks their package names by the same rule, clashes are highly unlikely.
Especially in the Java world this is "expected behaviour". It also helps when you want to find documentation for a legacy library you're using and no one can remember anymore where it came from ;-)
Regarding storing files in such a package structure: In the Java world packages are effectively folders (or paths within a .jar file) so: Yes, quite a few people do store their files that way.
Another practical advantage of such a structure is, that you always know if some library was developed in-house or not.
I often skip the com., as even small orgs have several TLDs, but it is definitely useful to have the owner's name in the namespace, so that when you start onboarding third-party libraries, you don't get namespace clashes.
Just think how many Utility or Logging namespaces there would be around; here at least we have Foo.Logging and Bar.Logging, and the dev can alias one namespace away :)
If you start with a domain name you own, expressed backwards, then it is only after that point that you can clash with anyone else following the same structure, as nobody else owns that domain name.
It's only used on some platforms.
Several reasons are:
Using domain names makes it easier to achieve uniqueness, without adding a new registry
As far as hierarchical structuring goes, going from major to minor is natural
For the second point, consider the example of storing dated records in a hierarchical file structure. It's much more sensible to arrange it hierarchically as YYYY/MM/DD than say DD/MM/YYYY: at the root level you see folders that organize records by year, then at the next level by month, and then finally by day. Doing it the other way (by days or months at the root level) would probably be rather awkward.
For domain names, it usually goes subsub.sub.domain.suffix, i.e. from minor to major. That's why when converting this to a hierarchical package name, you get suffix.domain.sub.subsub.
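The reversal described above is mechanical; as a small illustrative sketch (helper name mine), here it is in Java:

```java
public class PackageName {
    // Reverse a domain name component by component,
    // e.g. "cs.cmu.edu" -> "edu.cmu.cs", per the convention described above.
    static String toPackagePrefix(String domain) {
        String[] parts = domain.split("\\.");
        StringBuilder sb = new StringBuilder();
        for (int i = parts.length - 1; i >= 0; i--) {
            if (sb.length() > 0) sb.append('.');
            sb.append(parts[i].toLowerCase());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toPackagePrefix("sun.com"));    // com.sun
        System.out.println(toPackagePrefix("cs.cmu.edu")); // edu.cmu.cs
    }
}
```

The result is the major-to-minor ordering the answer argues for: most significant component (the TLD) first, just like YYYY/MM/DD.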
For the first point, here is an excerpt from Java Language Specification 3rd Edition that may shed some light into this package naming convention:
7.7 Unique Package Names
Developers should take steps to avoid the possibility of two published packages having the same name by choosing unique package names for packages that are widely distributed. This allows packages to be easily and automatically installed and catalogued. This section specifies a suggested convention for generating such unique package names. Implementations of the Java platform are encouraged to provide automatic support for converting a set of packages from local and casual package names to the unique name format described here.
If unique package names are not used, then package name conflicts may arise far from the point of creation of either of the conflicting packages. This may create a situation that is difficult or impossible for the user or programmer to resolve. The class ClassLoader can be used to isolate packages with the same name from each other in those cases where the packages will have constrained interactions, but not in a way that is transparent to a naïve program.
You form a unique package name by first having (or belonging to an organization that has) an Internet domain name, such as sun.com. You then reverse this name, component by component, to obtain, in this example, com.sun, and use this as a prefix for your package names, using a convention developed within your organization to further administer package names.
The name of a package is not meant to imply where the package is stored within the Internet; for example, a package named edu.cmu.cs.bovik.cheese is not necessarily obtainable from Internet address cmu.edu or from cs.cmu.edu or from bovik.cs.cmu.edu. The suggested convention for generating unique package names is merely a way to piggyback a package naming convention on top of an existing, widely known unique name registry instead of having to create a separate registry for package names.