Does PreparedStatement.setFetchDirection() make a difference for an Oracle DB?

Does PreparedStatement.setFetchDirection(ResultSet.FETCH_FORWARD) make a difference for an Oracle DB?
I'm on a project to upgrade the database and application server for the power facilities and line maintenance system of a large electrical utility company. We are testing the new setup now (Oracle 12c and JBoss 7).
Sometimes the queries to the db take minutes to get an answer. I've suggested calling PreparedStatement.setFetchSize(int rows), with rows being the number of rows of information the user wants to see on his screen. Many posters here on Stack Overflow and elsewhere have said setFetchSize() is important for getting the network connection optimized.
But I noticed that PreparedStatement also has setFetchDirection(). There's FETCH_FORWARD, FETCH_REVERSE and FETCH_UNKNOWN. I've seen this mentioned many times, but I've not seen anyone say how much of a benefit it is.
If it helps to understand the situation: the power company services 30 million customers, with decades of records (from nuclear power plants down to utility poles). The users of the system will be maintenance crews and office staff, both from within the company and from subcontractors. So, the computer skills and patience of the users may not stretch to a 3-minute wait to find the locations of the transformers they need to repair.
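For concreteness, here's a minimal sketch of how both hints get set on a statement (the connection URL, credentials, and the transformers table are made-up placeholders, not our actual schema):

    import java.sql.*;

    public class FetchHintSketch {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/SERVICE", "user", "pass");
                 PreparedStatement ps = conn.prepareStatement(
                     "SELECT asset_id, location FROM transformers WHERE region = ?")) {
                ps.setFetchSize(50);                           // rows fetched per network round trip
                ps.setFetchDirection(ResultSet.FETCH_FORWARD); // a hint; Oracle honors only FETCH_FORWARD
                ps.setString(1, "NORTH");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("asset_id") + " @ " + rs.getString("location"));
                    }
                }
            }
        }
    }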

Oracle says:
"The JDBC 2.0 standard allows the ability to pre-specify the direction, known as the fetch direction, for use in processing a result set. This allows the JDBC driver to optimize its processing."
However, Oracle also says "the Oracle JDBC drivers support only the forward preset value" (ResultSet.FETCH_FORWARD): "The values ResultSet.FETCH_REVERSE and ResultSet.FETCH_UNKNOWN are not supported."
So, my guess is that if you are using a JDBC 2.0 standard driver, you can optimize fetching by calling setFetchDirection(ResultSet.FETCH_REVERSE) when you want to find, say, overdue maintenance work. But if you are using the Oracle JDBC drivers, you are out of luck.
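If you'd rather measure than guess, a small probe (ps being a PreparedStatement, as in the sketch above) shows what your specific driver accepts:

    try {
        ps.setFetchDirection(ResultSet.FETCH_REVERSE);
        System.out.println("Driver accepted FETCH_REVERSE: " + ps.getFetchDirection());
    } catch (SQLException e) {
        // JDBC allows a driver to reject unsupported directions this way
        System.out.println("FETCH_REVERSE rejected: " + e.getMessage());
    }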
(Downvote me if I'm wrong - I know that improves your geek cred.)

Related

Oracle database driver benchmark

I'm actually working in a situation where a .NET stack is managing an Oracle database. Because the legacy code is consistently based on PL/SQL stored procedures that do the majority of the work, the correct choice of driver to connect to the database is of primary importance.
For this reason, knowing that Oracle provides a large number of drivers for the best-known programming languages, I was trying to find a documented benchmark (even with all the problems and context-dependence of such tests) that compares the different Oracle drivers for the different programming languages, just to support the hypothesis that the best choice in terms of performance for an isolated test microservice would be to use the Java driver in combination with Scala (Java is Oracle's property now, after all).
Are there any on the internet that could support (or not) this hypothesis?
EDIT
I didn't provide all the information. What I'm trying to achieve is to develop a series of microservices focused on fetching data from the database and converting it into JSON to be consumed by the front end. The .NET driver behaves seamlessly until the numbers become really overwhelming (> 1000 rows).
That's why I was wondering whether it could make sense to use JDBC to increase performance, having heard that, for instance, the .NET driver for SQL Server, both made by the same company, performs five times better than the Oracle one when it comes to gathering data from a cursor.
Your choice of driver may not give you much mileage if most of the work is in PL/SQL stored procedures. Having said that, JDBC is a good choice. However, do not fight it if the developers are more familiar with other drivers like Oracle ODBC, the Oracle provider for .NET, ADO, etc. All of those protocols have an Oracle driver to access an Oracle DB.
You don't have to convert to JSON in the application; the Oracle DB can do the conversion. Your network latency is impacted more by how big the pipe is, the array (fetch) size, and the packet size than by the protocol.
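As a sketch of that point, assuming an Oracle release with SQL/JSON support (12.2 or later, where JSON_OBJECT is available) and hypothetical table/column names, the database can hand the service ready-made JSON over JDBC:

    // Assumes Oracle 12.2+ (JSON_OBJECT); "assets" and its columns are hypothetical
    try (Connection conn = DriverManager.getConnection(url, user, pass);
         PreparedStatement ps = conn.prepareStatement(
             "SELECT JSON_OBJECT('id' VALUE asset_id, 'site' VALUE site_name) " +
             "FROM assets WHERE region = ?")) {
        ps.setFetchSize(500); // bigger batches per round trip for large result sets
        ps.setString(1, "NORTH");
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                String json = rs.getString(1); // one JSON document per row, ready for the front end
            }
        }
    }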

Migrating from Oracle to PostgreSQL: where are the limits?

I've found similar posts on this forum, but feel free to migrate this to another one if needed.
We want to migrate to PostgreSQL from Oracle, but we have 6000 users simultaneously connected to a 4 TB GIS database (divided into 1 TB instances) and many other instances for web services.
Before looking at other problems: we have heard that about 500 connected users is the maximum supported before performance decreases, and that the decrease gets worse as the database becomes huge.
Have you had (or do you know of links to) any successful experience with such a migration? Do we have to wait for better PostgreSQL performance before migrating?
EDIT
Found another example.
Please read this article on the subject by Kevin Grittner; it explains a lot about why many connections are problematic and what decisions the PostgreSQL Core Team made to approach this issue.
For a list of success stories, refer to the EnterpriseDB site; this is a company offering support for the standard PostgreSQL distribution as well as support and licensing for advanced products built on top of the standard distribution. For enterprise database usage, Postgres Plus Advanced Server might be a good choice.
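The practical consequence of that article, if it applies to you, is usually a connection pooler in front of PostgreSQL rather than 6000 real server connections. A minimal Java-side sketch with HikariCP (the pool size, URL, and credentials here are illustrative assumptions, not recommendations):

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;
    import java.sql.Connection;
    import java.sql.SQLException;

    public class PooledAccess {
        public static void main(String[] args) throws SQLException {
            HikariConfig cfg = new HikariConfig();
            cfg.setJdbcUrl("jdbc:postgresql://dbhost:5432/gis"); // hypothetical GIS database
            cfg.setUsername("app");
            cfg.setPassword("secret");
            cfg.setMaximumPoolSize(50); // thousands of clients share a few dozen server connections
            try (HikariDataSource ds = new HikariDataSource(cfg);
                 Connection conn = ds.getConnection()) {
                // run queries as usual; the pool hands connections out and recycles them
            }
        }
    }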

monetdb - is anyone using it in production?

I am very interested in using MonetDB as a data mart, holding some huge data tables for querying and reporting.
However, after some searching, I am unable to find any online posts/blogs regarding the use of MonetDB in any kind of production capacity.
Also, there seems to be little or next to no activity online regarding MonetDB.
Is this a bad sign for the future of MonetDB?
"I am very interested in using MonetDB as a data mart, holding some huge data tables for querying and reporting"
My boss is also interested in MonetDB and I had the same reaction as you. No one is writing about MonetDB... is no one using MonetDB?
Regardless, I have been running performance tests on datasets of 500,000 to 1,000,000 records, comparing MonetDB (a column-oriented DBMS) vs. MySQL (a row-oriented DBMS), and MonetDB beats MySQL in all regards - even in bulk inserts, which hypothetically it should not be as good at.
I can't speculate as to what all this means for MonetDB's future, but while it's around you might want to check it out because it performs well.
(I run Windows 7 and am communicating with each database using PHP)
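For anyone repeating that kind of comparison, here is the shape of such a test as a Java/JDBC harness (the original tests were in PHP; the URLs, credentials, and the records table below are placeholders, and the MonetDB and MySQL JDBC drivers must be on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class TimingHarness {
        // Runs one query, drains the results, and returns elapsed milliseconds.
        static long timeQuery(String url, String user, String pass, String sql) throws SQLException {
            long start = System.nanoTime();
            try (Connection conn = DriverManager.getConnection(url, user, pass);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) { /* drain the result set */ }
            }
            return (System.nanoTime() - start) / 1_000_000;
        }

        public static void main(String[] args) throws SQLException {
            String sql = "SELECT category, COUNT(*) FROM records GROUP BY category"; // hypothetical table
            System.out.println("MonetDB: " + timeQuery("jdbc:monetdb://localhost:50000/demo", "monetdb", "monetdb", sql) + " ms");
            System.out.println("MySQL:   " + timeQuery("jdbc:mysql://localhost:3306/demo", "root", "pass", sql) + " ms");
        }
    }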
I'm reacting a bit late to this post, but I'd like to add my voice to those using MonetDB in a production environment. We use it as the back-end of Spinque, a framework for designing complex search solutions. I've been using MonetDB for about 10 years, but only in the past 3 years in a production environment. Clearly, it has pros and cons and bugs like all other products, but it is being developed and improved very actively (I don't understand the low-activity signs that you refer to). If you want a DB that allows you to be ahead of the market standards, it's a good choice. Otherwise, just go for MS SQL ;)
I've been evaluating it lately for a client so I've had some time with it. My impression at this point is that it is just finishing "growing up" from being an academic experimental playground. It clearly has yet to be really discovered, though it does have some rough edges which might hinder certain applications.
As I write, I'm in the process of trying to load over 100 million rows into an instance (at 27 million presently). So far, it performs startlingly well in some areas (aggregates) but is oddly sluggish in others (most joins I've tried so far); that said, I've not yet run the recommended sampling process, and I'm forcing it to live on just a single server with 32 GB RAM.
I've found a few little glitches and one thing that caused a full service crash (obscure and reported), but I'm thinking that for many applications MonetDB could be just the ticket. Columnar storage (rather than NoSQL) seems to be the future IMO.
I'll update this if I find anything particularly interesting.
MonetDB is first and foremost a research system, but it has progressed far beyond the level of the average research prototype. It is the (only) open-source relational column-store platform that I know of that supports full SQL. I have used it myself at CWI in many research projects that are not core DB research but do need advanced DB technology.
You can see on the users' mailing list that deployments happen in many different organisations. As Roberto Cornacchia stated in a different answer, it is the backend of all Spinque deployments, and we are happy MonetDB users. MonetDB is also used in a variety of non-profit projects like OpenStreetMap and OpenKvK.
More and more commercial parties deploy MonetDB for analytics. (They do not always like to advertise that their analyses depend on an open source system.) Recently, MonetDB Solutions has started to provide dedicated commercial support for these deployments.
We have been using MonetDB in our business. We analyse very large data sets with many millions of rows. Traditional methods of data warehousing on SQL databases became too slow. The problem we were facing was that the data was only going to get bigger! The only way forward was to go columnar.
The results have been amazing. When you have very few joins it is staggeringly quick. Even with joins on the data sets we are looking at it is still frightening how fast it comes back.
Having seen some of the commercial partnerships, I think MonetDB is going to boom over the next few years. I believe some of the major BI suppliers are using MonetDB under the hood to perform the large data work.

Questions related to Mantis:

We are interested in installing Mantis, but we have some doubts. Please clarify as early as possible if you can, so that we can proceed further.
1) We have one team in the USA (the client's site) and one in India. On which server should we install Mantis? If we install it in the USA, will it run slowly in India?
2) What about technical support? Paid technical support is available, but how much support is given for free? (We have to discuss this with the client.)
3) From the details on your website, it supports Oracle and SQL databases. We want to know the lowest versions of Oracle and SQL it supports. Please send us the minimum requirements.
4) What is the capacity of the database for storing defects? Is a backup facility available? If yes, please tell us how we should take backups. We have a big team and 5-6 applications, so it should not cause problems later.
5) Database support: do you provide a database with the installation? Will all the prerequisite applications be installed during installation, or do we need to install any applications separately?
6) How many users can access it at a time? Will it slow down when more users are working simultaneously?
Thanks
Komal
1) assuming you can get a similar Internet link, place it where you have more users
2) assuming you have a VPN and LDAP running, you aren't likely to need heavy technical support unless you want to customize it; in any case, Mantis provides support information on its website
3) do you have a good reason why you don't want to use MySQL? Sure, you seem to have Oracle and MSSQL expertise in-house, but it's not like you would have to develop on MySQL: it's just basic infrastructure, and not very expensive to maintain.
4) It is unlikely you'll run into capacity problems unless your team really is huge. The database will store as many defects as needed. Backup: mysqldump -u username -p mantis_db_name > dbdump.sql
5) you generally install the RDBMS and Apache separately and then deploy the Mantis application on top of those, dedicating an empty database to the application
6) slowdown is inevitable with a growing number of users, but that's beside the point: what matters is what kind of hardware you have and how many users you have. There have been a couple of discussions about Mantis performance (e.g. this one and this one), but they are really quite old and a bit on the extreme side (100k users).

How do I deploy an Oracle database?

I have an ASP .NET application that connects to an Oracle or a SQL Server database. An installer has been developed to install a fresh database to an existing SQL Server using SQL commands such as "restore database...", which simply restores a ".bak" file that we keep under source control.
I'm very new to Oracle and our application has only recently been ported to be compatible with 10g.
We are currently using the "exp.exe" tool to generate a ".dmp" file and then using "imp.exe" to import it onto a developer's box.
How would you go about creating an "Oracle Database Installer"?
Would you create the database using script files and then populate the database with required default data?
Would you run the "imp.exe" tool behind the scenes?
Do we need to provide a clean interface for system administrators so that they can just select the destination server and be done, or should we just provide them with the ".dmp" file? What are the best practices?
Thanks.
The question is -- what do your customers know about Oracle?
Nothing? You should probably rethink this position. Oracle is very large and complex. If you assume your customers know nothing, you'll then start providing tutorials and help that's inappropriate.
Minimally Competent? If they're competent, they know enough to run imp by themselves. Also, they know enough to run a script that executes SQL.
Actual DBA's? Most organizations that can afford Oracle can afford real DBA's. Real DBA's can cope with a lot of things -- they do not need much hand-holding. Some of them like to assign storage parameters according to their shop standards.
You should provide a script with reasonable defaults. You should define your script in a way that someone can easily find all of your storage parameters and tweak them if necessary.
Your initial data can be via export/import or via a script. I prefer a script.
I have done this repeatedly from both sides (consumer and provider) as a DBA, developer, and architect.
As a provider, one of my grand accomplishments (in 1996) was the creation of an installation CD for a commercial insurance claims management software product targeted to the largest insurance carriers (a multi-million dollar item). That installation CD installed the Oracle 7.2 RDBMS engine, the FileNet optical storage system (scans paper documents and creates cataloged binary versions), and our custom claim-processing application (built in VB 4.0), all integrated and ready to run. As part of the installation process, the user could skip the Oracle software installation or customize it, and the user could customize/override the database configuration in all of its major details (database, schemas, tablespaces, sizes, disks, etc.).
I also provided the field service for this product, which included traveling to the client site as necessary. I tested the installation CD literally hundreds of times under every imaginable scenario that I could replicate, and we NEVER had a field failure that required even a phone call, let alone a trip (I did travel on four occasions, but for pre-sales stuff instead).
More recently (2007), I scripted the creation of an Oracle 10g database for an internal system at a megacorp. In production, the database was sized at 8 TB, mostly for a single transaction table with high data volume. In test, the database was sized around 1 TB for a modest server. In development, the database was sized around 100 MB to run on my laptop. The EXACT SAME SCRIPTS created all three environments, and I could extend them to handle a new environment/machine in about five minutes. This database involved extreme performance tuning, so customization of all pertinent characteristics was absolutely crucial.
Back to the insurance claims processing product--let me please add that I was originally hired to lead its conversion from a SQL Server database to an Oracle database. That conversion was identified as a business necessity because most potential clients did not view a SQL-Server-based product as a professional, serious solution. That is not quite as common today, but it still applies in general: a software product has a better chance of market penetration if it can accommodate multiple database options as preferred by the target customers (especially enterprise-class customers).
Likewise, the installation CD was also viewed as an essential element. However, that situation and many more have revealed to me that most "real" DBAs will not accept an import-based database installation. As a DBA and architect, I know that I definitely will not for the same reasons.
Simply put, an import-based database installation gives the customer almost no control over the resulting database. It is opaque to the customer, leaving them questioning what it did. It forces the customer to expend massive efforts to attempt to exercise what little control they can. It is notoriously fragile and error-prone (Oracle imports are well known for ownership and permission problems, constraint problems, etc.). Weighing all those impacts, an import-based database installation is unprofessional--it does not put the customers' needs first.
Scripting the database installation provides the right kind of transparency, configurability, selective repeatability, and overall customer control that professionalism demands. It also encourages you to properly understand the impacts of your database design decisions in a way that an import does not.
Best wishes.
Personally I favour SQL scripts for database creation and data loads where possible. I tend to use PL/SQL Developer. It has some good options to generate scripts from an existing database. Once you have these, you can run the scripts using sqlplus or any application code that can execute arbitrary SQL (eg JDBC with Java). Toad is the more common (and more expensive) tool for Oracle development.
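As a sketch of that last option (running a generated script over JDBC; the file name and the naive statement splitting are simplifying assumptions, and splitting on semicolons breaks on PL/SQL blocks):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class RunSqlScript {
        public static void main(String[] args) throws Exception {
            String script = Files.readString(Path.of("create_schema.sql")); // hypothetical script
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/SERVICE", "owner", "pass");
                 Statement stmt = conn.createStatement()) {
                for (String sql : script.split(";")) { // naive: fine for plain DDL only
                    if (!sql.isBlank()) {
                        stmt.execute(sql.trim());
                    }
                }
            }
        }
    }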
The only limitation of a SQL export is that it can't export CLOB/BLOB fields. If you have those, you either need to do them separately (as a PL/SQL export) or do the whole thing as a PL/SQL export. There are no dramas with this except that the file is effectively a binary export (extension .pde) and is more limited in how you can execute it.
The other big advantage of SQL source files is they can be version controlled easily. It's really handy to be able to create a database environment by running one or two scripts.
The Oracle import and export tools, I think, are more applicable to backup and restore operations.
Now, as for delivering that to a customer, from your comments it seems that you'll be giving this to DBAs. Pretty much any Oracle installation will have DBAs involved. They will be fine with SQL scripts to create the schema and do the data load. They will be doing a lot of site-specific configuration (eg tuning the SGA, temp tablespaces, # of concurrent connections, etc based on expected load).
You, as the vendor, can give guidance on any relevant configuration, and you may get involved in support and possibly installation, but ultimately it's up to them to figure out what works for them. Oracle runs on a large number of operating systems and hardware variants, with infinite variations in network topology and firewall configuration. You can't factor all of these into an installer or even a set of instructions (other than the guidelines mentioned previously).
The last time I was involved in the creation of an (Oracle) db (for a reasonably large company with in-house DBAs), the DBAs wanted to know things like:
what we wanted to call the db,
what tablespaces we would need, and an estimate of how much data would be in each one
how many users would be connecting.
(From memory) they set up the db and tablespaces, then we provided a combination of simple scripts that they could run (or clear instructions if a task wasn't easy to automate)
As I say, this was for an in-house app, so your mileage may vary, but in my case they wanted all instructions clearly spelt out so that (a) there was no possibility of a misunderstanding leading to the wrong thing being done, and (b) there was no culpability on their part if something didn't work ("we were just following the instructions").
