Oracle ODP.NET Managed vs Unmanaged Driver

Are there any performance benchmarks between the managed and unmanaged Oracle ODP.Net drivers?
(i.e. is there any advantage to moving to the managed driver other than architectural/deployment simplicity?)

I would like to share some results. I think the small performance loss is worth it compared with the ease of deployment.
Note: in the results, "seg" means seconds. Sorry about that.
Of course, this is a simple test, and there are several topics it does not cover, such as connection pooling, stability, reliability and so on...
It is important to mention that each scenario was executed 100 times, so the time figures are the average of those 100 executions.
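To give an idea of how such a comparison can be set up, here is a minimal timing sketch rather than the original test harness: it assumes both the Oracle.ManagedDataAccess and Oracle.DataAccess assemblies are referenced, and the connection string and query are placeholders.

```csharp
// Minimal timing harness: runs the same query through a given connection
// factory 100 times and reports the average elapsed milliseconds.
// The connection string and query are placeholders, not from the original post.
using System;
using System.Data.Common;
using System.Diagnostics;

static class DriverBenchmark
{
    const string ConnStr = "User Id=scott;Password=tiger;Data Source=ORCL";
    const string Query   = "SELECT * FROM employees";

    static double AverageMilliseconds(Func<DbConnection> createConnection, int runs = 100)
    {
        var sw = new Stopwatch();
        for (int i = 0; i < runs; i++)
        {
            using (var con = createConnection())
            using (var cmd = con.CreateCommand())
            {
                cmd.CommandText = Query;
                sw.Start();
                con.Open();
                using (var rdr = cmd.ExecuteReader())
                {
                    while (rdr.Read()) { /* drain the result set */ }
                }
                sw.Stop();
            }
        }
        return sw.Elapsed.TotalMilliseconds / runs;
    }

    static void Main()
    {
        // Managed driver
        double managed = AverageMilliseconds(
            () => new Oracle.ManagedDataAccess.Client.OracleConnection(ConnStr));
        // Unmanaged driver
        double unmanaged = AverageMilliseconds(
            () => new Oracle.DataAccess.Client.OracleConnection(ConnStr));

        Console.WriteLine("Managed:   {0:F1} ms", managed);
        Console.WriteLine("Unmanaged: {0:F1} ms", unmanaged);
    }
}
```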

Bullets from the quick start video:
Fewer files (1 or 2 DLLs at most)
Smaller footprint (10 MB compared to 200 MB)
Easier side-by-side deployment
Same assembly for 32-bit and 64-bit (except for a second MTS assembly).
Code Access Security
I'm not sure about performance, but I doubt it will be much different either way. My guess is that the two drivers communicate in an identical way over Oracle Net. While there might be minor differences in the in-memory, client-side operations done to prepare a command and process the results, that overhead typically represents only a fraction of the total transaction time; most of the cost is spent on the server in physical I/O and in transferring the data back to the client.
This simply isn't the same as moving from the OLE DB provider or the System.Data.OracleClient driver. This is another release from the same RDBMS company; they're going to exploit all the same performance tricks their other client used. I wish I could post a study, but I'd guess such a thing doesn't exist because in the end it would be unremarkable. A case of no news being good news: if the new provider were somehow worse, you would be reading about it.
Simplicity is enough reason to switch to this, IMO. The vast majority of developers and administrators do not fully understand the provider and its relationship to the unmanaged client. Confusion about Oracle home precedence, version mismatches, upgrades, etc. comes up constantly. Eliminating these questions would be a welcome change.
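If you want switching between the two drivers to be purely a configuration matter, the ADO.NET provider-factory route works with both. A small sketch, assuming the standard invariant names registered by the ODP.NET installers (verify them against your machine.config); the connection string is a placeholder.

```csharp
// Provider-agnostic data access: the only difference between the managed and
// unmanaged ODP.NET drivers is the invariant name, which can live in config.
using System;
using System.Data.Common;

static class ProviderSwitchDemo
{
    static void Main()
    {
        // "Oracle.ManagedDataAccess.Client" = managed driver,
        // "Oracle.DataAccess.Client"        = unmanaged driver.
        // In a real application, read this from the connectionStrings section.
        string invariantName = "Oracle.ManagedDataAccess.Client";

        DbProviderFactory factory = DbProviderFactories.GetFactory(invariantName);

        using (DbConnection con = factory.CreateConnection())
        {
            con.ConnectionString = "User Id=scott;Password=tiger;Data Source=ORCL"; // placeholder
            con.Open();
            using (DbCommand cmd = con.CreateCommand())
            {
                cmd.CommandText = "SELECT sysdate FROM dual";
                Console.WriteLine(cmd.ExecuteScalar());
            }
        }
    }
}
```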

Here is a gotcha for all of you. It took me a couple of weeks to figure out why the Oracle managed driver would not connect using EF6. If your database requires certain data integrity algorithms, then you MUST use the unmanaged driver!
This is buried deep in the Oracle documentation. Thanks, Oracle!

The easier deployment and bitness independence are really nice benefits, but you should first evaluate your typical driver usage thoroughly. I faced an almost 50% performance handicap when using the new managed driver in 64-bit processes. Other people are reporting memory leaks and the like on the Oracle forum: https://forums.oracle.com/community/developer/english/oracle_database/windows_and_.net/odp.net . It looks like another typically buggy Oracle product that will need some more months/years to settle down :/

Keep in mind that Custom Types are not supported yet. This could be a reason not to switch to the managed driver.
See this Oracle doc for the differences between the managed and unmanaged version:
http://docs.oracle.com/cd/E16655_01/win.121/e17732/intro004.htm
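For illustration, this is the kind of UDT mapping the unmanaged driver supports through IOracleCustomType and which the managed driver could not handle at the time; the Oracle object type SCOTT.ADDRESS_T and its attributes are hypothetical.

```csharp
// Sketch of an ODP.NET custom-type (UDT) mapping as supported by the
// *unmanaged* driver. The Oracle type name and attributes are hypothetical.
using System;
using Oracle.DataAccess.Client;
using Oracle.DataAccess.Types;

[OracleCustomTypeMapping("SCOTT.ADDRESS_T")]   // hypothetical Oracle object type
public class Address : IOracleCustomType, IOracleCustomTypeFactory
{
    [OracleObjectMapping("STREET")]
    public string Street { get; set; }

    [OracleObjectMapping("CITY")]
    public string City { get; set; }

    // IOracleCustomTypeFactory: create an empty instance for the driver to fill
    public IOracleCustomType CreateObject() { return new Address(); }

    // IOracleCustomType: populate this .NET object from the Oracle UDT
    public void ToCustomObject(OracleConnection con, IntPtr pUdt)
    {
        Street = (string)OracleUdt.GetValue(con, pUdt, "STREET");
        City   = (string)OracleUdt.GetValue(con, pUdt, "CITY");
    }

    // IOracleCustomType: copy this .NET object's values into the Oracle UDT
    public void FromCustomObject(OracleConnection con, IntPtr pUdt)
    {
        OracleUdt.SetValue(con, pUdt, "STREET", Street);
        OracleUdt.SetValue(con, pUdt, "CITY", City);
    }
}
```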

Related

Any way to use >1 Core in PostgreSQL for a single Connection/Query?

I get that Postgres scales automatically to multiple cores with multiple connections, but what about when I'm running a massive query on a SINGLE connection? It is so frustrating that CPU usage maxes out at 25% on my 4-core system.
I'm in the process of switching from SQL Server, and this is the only thing so far that really bugs me. SQL Server will use up to 100% of my CPU for a single connection/query.
I'm running 9.2 on Windows 7 Enterprise 64-bit with a Xeon processor, if it matters.
If there is no way to get around this, could someone explain why this isn't seen as an issue? Is it because Postgres is focused on multi-user scenarios?
PostgreSQL does not currently support executing a single query across multiple CPU cores (aside from background work such as the background writer and WAL writer during a write query, but that doesn't really count). It's work that's in progress, but it's a long-term project and is not in any current version of PostgreSQL.
This is the same on all platforms and architectures.
It is definitely an issue, but since PostgreSQL is, as you say, focused on multi-user scenarios, it hadn't bubbled to the top of the priority queue until recently. There are definitely people who realize it's an issue and are working on solving it for future versions; it's just not done yet.
There is a Foreign Data Wrapper called pg_strom that aims to add parallelism via the GPU. I've never used it, and it looks quite specialized, but maybe you (or someone here) have a use case for it.
Article describing pg_strom
http://gpuscience.com/software/postgresql-gpu-pgstrom/
The code:
https://github.com/kaigai/pg_strom
It's not that it isn't seen as a problem; it's that it requires fundamental architectural changes. The use case for it is pretty specialized: it would only help in data-warehouse-type environments where you're executing long queries one at a time, AND the queries are CPU-bound, not disk-I/O-bound as they usually would be.
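Given that one backend equals one core, the usual client-side workaround at the time was to split the work over several connections yourself. A rough sketch, assuming the Npgsql driver and a table with a numeric id column that can be range-partitioned; all names and the connection string are placeholders.

```csharp
// Client-side parallelism sketch: split one big query into N range partitions
// and run each partition on its own connection, so PostgreSQL can use N cores.
// Table/column names and the connection string are placeholders.
using System;
using System.Linq;
using System.Threading.Tasks;
using Npgsql;

static class ParallelQuery
{
    const string ConnStr = "Host=localhost;Database=mydb;Username=me;Password=secret";

    static long CountPartition(long minId, long maxId)
    {
        using (var con = new NpgsqlConnection(ConnStr))
        {
            con.Open();
            using (var cmd = new NpgsqlCommand(
                "SELECT count(*) FROM big_table WHERE id >= @lo AND id < @hi", con))
            {
                cmd.Parameters.AddWithValue("lo", minId);
                cmd.Parameters.AddWithValue("hi", maxId);
                return (long)cmd.ExecuteScalar();
            }
        }
    }

    static void Main()
    {
        const long totalRows = 40000000;   // assumed upper bound on id values
        const int partitions = 4;          // roughly one per core
        long step = totalRows / partitions;

        var tasks = Enumerable.Range(0, partitions)
            .Select(i => Task.Run(() => CountPartition(i * step, (i + 1) * step)))
            .ToArray();

        Task.WaitAll(tasks);
        Console.WriteLine("Total rows: {0}", tasks.Sum(t => t.Result));
    }
}
```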

Scalability of Oracle Forms

What is your experience regarding the scalability of Oracle Forms? What's the maximum number of application users you would use Oracle Forms for: 100, 1000, 10000, 50000?
I know that this question lacks much of the detail needed for a well-founded answer. However, I am interested in the gut feeling of seasoned Forms developers.
Thanks.
You may find this Oracle white paper useful: Forms Capacity Planning Guide.
One thing to consider is that Forms is a "stateful" system, so connected users will actually be maintaining Oracle sessions. Contrast this with a "stateless" system like Oracle Application Express (APEX). I believe (but don't have evidence to prove it) that APEX will scale better than Forms (i.e. with less hardware).
I am currently involved in an APEX project that will have 2000 concurrent users. The original plan was to use Oracle Forms. We didn't switch because Forms couldn't scale to 2000 users (it could); there were other reasons for doing so.
Personal opinion: at this point, we tend to use Forms for complex UI apps with lots of validation and pretty intense usage.
If you can meet the business needs easily in a pure-web tool like ApEx (or any of hundreds of others), I wouldn't use forms.
So you'll probably need to assume that many of those Forms users are going to be keeping their connections pretty active.
And complex Forms use a lot of memory. We're running the app server on 32-bit Windows (not my choice) and running into memory limits at about 50 active connections.
Forms is pretty good on concurrency, so with reasonable coding you're not going to hit any major database limits. And app server processing and IO won't be your constraint. It's really just a matter of how many active users you're dealing with at one time, what their memory footprint is, and how big or how many app servers you're willing to deal with.
(Background: Forms Developer since version 2.3 (with a bit of 2.0), still using it for some projects and a lot of legacy)
In simple terms, Oracle Forms scales. My evidence? Oracle E-Business Suite uses it. If one of Oracle's premier products couldn't scale, it would have been moved off this platform a long time ago.
I agree with Tony's and Jim's comments that Forms is more expensive to scale because of its use of persistent connections.
I know this is an old question, and maybe things have changed since.
But we have been running a huge project in Oracle Forms for over two years now, with about 3000 concurrent users. And we are still expanding the project, so this number will rise in the future.
We have only two AIX servers running as application servers, and one DB server, also on AIX.
Just make the necessary configuration changes, do a lot of performance tuning on your applications, and play with the DB and application server parameters.
This all works fine now with our setup. So the answer from Tony Andrews suggesting that 2000 concurrent users is not possible in Forms is wrong. You just need to invest some time in tuning, but which application serving 3000 concurrent users won't need tuning?

Reporting with DB2

I started learning .NET about 3 years ago. I have gone through a boot camp during that time, learning OO and various data access technologies such as NHibernate, SubSonic, and LINQ to SQL.
I didn't want to try EF because it hasn't reached version 3 :)
As far as reporting goes, I have heard that many ORMs fall flat on their face when it comes to reporting. We have an AS/400 with DB2 as our backend. I have heard that LLBLGen does a good job with reporting for this platform, but it is a commercial product and not free. Can someone point me to some good resources for reporting from DB2? Thanks for any links/blog articles.
Reporting on DB2 will work the same as reporting on almost any other database - you can use ODBC, JDBC or native DB2 calls to the database. So, you don't need DB2 reporting references - any database reporting references should meet your needs.
The only things special about DB2 might be a few syntax extensions and how you scale up the back end through parallel database servers (like MapReduce, Teradata, etc.). But neither should be of much concern, since DB2 is extremely ANSI-compliant and the scalability should be largely invisible to the reporting developer.
And Crystal Reports, Brio, Cognos, Business Objects, Microstrategy, Actuate, JasperReports, Birt, etc should all work fine.
ORMs are typically terrible for reporting, since they're object-oriented rather than set-oriented. You'll especially feel the pain with very large data volumes, complex reports, or a large number of reports.
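To make the "any generic database access works" point concrete, here is a minimal .NET sketch that pulls report data from DB2 over ODBC; the DSN, credentials, and query are placeholders, and an installed DB2/iSeries ODBC driver is assumed.

```csharp
// Minimal report query against DB2 over ODBC. The connection string and
// SQL are placeholders; any ODBC-capable reporting tool works the same way.
using System;
using System.Data;
using System.Data.Odbc;

static class Db2Report
{
    static void Main()
    {
        // Assumes an ODBC DSN named "DB2PROD" has been configured for the
        // AS/400 (iSeries) or DB2 LUW server.
        const string connStr = "DSN=DB2PROD;UID=reportuser;PWD=secret";

        using (var con = new OdbcConnection(connStr))
        using (var cmd = new OdbcCommand(
            "SELECT region, SUM(amount) AS total FROM sales GROUP BY region", con))
        {
            con.Open();
            var table = new DataTable();
            new OdbcDataAdapter(cmd).Fill(table);   // load the result set into a DataTable

            foreach (DataRow row in table.Rows)
                Console.WriteLine("{0}: {1}", row["region"], row["total"]);
        }
    }
}
```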
Please don't overlook the most obvious answer: Query/400!
It is native iSeries software. You configure and run the report on the iSeries itself, and it works great. It is simple, straightforward, and maybe a little limited, but it gets most of the work done.
Don't be scared of the green screen or the simple interface. It's really a powerful tool that handles the iSeries database very well.
Can someone point me to some good resources for Reporting from DB2?
RPG I!
Light up those indicators!
Query Manager:
You can use SQL (that can take input parameters) to build it, then create a "form" that will provide totals, level breaks, counts, customized headers, titles, etc.
Query/400 does not accept parameters AFAIK.
Free manual at:
http://publib.boulder.ibm.com/infocenter/iseries/v6r1m0/topic/rzatc/sc415212.pdf

Securing/Encrypting embedded database in Delphi

Which method do you suggest to efficiently secure your embedded database in Delphi applications?
Here are the methods I've tested:
Using Molebox Pro
Pros - Fast, unpacking is not child's play, no additional task/coding
Cons - Database will be read-only, same drawbacks as exe compressors
Using DISQLite3
Pros - Overcomes Molebox's read-only issue
Cons - 50% or more performance drop when encryption is enabled
So I'd like to know if you have used anything like this in your projects and whether you are satisfied with the speed, encryption, and so on. Please share your techniques.
The fact that Molebox Pro leaves your DB read-only while DISQLite3 does not seems to be the deciding factor. Likewise if the performance penalty on encrypting is the only con for DISQLite3, then it is irrelevant since Molebox Pro is read-only (thus no encrypting during operation). It really comes down to your requirements.
If you are looking for other options then I would suggest checking out ElevateDB or DBISAM from ElevateSoft. They are both embedded databases with built in encryption. I've used DBISAM, but ElevateDB is their newer and preferred database. Also check out Advantage DB from Sybase, which is less embedded but also has encryption.
If you have other requirements that may impact your choice let me know!

MS Velocity vs Memcached for Windows?

I've been paying some attention to Microsoft's fairly recent promotion of Velocity as a distributed caching solution that would compete with the likes of Memcached.
I've been looking for a 64-bit version of Memcached for Windows for some time now with no luck, and since everything about the ASP.NET MVC project I'm working on is 64-bit, it doesn't make sense to use anything but 64-bit.
Now, we're already hedging our bets with ASP.NET MVC being in Beta (RTM soon, hopefully), but StackOverflow doesn't seem to be doing too badly, so I have limited concerns there. Velocity, however, is still very much an unknown quantity and will remain Beta (or CTP) for ages, but it does have 64-bit!
Does anyone have relevant experience or point of view to offer in this situation? Should we bide our time for Velocity - is it even anywhere near good enough to compete with a giant like Memcached, or should we invest time trying to get a 64bit version of Memcached going?
We have recently done a fair amount of comparison between Velocity and Memcached. In a nutshell, we found Velocity to be 3x-5x slower than Memcached, and (even more crucially) it does not currently support a multi-get operation. So at the moment, I would recommend going with Memcached. Also, another lesson we learned was that the slowest operation in distributed caching is serialization and deserialization (at least in ASP.NET). The in-process ASP.NET cache is orders of magnitude faster, so you have to choose your caching strategies carefully.
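For context on why multi-get matters, here is a rough sketch of a batched read using the open-source Enyim Memcached client (not something from the original comparison); the keys are placeholders and the client is configured via app.config.

```csharp
// Multi-get sketch using the Enyim.Caching client: one round trip fetches
// many keys, instead of one network round trip per key. Keys are placeholders.
using System;
using System.Collections.Generic;
using Enyim.Caching;

static class MultiGetDemo
{
    static void Main()
    {
        using (var client = new MemcachedClient())   // reads settings from app.config
        {
            var keys = new List<string> { "user:1", "user:2", "user:3" };

            // A single batched request for all keys; Velocity (at the time) had no equivalent.
            IDictionary<string, object> values = client.Get(keys);

            foreach (var pair in values)
                Console.WriteLine("{0} => {1}", pair.Key, pair.Value);
        }
    }
}
```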
If you don't mind paying for a license, you can use ScaleOut StateServer, which I talk about in my answer to a similar question here. They have both 32-bit and 64-bit versions.
EDIT: Despite the name of the product, it handles both session state and distributed caching.
Memcached has open-source libraries, if I'm not mistaken, so if you want to go the 64-bit route, can you not just recompile it?
I evaluated Velocity when it first arrived but came to the conclusion that it was a bit underdeveloped at that stage. Being able to run Memcached on non-Windows servers is also a bonus.
