Oracle database PSU / CPU release numbers

I am currently working on Oracle databases, but I don't have Oracle support. Does anyone know where to get a mapping of version/release numbers to PSUs and CPUs? For example:
12.2.0.2.19XXXXX -- PSUJAN20XX
11.1.X.X.XXXXXXX -- CPUOCT2013

Version release info is available here:
https://support.oracle.com/knowledge/Oracle%20Database%20Products/742060_1.html
http://www.oracle.com/us/support/library/lsp-tech-chart-069290.pdf
Patch updates are cumulative and released quarterly, and Oracle has changed their branding a couple of times, so how they are described can be version or date-specific. See here: https://blogs.oracle.com/fusionmiddlewaresupport/patch-numbering-for-oracle-db,-enterprise-manager-and-middleware
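Separately, if the goal is just to see which CPU or PSU has already been applied to a database you can reach, the database itself records its patch history. Here is a minimal sqlplus sketch, assuming you can connect as SYSDBA (the exact columns available vary by version):

# Sketch: list the patch/upgrade history the database records about itself,
# so the applied CPU/PSU can be read without a My Oracle Support login.
# Assumes sqlplus is on the PATH and OS authentication as SYSDBA is allowed.
sqlplus -s "/ as sysdba" <<'EOF'
set linesize 200 pagesize 100
-- Present in 10g/11g/12c: one row per CPU/PSU/upgrade action applied
select action_time, action, version, comments
from   dba_registry_history
order  by action_time;
-- 12c and later also record SQL patches here:
-- select patch_id, version, action, description from dba_registry_sqlpatch;
EOF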

Related

MongoDB file allocator takes a long time

When MongoDB creates a new file under the data directory, it takes a long time:
Thu Jan 15 18:01:49.407 [FileAllocator] allocating new datafile \data\db\test.3, filling with zeroes...
Thu Jan 15 18:03:55.650 [FileAllocator] done allocating datafile \data\db\test.3, size: 512MB, took 126.242 secs
Because of that, Node gives the error below, and after that Node is not able to connect to MongoDB.
{ "error":"{ err: 'connection to [localhost:27017] timed out' }","level":"error","message":"uncaught exception: ","timestamp":"2015-01-15T20:45:03.702Z"}
My understanding is that this error is coming from the MongoMQ library. I am not sure how I can handle it. Can anyone help with this issue?
Windows Answer
The most obvious issue that could apply here is if you are using Windows 7 or Windows Server 2008. An issue (SERVER-8480) with those operating systems, which can be fixed by applying this hotfix, means that the data files being allocated by MongoDB must be filled with zeroes.
That is a lengthy process compared to the normal method. Unfortunately, even with the hotfix installed, MongoDB 2.4 and 2.6 assume the problem is still there on Windows 7 or Server 2008 and zero fill anyway. Version 2.8+ fixes the problem by detecting the hotfix specifically and reverting to not zero filling.
To give you an idea of the difference, here is a sample log line from 2.8.0-rc5 (which detects the fix) on Windows 7:
2015-01-22T16:56:51.749+0000 I STORAGE [FileAllocator] done allocating datafile E:\data\db\280\test.2, size: 2047MB, took 0.016 secs
And here is a sample log line from the same machine doing the same allocation with version 2.6.5:
2015-01-22T16:47:33.762+0000 [FileAllocator] done allocating datafile E:\data\db\265\test.2, size: 2047MB, took 112.071 secs
That's 112.071 seconds versus 0.016 seconds. Windows 8/2012 or 2.8+ (once released, and with the hotfix installed of course) seem to be the way to go here if allocation is causing problems for you.
There are also several more general known issues with versions of MongoDB prior to 2.6.4 on Windows, most notably SERVER-13729 and SERVER-13681 which were addressed with 2.6.4.
The remaining issues are being tracked in SERVER-12401 and are dependent on this hotfix from Microsoft to improve flushing of memory mapped files by the OS. Unfortunately, that hotfix is only available for Windows 8 and 2012; it has not been made available for Windows 7 and 2008.
Hence, make sure you are using 2.6.4+ at a minimum and, if possible, Windows 8 or 2012. It may also be helpful to compare any performance seen on remote storage with a local disk to determine whether that is a contributing factor.
Linux Answer
(preserved in case anyone else ends up here when using Linux)
This should only take a few milliseconds, if you are using a supported filesystem. I suspect you are using something else (ext3 perhaps?), or perhaps using a very old version of Linux.
The supported filesystems use fallocate() to allocate the data files, which is very quick. If this is not supported, then the files must be allocated by filling with zeroes, and that will take a very long time.
Even if that is the case, 126 seconds to zero fill a 512MB file is very slow, which indicates the disk is either slow/broken, or oversubscribed/saturated and struggling to keep up.
If you want to evaluate the allocation outside of MongoDB, I've written a small bash script to pre-allocate data files for MongoDB (for testing purposes), which uses the aforementioned fallocate() and completes in milliseconds for multiple gigabytes on my test system.
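If you don't have that script to hand, a rough way to compare the fast and slow allocation paths directly on the disk in question might look like this (the /data/db path is just an example; fallocate(1) needs a reasonably recent util-linux and a filesystem such as ext4 or xfs):

# Allocate a 512MB file the fast way, using the same fallocate() call MongoDB
# relies on; on ext4/xfs this should finish in milliseconds.
cd /data/db || exit 1
time fallocate -l 512M falloc-test.0

# Zero filling is the slow fallback path MongoDB has to use when fallocate()
# is not supported; compare the timing with the run above.
time dd if=/dev/zero of=zerofill-test.0 bs=1M count=512 conv=fsync

rm -f falloc-test.0 zerofill-test.0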

TortoiseSVN is very slow and uses a huge amount of memory

For some days now, TortoiseSVN has been using a lot of memory when I want to commit; it also takes 10-20 minutes before the changed files appear.
In normal use it doesn't use much memory, only when committing or comparing changed files.
As you can see, the memory usage is not normal.
I have already reinstalled the newest version (1.8.10), but no difference.
Does anyone have any clue?
(The directory I am working in is 2 GB; this includes the temp data, which is excluded from SVN, and I am working on Windows 7 x64.)
Here is a screenshot of the icon overlay settings I use.
I had the same issue since I updated to TortoiseSVN 1.8.10: excessive amounts of memory were used, and each refresh of the view would increase this amount even further.
The new version 1.8.11 appears to have resolved the issue.

Need documentation for Progress application upgrade to 11.3?

We are upgrading our Progress application from 9.1D to 11.3. Is there any sample document we should look at for our migration?
Currently we have built a new server where we are installing OpenEdge Enterprise RDBMS 11.3.
Can we back up the current database and restore it into the new version?
Any suggestions/documents?
Generally Progress is very "kind" when upgrading, but you have to bear in mind that moving from 9.1D to 11.3 (11.4 is out soon, by the way) is moving from 2002 to 2013. A lot has changed since then.
If you have program logic that relies on disk layout or OS utilities (for example using UNIX, DOS or OS-COMMAND), those might have changed as well. So an upgrade might break things even if the files compile without errors. You need to test everything!
You cannot directly backup and restore from 9.1D to 11.3, you need to dump & load.
What you need to do:
Back everything up! Don't miss this, and make sure you save a copy of the backup. Back up the database, scripts and program files (.p, .i, .r, .cls, etc.). Everything! This is vital! Make sure you always have an untouched version of the backup left so you can restart if things go bad. Progress has built-in utilities for backing up the database. OS utilities can also be used, but be aware that OS utilities cannot be used to create online backups; the backed-up database will most likely be corrupt. Shut down the database before backing up when using OS utilities.
Dump your current database, data as well as schema. Don't forget to check for sequences etc. (a command-line sketch follows this list).
Build a new database on your new server with the schema from the old DB.
If possible, move to Type 2 Storage Areas when doing an upgrade like this; it will increase performance. Check the documentation and KnowledgeBase for the required settings.
Load dumped data
Copy program files from old server to new
Recompile
Create startup scripts etc. for starting databases as well as clients. Old parameters might not fit your new server; you most likely have more memory, a faster CPU, larger disks, etc.
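As a rough illustration of the dump & load steps mentioned above, the command-line flow could look something like the sketch below. Database names, paths and table names are made up, and the schema (.df) and sequence values are dumped and loaded through the Data Dictionary rather than shown here, so treat this purely as an outline and check the KnowledgeBase for the exact options on your versions:

# On the old 9.1D server: offline backup first (shut the database down),
# then a binary dump of each table.
probkup olddb /backup/olddb.bck
proutil olddb -C dump customer /dumpdir
proutil olddb -C dump order /dumpdir

# On the new 11.3 server: create an empty database (ideally with a structure
# file that defines Type 2 storage areas), load the schema .df via the Data
# Dictionary, then load the data and rebuild the indexes.
prostrct create newdb newdb.st
procopy $DLC/empty newdb
proutil newdb -C load /dumpdir/customer.bd
proutil newdb -C load /dumpdir/order.bd
proutil newdb -C idxbuild all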
All steps have several substeps. I suggest you dive into the documentation found at community.progress.com. You can also search the KnowledgeBase (knowledgebase.progress.com).
Also, if you run into problems you can ask more specific questions here (but tag accordingly, for example with openedge).
11.3 Documentation
9.1D Documentation
KnowledgeBase

How to increase maximum open files on Mac OS X 10.6 for the neo4j graph database?

I am getting this error message when starting my local neo4j server for development (in production I am using the Heroku neo4j add-on).
WARNING: Detected a limit of 2560 for maximum open files, while a minimum value of 40000 is recommended.
WARNING: Problems with the operation of the server may occur. Please refer to the Neo4j manual regarding lifting this limitation.
I have googled, tried to search the manual on the site, and downloaded and searched the PDF (to hopefully eliminate the RTFM responses). I cannot find how to do this on Mac OS X 10.6. It sounds like something pretty basic that it's just assumed I'll know. Any thoughts?
The best way to change this would be to set the resource limits in a launchd.plist and use that to spawn your development shell or your database process. Once you have your launchd job, you can load and unload it, and have the system start it up and respawn it as needed.
See man launchd.plist - look for:
HardResourceLimits <dictionary of integers>
        Resource limits to be imposed on the job. These adjust variables set with setrlimit(2). The following keys apply:

        NumberOfFiles <integer>
                The maximum number of open files for this process. Setting this value in a system wide daemon will set the sysctl(3) kern.maxfiles (SoftResourceLimits) or kern.maxfilesperproc (HardResourceLimits) value in addition to the setrlimit(2) values.
I've also had good luck with the published guides and blogs for Oracle 10g installs, as they explain fairly well which sysctl and kernel values Oracle likes to change on Snow Leopard (and other releases). Lion is a bit more launchd-centric than past releases, and you indicated 10.6 for your base OS.
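For example, a launchd job along these lines could spawn the neo4j process with a raised open-files limit. The label, the install path and the 40000 figure are assumptions; adjust them to your setup:

# Hypothetical launchd job: raises NumberOfFiles before starting neo4j.
sudo tee /Library/LaunchDaemons/org.example.neo4j.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>org.example.neo4j</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/neo4j/bin/neo4j</string>
        <string>console</string>
    </array>
    <key>SoftResourceLimits</key>
    <dict><key>NumberOfFiles</key><integer>40000</integer></dict>
    <key>HardResourceLimits</key>
    <dict><key>NumberOfFiles</key><integer>40000</integer></dict>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
EOF
sudo launchctl load /Library/LaunchDaemons/org.example.neo4j.plist

A quicker but non-persistent alternative is to run sudo launchctl limit maxfiles 40000 40000 and then ulimit -n 40000 in the shell that starts neo4j, although that does not survive a reboot.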

JBoss slows down after a while

In a nutshell: my JBoss instance is running OK, but after some days its performance slowly degrades.
Detailed:
I've got a setup with JBoss 5.1.0-GA and Java 1.6.0_18-b07 (x64) running on a 64-bit RHEL 4 box. The hardware is a virtual machine with an 8-core Xeon X5550 and 20 GB of RAM.
The product deployed in JBoss contains a web service on which an endurance test is performed.
No database is involved in the process.
The tests are performed using SoapUI with 4 threads, and the tests are configured to create 20% CPU usage.
Let's say at first the average response times are 300 ms. After 2 days, the response times are 600 ms, which I don't understand.
Of course I did some checks:
There are no memory leaks (confirmed with jprofiler)
Heap mem is always around 25-50%, perm space usage is 50%
GC is almost never busy
All threads are idle after inspecting a thread dump
While doing some further investigation, I did a CPU profile with JProfiler at the beginning (when it's still fast) and at the (slow) end. What I see then is that every single call is just 100% slower!
Even calls to a simple Map#put() (the number of invocations and the contents of these maps are the same).
When running a profiler, there are no signs of blocked threads, just running threads.
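For anyone wanting to repeat this kind of investigation, the checks above can be gathered with the standard JDK command-line tools; a small sketch (the PID is hypothetical) might look like:

JBOSS_PID=12345   # hypothetical process id of the JBoss JVM

# GC activity and heap/perm occupancy, sampled every 5 seconds
jstat -gcutil $JBOSS_PID 5000

# Thread dump to confirm nothing is blocked or spinning
jstack $JBOSS_PID > threaddump-$(date +%H%M%S).txt

# Heap histogram, useful for spotting gradual object growth between snapshots
jmap -histo $JBOSS_PID | head -30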
Does anyone have a clue what's causing the performance degradation?
Thanks!
Update: I solved the performance degradation by upgrading the Java version to 1.6.0_24!
While out of options, I scanned through all the release notes of the Java VM and discovered a performance and reliability fix in 1.6.0_23. See also the 1.6.0_23 release notes.
After the JVM upgrade, the performance stays the same and does not degrade over days.
