Oracle Database Slow Performance [closed] - oracle

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 3 days ago.
I have a serious issue with an Oracle Database. At random times, the database seems unable to serve any requests (queries hang, new connections cannot be made, etc.), and the whole system freezes with extremely slow performance. Something eats up so many resources that even establishing an SSH connection to the hosting VM takes about 3 to 7 minutes.
Most of the time the database works just fine, without any issues at all.
Setup:
Oracle Database 12c Release 2 (12.2) hosted on an Oracle Linux VM (2 CPUs, 32 GB RAM, 4 TB disk)
Java EE (Servlets, JSPs, JDBC, PreparedStatements) web app deployed on 6 Tomcats behind an HAProxy load balancer
Java EE (Spring Boot) web app deployed on 1 Tomcat
I assume that something runs in the background at that time, but I cannot find it. Where should I start looking to solve this annoying issue? Any suggestions?
Things done:
We've checked all the hardware for issues, but found nothing.
We've also checked all running processes and none seems suspicious.
A backup process runs every day in the background, but even when we kill it nothing changes.
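When the box itself is barely responsive, it helps to have evidence collected automatically before the freeze happens. A minimal sketch (the log path and the idea of running it from cron every minute are assumptions, not part of the original setup) that snapshots the load average and the top CPU consumers on the VM:

```python
#!/usr/bin/env python3
"""Append a timestamped snapshot of system load and the top CPU consumers
to a log file, so there is a record of what was running when a freeze hit.
Intended to be run every minute from cron; path and interval are assumptions."""
import datetime
import os
import subprocess

LOG = "/var/tmp/freeze_watch.log"  # hypothetical location; adjust to taste

def snapshot(log_path=LOG):
    now = datetime.datetime.now().isoformat(timespec="seconds")
    load1, load5, load15 = os.getloadavg()
    # Top 10 processes by CPU, plus the ps header line
    top = subprocess.run(
        ["ps", "axo", "pid,pcpu,pmem,comm", "--sort=-pcpu"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()[:11]
    with open(log_path, "a") as f:
        f.write(f"--- {now} load {load1:.2f} {load5:.2f} {load15:.2f}\n")
        f.write("\n".join(top) + "\n")

if __name__ == "__main__":
    snapshot()
```

With a minute-by-minute log like this, the next time SSH takes several minutes you can go back and see what was at the top of the CPU list just before the stall, instead of trying to inspect a machine that will not respond.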

Related

WebCenter Installation

Typically, how long should it take to install the Oracle WebCenter Suite?
We have a team of 3 developers trying to install WCS, however, it seems to be taking a little too long.
It's really hard to say without any environment info like DB version, cluster, network, load balancing, etc.
Normally, for a local development installation with the correct database and OS versions, and a little bit of luck, bringing up a standalone WebCenter stack should take around 1-3 days.
If your developers are really stuck with the installation, I would suggest getting an Oracle pre-built VM for a good start, without having to build the environment yourselves:
http://www.oracle.com/technetwork/community/developer-vm/index.html#wcp
Oracle WebCenter Portal VM
It really depends, but assuming a local, non-clustered Content install, you should be able to knock it out in a few hours.
Some factors that can extend the process:
web tier installation
slow x11 over VPN
clustered
networking issues
not doing a proper pre-install checklist (e.g., not having credentials ready)
I've seen it take as long as a week for a non-expert to install.
Update if you have any specific questions.
-ryan

What could cause successful FTP transfers to be truncated on a brand new RackSpace SSD server? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
I am trying to upgrade my web server. I have created a brand new instance of a latest generation virtual server on RackSpace that uses an SSD. On this brand new instance, I installed the following:
Google Chrome
FileZilla FTP Client
I then connected to a FileZilla FTP Server on a different server, which is hosting 2 image files that I am using to test. I then downloaded the 2 image files, which FileZilla reports as "successfully transferred". However, both of the image files are truncated! What could possibly be causing this?
A few things to note:
This only happens on the new instance if it is using an SSD. If I create an identical instance without the SSD (using SATA instead), the error does not occur.
On the server which is transferring the files, the files are also reported as having been transferred successfully. This server has been used as an FTP server for quite some time without any issues.
If I set up the new SSD instance as an FTP server and upload a bunch of files to it, some of them randomly get truncated by 2-10KB. Out of a ~150MB upload, I may end up with 150-200KB missing. If I transfer them again, a different subset of files gets truncated.
If I throttle the transfer speed on the FTP server to 100KB/s, the 2 image files transfer successfully without getting truncated. If I throttle the transfer speed to 500KB/s, the image files get truncated the same way as if there was no throttling.
Any ideas on how this could be happening?
Update: It is not related to FileZilla. Here is the same issue using ftp on the command line:
The solution is documented here: http://www.rackspace.com/knowledge_center/article/disabling-tcp-offloading-in-windows-server-2012
That article is for Windows Server 2012. In my case, I was using Windows Server 2008. To get to the network adapter properties, go to
Right-click Computer --> Properties
Open Device Manager
Expand Network adapters, right-click the adapter --> Properties
Go to the Advanced tab
Disable everything except UDP Checksum Offload.
Important note: if only some of the options are disabled, you will notice a massive performance degradation. Performance goes back to normal levels once all of the necessary options are disabled.
The reason it says that the transfer is complete is that closing the socket is, unfortunately, how FTP defines a completed transfer. (It opens a data connection and sends the data; closing the connection means the file has been completely sent.)
For some reason, the connection seems to be closing prematurely.
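This point is easy to demonstrate: a receiver on an FTP data connection reads until EOF and has no way to tell a premature close from a normal end-of-file. A small self-contained sketch (plain sockets on localhost, not real FTP):

```python
import socket
import threading

def send_file(data, truncate_at=None):
    """Serve `data` over a throwaway local socket, optionally closing early,
    and return whatever the client side manages to receive."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def server():
        conn, _ = srv.accept()
        payload = data if truncate_at is None else data[:truncate_at]
        conn.sendall(payload)
        conn.close()  # closing the socket is the only "end of transfer" signal
        srv.close()

    threading.Thread(target=server).start()
    cli = socket.socket()
    cli.connect(("127.0.0.1", port))
    received = b""
    while True:
        chunk = cli.recv(4096)
        if not chunk:  # EOF: the receiver must assume the transfer is complete
            break
        received += chunk
    cli.close()
    return received

full = send_file(b"x" * 10000)
cut = send_file(b"x" * 10000, truncate_at=6000)
print(len(full), len(cut))  # -> 10000 6000, both look "successful" to the client
```

In both cases the receiver sees a clean EOF and reports success; only an out-of-band size check (e.g. the FTP SIZE command) would reveal the truncation. That is exactly why FileZilla reports the broken transfers as "successfully transferred".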
Personally, this sounds really bizarre to me; it might be a driver or hardware problem, but I would try:
1. Passive-mode FTP. The command-line client uses PORT (active) mode by default; PASV is more firewall-friendly.
2. Disabling all software firewalls (like Windows Firewall) and retrying.

java.net.SocketException: No buffer space available (maximum connections reached?): JVM_Bind

Tomcat is running a webapp under Windows. After a few days (under very low load), the exception mentioned in the title starts to appear in the logs; no new connections can be established from that point on, and the only fix is to reboot the server.
Environment:
Latest Tomcat 6
Windows Server 2008 R2
JDK 6 update 30
SQL Server 2008
Kerberos authentication
Evidence collected so far:
netstat shows no excessive amount of connections
ProcessExplorer shows no excessive amount of open file handles
system main memory usage is average
JVM heap usage is average
restarting Tomcat does not solve the problem
Open questions:
if we were leaking connections, shouldn't they show up in netstat?
shouldn't a restart of the appserver resolve the problem, because the OS should free all process resources?
is there a way to trace the problem to its origin? E.g. installing monitoring software, maybe something similar to lsof etc.?
I'm out of ideas, any hints appreciated!
The reason we got this error is a bug in Windows Server 2008 R2 / Windows 7: the kernel leaks loopback sockets due to a race condition on machines with more than one core. This patch fixes the issue:
http://support.microsoft.com/kb/2577795
I was running Alfresco Community 4.0d on Windows 7 64 bit and had the same symptoms and errors.
The problem was fixed with Microsoft's patch "Kernel sockets leak on a multiprocessor computer that is running Windows Server 2008 R2 or Windows 7" (http://support.microsoft.com/kb/2577795), i.e. Buddy Casino's answer (see below).
Another observation I'd like to add is that Windows connections (Internet Explorer, Remote Desktop, etc.) would start working again about 5-10 minutes after the Alfresco services were shut down.
Alfresco is an excellent product and I was afraid I would have to scrap it. Fortunately Stack Overflow came to the rescue!
Thanks again to Buddy Casino's answer.
Boo to the person who down-voted the Question.
We are seeing the same thing on a similar setup: W2008R2, Tomcat 6.0.29, Java 1.6.0_25. Restarting Tomcat does not help, but restarting the server itself does, at least for a while. After the last occurrence we started shutting down individual services, and we believe we have narrowed it down to either an instance of Alfresco that is also running on the server or the Backup Exec Agent services. After those services (four in total) were stopped, the applications in Tomcat started working again, although we were still seeing the buffer/connections error in the stdout log, which was strange. We will need to wait for the problem to return before confirming which is the culprit, which could take anywhere from a few days to a week or more.
Any chance you are running either Alfresco or BE on your server?

VS 2008/AJAX Project Fails Under Stress

I've been working on a VB.NET/VS2008/AJAX/SQL Server project for over 2 years now without any real issues coming up. However, we're in the last week of our project, doing some heavy stress testing, and the project starts failing once I get to about 150 simultaneous users. I've even gone so far as to create a stripped-down version of the site which only logs in a user, pulls up their profile and then logs off. That still fails under stress. When I say "fails" I mean the CPUs spike and the App Pool eventually crashes. This is running on a Windows 2008 R2 dual quad-core server with 16 GB of memory. The memory never spikes but the CPU maxes out.
I ran YSlow on the site and it pointed out that I needed to compress the .axd files, etc. I did that by implementing gzip compression on everything, but that's what got me to the 150 users. I run YSlow now and it says everything is an "A".
I'm really not sure where to go from here. I'd be more than willing to share the stripped down version of the site for anyone to review. I'm not sure if it's the server, my code or the web.config.
I know it is a bit late, but have you considered increasing the number of worker processes in your site's application pool to form a web garden? You can do this in IIS Manager.
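For reference, the same change can be scripted with appcmd; the pool name and worker count here are placeholders:

```shell
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.maxProcesses:4
```

One caveat worth checking first: a web garden only helps if the app is CPU-bound in a single worker, and in-process session state (InProc) does not survive requests being spread across multiple worker processes, so sessions would need to be stored out-of-process.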

Network problem, suggestions sought

The LAN has about half a dozen Windows XP Professional PCs and one Windows 7 Professional PC.
A Jet/Access '97 database file is acting as the database.
The method of access is via DAO (DAO350.dll), and the front-end app is written in VB6.
When an instance is created, it immediately opens a global database object which it keeps open for the duration of its lifetime.
The windows 7 machine was acting as the fileserver for the last few months without any glitches.
Within the last week, instances of the app will work for a while (say 30 minutes) on the XP machines and then fail on database operations, reporting connection errors (e.g. disk or network error, or unable to find such-and-such a table).
Instances on the Windows 7 machine work normally.
Moving the database file to one of the XP machines has the effect that the app works fine on ALL the XP machines, but the error occurs on the Windows 7 machine instead.
Just before the problem became apparent a newer version of the app was installed.
Uninstalling and installing the previous version did not solve the problem.
No other network changes that I know of were made, although I am not entirely sure about this, as the hardware guy did apparently visit around the same time the problems arose, perhaps to do something concerning online backup of data. (There is data storage on more than one computer.) Apparently he did not go near the Win 7 machine.
Finally I know not very much about networks so please forgive me if the information I provide here is superfluous or deficient.
I have tried turning off the antivirus on the Win 7 machine, restarting, etc., but nothing seems to work.
It is planned to move our database from jet to sql server express in the future.
I need some suggestions as to the possible causes of this so that I can investigate further. Any suggestions would be greatly appreciated.
UPDATE 08/02/2011
The issue has been resolved by the hardware guy, who visited the client today. The problem was that on this particular LAN the IP addresses were allocated dynamically, except for the Win 7 machine, which had a static IP address.
The static address happened to lie within the range from which the dynamic addresses were selected. This wasn't a problem until last week, when a dynamic address was generated that matched the static one and gave rise to the problems I described above.
Thanks to everyone for their input and thanks for not closing the question.
Having smart knowledgeable people to call on is a great help when you're under pressure from an unhappy customer and the gaps in your own knowledge mean that you can't confidently state that your software is definitely not to blame.
I'd try:
Validate that the same DAO and ODBC drivers are used on both the XP and Windows 7 machines.
Is the LAN a single broadcast domain? If not, rewire. (If routers are required, make sure WINS is working.)
Upgrade to MS SQL Server. It could be just a day of worthwhile work. ;-)
regards,
//t
