I have been using the Oracle 11 OCCI libraries to interface with my Oracle DB server.
After trying to upgrade to the Oracle 12 OCCI libraries, I noticed that the process's
soft stack size limit has changed.
After doing some experiments I have zoomed in on the difference:
On Oracle 11, the occi::Environment::createEnvironment call changes the stack size soft limit to 32 MB
(the default the process starts with is 8 MB, and the hard limit is unlimited).
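One way to confirm this from outside the process is to read its limits from /proc (with <pid> standing for the actual process id):

grep -i stack /proc/<pid>/limits

On the Oracle 11 client, "Max stack size" there reads 33554432 bytes (32 MB) once the environment has been created, versus the initial 8388608 (8 MB).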
After moving to the Oracle 12 client, this behavior went away. I do not know if this is because
of the way the client was installed or because of a change in the client itself.
I could not find any documentation of this behavior.
Can you advise?
Related
In WSO2 EI 6.6, a proxy stopped working abruptly. Upon analyzing, we observed an error in the WSO2 Carbon log, "GC overhead limit exceeded"; after this error, nothing happens in the EI.
The proxy's logic is to get data from a SQL Server table, form an XML, and send it to an external API. The proxy runs at a 5-minute interval, and in each interval a maximum of 5 records is pushed to the API.
After restarting the WSO2 Carbon services, the proxy started working again. Currently we restart the services every 3 days to avoid this issue.
I need to know how to identify the potential issue and resolve it.
This means the JVM has run out of allocated memory. There can be many reasons for this. For example, if you haven't allocated enough memory to the JVM, you can easily run out of memory. If that's not the case, you need to analyze a memory dump and see what's occupying the memory and causing it to fill up.
Generally, when you see the mentioned error, the JVM automatically creates a heap dump (heap-dump.hprof) in the <EI_HOME>/repository/logs directory. You can try analyzing the dump to find the root cause. If the server doesn't generate a memory dump, manually take one when memory usage is higher than expected and analyze it.
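If the server isn't producing the dump automatically, the standard HotSpot flags can be added to the product's JVM options (the dump path here is just an example):

-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<EI_HOME>/repository/logs

A dump can also be taken manually from a running JVM with jmap:

jmap -dump:live,format=b,file=heap-dump.hprof <pid>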
I want to set the heap size for a Go application on my Windows machine.
In Java we used to provide -Xms settings as VM arguments in IntelliJ, but how do I provide a similar setting for a Go application and set a memory limit?
I tried
<env name="GOMEMLIMIT" value="2750MiB" />
but it is not working.
We are using Go version 1.6.2.
Go 1.19 adds support for a soft memory limit:
The runtime now includes support for a soft memory limit. This memory limit includes the Go heap and all other memory managed by the runtime, and excludes external memory sources such as mappings of the binary itself, memory managed in other languages, and memory held by the operating system on behalf of the Go program. This limit may be managed via runtime/debug.SetMemoryLimit or the equivalent GOMEMLIMIT environment variable.
You can't set a hard limit, as that would make your app malfunction if it needed more memory.
To set a soft limit from your app, simply use:
debug.SetMemoryLimit(2750 * 1 << 20) // 2750 MiB
To set a soft limit outside of your app, use the GOMEMLIMIT env var, e.g.:
GOMEMLIMIT=2750MiB
But please note that doing so may make your app's performance worse, as it may force more frequent garbage collection and return memory to the OS more aggressively even if your app will need it again.
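For completeness, here is a minimal, self-contained version of the in-app approach. Note that it needs Go 1.19 or later; on the Go 1.6.2 you mention, neither GOMEMLIMIT nor SetMemoryLimit exists, so you would have to upgrade first.

package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// Set a soft memory limit of 2750 MiB; the argument is in bytes.
	// SetMemoryLimit returns the previously configured limit.
	prev := debug.SetMemoryLimit(2750 << 20)
	fmt.Println("previous limit (bytes):", prev)
}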
On Varnish 4.1, when I use RAM as the backend for caching, once requests start coming in,
the server's RAM begins to fill up little by little; after it fills up completely, the server crashes,
and then caching in RAM starts over.
I assigned the following variables in the systemd service file for varnish.service,
but it still shows the previous behavior and crashes again:
LimitMEMLOCK=14336
MemoryLimit=13G
MemoryHigh=13G
MemoryMax=13G
How can I limit and specify the exact amount of memory that Varnish cannot exceed?
#Version used:
Varnish 4.1
#Operating System and version:
Ubuntu 16.04
#Source of binary packages used (if any)
Installed from official Ubuntu packages
You will have to limit both malloc and Transient storage.
I.e., as startup parameters: -s malloc,3GB -s Transient=malloc,1GB
In general, the RAM allocated to Varnish should not exceed 80% of the total RAM available on the system.
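On Ubuntu 16.04 these flags go on the varnishd command line in the systemd unit (or a drop-in override); a sketch with illustrative sizes and the stock paths shipped by the Ubuntu package:

ExecStart=/usr/sbin/varnishd -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl -s malloc,10GB -s Transient=malloc,1GB

Keep in mind the malloc size bounds only the cache storage itself; per-object and per-thread overhead comes on top, which is what the 80% guideline accounts for.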
We are having some issues with an application of ours which we are attempting to diagnose. While taking a close look at things, we think we may be having some DBCP connection pool issues.
Among a few things we noticed, we discovered something via a secondary support application (a small JDBC-based SQL client for monitoring the DB) using the same driver the main application uses. That discovery was entropy exhaustion. After applying the fix noted in Oracle JDBC intermittent Connection Issue to this small utility, the issue went away.
At that time, we suspected the main application could be suffering from the same problem. We did not apply the same fix at this point, but rather we've started monitoring available entropy via /proc/sys/kernel/random/entropy_avail every 5 seconds to validate.
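For reference, the monitoring is a plain shell loop along these lines:

while true; do cat /proc/sys/kernel/random/entropy_avail; sleep 5; done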
After reviewing the data for a 24 hour period, we do not see the same drop in available entropy as with the jdbc utility prior to the use of /dev/urandom. Rather, we noticed that the entropy never drops below 128 bytes nor climbs above 191 bytes. We have searched the application configuration files and can't find anything related to specifying the random number source.
OS: Red Hat Enterprise Linux Server release 6.3 (Santiago)
JDBC Driver: ojdbc6-11.2.0.3
Pooling Method: Hibernate DBCP
So, my questions are:
1) If we've not knowingly told the application/driver to use /dev/urandom vs /dev/random, what would possibly explain why we don't see the same entropy drop when new pool connections are created?
2) Why would the minimum and maximum available entropy be so rigid at 128/191? I would expect a little more, pardon the pun, randomness in these values.
I hesitate to go posting a bunch of configuration files not knowing which may be relevant. If there is something particular you'd like to see, please let me know and I will share.
Does your application use JDBC connection pooling, or does it make authentication attempts as frequently as your test application did/does?
Keep in mind that each authentication attempt consumes the random pool.
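If connection creation does turn out to be draining /dev/random, the usual workaround (the same one from the question linked above) is to point the JVM at /dev/urandom:

-Djava.security.egd=file:/dev/./urandom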
I implemented a web application; when the Tomcat service starts, it works very quickly, but after some hours, and as more users enter, it gets slow (at around 15 users approx.).
Checking usage statistics: RAM (20%), CPU (25%).
Server Features:
RAM 8GB
Processor i7
Windows Server 2008 64bit
Tomcat 7
MySql 5.0
Struts2
-Xms1024m
-Xmx1024m
PermGen = 1024
MaxPermGen = 1024
I do not use a web server; we publish directly on Tomcat.
Even around midnight the slowness persists (with only 1 user online).
The only solution I have is to restart the Tomcat service, after which response time is excellent again.
Is there anyone who has experienced this issue? Any clue would be appreciated.
Not enough details provided. Need more information :(
Use htop or top to find memory and CPU usage per process & per thread.
CPU
A constant 25% CPU usage on a 4-core system can indicate that a single-threaded application/thread is running at 100% CPU on the only core it is able to use.
Which application is eating the CPU?
Memory
20% memory is ~1.6 GB. That is a bit more than I would expect for an idle server running only Tomcat + MySQL. The -Xms1024m setting tells Tomcat to preallocate 1 GB of memory, so that explains it.
Change the Tomcat settings to -Xms512m and -Xmx2048m. Watch Tomcat's memory usage while you throw some users at it. If it keeps growing until it reaches 2 GB... then freezes, that can indicate a memory leak.
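On Windows that usually means a setenv.bat next to catalina.bat (or the service manager's Java tab if Tomcat runs as a service); a sketch:

set CATALINA_OPTS=-Xms512m -Xmx2048m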
Disk
Use df -h to check disk usage. A full partition can cause the issues you are experiencing.
Filesystem Size Used Avail Usage% Mounted on
/cygdrive/c 149G 149G 414M 100% /
(If you just noticed from this example that my laptop is running out of space, you're doing it right :D)
Logs
Logs are awesome. Yet they have a bad habit of filling up the disk. Check the logs' disk usage. Are logs being written/erased/rotated properly when new users connect? Does erasing the logs fix the issue? (Copy them somewhere for future analysis before you erase them.)
If not, logs are STILL awesome. They have the good habit of helping you track bugs. Check the Tomcat logs. You may want to set the logging level to debug. What happens last before the website dies? Any useful error messages? Are user connections still received and accepted by Tomcat?
Application
I suppose that the 25% CPU goes to Tomcat (and not MySQL). Tomcat doesn't fail by itself; the application running on it must be failing. Try removing the application from Tomcat (you can put a hello-world app there instead). Can Tomcat keep working overnight without your application? It probably can, in which case the fault is in the application.
Enable full debug logging in your application and try to track the issue. Run it straight from Eclipse in debug mode and throw users at it. Does it fail consistently in the same way?
If yes, hit "pause" in the Eclipse debugger and check what the application is doing. Look at the piece of code each thread is currently running, plus its call stack. Repeat that a few times. If there is a deadlock, an infinite loop, or similar, you can find it this way.
You will have found the issue by now if you are lucky. If not, it's a tricky bug that might be deep inside the application, and that can be hard to trace. Determination will lead to success. Good luck =)
For performance-related issues, we need to follow these guidelines:
Set the sizes of -Xms and -Xmx equal for effectiveness:
-Xms2048m
-Xmx2048m
You can also enable PermGen to be garbage collected:
-XX:+UseConcMarkSweepGC -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled
If the page content changes too frequently for static caching to make sense, try temporarily caching the dynamic content so that it doesn't need to be regenerated over and over again. Any technique you can use to cache work that's already been done instead of doing it again should be used; this is the key to achieving the best Tomcat performance.
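One low-effort variant of this (client-side rather than server-side caching) is Tomcat 7's built-in ExpiresFilter, declared in web.xml; the content type and duration below are only examples:

<filter>
  <filter-name>ExpiresFilter</filter-name>
  <filter-class>org.apache.catalina.filters.ExpiresFilter</filter-class>
  <init-param>
    <param-name>ExpiresByType text/html</param-name>
    <param-value>access plus 5 minutes</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>ExpiresFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>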
If there is any database-related issue, then follow SQL query performance tuning.
You can rotate the catalina.out log file without restarting Tomcat.
In detail, there are two ways.
The first, which is more direct, is that you can rotate Catalina.out by adding a simple pipe to the log rotation tool of your choice in Catalina's startup shell script. This will look something like:
"$CATALINA_BASE"/logs/catalina.out WeaponOfChoice 2>&1 &
Simply replace "WeaponOfChoice" with your favorite log rotation tool.
The second way is less direct, but ultimately better. The best way to handle the rotation of Catalina.out is to make sure it never needs to rotate. Simply set the "swallowOutput" property to true for all Contexts in "server.xml".
This will route System.err and System.out to whatever logging implementation you have configured, or to JULI if you haven't configured one.
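For example (the path is illustrative):

<Context path="/myapp" swallowOutput="true" />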
See more at: Tomcat Catalina Out
I experienced a very slow stock Tomcat dashboard on a clean Centos7 install and found the following cause and solution:
Slow start up times for Tomcat are often related to Java's
SecureRandom implementation. By default, it uses /dev/random as an
entropy source. This can be slow as it uses system events to gather
entropy (e.g. disk reads, key presses, etc). As the urandom manpage
states:
When the entropy pool is empty, reads from /dev/random will block until additional environmental noise is gathered.
Source: https://www.digitalocean.com/community/questions/tomcat-8-5-9-restart-is-really-slow-on-my-centos-7-2-droplet
Fix it by adding the following configuration option to your tomcat.conf or (preferred) a custom file into /tomcat/conf/conf.d/:
JAVA_OPTS="-Djava.security.egd=file:/dev/./urandom"
We encountered a similar problem; the cause was "catalina.out". It is the standard destination log file for "System.out" and "System.err". Its size kept on increasing, thus slowing things down, and ultimately Tomcat crashed. This problem was solved by rotating "catalina.out". We were using Red Hat, so we made a shell script to rotate "catalina.out".
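On Red Hat, a logrotate stanza with copytruncate does the same job as a custom script; a sketch, with the path adjusted to your install:

/opt/tomcat/logs/catalina.out {
    daily
    rotate 7
    compress
    missingok
    copytruncate
}

copytruncate matters here: Tomcat keeps the file handle open, so the file must be truncated in place rather than moved aside.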
Here are some links:
Mulesoft article on catalina (also contains two methods of rotating):
Tomcat Catalina Introduction
If "catalina.out" is not the problem then try this instead:-
Mulesoft article on optimizing tomcat:
Tuning Tomcat Performance For Optimum Speed
We had a problem which looks similar to yours. Tomcat was slow to respond, but the access log showed just milliseconds per answer. The problem was streaming responses. One of our services returned real-time data that users could subscribe to. EPOLL was becoming bloated. Network requests couldn't get through to Tomcat. And what's more interesting, the CPU was mostly idle (since no one could ask the server to do anything) and the acceptor/poller threads were sitting in WAIT, not RUNNING or IN_NATIVE.
At the time we just limited the number of such requests, and everything went back to normal.