Memory management on Glassfish

I have several background tasks running on my Glassfish server, implemented with TimerService instances. The goal of these services is to extract data from files and insert that data into the database.
I initially tried to do this with JPA, but the system stalled far too easily, so I have now converted the process to JDBC, which is far more responsive. However, there are still enormous memory leaks somewhere along the way which I cannot pinpoint.
Each file is extracted in a method that manages its own transaction (1 file = 1 transaction). I would think that once this method completes, all local variables go out of scope and become eligible for GC, but this is not the case. After a very short time I am getting an OutOfMemoryError.
I am wondering if, how, and why Glassfish would be keeping references to my objects (which are very heavy). What settings or methodologies can I apply to minimize these memory leaks?
For reference I am using the stock Glassfish settings with a couple of modifications:
-XX:+CMSPermGenSweepingEnabled
-XX:+CMSClassUnloadingEnabled
-XX:MaxPermSize=256m
-Xmx1024m

You might be dealing with a class loader leak; JAXB can cause this when you're unmarshalling. To find out for sure, you should use a memory analyzer. I highly recommend the Eclipse Memory Analyzer Tool (MAT). Just follow a few of its tutorials and you should be able to figure it out.
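If the import does use JAXB, one frequent culprit is creating a new JAXBContext for every file: the context holds on to generated classes and is expensive to build. Below is a minimal sketch of the reuse pattern, assuming JAXB parsing (ImportedRecord is a hypothetical placeholder for whatever type the files are bound to); a MAT heap dump showing duplicated JAXB or generated classes would confirm whether this is the leak.

    // Hedged sketch: assumes the files are parsed with JAXB, as the answer suspects.
    // Building a new JAXBContext per file is a common cause of class loader leaks
    // and excessive memory use; the context is thread-safe and should be reused.
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.JAXBException;
    import javax.xml.bind.Unmarshaller;
    import javax.xml.bind.annotation.XmlRootElement;
    import java.io.File;

    public class FileImporter {

        // Created once; JAXBContext is thread-safe and expensive to build.
        private static final JAXBContext CONTEXT = createContext();

        private static JAXBContext createContext() {
            try {
                return JAXBContext.newInstance(ImportedRecord.class);
            } catch (JAXBException e) {
                throw new IllegalStateException("Could not create JAXBContext", e);
            }
        }

        public ImportedRecord parse(File file) throws JAXBException {
            // Unmarshallers are cheap but not thread-safe, so create one per call.
            Unmarshaller unmarshaller = CONTEXT.createUnmarshaller();
            return (ImportedRecord) unmarshaller.unmarshal(file);
        }
    }

    // Hypothetical bound type; in reality this is the annotated or generated JAXB class.
    @XmlRootElement
    class ImportedRecord { }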

Related

Why a simple Spring Cloud Run app is taking so much RAM

A simple (just a sample) Spring Boot application in Cloud Run, with no files being written by the application, gets terminated with the following error:
Memory limit of 256M exceeded with 257M used. Consider increasing the
memory limit, see
https://cloud.google.com/run/docs/configuring/memory-limits
When I look at Deleting temporary files, it says that the disk storage is in-memory, so if the code is not writing any files, how are these files being written? And how can I find these files using
gsutil ls -h gs://projectName
An interesting question.
Are you confident that your app isn't writing any files? I bet it is.
There's a wrinkle to Google's statement: anything written to /var/log is not part of the in-memory filesystem quota and is shipped to Cloud Logging.
I think Google should consider providing metrics that help differentiate between the container's process(es)' use of memory and the in-memory filesystem's use. Currently there is no way to disambiguate this usage (Cloud Monitoring metrics for Cloud Run). Perhaps raise a feature request on Google's public issue tracker?
To answer your question, you may want to consider running the container locally. Then you can grab the container process's ID (PID) and try e.g. ls -la /proc/${PID}/fd to list the files that the container is producing.
I considered suggesting Google Cloud Profiler but it requires an agent for Java and so it would be cumbersome to deploy to Cloud Run and would not obviously yield an answer to your question.
This is a Java problem, and it's worse with Spring. Java on a standard JVM uses a lot of memory by default (at least 128 MB). When you run Spring on top of Java, tons of beans and libraries are loaded into memory, and a simple hello-world app easily takes more than 350 MB.
I wrote an article on that. You have 2 solutions to mitigate the cold start and the memory (and container) size:
Use raw Java without a heavy framework
Use native compilation (GraalVM, for instance).
I tried to optimize the JVM (a micro JVM) and Spring directly (limiting the beans loaded, using lazy loading, adding JVM parameters). It saved a few seconds at startup (cold start), but not much memory.
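One small diagnostic that can help with this kind of tuning (my suggestion, not from the original answer) is to log what the JVM itself believes its memory limits are inside the container, so the configured heap can be compared against the 256 MB instance limit:

    // Diagnostic sketch: print the JVM's view of its heap limits inside the
    // Cloud Run container.
    public class MemoryReport {
        public static void main(String[] args) {
            long mb = 1024L * 1024L;
            Runtime rt = Runtime.getRuntime();
            System.out.printf("max heap   : %d MB%n", rt.maxMemory() / mb);
            System.out.printf("total heap : %d MB%n", rt.totalMemory() / mb);
            System.out.printf("free heap  : %d MB%n", rt.freeMemory() / mb);
            // Metaspace, thread stacks, the code cache and the in-memory filesystem
            // all count against the container limit on top of the heap, so the heap
            // should be capped well below 256 MB, e.g. with -Xmx128m or -XX:MaxRAMPercentage=50.
        }
    }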
I also started to investigate AppCDS with a former Google Cloud Developer Advocate, but he left the company a year ago and I stopped that effort (I'm no longer a Java/Spring developer, but I always liked the concept and worked on it).
Alternatively, you can also have a look at new-generation frameworks like Micronaut, Quarkus, or Vert.x.

How to speed up the TYPO3 Backend?

Given: each call to a BE module takes several seconds, even with an SSD drive. (A well-configured setup runs below 1 second for general BE tasks.)
What are likely bottlenecks?
How to check for them?
What options are there to speed things up?
On purpose I don't give a specific configuration, but ask for a general checklist, so that the answer is suitable for many people as a first entry point.
General tips on performance tuning for TYPO3 can be found here: https://wiki.typo3.org/Performance_tuning
However, in my experience most general performance problems are due to one of a few reasons:
Bad/no caching. Usually this is a problem with one or more extensions (partly) disabling cache. Try disabling all third party extensions and enabling them one by one to see which causes the site to slow down the most. $GLOBALS['TSFE']->set_no_cache() will disable all cache, so you could search for that. USER_INT and COA_INT in TypoScript also disable cache for anything that's configured inside there.
A lot of data. Check the database for any tables containing a lot of data. How much constitutes "a lot" depends on many factors, but generally anything below a million records shouldn't be too much of a problem, unless, for example, you do queries with things like LIKE '%...%' on fields containing a lot of data.
Not enough resources on the server. To fix this, add more memory and/or CPU cores to the server. Or if it's a shared server, reduce the number of sites running on it.
Heavy traffic. No matter how many resources a server has, it will always have a limit to the number of requests it can process in a given time. If this is your problem, you will have to look into load balancing and caching servers. If you don't (normally) have a lot of visitors, high traffic can still be caused by robots crawling your site too quickly. These are usually easy to block by IP address in your firewall or webserver configuration.
A slow backend on a server without any other traffic (you're the only one who can access it) rules out 1 (bad caching can only cause a slow backend if users are accessing the frontend and causing high server load) and 4 (there is no other traffic).
One further aspect you could inspect: a lot of things are stored in the backend user record, for example the settings you used in the log module.
One setting which can consume a lot of memory (and time to serialize and deserialize) is the state of the page tree (which pages are expanded and which are not).
Cleaning the user settings can make the backend faster for that user.
If you have a large page tree and the user has to navigate through many pages, the effect will wear off quickly. Another drawback: you lose all settings, as there is still no selective cleaning.
I cannot comment here, but I need to say: the TSFE object does absolutely nothing in the TYPO3 backend. The backend is always uncached. The TYPO3 backend is a standalone module for editing and maintaining the frontend output. There are tons of Google search results that ignore this fact.
Possible performance bottlenecks are poorly written extensions that do rendering or data processing. Hooks into core functions are usually no big deal, but rendering many elements in edit forms (especially with TYPO3's Fluid template engine) can cause performance problems.
The Extbase DBAL layer can also cause massive performance problems. The reason is that the database model does not know about indexes. It's simple but stupid. An SQL join on a big table of 2000+ records will delay the output perceptibly, depending on the data model.
Also, the TYPO3 backend does not really depend on the TypoScript configuration, but to control some output, or because extensions load it, the full parsing of the *.ts files is still needed. And this parser is very slow.
If you want to speed things up, you need to know what goes wrong. The only way to debug this behaviour is to inspect the runtime with a PHP profiling tool like Xdebug, because the TYPO3 framework is very complex. It uses a Doctrine-based framework and loads tons of files on every request, so a well-configured OPcache is a must.
The main reason the whole thing is slow is that it is poorly written. You can confirm that by inspecting the runtime.
In addition to what already has been said, put the runtime environment onto your checklist:
Memory:
If a heavy IDE and other tools are open at the same time, available memory can become an issue. To check the memory profile, you can run a tool that monitors the memory usage of the machine.
If virtualization is used, check the memory assigned to the box, and check whether assigning more memory improves behaviour.
If required and possible, give your machine more memory. This should not be a fix for poorly written code: bad code can exhaust any amount of memory.
File access:
TYPO3 reads and writes thousands of files. If you work with a contemporary SSD, this is surprisingly fast. I did measure this. Loading all class files of TYPO3 takes just a fraction of a second.
However this may look different if you do not work with a standard setup. Many factors may slow you down:
USB-Sticks as storage.
Memory cards as storage.
All kinds of external storage may be limited by slow drivers.
Virtualization can become an issue. Again it's a question of drivers.
If in doubt, store your files and DB on a different drive and compare the behaviour.
Routing:
The database itself may be fast, but bad routing of your requests may still slow you down. Think of firewalls, proxies, etc., even on your local machine, and especially if virtualisation is used.
Database connection:
A fast database connection is crucial. If database access is slow, TYPO3 can't be fast.
Especially due to Extbase, TYPO3 often queries much more data than is really required, and more often than really required, because a lot of relations are resolved in the PHP layer instead of in the DB layer itself. Loading data structures like the root line may cause a lot of ping-pong between the PHP and DB layers.
I can't give advice on how to measure your DB connection; you have to ask your admin for that. What you can always do is test and compare with another DB from a completely different environment.
The speed of the database may also depend on the type of database itself. Typically you use MySQL/MariaDB, which should be fast. It also depends on the factors mentioned above: memory, file access, and routing.
Strategy:
Even without being an admin and knowing all the performance tools, you can always exchange parts of your system and check whether matters improve. With this approach you can localise the culprit without being an expert. Once you have spotted the culprit, Google may help you find more information.
When it comes to a clean and performant setup of routing or virtualisation it's still the best idea to ask an experienced admin.
Summary
This is all in addition to what others have already pointed to.
What would really be helpful is a BE plugin that analyses and measures the environment. Maybe there are some out there that I don't know about.

Hazelcast vs Ehcache

The question is clear, as you can see from the title; I would appreciate hearing your ideas about the advantages, disadvantages and differences between them.
UPDATE:
I have decided to use Hazelcast because of the advantages like distributed caching/locking mechanism as well as the extremely easy configuration while adapting it to your application.
We tried both of them for one of the largest online classifieds and e-commerce platforms. We started with Ehcache/Terracotta (server array) because it's well known, backed by Terracotta, and has bigger community support than Hazelcast. When we got it into a production environment (distributed, beyond a one-node cluster), things changed: our backend architecture became really expensive, so we decided to give Hazelcast a chance.
Hazelcast is dead simple, it does what it says and performs really well without any configuration overhead.
Our caching layer is on top of hazelcast for more than a year, we are quite pleased with it.
Even though Ehcache has been popular among Java systems, I find it less flexible than other caching solutions. I played around with Hazelcast and yes it did the job, it was easy to get running etc and it is newer than Ehcache. I can say that Ehcache has much more features than Hazelcast, is more mature, and has big support behind it.
There are several other good cache solutions as well, with different properties, such as good old Memcached, Membase (now Couchbase), Redis, AppFabric, and even several NoSQL solutions which provide key-value stores with or without persistence. They all have different characteristics in how they address the CAP theorem or BASE semantics, along with transactions.
You should care most about which one has the functionality you want in your application; again, consider the CAP theorem or BASE semantics for your application.
A test was done very recently with Cassandra on the cloud by Netflix: they reached a million writes per second with about 300 instances. Cassandra is not a memory cache, but your data model is like a cache, consisting of key-value pairs. You can also use Cassandra as a distributed memory cache.
Hazelcast has been a nightmare to scale and stability is still a major issue.
The dedicated client-to-grid component choices are:
The messy version that can't survive node loss anywhere, negating the point of backups (superclient), or
An incredibly slow native-client option that does not allow for any type of load balancing to processing nodes in the grid.
If any host could request records from this data grid it would be a sweet design, but you are stuck with those two lackluster options to get anything out of it.
There are also multiple issues with database thread pools locking up on individual members and not writing anything to the databases; permanent record loss is a frequent issue, and we often have to take the whole thing down for hours just to refresh the JVMs. Split-brain is also still an issue, although in 1.9.6 it seems to have calmed down a little.
We are rallying to move to Ehcache and improve the database layer instead of using this as a band-aid.
Hazelcast serializes everything whenever there is a node (standard one), so the data you save to Hazelcast must be serializable.
http://open.bekk.no/efficient-java-serialization/
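To illustrate the point above with a minimal sketch (class names are made up, not taken from the answer or the linked article): a value type stored in a Hazelcast map should implement Serializable (or one of Hazelcast's own serialization interfaces), because entries are serialized when they are stored, backed up, or read from another member.

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import java.io.Serializable;
    import java.util.Map;

    public class SerializableValueExample {

        // The stored type implements Serializable; Hazelcast's own interfaces
        // (e.g. DataSerializable) are a faster alternative.
        static class Session implements Serializable {
            private static final long serialVersionUID = 1L;
            final String userId;
            Session(String userId) { this.userId = userId; }
        }

        public static void main(String[] args) {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            Map<String, Session> sessions = hz.getMap("sessions");
            sessions.put("abc", new Session("user-42"));
            System.out.println(sessions.get("abc").userId);
            hz.shutdown();
        }
    }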
Hazelcast has been a nightmare for me. I was able to get it "working" in a clustered Websphere environment, and I use the term "working" loosely.
First, all of Hazelcast's documentation is out of date and only shows examples using deprecated method calls. Trying to use the new code with no comments in the Javadocs and no examples in the documentation is very hard. Also, the J2EE container code simply does not work at this point because it does not support XA transactions in Websphere. An error is thrown calling code that follows their only J2EE example explicitly (it does look like Milestone 3.0 is addressing this). I had to forget about joining Hazelcast to a J2EE transaction. Hazelcast definitely seems geared to a non-EJB, non-J2EE-container environment.
Making calls to Hazelcast.getAllInstances() fails to retain any information about Hazelcast's state when switching from one enterprise Java bean to another. That forces me to create a new Hazelcast instance just to run calls that give me access to my data, which causes many Hazelcast instances to start up on the same JVM.
Also, retrieving data from Hazelcast is not fast. I tried retrieving data both using the native client and directly as a member of the cluster. I stored 51 lists, each containing only 625 objects, in Hazelcast. I could not perform a query directly on a list and did not want to store a map just to get access to that feature (SQL operations can be performed on a map). It took about half a second to retrieve each list of 625 objects, because Hazelcast serializes the entire list and sends it over the wire rather than just giving me the delta (what has changed).
Another thing: I had to switch to a TCP/IP configuration and explicitly list the IP addresses of the servers I wanted in the cluster. The default multicast configuration did not work, and from the group discussions on Google, other people are experiencing that difficulty as well.
To sum up: I did eventually get 8 machines communicating in a cluster through many hours of torturous programmatic configuration and trial and error (the documentation will be little help), but when I did, I still had no control over the number of instances and partitions being created on each JVM, due to the half-finished nature of Hazelcast for EJB/J2EE, and it was VERY SLOW. I implemented a real use case in the unemployment insurance application I work on, and the code was much faster making direct calls to the database. It would have been cool if Hazelcast worked as advertised, because I really did not want to use a separate service to implement what I am trying to do. I have used MongoDB extensively, so I may skip the whole in-memory cache and just serialize my objects as documents in a separate repository.
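For reference, the switch away from multicast to an explicit member list that this answer describes looks roughly like the sketch below when done programmatically (the member addresses are placeholders; the same settings can also live in hazelcast.xml):

    import com.hazelcast.config.Config;
    import com.hazelcast.config.JoinConfig;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    public class TcpIpClusterConfig {
        public static void main(String[] args) {
            Config config = new Config();
            JoinConfig join = config.getNetworkConfig().getJoin();
            join.getMulticastConfig().setEnabled(false);    // turn off multicast discovery
            join.getTcpIpConfig()
                .setEnabled(true)
                .addMember("10.0.0.1")                      // placeholder member addresses
                .addMember("10.0.0.2");
            HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
            System.out.println("Cluster members: " + hz.getCluster().getMembers().size());
        }
    }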
One advantage of Ehcache is that it is backed by a company (Terracotta) that does extensive performance, failover, and platform testing in a large performance lab. Terracotta provides support, indemnity, etc. For many companies, that sort of thing is important.
I have not used Hazelcast but I've heard that it is easy to use and that it works. I haven't heard anything with respect to scalability or performance of Hazelcast vs Terracotta/Ehcache but given the amount of scalability and failover testing that Terracotta does, it's hard for me to imagine that Hazelcast would be competitive in a production deployment. But I presume it would work fine for smaller uses.
[Bias: I'm a former employee of Terracotta.]
Developers describe Ehcache as "Java's Most Widely-Used Cache". Ehcache is an open-source, standards-based cache for boosting performance, offloading your database, and simplifying scalability. It's the most widely used Java-based cache because it's robust, proven, and full-featured. Ehcache scales from in-process, with one or more nodes, all the way to mixed in-process/out-of-process configurations with terabyte-sized caches. On the other hand, Hazelcast is described as a "Clustering and highly scalable data distribution platform for Java". With its various distributed data structures, distributed caching capabilities, elastic nature, memcache support, integration with Spring and Hibernate, and, more importantly, so many happy users, Hazelcast is a feature-rich, enterprise-ready and developer-friendly in-memory data grid solution.
Ehcache and Hazelcast are primarily classified as "Cache" and "In-Memory Databases" tools respectively.

Move application to Websphere clusters

What should we take care of before moving an application from a single WebSphere Application Server to a WebSphere cluster?
This is my list from experience. It is not complete but should cover the most common problem areas:
Plan ahead the distributed session management configuration (i.e. will you use memory-to-memory or database-based replication?). Note that if you are still on a 32-bit platform, the resource overhead from clustering might cause instability if your application already uses a lot of memory.
Make sure that everything you put into user sessions can be serialized with the default serializer (implements Serializable); a minimal sketch follows this list. You might otherwise run into problems with distributed sessions.
The same goes for everything you put into DynaCache. Make sure everything serializes properly.
Specify and make sure all the resource definitions (JDBC providers etc.) are made at the proper scope. I would usually recommend using the actual Cluster scope for everything that your cluster-installed applications use. That ensures the test features work from the proper points, and that you don't create conflicting definitions.
Make sure your application uses relative paths for resources in web interfaces. Once you start load balancing and so on, you can run into some serious problems if you have hard-coded a lot of absolute paths.
If you have any sort of timers, make sure they work well in a cluster. With Quartz that probably means using the JDBC store for timer tasks. With EJB timers, make sure you register the timers only once (it is possible to corrupt the WAS timer database if several nodes attempt the registration at exactly the same time) and make sure you install them at Cluster scope.
Make sure you use the WAS provided SSO mechanisms. If you have a custom implementation please make sure it handles moving the user between servers in cluster well.
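A minimal sketch of the session-serialization point from the list above (class names are hypothetical): every attribute value placed in the HTTP session, and everything it references, must be serializable with the default Java serializer for memory-to-memory or database replication to work.

    import java.io.Serializable;
    import javax.servlet.http.HttpSession;

    public class CartItem implements Serializable {
        private static final long serialVersionUID = 1L;

        private final String sku;
        private final int quantity;
        // Non-serializable references (a DataSource, an EJB proxy, ...) must not be
        // stored in session attributes, or must be marked transient and re-acquired
        // after a failover.

        public CartItem(String sku, int quantity) {
            this.sku = sku;
            this.quantity = quantity;
        }

        public String getSku() { return sku; }
        public int getQuantity() { return quantity; }
    }

    class SessionUsage {
        static void remember(HttpSession session, CartItem item) {
            // The attribute value's whole object graph gets serialized on replication.
            session.setAttribute("lastItem", item);
        }
    }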
Keep it simple: depending on your requirements, try configuring your load balancer to use sticky sessions and avoid holding state in your HTTP session. That way you don't need resource-hungry in-memory session replication.
Single Sign On isn't an issue for a single cluster as your HTTP clients will not be moving off the same http://server.acme.com/... host domain name.
Most of your testing should focus on database contention. If you have a highly transactional application (i.e. many writes to the same table), make sure you look at your database isolation levels so that locks are not held unnecessarily. The same goes for your transaction demarcation: keep transactions as brief as possible. If you don't have database skills yourself, make sure you get a database analyst to help you monitor the database while you test.
It is also good advice to raise a PMR with IBM Support ahead of any major change such as this one, or before upgrading to new versions. Raise it as a "Software Usage Question" and they can provide you with feedback from their knowledge base, based on other customers' input. The same applies to any product for which you have a support agreement: ask support before problems occur.

What's the best place for a database-backed, memory-resident global cache in an ASP.NET web server?

I have to cache an object hierarchy in-memory for performance reasons, which reflects a simple database table with columns (ObjectID, ParentObjectID, Timestamp) and view CurrentObjectHierarchy. I query the CurrentObjectHierarchy and use a hash table to cache the current parents of each object for quickly looking up the parent object ID, given any object ID. Querying the database table and constructing the cache is a 77ms operation on average, and ideally this refresh occurs only when a method in my database API is called that would change the hierarchy (adding/removing/reparenting an object).
Where is the best place for such a cache, if it must be accessed by multiple ASP.NET web applications, possibly running in different application pools?
Originally, I was storing the cache in a static variable in a C# DLL shared by the different web applications. The problem, of course, is that while static variables can be accessed across threads, they cannot be accessed across processes, which is a problem when multiple web apps are involved (possibly running in separate application pools). As a result, synchronized, thread-safe modifications to the object hierarchy cache in one application are not reflected in the other applications, even though they use the same code base.
So I need a more global location for this cache. I cannot use static variables (as I just explained), session state (which is basically a per-user store), or application state (which is scoped to a single application, whereas this cache needs to be accessible across applications).
Potential places I've been considering are:
Some kind of global object storage within IIS itself, accessible from any thread in any application in any application pool (if such a place exists. Does it?)
A separate, custom web service that manages an exclusive cache.
Right now, I think the BEST solution is SQL CLR integration, because:
I can keep my current design using static variables
It's a separate service that already exists, so I don't have to write a custom one
It will be running in a single process (SQL Server), so the existing lock-based synchronization will work fine
The cache would be sitting as close as possible to the data structures it represents!
I would embed the hierarchy-traversing methods in the SQL CLR DLL, so that I could make a single SQL call where I would normally make a regular method call. This all depends on SQL Server running in a single process and the CLR being loaded into that process, which I think is the case. What do you think of this? Can you see anything obviously wrong with this idea that I may be missing? Is this not an awesome idea?
EDIT:
After looking more closely, it seems that different ASP.NET applications actually run in the same process, but are isolated by AppDomains. If I could find a way to share and synchronize data across AppDomains, that would be very very useful. I'm reading about .NET Remoting now.
Microsoft is working on a distributed caching framework: Velocity. However, the latest release is a CTP3 version, so it may not be production ready...
