What is the actual benefit of restarting a container to update its configuration, instead of updating the configuration at runtime (e.g. Spring Boot can listen for ConfigMap changes, and Spring Cloud Config Server offers this as well)? I can see none, and I can see some drawbacks, such as having to reset TCP connections.
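To be concrete, the runtime-update approach I mean is roughly this (a minimal sketch only; it assumes Spring Cloud Context is on the classpath, e.g. via Spring Cloud Kubernetes for ConfigMap watching, and the property name is made up):

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RefreshScope // bean is rebuilt when a refresh event arrives (e.g. after a ConfigMap change)
public class GreetingController {

    // "greeting.message" is an illustrative property, not from the original post
    @Value("${greeting.message:hello}")
    private String message;

    @GetMapping("/greeting")
    public String greeting() {
        // Returns the current value without restarting the container.
        return message;
    }
}
```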
Unlike Spring Boot, other stacks such as Node.js, Go or Rust don't have as much startup overhead. The problem with Spring Boot is that it simply takes longer than other "modern" stacks to start, because it has to boot the JVM and Tomcat. Those two technologies were around well before Docker and Kubernetes were a thing, and honestly, that's the price you pay to run Spring Boot in containers.
And what's the benefit? If you're a single developer, probably none. If you work in a team and everybody tinkers with live ConfigMaps and environment variables, it can get hairy really quickly.
If you're using, for example, Terraform to manage your configuration, then everybody gets a clear overview of what is going on and which values are injected where.
I have a mobile app where the back-end is currently running as a NodeJS Cloud Function, but I'm nowhere near as comfortable with NodeJS as I am with Java. So I've re-written the API in Java; however, when it comes to deploying it as a Cloud Function or on Cloud Run, the cold-start performance is obviously not very good. I'm seeing around 15 seconds of cold-start time once I add in the dependencies that I need, which is not going to work. I do have a "warmup" endpoint that I call immediately when a user logs into the mobile app to kick off the initialization of the API back-end, which does help a little.
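The warm-up endpoint is nothing fancy; a minimal sketch of what I mean (the path and any dependencies touched inside it are just placeholders):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Sketch of the "warmup" idea: the mobile app calls this right after login so the
// instance (JVM, framework context, clients) is already initialized before the
// first real request arrives.
@RestController
public class WarmupController {

    @GetMapping("/warmup")
    public String warmup() {
        // Optionally touch expensive dependencies here (Firestore client, gRPC channel, ...)
        // so their lazy initialization is paid now rather than on a user-facing request.
        return "ok";
    }
}
```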
I've been playing around with GraalVM and generating a native image for a while as well, and while I can get a basic hello-world app and some slightly more elaborate examples working, my app has dependencies like gRPC and Cloud Firestore, among others, and I have not been successful in generating a native image for it with Micronaut, Quarkus, or Spring Boot.
I considered running on a managed instance group with a minimum of 1, so there's always at least one instance up and running, ready to serve requests, but I would then need a Cloud Load Balancer in front, and I've read some horror stories where the Cloud Load Balancer wound up costing folks a lot more than they had expected.
Is there a way to front a managed instance group using Cloud Endpoints? I see that you can do it with a single VM instance, but not across a group, which leads me to believe that in that case I would need a Cloud Load Balancer to do what I need.
Cost-effectiveness is important, because my app is super new and is not generating any revenue at all yet, and since it's just me funding it using personal money, my infrastructure budget is not super high :)
TL;DR: Looking for tips on the cheapest way to host a Java-based API app built with a framework like Micronaut, Quarkus, or Spring Boot on GCP while maintaining good performance and elasticity.
Any insight would be greatly appreciated.
I wrote an article on Java framework cold starts on Cloud Run (the results are outdated: after the article's release and discussions with Googlers, the team updated the Cloud Run platform and the way Java containers are managed, and they now start quickly!)
Anyway, your question seems relevant at first, but in the end it isn't really. I will explain why.
Firstly, the cold start is a temporary issue. Your first request is slow, and the dozens or hundreds after it are very fast. Is it really a problem?
If it is, the min-instances feature (only available on Cloud Run for Anthos for now) is coming to the managed version. With it you never really scale to zero: an instance is kept warm and responds instantly (but, as a trade-off, it won't be free).
Secondly, if you care about maintainability, I recommend the framework that you already know. You will be more efficient at improving your code, fixing your issues and saving your time (and time is money), which matters much more than infrastructure considerations!
All the Java frameworks are relatively close when optimized (a naive Spring Boot app on Cloud Run starts in 20s, and in 2s after packaging optimizations!). Of course, native compilation (with GraalVM) is the fastest, but it's not really stable yet and has several side effects (I wouldn't recommend it for production).
Personal opinion: I'm a big fan of Spring Boot and its ecosystem. But Micronaut, with its AOT compilation and annotations compatible with Spring Boot idioms, is absolutely awesome. Quarkus is more recent, and I don't have a real opinion on it (I've never used it in a production/real project).
I would say you should look more at Micronaut or Quarkus in combination with GraalVM if you're targeting performance. Define your services to run as functions/lambdas.
My experience is primarily with Micronaut serverless applications, and it is manageable to have an API service running as a function/lambda with a boot time of 100-500 ms. Cold starts are not a big issue anymore if you enable provisioned concurrency (available in AWS since 12/2019), so you can skip the so-called warming.
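For illustration, a bare-bones Micronaut AWS handler looks roughly like this (a sketch; the class name and plain-string payload types are placeholders, real handlers usually map API Gateway event types):

```java
import io.micronaut.function.aws.MicronautRequestHandler;

// Rough sketch of a Micronaut AWS Lambda handler (micronaut-function-aws).
public class PingHandler extends MicronautRequestHandler<String, String> {

    @Override
    public String execute(String input) {
        // A real handler would typically inject services and do actual work here.
        return "pong: " + input;
    }
}
```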
How to make your lambda faster?
Keep your package size as small as possible (remove big libraries where only a fraction is used); keep the package to a max of 20 MB. On every cold start this package is fetched and decompressed.
If you use a JVM technology for your services, try to migrate them to GraalVM, where the boot-up overhead is reduced to a minimum, for example:
Micronaut + GraalVM
Quarkus + GraalVM
Helidon + GraalVM
Use cloud infrastructure configs to reduce the cold starts.
This is what AWS provides; I'm not sure about GCP:
https://aws.amazon.com/about-aws/whats-new/2019/12/aws-lambda-announces-provisioned-concurrency/
Note: IMHO AWS currently has a better setup for serverless applications than GCP in terms of boot-up and cold starts.
I am wondering how we can automate testing of this functionality.
I am working on a Spring Boot micro-service where we use a GemFire cache. Right now I am testing it manually for the scenarios below:
Is the data purged correctly after TTL is reached
Retrieving the data from cache if object exists
I know we can have a separate service which calls GemFire and makes sure that the object exists in the cache (for step 2). But I'm not really sure how we can automate testing for step 1.
The whole point I am wondering about is: do we really need a completely new service (and its overhead) just to test this? Are there any tools or better approaches for testing this functionality?
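For what it's worth, the kind of test I'd like to automate would look roughly like this (a sketch only; it assumes a test profile where the region is configured with a very short TTL, and the region name, bean wiring and TTL value are made up):

```java
import static org.assertj.core.api.Assertions.assertThat;
import static org.awaitility.Awaitility.await;

import java.time.Duration;

import org.apache.geode.cache.Region;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

// Sketch of the two manual scenarios as one integration test, assuming a Region
// bean named "Customers" configured with a short entry TTL in the test profile.
@SpringBootTest
class CustomerCacheExpirationTests {

    @Autowired
    Region<String, String> customers;

    @Test
    void entryIsRetrievableWhileAliveAndPurgedAfterTtl() {
        customers.put("42", "Jane Doe");

        // Scenario 2: the object can be read back while it is still alive.
        assertThat(customers.get("42")).isEqualTo("Jane Doe");

        // Scenario 1: after the configured TTL (say, 2 seconds in the test profile)
        // the entry should have been purged.
        await().atMost(Duration.ofSeconds(10))
               .untilAsserted(() -> assertThat(customers.get("42")).isNull());
    }
}
```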
Since you're using Spring Boot and VMware GemFire together, I really hope you're taking advantage of the huge help and functionality spring-boot-data-gemfire provides out of the box. If you are, then you'd be delighted to know there's yet another project, spring-test-data-geode, which can be used to write unit and integration tests when building Spring Data for Apache Geode & VMware GemFire applications. You should really give it a try, as it greatly helps in managing the scope and lifecycle of mock VMware GemFire/Apache Geode objects, along with cleaning up all resources used by real objects during integration tests.
As a side note, if you're using the data expiration functionality shipped out of the box with VMware GemFire, I really don't see an actual need (other than the peace of mind that comes with "I've tested everything I could") to include custom tests in your suite: you should only test what you own. The functionality itself is already thoroughly tested as part of the VMware GemFire / Apache Geode project, and you can see some (certainly not all) examples of such tests in the following links: ExpirationDUnitTest, RegionExpirationDistributedTest, ReplicateEntryIdleExpirationDistributedTest.
Cheers.
I have had some success using Testcontainers; here is the code used to create the container and a sample test. It works by executing gfsh commands on the container, but it is slow.
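A rough sketch of what that looks like (the image name, ports and gfsh commands are assumptions and may need adjusting for your environment):

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

// Sketch of the Testcontainers approach described above: start a Geode/GemFire
// container and configure it by running gfsh inside it (this is the slow part).
public class GeodeContainerSupport {

    @SuppressWarnings("resource")
    public static GenericContainer<?> startGeode() throws Exception {
        GenericContainer<?> geode =
                new GenericContainer<>(DockerImageName.parse("apachegeode/geode")) // image name is an assumption
                        .withExposedPorts(10334, 40404)          // locator and cache server ports
                        .withCommand("tail", "-f", "/dev/null"); // keep the container alive; we drive it via gfsh

        geode.start();

        geode.execInContainer("gfsh",
                "-e", "start locator --name=locator1",
                "-e", "start server --name=server1",
                "-e", "create region --name=Customers --type=REPLICATE");

        return geode;
    }
}
```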
Let's say I have 25 Spring Boot micro-services, each of which runs with a 1 GB JVM in production. At any given time not all are in use, and there is never a moment when they are using the full 25 GB of memory at once. In reality many of them will sit idle 90% of the time, but any of them might at some point get called and require up to 1 GB of memory.
In my development environment I would like to run all of them at once but only have 8GB memory. I don't need great performance but I need them all to run at the same time for the entire app to work. I would like to try to run all the applications within a single JVM with 6GB dedicated memory. That should be enough at any given time.
This seems like it would be a common issue as many companies are converting to cloud/microservices. 10 years ago we would have had one monolithic app with a single JVM (easy to run in a dev environment). Now we have dozens of small apps which might not need a ton of memory, but each runs in its own JVM, so each has a good amount of overhead. This actually makes development more complex rather than simplifying it. So I'm trying to find a solution for our developers where they can run everything without killing the memory on their machines.
The Spring Boot apps need to run without modification, aside from maybe local profiles. Otherwise developers would have to make tons of changes every time they pull the code from git.
Each project needs to be able to configure a different port for Tomcat (an application-local.properties setting).
Each project needs its own classpath entries (for instance, one might use version 1.0 of a jar and another version 2.0; without separate classpaths one or the other would break).
I have been trying to follow this post, but it's not 100% what I want. I feel like a proper solution should respect the application.properties / application-local.properties files and use the port set inside each project, rather than hardcoding any configuration outside the project. Essentially, that post starts a separate thread for each microservice and attaches a separate classloader to each thread, then calls SpringApplication.run, passing in the class name that would normally be used to start the microservice. I think this may be ignoring the auto-configuration properties.
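To make the idea concrete, this is roughly the pattern that post describes (jar paths and class names are placeholders; note that Spring Boot fat jars need extra handling, because nested jars aren't visible to a plain URLClassLoader):

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Paths;

// Sketch: one thread per service, each with its own URLClassLoader over that
// service's classes, invoking the service's own main() so its own
// application-local.properties (port etc.) is honored.
public class MultiServiceLauncher {

    public static void main(String[] args) throws Exception {
        launch("/path/to/service-a.jar", "com.example.servicea.ServiceAApplication");
        launch("/path/to/service-b.jar", "com.example.serviceb.ServiceBApplication");
    }

    private static void launch(String jarPath, String mainClass) throws Exception {
        URL[] classpath = { Paths.get(jarPath).toUri().toURL() };
        // Parent is the platform classloader, so the services don't share the launcher's classpath.
        URLClassLoader loader =
                new URLClassLoader(classpath, ClassLoader.getSystemClassLoader().getParent());

        Thread thread = new Thread(() -> {
            try {
                Class<?> main = Class.forName(mainClass, true, loader);
                // Calling each service's own main() (instead of SpringApplication.run from
                // the launcher) keeps its auto-configuration and property loading intact.
                main.getMethod("main", String[].class)
                    .invoke(null, (Object) new String[0]);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }, mainClass);
        thread.setContextClassLoader(loader);
        thread.start();
    }
}
```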
Any help would be greatly appreciated!
You can manage how many resources your applications consume with Docker. One Spring Boot application should be one Docker container. You can change at runtime how many resources (in your case, memory) a container uses; take a look at this article on how to change resource allocation in Docker at runtime. Also, with Kubernetes it is possible to define the minimum and maximum resources that your application needs.
In my latest project I used Spring Boot, and as I prepare to deploy to the production environment, I want to know which way of running the application has better performance, or whether they perform the same:
generate a war package and put it in a standalone Tomcat
generate a jar package and use the embedded Tomcat
In addition, when publishing to the production environment, should the devtools dependency be removed?
This is a broad question. The answer is it depends on your requirements.
Personally, I prefer standalone applications with Spring Boot today. One app, one JVM. It gives you more flexibility and reliability in regard to deployments and runtime behaviour. Spring Boot 1.3.0.RELEASE comes with init scripts which allows you to run your Spring Boot application as a daemon on a Linux server. For instance, you can integrate rpm-maven-plugin into your build pipeline in order to package and publish your application as a RPM for deployment or you can dockerize your application easily.
With a classic deployment into a servlet container like Tomcat you will be facing various memory leaks after redeployment for example with logging frameworks, badly managed thread local objects, JDBC drivers and a lot more.
Either you spend time fixing all of those memory leaks inside your application and the frameworks you use, or you just restart the servlet container after a deployment. Running your application as a standalone version, you don't care about those memory leaks, because you are forced to restart anyway in order to bring your new version up.
In the past, several webapps ran inside one servlet container. This could lead to performance degradation for all webapps, because every webapp has its own memory, CPU and GC characteristics, which may interfere with each other. Furthermore, resources like thread pools were shared among all webapps.
In fact, a standalone application is not safe from performance degradation due to high load on the server, but it does not interfere with others with respect to memory utilization or GC. Keep in mind that performance or GC tuning is much simpler if you can focus on the characteristics of just one application. It gets complicated as soon as you need to find a common denominator for several webapps in one servlet container.
In the end, your decision may depend on your work environment. If you are building an application in a corporation where software is running and maintained by operations, it is more likely that you are forced to build a war. If you have the freedom to choose your deployment target, then I recommend a standalone application.
In order to remove devtools from a production build, you can set the excludeDevtools build property to completely remove the JAR. The property is supported by both the Maven and Gradle plugins.
See Spring Boot documentation.
Lately I have started to learn Java EE and related technologies, and there are some concepts that confuse me. Somewhere I read that whenever one builds a Java EE application, it is more or less mandatory to use a container.
Currently, I am learning the Spring framework and trying to build a small application with it to get hands-on experience. I am not sure whether it is mandatory for me to use a container (say, Tomcat), or whether it depends on the application I am building.
If it depends on the application that one is building, then what are the factors which help to decide whether a container should be used or not?
Puuhhh, this is a very big question and there is no simple answer. But I will do my best to explain my own opinion at least:
What are containers?
Containers provide functionality to you. Such functionality can be handling web requests and dispatching them to servlets; in this case we call them servlet containers (e.g. Tomcat or Jetty).
But containers can also provide other things, e.g. they can provide user authentication, logging or the connection to a database. Most containers do several of these things (e.g. Tomcat does all of the ones I mentioned). Some containers do more than others, e.g. JBoss can do much more than Tomcat.
Trade Off
However, there is a trade-off: if you use a simple container (like Tomcat), you need to do a lot of things on your own or with other frameworks (like Spring). But if you use a powerful container, you must know the container very well, and chances are high that your application will depend on this concrete container sooner or later.
The point is that using a container is not mandatory. It is a decision. Some people will argue for it, others against it. But depending on the books you read, this decision has already been made (e.g. J2EE needs a J2EE container; that's just how it works).
The trend (IMHO)
Years ago the trend was to use big and powerful (J2EE) containers which provide as much as possible. IMHO the trend today is to use smaller and more lightweight solutions. Most developers today would prefer a Tomcat server over a JBoss server.
Frameworks without containers
While J2EE needs a container, there are other frameworks/technologies which support the development of web applications without any external container, such as Play! or Spark Java.
Note
If you are not familiar with containers and Spring, take care not to get confused. Most applications you will develop with Spring are web applications that will be deployed to a servlet container. This is very common. But Spring doesn't rely on that: you can also use Spring without such a container, e.g. to develop a desktop application. If you want to develop a web application, though, the Java way is to use a servlet container.
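As a small illustration of "Spring without a servlet container", a Spring Boot application can be told not to start any embedded Tomcat/Jetty at all (a sketch, using a made-up class name):

```java
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;

// Sketch of running Spring Boot as a non-web application (e.g. a batch or
// desktop-style app): no servlet container is started at all.
@SpringBootApplication
public class NonWebApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(NonWebApplication.class)
                .web(WebApplicationType.NONE) // no embedded servlet container
                .run(args);
    }
}
```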
If your application is using servlets, you'll need a container to handle the requests. Tomcat is a very popular choice.
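A tiny example to make that concrete (a sketch using the older javax.servlet API; newer containers use jakarta.servlet instead):

```java
import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal servlet sketch: this class cannot listen on a port by itself; a servlet
// container such as Tomcat receives the HTTP request and dispatches it here.
@WebServlet("/hello")
public class HelloServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello from inside the container");
    }
}
```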
I'll anticipate your next topic to cover with this discussion of "application server" versus "container."
There are two kinds of containers. One is the web container (e.g. IIS, Apache) that runs web applications, and the other is the "application container" that runs enterprise applications.
Web Applications = Apps developed using HTML, XML, CSS and JSPs
Enterprise Applications = Apps developed using Java, J2EE and Servlets in addition to HTML and XML.