What is the most cost-effective way to run a Java based API back-end (Spring Boot, Micronaut, Quarkus) on GCP? - spring-boot

I have a mobile app where the back-end currently runs as a Node.js Cloud Function, but I'm nowhere near as comfortable with Node.js as I am with Java. So I've re-written the API in Java - however, when I deploy that as a Cloud Function or on Cloud Run, the cold-start performance is obviously not very good. I'm seeing around 15 seconds of cold-start time once I add the dependencies I need, which is not going to work. I do have a "warmup" endpoint that I call as soon as a user logs into the mobile app to kick off initialization of the API back-end, which does help a little.
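For context, the warmup endpoint is nothing fancy - roughly a no-op controller like this simplified sketch (not my exact code):

    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class WarmupController {

        // Hitting this endpoint right after login forces the application context
        // (and any lazily created clients) to initialize before real traffic arrives.
        @GetMapping("/warmup")
        public String warmup() {
            return "ok";
        }
    }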
I've also been playing around with GraalVM and generating a native image for a while, and while I can get a basic hello-world app and some slightly more elaborate examples working, my app has dependencies like gRPC and Cloud Firestore, among others, and I have not been successful in generating a native image for it with Micronaut, Quarkus, or Spring Boot.
I considered running on a managed instance group with a minimum size of 1 so there's always at least one instance up and ready to serve requests, but I would then need a Cloud Load Balancer in front, and I've read some horror stories where the Cloud Load Balancer wound up costing folks a lot more than they had expected.
Is there a way to front a managed instance group with Cloud Endpoints? I can see how to do it with a single VM instance, but not across a group, which leads me to believe that in that case I would need a Cloud Load Balancer to do what I need.
Cost-effectiveness is important, because my app is super new and is not generating any revenue at all yet, and since it's just me funding it using personal money, my infrastructure budget is not super high :)
TL;DR: Looking for tips on the cheapest way to host a Java-based API app built on a framework like Micronaut, Quarkus, or Spring Boot on GCP while maintaining good performance and elasticity.
Any insight would be greatly appreciated.

I wrote an article on Java framework cold starts on Cloud Run (the results are outdated: after the article was released, and after discussions with Googlers, the team updated the Cloud Run platform and the way it manages Java containers. They now start quickly!)
Anyway, your question seems relevant at first glance, but in the end it is less so. I will explain why.
Firstly, the cold start is a temporary issue. Your first request is slow, and the dozens or hundreds after it are very fast. Is it really a problem?
If so, the min-instances feature (only available on Cloud Run for Anthos for now) is coming to the managed version. With it you never truly scale to zero: an instance is kept warm and responds instantly (but, as a trade-off, it won't be free).
Secondly, if you care about maintainability, I recommend the framework that you already know. You will be more efficient at improving your code, fixing your issues, and saving your time (and time is money), and that matters far more than any infrastructure consideration!
All the Java frameworks are relatively close once optimized (a naive Spring Boot app on Cloud Run starts in 20s, and in 2s after packaging optimizations!). Of course, native compilation (with GraalVM) is the fastest, but it is not really stable yet and has several side effects (I would not recommend it for production).
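(The packaging optimizations from the article aren't reproduced here, but as one illustration of this kind of startup tuning, Spring Boot can be told to initialize beans lazily - a sketch, not necessarily what the article measured:)

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;

    @SpringBootApplication
    public class Application {

        public static void main(String[] args) {
            SpringApplication app = new SpringApplication(Application.class);
            // Create beans lazily so startup does less work; equivalent to setting
            // spring.main.lazy-initialization=true (Spring Boot 2.2+).
            app.setLazyInitialization(true);
            app.run(args);
        }
    }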
Personal opinion: I'm a big fan of Spring Boot and its ecosystem. But Micronaut, with its AOT compilation and annotations compatible with Spring Boot idioms, is absolutely awesome. Quarkus is more recent, and I have no real opinion on it (I have never used it in production or on a real project).

I would say you want Micronaut or Quarkus in combination with GraalVM if you are targeting performance. Define your services to run as functions/lambdas.
My experience is primarily with Micronaut serverless applications, and it is manageable to have an API service running as a function/lambda with a boot time of 100-500 ms. Cold starts are not a big issue anymore if you enable provisioned concurrency (available in AWS since December 2019), so you can skip the so-called warming.
How do you make your lambda faster?
Keep your package size as small as possible (remove all large libraries where only a fraction is used) - keep the package size to 20 MB max. On every cold start this package is fetched and decompressed.
If you use a JVM technology for your services, try migrating them to GraalVM, where the boot-up overhead is reduced to a minimum:
Micronaut + GraalVM
Quarkus + GraalVM
Helidon + GraalVM
Use cloud infrastructure configuration to reduce cold starts.
This is what AWS provides; I'm not sure about GCP:
https://aws.amazon.com/about-aws/whats-new/2019/12/aws-lambda-announces-provisioned-concurrency/
Note: IMHO, AWS has a better setup for serverless applications so far compared to GCP in terms of boot-up and cold starts.
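To make the package-size point above concrete, a bare-bones Java handler with only the aws-lambda-java-core dependency looks roughly like this (an illustrative sketch, not a Micronaut example):

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;

    import java.util.Map;

    // A framework-free handler keeps the deployment package tiny, which is exactly
    // what helps cold-start times; frameworks like Micronaut generate comparable
    // handlers ahead of time instead of wiring everything up at runtime.
    public class PingHandler implements RequestHandler<Map<String, Object>, String> {

        @Override
        public String handleRequest(Map<String, Object> input, Context context) {
            return "pong";
        }
    }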

Related

Automate testing of caching functionality in a Spring Boot application

I am wondering how we can automate testing of this functionality.
I am working on a Spring Boot micro-service where we use a GemFire cache. Right now I am testing it manually for the scenarios below:
Is the data purged correctly after TTL is reached
Retrieving the data from cache if object exists
So, I know we can have a separate service which calls GemFire and makes sure that the object exists in the cache (for step 2). But I'm not really sure how we can automate testing for step 1.
And the main thing I am wondering is: do we really need a completely new service, as overhead, just to test this? Are there any tools or a better approach for testing this functionality?
Since you're using Spring Boot and VMware GemFire together, I really hope you're taking advantage of the huge help and functionality spring-boot-data-gemfire provides out of the box. If you are, then you'd be delighted to know that there's yet another project, spring-test-data-geode, which can be used to write unit and integration tests when building Spring Data for Apache Geode & VMware GemFire applications. You should really give it a try, as it greatly helps in managing the scope and lifecycle of mock VMware GemFire/Apache Geode objects, along with cleaning up all resources used by real objects during integration tests.
As a side note, if you're using the data expiration functionality shipped out of the box with VMware GemFire, I really don't see an actual need (other than the peace of mind that comes with "I've tested everything I could") to include such tests in your own test suite; you should only test what you own. The functionality itself is already thoroughly tested as part of the VMware GemFire / Apache Geode project, and you can see some (certainly not all) examples of such tests in the following links: ExpirationDUnitTest, RegionExpirationDistributedTest, ReplicateEntryIdleExpirationDistributedTest.
Cheers.
I have had some success using Testcontainers; here is the code used to create the container, along with a sample test. It works by executing gfsh commands on the container, but it is slow.
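Since the linked code isn't reproduced here, a rough sketch of that approach (the image name, ports, and gfsh commands are assumptions, not the exact code from the sample):

    import org.testcontainers.containers.GenericContainer;
    import org.testcontainers.utility.DockerImageName;

    public class GeodeContainerSketch {

        public static void main(String[] args) throws Exception {
            // Assumed image and startup command; the exact details depend on
            // which Geode/GemFire image you pick.
            try (GenericContainer<?> geode =
                         new GenericContainer<>(DockerImageName.parse("apachegeode/geode"))
                                 .withExposedPorts(10334, 40404) // locator / cache-server defaults
                                 .withCommand("sh", "-c",
                                         "gfsh start locator --name=locator && tail -f /dev/null")) {
                geode.start();

                // Drive the cluster with gfsh inside the container, as described above.
                geode.execInContainer("gfsh",
                        "-e", "connect --locator=localhost[10334]",
                        "-e", "start server --name=server1",
                        "-e", "create region --name=example --type=REPLICATE --entry-time-to-live-expiration=5");

                // A real test would now put an entry, wait past the TTL, and assert that it expired.
            }
        }
    }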

Benefits of fully immutable containers

What is the actual benefit of restarting a container when updating its configuration, instead of updating the configuration at runtime (e.g. Spring Boot supports listening to ConfigMap changes, and Spring Cloud Config Server has this feature)? I can actually see none, and I can see some drawbacks, such as the need to reset TCP connections.
Unlike Spring Boot, other stacks such as Node.js, Go, or Rust don't have as big an overhead when booting up. The problem with Spring Boot is that it simply takes longer to start than most other "modern" stacks because it has to boot the JVM and Tomcat. Those two technologies were around well before Docker and Kubernetes were a thing, and honestly, that's the price you pay to run Spring Boot in containers.
And what's the benefit? If you're a single developer, probably none. If you work in a team and everybody tinkers with live ConfigMaps and environment variables, it can get hairy really quickly.
Assuming you're using, for example, Terraform to manage your configuration, everybody gets a nice overview of what is going on and which values are injected where.

How do I run a single JVM with multiple spring boot applications?

Let's say I have 25 Spring Boot micro-services, each of which starts with a 1GB JVM in production. At any given time not all of them are in use, and there is never a moment when they use the full 25GB of memory at once. In reality many of them sit idle 90% of the time, but any of them might at some point get called and require up to 1GB of memory.
In my development environment I would like to run all of them at once, but I only have 8GB of memory. I don't need great performance, but I need them all to run at the same time for the entire app to work. I would like to try running all the applications within a single JVM with 6GB of dedicated memory. That should be enough at any given time.
This seems like it would be a common issue as many companies convert to cloud/microservices. Ten years ago we would have had one monolithic app with a single JVM (easy to run in a dev environment). Now we have dozens of small apps which might not need a ton of memory, but they each run in their own JVM, so each has a good amount of overhead. This actually makes development more complex rather than simpler. So I'm trying to find a solution for our developers where they can run everything without killing the memory on their machines.
The Spring Boot apps need to run without modification, aside from maybe local profiles. Otherwise developers would have to make tons of changes every time they pull the code from git.
Each project needs to be able to configure a different port for Tomcat (an application-local.properties setting).
Each project needs its own classpath entries (for instance, one might use version 1.0 of a jar and another might use version 2.0, and without separate classpaths one or the other would break).
I have been trying to follow this post, but it's not 100% what I want. I feel like a proper solution should respect the application.properties / application-local.properties files and use the port set inside each project rather than hardcoding any configuration outside the project. Essentially, that post starts a separate thread for each microservice, attaches a separate classloader to each thread, and then calls SpringApplication.run, passing in the class name that would normally be used to start the microservice. I think this may ignore the auto-configuration properties.
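For reference, the approach from that post boils down to roughly the following (the package names and classpath locations are made up for illustration):

    import org.springframework.boot.SpringApplication;

    import java.net.URL;
    import java.net.URLClassLoader;

    public class MultiAppLauncher {

        public static void main(String[] args) throws Exception {
            // Illustrative per-service classpaths; in reality each service's jars/classes differ.
            launch("com.example.orders.OrdersApplication",
                    new URL[] { new URL("file:services/orders/target/classes/") });
            launch("com.example.billing.BillingApplication",
                    new URL[] { new URL("file:services/billing/target/classes/") });
        }

        private static void launch(String mainClassName, URL[] classpath) {
            Thread thread = new Thread(() -> {
                try {
                    // Each service gets its own classloader so conflicting dependency versions don't clash.
                    URLClassLoader loader = new URLClassLoader(classpath, MultiAppLauncher.class.getClassLoader());
                    Thread.currentThread().setContextClassLoader(loader);
                    Class<?> mainClass = loader.loadClass(mainClassName);
                    // Boots the service; server.port still has to differ per app,
                    // e.g. via each project's application-local.properties.
                    SpringApplication.run(mainClass, "--spring.profiles.active=local");
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }, mainClassName);
            thread.start();
        }
    }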
Any help would be greatly appreciated!
You can manage how many resources your applications consume with Docker. One Spring Boot application should be one Docker container. You can change at runtime how many resources (in your case, memory) a container uses. Take a look at this article on how to change resource allocation in Docker at runtime. Also, with Kubernetes it is possible to define the minimum and maximum resources that your application needs.

SpringBoot with Jetty Vs Core Java with OSGI Jetty

My project has a requirement to deploy a Java-based application as an operating system job (and not use any container). The application needs to have the following capabilities:
Scheduling
Few HTTPS based services
Ability to make JMX calls
Storage: data for the last 5 to 10 minutes of transactions (not more than 600 rows x 20 columns). Something like embedded H2 or an in-memory option
Decision tree: something like Drools
My manager wants to write this application in core Java with an OSGi-ized version of Jetty. I am suggesting Spring Boot with embedded Jetty (which gives me ready-to-use capabilities for scheduling, JMX integration, and REST services).
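What I have in mind with Spring Boot is roughly the following - scheduling and an HTTP endpoint come almost for free (a minimal sketch, assuming the spring-boot-starter-web and spring-boot-starter-jetty dependencies):

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.scheduling.annotation.EnableScheduling;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @SpringBootApplication
    @EnableScheduling
    @RestController
    public class MonitoringJobApplication {

        public static void main(String[] args) {
            SpringApplication.run(MonitoringJobApplication.class, args);
        }

        // Scheduling requirement: runs every minute without any external scheduler.
        @Scheduled(fixedRate = 60_000)
        public void pollTransactions() {
            // poll / process the last few minutes of transactions here
        }

        // HTTPS-based service requirement (TLS itself is configured via server.ssl.* properties).
        @GetMapping("/status")
        public String status() {
            return "UP";
        }
    }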
His leaning towards core Java stems from the requirement that this application needs to be extremely efficient, fast, and self-contained. He wants to reduce dependencies on open source. I have never worked directly with OSGi, but I have used products built on it - like Eclipse.
Can somebody explain how OSGi-based development might be beneficial compared to Spring Boot?
For many people, OSGi is superfluous because they don't see the value in being modular, or don't think it is worth the trouble.
Think about the application lifecycle, which is more or less plan-develop-test-deploy.
How many developers do you have? If many, OSGi helps a lot, because being modular makes the boundaries very clear. You can delegate things very easily.
If outsourcing is your thing, you can just hand over the module APIs and tell them to develop against those. They will never know how the rest was implemented - no fear of secrets being leaked.
Unit tests are so easy. You can clearly see what you can test; everything else you mock/stub/spy/fake. Unit tests can be reused in integration tests - of course that isn't news, but the trick is running unit tests outside the OSGi container and integration tests inside it. So if you decide OSGi was not worth it, your code still works fine (the unit tests being the proof).
You can make your app a collection of modules, with every module having independent versioning and source repositories. That makes it easier to handle and find bugs. For example: the current app crashes, you find out that sub-module-1.2 is throwing errors, you try version sub-module-1.1 (still bad), then version 1.0 (good) - so the bug was introduced in 1.1 (this avoids bisecting the source code). Programmers don't need to be perfectly synchronized with each other if they are working in different modules.
How do you plan to update the app? Most frameworks take an all-or-nothing approach, where you have to stop the world, update, then restart the app. If you make things modular, you only need to update that one thing, making the downtime very small, and sometimes even zero.
Suppose you need to make a big change in your app but can't afford to refactor everything right now. With OSGi you can run the system with both my-module-1.0 and my-module-2.0. You can even adapt my-module-1.0 to redirect calls to my-module-2.0, but that is a kind of last-resort hack (just saying that you can, if you want to).
I can do everything you say without OSGi, right? Well, you probably can, but in the end it would look a lot like OSGi.
I love the dependency injection in my framework. No problem, OSGi has something like that.
I hate dependency injection; it kills my app's performance. No problem, you can use something like osgi.getService(MyService.class). The OSGi container isn't in the business of intercepting every call in your app.
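In raw OSGi terms, that lookup is just the BundleContext service registry - a minimal sketch (MyService here is a placeholder for whatever service interface you register):

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceReference;

    public class MyActivator implements BundleActivator {

        @Override
        public void start(BundleContext context) {
            // Look up a service published by another bundle; no proxies, no interception.
            ServiceReference<MyService> reference = context.getServiceReference(MyService.class);
            if (reference != null) {
                MyService service = context.getService(reference);
                service.doWork();
                context.ungetService(reference); // release it when done
            }
        }

        @Override
        public void stop(BundleContext context) {
            // nothing to clean up in this sketch
        }

        // Placeholder service interface for the example.
        interface MyService {
            void doWork();
        }
    }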
OSGi is like Java++, Java plus modules.
You can mix Spring Boot with OSGi; I can't say whether that is good or bad. There are many libraries and frameworks that fit your list, and many will work out of the box with OSGi.

Just how scalable is Grails?

I'm looking to make a website that will probably get some heavy, repetitive traffic. Is Grails up to the task?
I agree with lael. Also, because it's built on Java technologies, there are a lot of proven clustering and 'enterprisey' tools available which allow you to easily scale across multiple application servers.
The cloud tools around Grails are also becoming very good and make deploying to a cloud like EC2 very easy. I've recently been using Cloud Foundry and found it very good.
As the first poster points out, however, you can write a badly performing application in any framework/language. One thing I'd recommend is getting a good understanding of Hibernate, which is the underlying persistence library. If you understand how it works, it should help you avoid silly mistakes at the DB level. On this side of things, a tool like p6spy is great for checking what the database is up to during normal use. It should help you spot any repetitive queries.
The scalability of your web application won't really depend on what language/framework you choose to use, but rather how your application is built. You can build a scalable web application in Grails, just as you can build an incredibly slow application in C++. If Grails is the framework you would like to use, then use it; you can always rewrite the slow parts in Java or another fast language, if need be. (After all, that's what Twitter did with Scala.)
Disclaimer: I've never actually used Grails.
Grails is essentially a thin layer on top of the Spring Framework, which many consider to be a very scalable framework in the enterprise world. Spring + Hibernate has become a standard in many Java shops around the globe.
If you run into performance bottlenecks in Groovy, you can always rewrite those parts in Java.
Take a look at the Success Stories for examples of sites that were written in Grails. The Testimonials are also a good place to look for examples. You will use a little more memory (heap and permgen) than a vanilla Java app, but you can tune it just like you would any other Java application.
On the low end you aren't going to find the $3/month hosting options that you could with a PHP stack, for example. That said, there are some good caching solutions for Grails apps: EhCache, Memcached, etc. Beyond that you can also set up an Apache layer to cache static resources or whatever you need.
Don't mean to pile on here. You've already got some great answers, but I just want to add one thing that I was reminded of recently. Scalability depends not only on the software you write (regardless of language/framework) but also on the deployment environment. A very well-written application deployed on an undersized or poorly configured server will not scale at all. If you do use Grails or any other Java-based framework, the default settings of your container (Tomcat, JBoss, etc.) will probably not be what you need.
Just something to keep in mind,
Dave
Grails runs on the JVM. Simply put, you will not find a more scalable, solid, and robust runtime platform than the JVM anywhere. That's Grails's big advantage over, say, PHP or RoR.
