I have used JHipster to develop quite a few applications, but today I ran into a little problem: how to configure a transaction-routing DataSource in JHipster. I am using Kubegres to set up a PostgreSQL cluster with multiple read replicas and a single write (primary) instance. In another project I configured Spring's transaction-routing DataSource and got it working, but when I try to do the same configuration in a JHipster project, the application runs without problems yet read-only transactions are never routed to the read replicas. Has anyone had the same problem and managed to get it to work?
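For reference, a minimal sketch of that routing setup in plain Spring (not JHipster's generated code; bean names and lookup keys are illustrative) looks roughly like this. The LazyConnectionDataSourceProxy wrapper is the part that usually matters for the symptom described above:

import java.util.Map;
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import org.springframework.transaction.support.TransactionSynchronizationManager;

@Configuration
public class RoutingDataSourceConfiguration {

    // Routes each connection request by the read-only flag of the current transaction.
    static class ReadWriteRoutingDataSource extends AbstractRoutingDataSource {
        @Override
        protected Object determineCurrentLookupKey() {
            return TransactionSynchronizationManager.isCurrentTransactionReadOnly()
                    ? "replica" : "primary";
        }
    }

    // "writeDataSource" and "readDataSource" are assumed to be pools pointing at the
    // Kubegres primary and replica services; they are not defined here.
    @Bean
    @Primary
    public DataSource dataSource(DataSource writeDataSource, DataSource readDataSource) {
        ReadWriteRoutingDataSource routing = new ReadWriteRoutingDataSource();
        routing.setTargetDataSources(Map.of("primary", writeDataSource,
                                            "replica", readDataSource));
        routing.setDefaultTargetDataSource(writeDataSource);
        routing.afterPropertiesSet();
        // Defer borrowing the physical connection until it is actually used, i.e. after
        // the transaction's read-only flag has been set; without this wrapper the lookup
        // key is resolved too early and @Transactional(readOnly = true) calls keep
        // hitting the primary, which matches the symptom described above.
        return new LazyConnectionDataSourceProxy(routing);
    }
}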
I wanted to run my Grails application (version 4.0.4) in a cluster. I tried to apply Hazelcast to replicate the HTTP session across the nodes/instances but somehow I couldn’t override/replace the SessionRepository bean that Grails uses with the Hazelcast implementation.
My working configuration in Spring Boot is: I declare the HazelcastInstance bean and annotate the application with @EnableHazelcastHttpSession, which in turn registers the Hazelcast-backed SessionRepository.
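Roughly, that configuration looks like this (a simplified sketch, not the exact project code; spring-session-hazelcast normally also wants the session map configured on the Hazelcast instance, which is omitted here):

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.session.hazelcast.config.annotation.web.http.EnableHazelcastHttpSession;

@Configuration
@EnableHazelcastHttpSession   // switches the SessionRepository to the Hazelcast-backed one
public class HttpSessionConfig {

    // Embedded Hazelcast member; cluster/network settings are omitted for brevity.
    @Bean
    public HazelcastInstance hazelcastInstance() {
        return Hazelcast.newHazelcastInstance(new Config());
    }
}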
But I couldn't make this configuration work in Grails and override the SessionRepository. (Although the app starts, it behaves very strangely.)
Any ideas?
Or would you suggest an alternative approach to implementing a distributed session in Grails? How have you replicated sessions in your past experience?
(P.S. The reason I chose Hazelcast is that, as a distributed cache that can be embedded in the application itself, it lets me avoid a dependency on an external service such as Redis to run the app. That is part of the requirement.)
Thank you.
I am using Spring Cloud Config Server to refresh my application properties at runtime on a scheduled basis in our production environment. The schedule runs biweekly without any issues.
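For context, a scheduled refresh like this typically boils down to a @Scheduled job calling Spring Cloud's ContextRefresher (a sketch, not the actual project code; the cron expression is illustrative and @EnableScheduling is assumed to be enabled):

import org.springframework.cloud.context.refresh.ContextRefresher;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ScheduledConfigRefresher {

    private final ContextRefresher contextRefresher;

    public ScheduledConfigRefresher(ContextRefresher contextRefresher) {
        this.contextRefresher = contextRefresher;
    }

    // Re-fetches properties from the Config Server and rebinds @RefreshScope /
    // @ConfigurationProperties beans; runs at 02:00 on the 1st and 15th of each month.
    @Scheduled(cron = "0 0 2 1,15 * *")
    public void refresh() {
        contextRefresher.refresh();
    }
}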
My application runs on Kubernetes across multiple pods, and pods can crash or restart at any moment. When a pod crashes or restarts, it fetches the latest properties from the Config Server (and its backing repository) at application startup rather than waiting for the next scheduled refresh cycle.
This leads to inconsistencies in configuration and application behavior across the pods.
What I am looking for is a strategy to avoid the property refresh at application startup, so that the Spring Cloud Config client only refreshes on the scheduled refresh cycle.
Any suggestions to solve the above would be greatly appreciated.
You want to keep using the old properties even when your application restarts, so the old property values have to be kept somewhere. You cannot do that in your application, because the property values come from the Config Server, so it is better to also set a refresh rate on the Config Server, i.e. how frequently it pulls configuration from Git (or whatever your backend is); for a Git backend that is the spring.cloud.config.server.git.refreshRate property.
If you set the Config Server's refresh rate to two weeks, it will keep serving the old property values, and no matter how often your application restarts, it will get the old properties from the Config Server.
We have an application that uses several data sources. A DB underlying one of those data sources is down at the moment: we get "IOError: Network adapter couldn't establish the connection" and "Socket read timed out".
Is there an annotation (or other means) of configuring Spring Boot so that it bypasses the culprit data source and still starts up? That DB is not essential for the current development work. spring.datasource.continue-on-error=true doesn't seem to work. This is Spring Boot 2.2.2.RELEASE.
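For what it's worth, one way to keep a non-essential pool from aborting startup is to disable HikariCP's fail-fast initialization check when building that particular DataSource; a sketch (the property prefix and bean name are hypothetical):

import com.zaxxer.hikari.HikariDataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ReportingDataSourceConfig {

    @Bean
    @ConfigurationProperties("app.datasource.reporting")   // hypothetical prefix for URL/credentials
    public HikariDataSource reportingDataSource() {
        HikariDataSource ds = new HikariDataSource();
        // -1 skips the initial connection attempt when the pool is created, so a database
        // that is down no longer prevents the application from starting; connections are
        // only attempted when this DataSource is first used.
        ds.setInitializationFailTimeout(-1);
        return ds;
    }
}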
Use multiple data sources so that when one fails at startup your app still works; I mean falling back to an in-memory DB / SQLite to handle the failure on a connection error...
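A sketch of that fallback idea (the JDBC URL, credentials and bean name are made up; H2 is used here as the in-memory database, and both H2 and spring-jdbc are assumed to be on the classpath):

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

import com.zaxxer.hikari.HikariDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

@Configuration
public class FallbackDataSourceConfig {

    @Bean
    public DataSource reportingDataSource() {
        HikariDataSource primary = new HikariDataSource();
        primary.setJdbcUrl("jdbc:postgresql://reporting-db:5432/reports"); // hypothetical
        primary.setUsername("reports");
        primary.setPassword("secret");
        try (Connection ignored = primary.getConnection()) {
            return primary;   // the database is reachable: use it
        } catch (SQLException e) {
            primary.close();
            // otherwise start with an empty in-memory H2 database so the app still boots
            return new EmbeddedDatabaseBuilder()
                    .setType(EmbeddedDatabaseType.H2)
                    .build();
        }
    }
}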
I am creating a Spring Boot microservice project with IntelliJ IDEA.
Currently I have developed three separate Spring Boot REST services: a customer service, a vehicle service, and a Spring Cloud Config server. The config server points to a GitHub repository.
The issue is that these projects sometimes take more than 10 minutes to start, and sometimes don't start at all and give the error message "failed to check application readystate intellij attached provider for the vm is not found". I have no idea why this happens.
There are two possible causes:
1. IntelliJ IDEA and the Spring application are running in different JVMs.
There is an IntelliJ IDEA bug report about this:
https://youtrack.jetbrains.com/issue/IDEA-210665
Here is a short summary:
By default, IntelliJ IDEA uses a local JMX connector to retrieve the Spring Boot actuator endpoints' data. However, it may be impossible to obtain the local JMX connector address via the attach API if the Spring Boot application and IntelliJ IDEA are run by different JVMs. In this case, add the following lines to the VM options of your Spring Boot run configuration:
-Dcom.sun.management.jmxremote.port={some_port}
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
As mentioned in the official Oracle documentation, this configuration is insecure. Any remote user who knows (or guesses) your port number and host name will be able to monitor and control your Java applications and platform.
2. Prolonged time to retrieve local hostname
You can check that time using inetTester. Normally it should take only a few milliseconds to complete. If it takes a long time, you can add the hostname returned by inetTester to the /etc/hosts file like this:
127.0.0.1 localhost winsky
::1 localhost winsky
Our prod environment architecture has been decided to be like this:
2 machines, each of which has 2 Tomcat instances (on VMs). A Spring web app with Hibernate runs on Tomcat.
There are also 2 DB instances, distributed across the two machines.
So we think that Hazelcast fits this architecture well: Hazelcast will be the second-level cache for Hibernate, and it will manage a clustered cache over the DB instances.
We installed a Hibernate server and defined our clusters on it.
I've searched the official Hazelcast docs and several sites, but I couldn't find a way to configure Hibernate to use this Hazelcast server as the L2 cache.
We don't want to change our existing app; we'll keep using Hibernate as it is. Is that possible? If so, how can we configure the Hazelcast server in our web app?
I think it is important to understand that you probably don't want a standalone Hazelcast cluster/server; what you normally do is embed Hazelcast within your application.
Like Miko said, you can just enable Hazelcast as the second-level cache; there is no need to make any fundamental changes.
I also don't understand what you mean by 'Hibernate server', because Hibernate is just an OR-mapping library and has no concept of a server.
So can you tell us a bit more about what you want, so we can help you out?
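For reference, enabling Hazelcast as Hibernate's second-level cache is mostly a matter of Hibernate properties plus the hazelcast-hibernate integration jar; a rough sketch with Spring's LocalSessionFactoryBean (package and bean names are illustrative, and the exact region factory class depends on the Hazelcast/Hibernate versions in use):

import java.util.Properties;
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.hibernate5.LocalSessionFactoryBean;

@Configuration
public class HibernateCacheConfig {

    @Bean
    public LocalSessionFactoryBean sessionFactory(DataSource dataSource) {
        LocalSessionFactoryBean factory = new LocalSessionFactoryBean();
        factory.setDataSource(dataSource);
        factory.setPackagesToScan("com.example.domain");   // hypothetical entity package

        Properties props = new Properties();
        props.put("hibernate.cache.use_second_level_cache", "true");
        props.put("hibernate.cache.use_query_cache", "true");
        // Region factory from the hazelcast-hibernate module; the application then joins
        // (or embeds) a Hazelcast member rather than talking to a separate "Hibernate server".
        props.put("hibernate.cache.region.factory_class",
                  "com.hazelcast.hibernate.HazelcastCacheRegionFactory");
        factory.setHibernateProperties(props);
        return factory;
    }
}

Entities still need to be marked cacheable (e.g. with Hibernate's cache annotations), but the rest of the application can stay as it is.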