How to change consistency mode to STALE in Spring Cloud Consul Config

Consul supports a consistency mode parameter in its HTTP API. As per the Consul documentation it can be DEFAULT, CONSISTENT, or STALE. I want to change the consistency mode from DEFAULT to STALE in one of my applications, but I didn't find any way to do this in the Spring documentation. Is this achievable using Spring Cloud Consul Config?

If your use case is only to keep the application working while just one Consul server is still up (stale reads can be served by any server, not only the leader), you can use this hack and call it from the Spring Boot main method. It reflectively replaces the QueryParams.DEFAULT constant used by the Consul client:
public static void changeConsistencyModeToStale() {
    for (Field field : QueryParams.class.getFields()) {
        if ("DEFAULT".equals(field.getName())) {
            try {
                field.setAccessible(true);
                // Strip the final modifier so the static constant can be replaced.
                // Note: this trick stops working on JDK 12+, where the "modifiers"
                // field is hidden from reflection.
                Field modifiersField = Field.class.getDeclaredField("modifiers");
                modifiersField.setAccessible(true);
                modifiersField.setInt(field, field.getModifiers() & ~Modifier.FINAL);
                field.set(null, new QueryParams(ConsistencyMode.STALE));
                log.info("Consistency mode has been set to STALE successfully");
            } catch (NoSuchFieldException | IllegalAccessException e) {
                log.error("Error while trying to set stale mode for Consul", e);
            }
        }
    }
}
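Called from the Spring Boot main method before the context starts, the patched QueryParams.DEFAULT is in place before Spring Cloud Consul Config makes its first request. A minimal sketch (Application stands in for your own main class, it is not from the original answer):

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        // Patch the Consul client's default query params before any Consul call happens.
        changeConsistencyModeToStale();
        SpringApplication.run(Application.class, args);
    }
}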

Related

Load balancing problems with Spring Cloud Kubernetes

We have Spring Boot services running in Kubernetes and are using the Spring Cloud Kubernetes load balancer functionality with RestTemplate to make calls to other Spring Boot services. One of the main reasons we have this in place is historical: we previously ran our services in EC2 using Eureka for service discovery, and after the migration we kept the Spring discovery client / client-side load balancing in place (updating dependencies etc. for it to work with the Spring Cloud Kubernetes project).
We have a problem that when one of the target pods goes down, we get multiple request failures with java.net.NoRouteToHostException for a period of time, i.e. the Spring load balancer is still trying to send traffic to that pod.
So I have a few questions on this:
Shouldn't the target instance get removed automatically when this happens? So it might happen once but after that, the target pod list will be repaired?
Or if not, is there some other configuration we need to add to handle this, e.g. retry / circuit breaker, etc.?
A more general question is what benefit does Spring's client-side load balancing bring with Kubernetes? Without it, our service would still be able to call other services using Kubernetes built-in service / load-balancing functionality and this should handle the issue of pods going down automatically. The Spring documentation also talks about being able to switch from POD mode to SERVICE mode (https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#loadbalancer-for-kubernetes). But isn't this service mode just what Kubernetes does automatically? I'm wondering if the simplest solution here isn't to remove the Spring Load Balancer altogether? What would we lose then?
An update on this: we had the spring-retry dependency in place, but the retry was not working because by default it only applies to GETs, while most of our calls are POSTs (that are OK to call again). Adding the configuration spring.cloud.loadbalancer.retry.retryOnAllOperations: true fixed this, so most of these failures should now be avoided by the retry using an alternative instance on the second attempt.
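For reference, the relevant application.yml fragment might look like this (a sketch; the property names come from Spring Cloud LoadBalancer, the values are examples):

spring:
  cloud:
    loadbalancer:
      retry:
        enabled: true
        retryOnAllOperations: true          # also retry non-GET requests (POSTs etc.)
        maxRetriesOnSameServiceInstance: 0  # don't re-hit the failing instance
        maxRetriesOnNextServiceInstance: 1  # try one alternative instance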
We have also added a RetryListener that clears the load balancer cache for the service on certain connection exceptions:
@Configuration
public class RetryConfig {

    private static final Logger logger = LoggerFactory.getLogger(RetryConfig.class);

    // Need to use the bean factory here as LoadBalancerCacheManager can't be autowired -
    // it's set to 'autowireCandidate = false' in LoadBalancerCacheAutoConfiguration
    @Autowired
    private BeanFactory beanFactory;

    @Bean
    public CacheClearingLoadBalancedRetryFactory cacheClearingLoadBalancedRetryFactory(ReactiveLoadBalancer.Factory<ServiceInstance> loadBalancerFactory) {
        return new CacheClearingLoadBalancedRetryFactory(loadBalancerFactory);
    }

    // Extension of the default bean that defines a retry listener
    public class CacheClearingLoadBalancedRetryFactory extends BlockingLoadBalancedRetryFactory {

        public CacheClearingLoadBalancedRetryFactory(ReactiveLoadBalancer.Factory<ServiceInstance> loadBalancerFactory) {
            super(loadBalancerFactory);
        }

        @Override
        public RetryListener[] createRetryListeners(String service) {
            RetryListener cacheClearingRetryListener = new RetryListener() {
                @Override
                public <T, E extends Throwable> boolean open(RetryContext context, RetryCallback<T, E> callback) { return true; }

                @Override
                public <T, E extends Throwable> void close(RetryContext context, RetryCallback<T, E> callback, Throwable throwable) {}

                @Override
                public <T, E extends Throwable> void onError(RetryContext context, RetryCallback<T, E> callback, Throwable throwable) {
                    logger.warn("Retry for service {} picked up exception: context {}, throwable class {}", service, context, throwable.getClass());
                    if (throwable instanceof ConnectTimeoutException || throwable instanceof NoRouteToHostException) {
                        try {
                            LoadBalancerCacheManager loadBalancerCacheManager = beanFactory.getBean(LoadBalancerCacheManager.class);
                            Cache loadBalancerCache = loadBalancerCacheManager.getCache(CachingServiceInstanceListSupplier.SERVICE_INSTANCE_CACHE_NAME);
                            if (loadBalancerCache != null) {
                                boolean result = loadBalancerCache.evictIfPresent(service);
                                logger.warn("Load Balancer Cache evictIfPresent result for service {} is {}", service, result);
                            }
                        } catch (Exception e) {
                            logger.error("Failed to clear load balancer cache", e);
                        }
                    }
                }
            };
            return new RetryListener[] { cacheClearingRetryListener };
        }
    }
}
Are there any issues with this approach? Could something like this be added to the built in functionality?
Shouldn't the target instance get removed automatically when this happens? So it might happen once but after that the target pod list will be repaired?
To resolve this issue you have to use readiness and liveness probes in Kubernetes.
A readiness probe checks your application's health endpoint at a configured interval. If the check fails, Kubernetes marks the pod (replica) as not ready to accept traffic, so no traffic is routed to it.
A liveness probe restarts your container if the application fails, so the pod comes back up; once the app returns a 200 response again, Kubernetes marks the pod as ready to accept traffic.
You can create a simple endpoint in the application that returns 200 or 204, as needed.
Read more at: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
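A minimal sketch of both probes in a container spec, assuming the app exposes Spring Boot Actuator health groups on port 8080 (paths and timings are examples):

livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5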
Make sure your applications use the Kubernetes Service to talk to each other:
Application 1 > Kubernetes service of App 2 > Application 2 PODs
To enable load balancing based on the Kubernetes Service name, use the following property. The load balancer will then try to call the application using an address such as service-a.default.svc.cluster.local:
spring.cloud.kubernetes.loadbalancer.mode=SERVICE
The most typical way to use Spring Cloud LoadBalancer on Kubernetes is with service discovery. If you have any DiscoveryClient on your classpath, the default Spring Cloud LoadBalancer configuration uses it to check for service instances. As a result, it only chooses from instances that are up and running. All that is needed is to annotate your Spring Boot application with @EnableDiscoveryClient to enable K8s-native Service Discovery.
References: https://stackoverflow.com/a/68536834/5525824

Capturing Cassandra CQL metrics from a Spring Boot application

I want to capture the DB query metrics from a Spring Boot Cassandra application and expose them on a Prometheus endpoint.
I already have an implementation for Spring Boot + Postgres, working with r2dbc-proxy; since R2DBC does not provide support for Cassandra, I am looking for a sample implementation.
Code after the edits suggested in the comment below (Java driver 3.x Cluster/Session API; the surrounding class and main method were missing from the original snippet and are restored here as placeholders):
public class CassandraMetricsExample {

    public static void main(String[] args) {
        String contactPoint = System.getProperty("contactPoint", "127.0.0.1");
        // init default prometheus stuff
        DefaultExports.initialize();
        // setup Prometheus HTTP server
        Optional<HTTPServer> prometheusServer = Optional.empty();
        try {
            prometheusServer = Optional.of(new HTTPServer(Integer.getInteger("prometheusPort", 9095)));
        } catch (IOException e) {
            System.out.println("Exception when creating HTTP server for Prometheus: " + e.getMessage());
        }
        Cluster cluster = Cluster.builder()
                .addContactPointsWithPorts(new InetSocketAddress(contactPoint, 9042))
                .withoutJMXReporting()
                .build();
        try (Session session = cluster.connect()) {
            // Re-register the driver's Dropwizard metrics with Prometheus.
            MetricRegistry myRegistry = new MetricRegistry();
            myRegistry.registerAll(cluster.getMetrics().getRegistry());
            CollectorRegistry.defaultRegistry.register(new DropwizardExports(myRegistry));
            session.execute("create keyspace if not exists test with replication = {'class': 'SimpleStrategy', 'replication_factor': 1};");
            session.execute("create table if not exists test.abc (id int, t1 text, t2 text, primary key (id, t1));");
            session.execute("truncate test.abc;");
        } catch (IllegalStateException ex) {
            System.out.println("metric registry fails to configure!!!!!");
            throw ex;
        }
    }
}
If using Micrometer:
Add the dependency com.datastax.oss:java-driver-metrics-micrometer.
Create a CqlSessionBuilderCustomizer and register the MeterRegistry (io.micrometer.core.instrument.MeterRegistry) bean using the withMetricRegistry method.
Create a DriverConfigLoaderBuilderCustomizer that enables the desired metrics (https://stackoverflow.com/a/62940370/12584290). A sketch of both customizers follows.
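A minimal sketch of those two customizers, assuming Spring Boot's Cassandra auto-configuration and driver 4.10+ (where the pluggable Micrometer metrics factory was introduced); the enabled metrics list is just an example:

import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import io.micrometer.core.instrument.MeterRegistry;
import java.util.List;
import org.springframework.boot.autoconfigure.cassandra.CqlSessionBuilderCustomizer;
import org.springframework.boot.autoconfigure.cassandra.DriverConfigLoaderBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CassandraMetricsConfig {

    // Hand Spring's MeterRegistry to the driver session builder.
    @Bean
    public CqlSessionBuilderCustomizer metricsSessionCustomizer(MeterRegistry meterRegistry) {
        return builder -> builder.withMetricRegistry(meterRegistry);
    }

    // Select the Micrometer factory and enable some session-level metrics.
    @Bean
    public DriverConfigLoaderBuilderCustomizer metricsConfigCustomizer() {
        return loaderBuilder -> loaderBuilder
                .withString(DefaultDriverOption.METRICS_FACTORY_CLASS, "MicrometerMetricsFactory")
                .withStringList(DefaultDriverOption.METRICS_SESSION_ENABLED, List.of("cql-requests"));
    }
}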
The DataStax Java driver exposes metrics via the Dropwizard Metrics library (driver version 3.x, driver version 4.x); these can be exposed as a Prometheus endpoint using standard Prometheus libraries, such as io.prometheus:simpleclient_dropwizard, which is part of the Prometheus Java client library.
Here is an example for driver version 4.x, but with small modifications it could work with 3.x as well. The main part is the following:
MetricRegistry registry = session.getMetrics()
        .orElseThrow(() -> new IllegalStateException("Metrics are disabled"))
        .getRegistry();
CollectorRegistry.defaultRegistry.register(new DropwizardExports(registry));
The rest is just creating the session, exposing the metrics via HTTP, etc.

io.micrometer.core.instrument.config.MeterFilter: DENY is not working in Spring Boot

I want to expose all metrics on the metrics endpoint but publish only some of them to a remote meter registry.
To do so, I have a SimpleMeterRegistry for the metrics endpoint and added a MeterRegistryCustomizer for the remote meter registry (Datadog) that adds a MeterFilter to suppress specific metrics via the filter's DENY reply. For example:
@Bean
public MeterRegistryCustomizer<StatsdMeterRegistry> meterRegistryCustomizer() {
    return (registry) -> new StatsdMeterRegistry(config, Clock.SYSTEM).config().meterFilter(MeterFilter.denyNameStartsWith("jvm"));
}
However, all JVM-related metrics are still visible in Datadog. I tried MeterFilterReply but to no avail.
Please suggest how this can be achieved.
You are configuring the filter on a new StatsdMeterRegistry. When using a MeterRegistryCustomizer you need to operate on the registry that was passed in.
@Bean
public MeterRegistryCustomizer<StatsdMeterRegistry> meterRegistryCustomizer() {
    return (registry) -> registry.config().meterFilter(MeterFilter.denyNameStartsWith("jvm"));
}
Since the customizer will be used against all registries, you also need to add an if statement so that you only filter the registry you want filtered.
@Bean
public MeterRegistryCustomizer<StatsdMeterRegistry> meterRegistryCustomizer() {
    return (registry) -> {
        if (registry instanceof StatsdMeterRegistry) {
            registry.config().meterFilter(MeterFilter.denyNameStartsWith("jvm"));
        }
    };
}

Unable to dynamically provision a GORM-capable data source in Grails 3

We are implementing a multitenant application (database per tenant) and would like to include dynamic provisioning of new tenants without restarting the server. This is Grails 3.2.9 / GORM 6.
Among other things this involves creating a dataSource at runtime, without it being configured in application.yml at application startup.
According to the documentation (11.2.5. Adding Tenants at Runtime) there is a ConnectionSources API for adding tenants at runtime, but a ConnectionSource created this way doesn't seem to be properly registered with Spring (beans for the dataSource, session and transaction manager), and Grails complains about missing beans when we try to use the new dataSource.
We expected that when we use the ConnectionSources API to create a connection source for a new database, Grails would initialise it with all the tables according to the GORM domains in our application, execute Bootstrap.groovy, etc., just like it does for the sources statically configured in application.yml. This is not happening either, though.
So my question is whether the ConnectionSources API is intended for a different purpose than we are trying to use it for, or whether it is just not finished/tested yet.
I meant to come back to you. I did manage to figure out a solution. Now this is for schema per customer, not database per customer, but I suspect it would be easy to adapt. I first create the schema using a straight Groovy Sql object as follows:
void createAccountSchema(String tenantId) {
    Sql sql = null
    try {
        sql = new Sql(dataSource as DataSource)
        sql.withTransaction {
            sql.execute("create schema ${tenantId}" as String)
        }
    } catch (Exception e) {
        log.error("Unable to create schema for tenant $tenantId", e)
        throw e
    } finally {
        sql?.close()
    }
}
Then I run the same code as the Liquibase plugin uses, with some simple defaults, as follows:
void updateAccountSchema(String tenantId) {
    def applicationContext = Holders.applicationContext
    // Now try to create the tables for the schema
    try {
        GrailsLiquibase gl = new GrailsLiquibase(applicationContext)
        gl.dataSource = applicationContext.getBean("dataSource", DataSource)
        gl.dropFirst = false
        gl.changeLog = 'changelog-m.groovy'
        gl.contexts = []
        gl.labels = []
        gl.defaultSchema = tenantId
        gl.databaseChangeLogTableName = defaultChangelogTableName
        gl.databaseChangeLogLockTableName = defaultChangelogLockTableName
        gl.afterPropertiesSet() // this runs the update command
    } catch (Exception e) {
        log.error("Exception trying to create new account schema tables for $tenantId", e)
        throw e
    }
}
Finally, I tell Hibernate about the new schema as follows:
try {
    hibernateDatastore.addTenantForSchema(tenantId)
} catch (Exception e) {
    log.error("Exception adding tenant schema for ${tenantId}", e)
    throw e
}
Anywhere you see me referring to 'hibernateDatastore' or 'dataSource' I have those injected by Grails as follows:
def hibernateDatastore
def dataSource
protected String defaultChangelogTableName = "databasechangelog"
protected String defaultChangelogLockTableName = "databasechangeloglock"
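Putting the three steps together, provisioning a new tenant at runtime then boils down to something like this (a sketch; provisionTenant is a name introduced here, not part of the original code):

void provisionTenant(String tenantId) {
    createAccountSchema(tenantId)                     // 1. create the schema
    updateAccountSchema(tenantId)                     // 2. let Liquibase create the tables
    hibernateDatastore.addTenantForSchema(tenantId)   // 3. register the schema with GORM
}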
Hope this helps.

One Tomcat, two Spring applications (WARs), two separate logging configurations

As mentioned in the title, I have two applications with two different logging configurations. As soon as I use Spring's logging.file setting I can no longer separate the configurations of the two apps.
The problem is made worse by the fact that one app uses logback.xml and the other log4j.properties.
I tried to introduce a new configuration parameter in one application where I can set the path to the logback.xml, but I am unable to make the new setting apply to all logging in the application.
// Note: `context` and `logger` are not shown in the original snippet; they would
// be something like the following (logback's LoggerContext obtained via SLF4J):
private static final LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();
private static final Logger logger = LoggerFactory.getLogger(IndexerApplication.class);

public static void main(String[] args) {
    reconfigureLogging();
    SpringApplication.run(IndexerApplication.class, args);
}

private static void reconfigureLogging() {
    if (System.getProperty("IndexerLogging") != null && !System.getProperty("IndexerLogging").isEmpty()) {
        try {
            JoranConfigurator configurator = new JoranConfigurator();
            configurator.setContext(context);
            // Call context.reset() to clear any previous configuration, e.g. the default
            // configuration. For multi-step configuration, omit calling context.reset().
            System.out.println("SETTING: " + System.getProperty("IndexerLogging"));
            System.out.println("SETTING: " + System.getProperty("INDEXER_LOG_FILE"));
            context.reset();
            configurator.doConfigure(System.getProperty("IndexerLogging"));
        } catch (JoranException je) {
            System.out.println("ERROR IN CONFIG"); // originally "FEHLER IN CONFIG" (German)
        }
        logger.info("Entering application.");
    }
}

@Override
protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
    reconfigureLogging();
    return application.sources(applicationClass);
}
The above code somewhat works, but the only log entry written to the logfile specified in the configuration that ${IndexerLogging} points to is the one from logger.info("Entering application."); :(
I don't really want to attach that code to every class that does any logging in the application.
The application has to be runnable as a Tomcat deployment but also as a Spring Boot application with embedded Tomcat.
Any idea how I can set the path from ${IndexerLogging} as the path of the configuration file when logging is first configured in that application?
Take a look at https://github.com/qos-ch/logback-extensions/wiki/Spring; there you can configure which logback config file to use.
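If I read that wiki correctly, each WAR can point to its own config file via a servlet listener in its web.xml. A sketch (listener class and parameter names as documented in the logback-extensions wiki; worth verifying against the version you use):

<context-param>
    <param-name>logbackConfigLocation</param-name>
    <param-value>/WEB-INF/logback-indexer.xml</param-value>
</context-param>
<listener>
    <listener-class>ch.qos.logback.ext.spring.web.LogbackConfigListener</listener-class>
</listener>

This keeps each application's logging configuration inside its own WAR instead of relying on a JVM-wide system property.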
