Why are placeholders in bootstrap.yml not resolved? - spring-boot

I'm getting the exception below in my Spring project:
Caused by: java.lang.IllegalArgumentException: Could not resolve placeholder 'MY_KEY' in value "${MY_KEY}"
org.springframework.util.PropertyPlaceholderHelper.parseStringValue(PropertyPlaceholderHelper.java:174)
org.springframework.util.PropertyPlaceholderHelper.replacePlaceholders(PropertyPlaceholderHelper.java:126)
This is how my bootstrap.yml in src/main/resources/ looks:
spring:
  profiles:
    active: native
  application:
    name: myapp
  cloud:
    config:
      server:
        bootstrap: true
        health:
          enabled: false
encrypt:
  # key: ${MY_KEY}
  key: ${MY_KEY}
Other placeholders defined in application.yml are resolved correctly.
Following are the dependencies in my project:
spring-boot-starter:1.5.4.RELEASE
spring-cloud-starter-config: -> 1.3.1.RELEASE
spring-cloud-starter:1.2.2.RELEASE
spring-cloud-config-server: -> 1.3.1.RELEASE
spring-cloud-core:1.2.4.RELEASE
spring-cloud-config-client:1.3.1.RELEASE
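One thing worth checking, as a sketch rather than a confirmed fix: Spring's placeholder syntax accepts a default value after a colon, which at least distinguishes "MY_KEY is not visible when bootstrap.yml is parsed" from "placeholder resolution is broken":
encrypt:
  # resolves to an empty string instead of throwing
  # IllegalArgumentException when MY_KEY is not defined
  key: ${MY_KEY:}
If the application starts once the default is in place, the property is simply not available to the bootstrap context; bootstrap.yml is processed before application.yml, so placeholders in it can only be resolved against sources that exist at bootstrap time, such as system properties and environment variables.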

Related

Spring Boot config server does not refresh if I update local

I have a Spring Cloud project with the following setup:
Controller of address-service-1:
@RestController
@RequestMapping("/api/address")
@Slf4j
@RefreshScope
public class AddressController {

    @Value("${scope-refresh-testing}")
    private String message;

    @RequestMapping("/scope-refresh-testing-message")
    String getMessage() {
        return this.message;
    }
}
bootstrap.yml of address-service-1:
spring:
  application:
    name: address-service-1
  # register to config server
  cloud:
    config:
      discovery:
        enabled: true
        service-id: config-server-1
      profile: default
management:
  endpoints:
    web:
      exposure:
        include:
          - refresh
Properties of config-server-1:
server:
  port: 8888
spring:
  application:
    name: config-server-1
  cloud:
    config:
      server:
        git:
          default-label: master
          uri: file:///E:/workspace/Spring-boot-microservices-config-server-study/
The properties I stored in file:///E:/workspace/Spring-boot-microservices-config-server-study/ (address-service-1.yml):
scope-refresh-testing: testDefault
If I change it to scope-refresh-testing: testDefault222 and then hit the actuator/refresh URL, the properties file rolls back to scope-refresh-testing: testDefault.
Did I miss something in the configuration?
Thanks

Another profile does not connect in Spring Boot

I have a project with two property files.
application.yml (main)
spring:
  config:
    activate:
      on-profile: local
  application:
    name: test-app
  datasource:
    cluster:
      enabled: true
    driverClassName: org.postgresql.Driver
    hikari:
      maximum-pool-size: 30
      minimum-idle: 3
      autoCommit: false
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQL95Dialect
    properties:
      hibernate:
        connection.provider_disables_autocommit: true
        jdbc.lob.non_contextual_creation: true
    open-in-view: false
application-local.yml
spring:
  datasource:
    read:
      url: jdbc:postgresql://localhost:5432/test-app
    write:
      url:jdbc:postgresql://localhost:5432/test-app
    username:
    password:
    driver-class-name: org.postgresql.Driver
  jpa:
    hibernate:
      ddl-auto: update
    properties:
      open-in-view: false
I have also added the setting in IntelliJ IDEA, using VM options:
-Dspring.profiles.active=local
But during launch I get an error:
Description:
Failed to configure a DataSource: 'url' attribute is not specified and
no embedded datasource could be configured.
Reason: Failed to determine suitable jdbc url
I assume that the specified profile was not activated.
Does anyone have any ideas how to fix this?
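One quick diagnostic, as a sketch (assuming nothing else sets the profile): take the IDE out of the equation by activating the profile from the config itself, then check the "The following profiles are active" line that Spring Boot logs at startup:
# hypothetical fragment for testing only; it must live in a
# document that is NOT gated by spring.config.activate.on-profile
spring:
  profiles:
    active: local
Note also that the main application.yml above starts with spring.config.activate.on-profile: local, which marks that entire document as profile-specific: none of it, including the datasource block, is loaded unless the local profile really is active.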

spring-cloud-stream-binder-kafka - Unable to create multiple kafka binders with ssl configuration

I am trying to connect to a Kafka cluster over the SASL_SSL protocol with a JAAS config as follows:
spring:
  cloud:
    stream:
      bindings:
        binding-1:
          binder: kafka-1-with-ssl
          destination: <destination-1>
          content-type: text/plain
          group: <group-id-1>
          consumer:
            header-mode: headers
        binding-2:
          binder: kafka-2-with-ssl
          destination: <destination-2>
          content-type: text/plain
          group: <group-id-2>
          consumer:
            header-mode: headers
      binders:
        kafka-1-with-ssl:
          type: kafka
          defaultCandidate: false
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: <broker-hostnames-1>
                      configuration:
                        ssl:
                          truststore:
                            location: <location-1>
                            password: <ts-password-1>
                            type: JKS
                      jaas:
                        loginModule: org.apache.kafka.common.security.scram.ScramLoginModule
                        options:
                          username: <username-1>
                          password: <password-1>
        kafka-2-with-ssl:
          type: kafka
          defaultCandidate: false
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: <broker-hostnames-2>
                      configuration:
                        ssl:
                          truststore:
                            location: <location-2>
                            password: <ts-password-2>
                            type: JKS
                      jaas:
                        loginModule: org.apache.kafka.common.security.scram.ScramLoginModule
                        options:
                          username: <username-2>
                          password: <password-2>
      kafka:
        binder:
          configuration:
            security:
              protocol: SASL_SSL
            sasl:
              mechanism: SCRAM-SHA-256
The above configuration is in line with the sample config available in spring-cloud-stream's official git repo.
A similar issue raised on the library's git repo says it is fixed in the latest versions, but that does not seem to be the case. With springBootVersion 2.2.8 and spring-cloud-stream-dependencies version Horsham.SR6, I am getting the following error:
Failed to create consumer binding; retrying in 30 seconds | org.springframework.cloud.stream.binder.BinderException: Exception thrown while starting consumer:
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindConsumer(AbstractMessageChannelBinder.java:461)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindConsumer(AbstractMessageChannelBinder.java:90)
at org.springframework.cloud.stream.binder.AbstractBinder.bindConsumer(AbstractBinder.java:143)
at org.springframework.cloud.stream.binding.BindingService.lambda$rescheduleConsumerBinding$1(BindingService.java:201)
at org.springframework.cloud.sleuth.instrument.async.TraceRunnable.run(TraceRunnable.java:68)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.KafkaException: Failed to create new KafkaAdminClient
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:407)
at org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:65)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createAdminClient(KafkaTopicProvisioner.java:246)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.doProvisionConsumerDestination(KafkaTopicProvisioner.java:216)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionConsumerDestination(KafkaTopicProvisioner.java:183)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionConsumerDestination(KafkaTopicProvisioner.java:79)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindConsumer(AbstractMessageChannelBinder.java:402)
... 12 common frames omitted
Caused by: org.apache.kafka.common.KafkaException: javax.security.auth.login.LoginException: KrbException: Cannot locate default realm
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:160)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:146)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:67)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:99)
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:382)
... 18 common frames omitted
Caused by: javax.security.auth.login.LoginException: KrbException: Cannot locate default realm
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:804)
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at org.apache.kafka.common.security.authenticator.AbstractLogin.login(AbstractLogin.java:60)
at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:61)
at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:111)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:149)
... 22 common frames omitted
Caused by: sun.security.krb5.RealmException: KrbException: Cannot locate default realm
at sun.security.krb5.Realm.getDefault(Realm.java:68)
at sun.security.krb5.PrincipalName.<init>(PrincipalName.java:462)
at sun.security.krb5.PrincipalName.<init>(PrincipalName.java:471)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:706)
... 38 common frames omitted
Caused by: sun.security.krb5.KrbException: Cannot locate default realm
at sun.security.krb5.Config.getDefaultRealm(Config.java:1029)
at sun.security.krb5.Realm.getDefault(Realm.java:64)
... 41 common frames omitted
Which makes me think that the library is not picking up the config props properly: jaas.loginModule is specified as ScramLoginModule, yet Krb5LoginModule is being used to authenticate.
Strikingly, though, when the configuration is written as follows (the difference is in the last part, where the SSL credentials are repeated outside the binders' environments), the application connects to the binder whose credentials appear in the global SSL props and silently ignores the other binder, without any error logs.
Say the credentials of the binder kafka-2-with-ssl are specified in the global SSL props: that binder is created, and the bindings subscribed to it start consuming events. But this is useful only when a single binder is needed.
spring:
  cloud:
    stream:
      bindings:
        binding-1:
          binder: kafka-1-with-ssl
          destination: <destination-1>
          content-type: text/plain
          group: <group-id-1>
          consumer:
            header-mode: headers
        binding-2:
          binder: kafka-2-with-ssl
          destination: <destination-2>
          content-type: text/plain
          group: <group-id-2>
          consumer:
            header-mode: headers
      binders:
        kafka-1-with-ssl:
          type: kafka
          defaultCandidate: false
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: <broker-hostnames-1>
                      configuration:
                        ssl:
                          truststore:
                            location: <location-1>
                            password: <ts-password-1>
                            type: JKS
                      jaas:
                        loginModule: org.apache.kafka.common.security.scram.ScramLoginModule
                        options:
                          username: <username-1>
                          password: <password-1>
        kafka-2-with-ssl:
          type: kafka
          defaultCandidate: false
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: <broker-hostnames-2>
                      configuration:
                        ssl:
                          truststore:
                            location: <location-2>
                            password: <ts-password-2>
                            type: JKS
                      jaas:
                        loginModule: org.apache.kafka.common.security.scram.ScramLoginModule
                        options:
                          username: <username-2>
                          password: <password-2>
      kafka:
        binder:
          configuration:
            security:
              protocol: SASL_SSL
            sasl:
              mechanism: SCRAM-SHA-256
            ssl:
              truststore:
                location: <location-2>
                password: <ts-password-2>
                type: JKS
          jaas:
            loginModule: org.apache.kafka.common.security.scram.ScramLoginModule
            options:
              username: <username-2>
              password: <password-2>
I assure you that nothing is wrong with the SSL credentials; I tested diligently, and either SSL Kafka binder is created successfully when configured on its own. The aim is to connect to multiple Kafka binders over the SASL_SSL protocol. Thanks in advance.
I think you may want to follow the solution implemented in KIP-85 for this issue. Instead of using the JAAS configuration provided by the Spring Cloud Stream Kafka binder or setting the java.security.auth.login.config property, use the sasl.jaas.config property, which takes precedence over the other methods. With sasl.jaas.config you can get around the JVM's restriction of a single, JVM-wide static security context, which otherwise ignores any JAAS configuration found after the first one.
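As a minimal sketch of what that could look like for the first binder, assuming the rest of the setup stays as in the question: security.protocol, sasl.mechanism and sasl.jaas.config are plain Kafka client properties, so each binder environment can carry its own values under configuration.
spring:
  cloud:
    stream:
      binders:
        kafka-1-with-ssl:
          type: kafka
          defaultCandidate: false
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: <broker-hostnames-1>
                      configuration:
                        security.protocol: SASL_SSL
                        sasl.mechanism: SCRAM-SHA-256
                        # per-binder JAAS, taking precedence over any
                        # JVM-wide static JAAS configuration
                        sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="<username-1>" password="<password-1>";
                        ssl.truststore.location: <location-1>
                        ssl.truststore.password: <ts-password-1>
                        ssl.truststore.type: JKS
The second binder gets the same block with its own brokers, truststore, and sasl.jaas.config values.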
Here is a sample application that demonstrates how to connect to multiple Kafka clusters with different security contexts as a multi-binder application.

@EnableBinding(Source.class) throws Failed to bind properties under 'server.error.include-stacktrace' to org.springframework.boot.autoconfigure.web

Spring Cloud Stream and Kafka.
This error shows up after I add the following dependencies. (If I comment out the @EnableBinding(Source.class), everything works.)
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream</artifactId>
</dependency>
When I add the @EnableBinding annotation:
@SpringBootApplication
@EnableBinding(Source.class)
public class CustomersServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(CustomersServiceApplication.class, args);
    }
}
And these are my properties
spring:
  application:
    name: customerservice
  cloud:
    stream:
      bindings:
        output:
          destination: orgChangeTopic
          content-type: application/json
      kafka:
        binder:
          zkNodes: localhost
          brokers: localhost
logging:
  level:
    com.netflix: WARN
    org.springframework.web: WARN
    com.thoughtmechanix: DEBUG
eureka:
  instance:
    preferIpAddress: true
  client:
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
server:
  port: 7000
This is my full code; however, when I run the application, I am getting an error:
2019-04-08 15:40:33.325 INFO 22917 --- [ restartedMain] ConditionEvaluationReportLoggingListener :
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2019-04-08 15:40:33.336 ERROR 22917 --- [ restartedMain] o.s.b.d.LoggingFailureAnalysisReporter :
***************************
APPLICATION FAILED TO START
***************************
Description:
Failed to bind properties under 'server.error.include-stacktrace' to org.springframework.boot.autoconfigure.web.ErrorProperties$IncludeStacktrace:
Property: server.error.include-stacktrace
Value: ALWAYS
Origin: "server.error.include-stacktrace" from property source "devtools"
Reason: 0
Action:
Update your application's configuration
After adding the property to my properties file, I still get the same error:
server:
  port: 7000
  error:
    include-stacktrace: ALWAYS
It looks like a Spring version conflict issue. Two suggestions here:
Create a clean Spring Cloud Stream with Kafka project to make sure it works on its own; a starting sketch follows below.
Run mvn dependency:tree to analyze which Spring versions are in conflict.
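As a starting point for that clean project, here is a minimal sketch of the stream-related properties trimmed from the question's own config, keeping only the binding and broker settings:
spring:
  cloud:
    stream:
      bindings:
        output:
          destination: orgChangeTopic
          content-type: application/json
      kafka:
        binder:
          brokers: localhost
If the clean project starts, add the remaining dependencies back one at a time and re-run the dependency tree after each step to spot where the conflicting version comes in.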

Spring Config profile file is not found

I've created this application-local.json in src/main/resources:
{
  "spring": {
    "datasource": {
      "url": "jdbc:postgresql://xxx:yyy/db",
      "username": "xxx",
      "password": "xxx",
      "driverClassName": "org.postgresql.Driver"
    },
    "profiles": {
      "active": "local"
    }
  }
}
On the other hand, application.yml:
spring:
  jpa:
    generate-ddl: false
    show-sql: true
    properties:
      hibernate:
        format_sql: true
        jdbc:
          lob:
            non_contextual_creation: true
  profiles:
    active: local
management:
  security:
    enabled: false
  endpoints:
    web:
      exposure:
        include: '*'
---
spring:
  profiles: local
server:
  port: 8092
Currently, I'm getting this message:
***************************
APPLICATION FAILED TO START
***************************
Description:
Failed to auto-configure a DataSource: 'spring.datasource.url' is not specified and no embedded datasource could be auto-configured.
Reason: Failed to determine a suitable driver class
Action:
Consider the following:
If you want an embedded database (H2, HSQL or Derby), please put it on the classpath.
If you have database settings to be loaded from a particular profile you may need to activate it (no profiles are currently active).
When a Spring application runs, it loads properties according to the value of spring.profiles.active in application.yml. Spring supports only .yml and .properties files as property sources.
So, in your case, Spring will look for application-local.yml or application-local.properties to read the profile-specific properties.
But here you have defined the property file as application-local.json, and that is why Spring is not reading the values and you are getting the exception.
Solution
Create application-local.yml or application-local.properties, paste your content there, and try again. It should work.
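For example, a minimal application-local.yml converted from the JSON above (the xxx/yyy placeholder values are kept from the question; the profiles block is dropped, since a profile-specific file should not activate itself):
spring:
  datasource:
    url: jdbc:postgresql://xxx:yyy/db
    username: xxx
    password: xxx
    driverClassName: org.postgresql.Driver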
Here is a sample DB configuration in .properties format:
spring.datasource.url=jdbc:mysql://localhost:3306/_schema name_
spring.datasource.username=_username_
spring.datasource.password=_password_
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect
spring.jpa.show-sql = true
logging.level.org.hibernate.SQL=debug
