Spring Boot application - failed to start

I recently upgraded Spring Boot from 1.x to 2.2.1. I am able to deploy most of the modules, but for one of them I get the error below while deploying to OpenShift.
***************************
APPLICATION FAILED TO START
***************************

Description:

Failed to bind properties under 'spring.jackson.serialization' to java.util.Map<com.fasterxml.jackson.databind.SerializationFeature, java.lang.Boolean>:

    Reason: No converter found capable of converting from type [java.lang.String] to type [java.util.Map<com.fasterxml.jackson.databind.SerializationFeature, java.lang.Boolean>]

Action:

Update your application's configuration
I have added the jackson-databind and jackson-annotations dependencies, but with no luck:
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.11.1</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-annotations</artifactId>
    <version>2.11.1</version>
</dependency>
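For reference, spring.jackson.serialization binds to a map of Jackson SerializationFeature flags to booleans, so this particular error appears when the key resolves to a plain string in whatever configuration actually gets loaded, rather than to nested feature flags. A minimal sketch of the shape the binder expects (the second flag is only illustrative):
spring:
  jackson:
    serialization:
      INDENT_OUTPUT: true                 # feature name -> boolean
      WRITE_DATES_AS_TIMESTAMPS: false    # illustrative additional flag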
In the local environment the deployment works fine.
application.yml:
application:
  name: #project.name#
server:
  port: 8085
  context: /*
sessionAPIUrl: localhost:8087
sessionAPIUri: /AppMS/user/session
account:
  host: http://localhost:8080
  remote:
    dev: http://localhost:8080
    test: http://10.8.99.8:8080
    staging: https://stg2-tic.myapp.com
    prod: https://ss1.myapp.com
  uri:
    data: AppMS/service/test
javax.persistence.query.timeout: 120000
#mongodb
spring.data.mongodb.authentication-database:
spring.data.mongodb.host:
spring.data.mongodb.port: 22017
spring.data.mongodb.database: ticnf
spring.data.mongodb.username: ******
spring.data.mongodb.password: ******
#JMX setting
endpoints.jmx.unique-names: true
#logging setup
logging.level.org.springframework.web: WARN
logging.level.com.myapp: INFO
# Logging pattern for the console
logging.pattern.console: "%d{dd-MMM-yyyy HH:mm:ss:SSS zzz}, TYPE= %-5p, SESSIONID=%X{sessionID}, CLIENT_IP=%X{UserIPAddress}, REQID=%X{requestID}, SSOUID=%X{ssoUserId}, ticUID=%X{ticUserID}, APP=%X{APP}, REQUESTURI=%X{requestURI}, CLASS=%c{1}, METHOD=%M, MSG=%m%n"
# Logging pattern for file
logging.pattern.file: "%d{dd-MMM-yyyy HH:mm:ss:SSS zzz}, TYPE= %-5p, SESSIONID=%X{sessionID}, CLIENT_IP=%X{UserIPAddress}, REQID=%X{requestID}, SSOUID=%X{ssoUserId}, ticUID=%X{ticUserID}, APP=%X{APP}, REQUESTURI=%X{requestURI}, CLASS=%c{1}, METHOD=%M, MSG=%m%n"
logging.file: logs/ticms/ticms.log
#kafka configuration
kafka:
  broker:
    address: localhost:9099
  zookeeper:
    connect: localhost:2191
  consumerId: tic.account
tic.secret: where to store this is an question? DB/File?
spring:
  profiles: dev
  jackson:
    serialization:
      INDENT_OUTPUT: true
  datasource:
    driver-class-name: com.ibm.db2.jcc.DB2Driver
    url: jdbc:db2://192.0.0.1:9000/sdb1:currentSchema=DEV1;
    username: *****
    password: *****
    platform: db2
    schema: classpath:schema-db2-stg.sql
  jpa:
    show-sql: true
    properties:
      hibernate:
        dialect: org.hibernate.dialect.DB2Dialect
        default_schema: DEV1
server:
  domainURI: https://stg-us-api.myapp.com/oauth2/v1
log4j.logger.org.hibernate.type: trace
org.hibernate.type: trace
org.springframework.transaction: debug

Thank you, everyone.
When I saw the error message saying "Update your application's configuration", I was sure something was wrong in my application.yml. But locally it deployed fine.
It was only failing in PaaS, so I checked all the PaaS-related config files. The paas-pom was fine. Finally I found that the application-paassit.yml file was missing. This file was referenced in my ConfigMap for that application in OpenShift.
I added the file and the problem was solved.
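If it helps anyone else, here is a minimal sketch of how such a profile-specific file can be supplied through an OpenShift/Kubernetes ConfigMap. The name and contents are only illustrative, not the actual ones from this project:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ticms-config            # hypothetical ConfigMap name
data:
  application-paassit.yml: |    # the file the deployment expected to find
    spring:
      jackson:
        serialization:
          INDENT_OUTPUT: true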

Related

The server selected protocol version TLS10 is not accepted by client preferences [TLS13, TLS12]

I'm trying to connect to a SQL Server instance but I get the error:
The server selected TLS10 protocol version is not supported by client preferences [TLS13, TLS1]
I've already tried solutions such as "encrypt=false", editing the "java.security" file to remove TLSv1.1 and/or TLSv1, and connecting with jTDS, but they all result in the same error. How can I solve this?
Application.yml
server:
  port: 8081
spring:
  datasource:
    #url: jdbc:jtds:sqlserver://123.345.7.890:1433;databaseName=mydatabase
    url: jdbc:sqlserver://123.345.7.890:1433;databaseName=mydatabase
    username: sa
    password: mypassword
    #driver-class-name: net.sourceforge.jtds.jdbc.Driver
    driver-class-name: com.microsoft.sqlserver.jdbc.SQLServerDriver
  jpa:
    hibernate:
      ddl-auto: update
    properties:
      hibernate:
        dialect: org.hibernate.dialect.SQLServerDialect
My drivers in pom.xml:
<dependency>
    <groupId>com.microsoft.sqlserver</groupId>
    <artifactId>mssql-jdbc</artifactId>
</dependency>
<dependency>
    <groupId>net.sourceforge.jtds</groupId>
    <artifactId>jtds</artifactId>
    <version>1.3.1</version>
</dependency>
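The error means the JDK client only has TLS 1.2/1.3 enabled while the server only offers TLS 1.0. The cleaner fix is to enable TLS 1.2 on the SQL Server side; if that is not possible, the usual workaround is re-enabling TLSv1 in the JDK (removing it from jdk.tls.disabledAlgorithms in java.security, which the post already mentions trying) together with pinning the driver's protocol. A hedged sketch of the datasource side, assuming the mssql-jdbc sslProtocol connection property is supported by the driver version in use:
spring:
  datasource:
    # sslProtocol asks the driver for TLS 1.0; it only works if TLSv1 has also
    # been removed from jdk.tls.disabledAlgorithms in the JDK's java.security
    url: jdbc:sqlserver://123.345.7.890:1433;databaseName=mydatabase;sslProtocol=TLSv1
    username: sa
    password: mypassword
    driver-class-name: com.microsoft.sqlserver.jdbc.SQLServerDriver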

Spring Cloud Config - config loaded twice and without profile

I have a problem properly configuring Spring Cloud Config. I have these dependencies:
<spring.cloud.version>2021.0.3</spring.cloud.version>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-bootstrap</artifactId>
</dependency>
bootstrap.yml
spring:
  application.name: myapp
  profiles:
    active: dev
  config:
    import: optional:configserver:http://ip:8888
  cloud:
    config:
      enabled: true
      username: admin
      password: secret
Now when my application starts, it loads the config twice. The first load is done by ConfigServicePropertySourceLocator and comes back without the expected profile. The second is done by ConfigServerConfigDataLoader, and this time it has the proper profile. It seems that the config loaded during the first load takes precedence, and my application does not start.
12:32:54 [ConfigServicePropertySourceLocator:241] - Fetching config from server at : http://ip:8888
12:32:54 [ConfigServicePropertySourceLocator:165] - Located environment: name=myapp, profiles=[default], label=null, version=7d2bc5d68acd8fcca65f34f2074b1860f36e19c6, state=null
12:32:54 [MyApplication:646] - The following 1 profile is active: "dev"
12:32:54 [ConfigServerConfigDataLoader:255] - Fetching config from server at : http://ip:8888
12:32:54 [ConfigServerConfigDataLoader:255] - Located environment: name=myapp, profiles=[dev], label=null, version=7d2bc5d68acd8fcca65f34f2074b1860f36e19c6, state=null
Providing the profile with -Dspring.profiles.active=dev does not help. How do I configure the profile so that it is picked up during the bootstrap phase?
Add a bootstrap.properties file under resources:
spring.application.name=myapp
spring.profiles.active=dev
# ip and port of the config server
spring.cloud.config.uri=http://localhost:8888
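An alternative, since spring-cloud-starter-bootstrap and spring.config.import each trigger their own fetch (which is why the config is loaded twice here), is to drop the bootstrap starter from the pom and use only the config-data approach in application.yml. A sketch of that variant, using the same placeholder server address as above:
# application.yml, with spring-cloud-starter-bootstrap removed from the pom
spring:
  application:
    name: myapp
  profiles:
    active: dev
  config:
    import: optional:configserver:http://ip:8888
  cloud:
    config:
      username: admin
      password: secret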

Problem with port determination using spring cloud

I have created two instances of a microservice named InfyGo_Flights. I have put a YAML file in GitHub with
server.port set to 9004. The properties file in the microservice is empty apart from the application name. The two instances have two different issues:
1. First I overrode server.port in the config itself to 9010, but it still ran on 9004. I then deleted server.port from the YAML in GitHub, but that made it fail with an error, so I put 9004 back. Now it has started running on 9010, and removing the overridden property makes it fail again.
2. The second instance, which I created after the first one started misbehaving, runs on the default 8080 port despite the YAML file in cloud config.
application.properties:
spring.application.name=InfyGo_Flights
bootstrap.properties
spring.cloud.config.uri=http://localhost:1111
management.endpoints.web.exposure.include
InfyGo_Flights.yml
spring:
  application:
    name: InfyGo_Flights
  mvc:
    view:
      prefix: /WEB-INF/pages/
      suffix: .jsp
  datasource:
    username: root
    password:
    url: jdbc:mysql://localhost:3307/mydb?serverTimezone=UTC
  jpa:
    show-sql: true
    hibernate:
      ddl-auto: update
    properties:
      hibernate:
        dialect:
logging:
  file: Errorlog.log
  level:
    root: info
    com.infoys.ars: info
  pattern:
    file: "%d{yyyy-MM-dd HH:mm:ss,SSS} %5p [%t] %c [%M] - %m%n"
server:
  port:9004
You need to put whitespace between port: and 9004 so that YAML recognizes it as a key/value pair:
# wrong
port:9004
# right
port: 9004
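This likely also explains the second instance sitting on 8080: without the space, port:9004 is parsed as a plain scalar rather than a key/value pair, so server.port is never set and Spring Boot falls back to its default of 8080. The corrected tail of InfyGo_Flights.yml would look like this:
server:
  port: 9004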

Configuring application using application.yaml instead of application.properties

I have a small Quarkus 1.1.0.Final web application (using Java 1.8). I'm trying to use a YAML file to configure the application (instead of the usual application.properties), but the application will not come up. I always get these not-so-useful error messages:
13:53:32,494 ERROR [io.qua.dev.DevModeMain] Failed to start Quarkus: java.lang.RuntimeException: io.quarkus.builder.ChainBuildException: No producers for required item class io.quarkus.deployment.builditem.BuildTimeRunTimeFixedConfigurationBuildItem
at io.quarkus.runner.RuntimeRunner.run(RuntimeRunner.java:180)
at io.quarkus.dev.DevModeMain.doStart(DevModeMain.java:177)
at io.quarkus.dev.DevModeMain.start(DevModeMain.java:95)
at io.quarkus.dev.DevModeMain.main(DevModeMain.java:66)
Caused by: io.quarkus.builder.ChainBuildException: No producers for required item class io.quarkus.deployment.builditem.BuildTimeRunTimeFixedConfigurationBuildItem
at io.quarkus.builder.BuildChainBuilder.build(BuildChainBuilder.java:240)
at io.quarkus.deployment.QuarkusAugmentor.run(QuarkusAugmentor.java:112)
at io.quarkus.runner.RuntimeRunner.run(RuntimeRunner.java:113)
... 3 more
13:53:32,519 INFO [io.qua.dev.DevModeMain] Attempting to start hot replacement endpoint to recover from previous Quarkus startup failure
13:53:32,532 ERROR [io.qua.dev.DevModeMain] Failed to start quarkus: java.lang.IllegalArgumentException: workerPoolSize must be > 0
at io.vertx.core.VertxOptions.setWorkerPoolSize(VertxOptions.java:275)
at io.quarkus.vertx.core.runtime.VertxCoreRecorder.convertToVertxOptions(VertxCoreRecorder.java:151)
at io.quarkus.vertx.core.runtime.VertxCoreRecorder.initializeWeb(VertxCoreRecorder.java:104)
at io.quarkus.vertx.http.runtime.VertxHttpRecorder.startServerAfterFailedStart(VertxHttpRecorder.java:115)
at io.quarkus.vertx.http.deployment.devmode.VertxHotReplacementSetup.handleFailedInitialStart(VertxHotReplacementSetup.java:30)
at io.quarkus.dev.RuntimeUpdatesProcessor.startupFailed(RuntimeUpdatesProcessor.java:449)
at io.quarkus.dev.DevModeMain.doStart(DevModeMain.java:191)
at io.quarkus.dev.DevModeMain.start(DevModeMain.java:95)
at io.quarkus.dev.DevModeMain.main(DevModeMain.java:66)
This is my YAML file:
#
# https://quarkus.io/guides/all-config
# https://quarkus.io/guides/config#overriding-properties-at-runtime
quarkus:
  datasource:
    driver: org.postgresql.Driver
  flyway:
    migrate-at-start: true
  health:
    extensions:
      enabled: true
  hibernate-orm:
    dialect: org.hibernate.dialect.PostgreSQL10Dialect
  http:
    port: 8080
  log: # ALL > FINEST > FINER > FINE > CONFIG > INFO > WARNING > SEVERE > OFF
    console:
      async: true
      color: true
      enable: true
      format: "%d{yyyy-MM-dd HH:mm:ss,SSS} |- %-5p in %c:%L{3.} [%t] - %s%e%n"
      level: WARNING
  resteasy:
    path: /api
  smallrye-openapi:
    path: /open-api
  swagger-ui:
    always-include: true
    path: /swagger-ui
"%dev":
  quarkus:
    datasource:
      password: postgres
      url: jdbc:postgresql://localhost:5432/quarkus_web
      username: postgres
    flyway:
      clean-at-start: true
    hibernate-orm:
      log:
        sql: true
        statistics: true
    log:
      category:
        "io.quarkus.arc.processor":
          level: OFF
        "io.quarkus":
          level: INFO
        "org.acme":
          level: CONFIG
"%prod":
  quarkus:
    datasource:
      password: postgres
      url: jdbc:postgresql://localhost:5432/quarkus_web
      username: postgres
    flyway:
      clean-at-start: false
    hibernate-orm:
      database:
        generation: none
      sql-load-script: no-file
"%test":
  quarkus:
    datasource:
      password: postgres
      url: jdbc:postgresql://localhost:5432/quarkus_web
      username: postgres
    flyway:
      clean-at-start: true
    log:
      category:
        "io.quarkus":
          level: WARNING
        "org.acme":
          level: WARNING
Has anyone already used this approach and succeeded? I'm including io.quarkus:quarkus-config-yaml:1.1.0.Final.
One thing I've noticed is that I can't separate the profiles using --- as I do with Spring Boot. I think I should file an issue for this.
It certainly looks like a bug, maybe even two, as the second error message doesn't look like something I would expect given your configuration file.
Could you create a bug in our tracker: https://github.com/quarkusio/quarkus/issues/new?assignees=&labels=bug&template=bug_report.md&title= ?
It would be nice if you could provide a reproducer. AFAICS, Quarkus doesn't start at all so probably your pom.xml with the various extensions you use and your configuration file should be enough to reproduce the issue.
It turns out this was not a Quarkus issue. I just had an old Gradle configuration; some things changed in the Gradle plugin with version 1.1.0.Final and I simply didn't have those changes.
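As a side note on the --- remark above: with quarkus-config-yaml, profiles are expressed as quoted "%profile" keys inside a single application.yaml (exactly as in the file above), not as ----separated documents as in Spring Boot. A minimal sketch:
quarkus:
  http:
    port: 8080        # default profile
"%dev":
  quarkus:
    http:
      port: 8081      # overrides the port only in dev mode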

Eureka registration based on host

I'm working with a simple example of Spring Boot Eureka service registration. I am using spring-boot-starter 1.5.4.RELEASE and spring-cloud-starter-eureka 1.3.1.RELEASE. The Eureka server should register a client instance only if the registration request comes from a white-listed server.
Is there any out-of-the-box feature available in Spring Boot Eureka to achieve this requirement?
Using a username and password for login is the preferred approach.
Add the Maven dependencies:
<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-eureka-server</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-security</artifactId>
    </dependency>
</dependencies>
Add the username, password and other settings to your application.yaml; note that eureka.client.service-url.defaultZone must contain the username and password:
security:
  basic:
    enabled: true
  user:
    name: user
    password: 123456
server:
  port: 8761
eureka:
  instance:
    hostname: localhost
  client:
    register-with-eureka: false
    fetch-registry: false
    service-url:
      defaultZone: http://user:123456@${eureka.instance.hostname}:${server.port}/eureka/
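On the client side, a registering service then embeds the same credentials in its own defaultZone URL. A minimal sketch of a hypothetical client's application.yaml:
spring:
  application:
    name: some-client          # hypothetical client name
eureka:
  client:
    service-url:
      defaultZone: http://user:123456@localhost:8761/eureka/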
