I am having quite a time figuring out where my issue stems from. The application runs fine locally, and it also runs fine when I build the .jar and run that locally.
I have my integration flow set up as follows:
@Bean
IntegrationFlow integrationFlow(final DataSource dataSource) {
    return IntegrationFlows.from(
            Ftp.inboundStreamingAdapter(template())
                    .remoteDirectory("/folder/")
                    .patternFilter("file_name.txt")
                    .filter(new FtpPersistentAcceptOnceFileListFilter(metadataStore(dataSource), "")),
            spec -> spec.poller(Pollers.fixedDelay(5, TimeUnit.SECONDS)))
            .transform(streamToBytes())
            .handle(handler())
            .get();
}
@Bean
FtpRemoteFileTemplate template() {
    return new FtpRemoteFileTemplate(ftpSessionFactory());
}
@Bean
public StreamTransformer streamToBytes() {
    return new StreamTransformer(); // transforms to byte[]
}
@Bean
public ConcurrentMetadataStore metadataStore(final DataSource dataSource) {
    return new JdbcMetadataStore(dataSource);
}
@Bean
public SessionFactory<FTPFile> ftpSessionFactory() {
    DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
    sf.setHost(host);
    sf.setPort(port);
    sf.setUsername(userName);
    sf.setPassword(password);
    return sf;
}
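My handler() bean is omitted here; a minimal, hypothetical stand-in for local testing could be a logging sink (a sketch, not the actual handler):

import org.springframework.messaging.MessageHandler;

@Bean
public MessageHandler handler() {
    // Hypothetical sink: log the size of the transformed byte[] payload
    return message -> System.out.println("Received " + ((byte[]) message.getPayload()).length + " bytes");
}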
I have my datasource and FTP information set in my application.yml.
When I run this locally, I have no problems. When I run gradle build and then run the resulting .jar with several different OpenJDK versions (8u181, 8u191, 11.0.4), I still have no issues.
The problem only arises when I run the .jar inside a Docker container.
My Dockerfile:
FROM openjdk:8u212-jdk-alpine
WORKDIR /app
COPY build/libs/app-1.0.jar .
RUN apk add --update ttf-dejavu && rm -rf /var/cache/apk/*
ENTRYPOINT ["java", "-jar", "app-1.0.jar"]
I turned DEBUG logging on and watched the output.
Running locally and running the built .jar, I can see that the poller is working and that it triggers the SQL queries against the metadataStore table that was created in my remote database (PostgreSQL).
Running in the Docker container, I do not see the SQL queries being run, which tells me the issue lies somewhere therein.
With DEBUG on, the startup logs in the console show the same INFOs and WARNs regardless of whether I run locally, run the built .jar, or run in the Docker container.
There is this INFO message that might be of some assistance:
Bean with key 'metadataStore' has been registered as an MBean but has no exposed attributes or operations
I checked for a hidden SessionFactory connection issue by pointing the factory at an invalid host, and I do indeed get exceptions in the Docker container for the invalid host. So I can confidently say that the FTP connection is valid and uses the correct host and port.
I am thinking it has to do with either the poller or my datasource.
Inside this application I am also running Spring Data REST using JDBC and JPA; could there be any issue with sharing the datasource bean across the different libraries?
Any help or guidance would be greatly appreciated.
The default client mode for the DefaultFtpSessionFactory is ACTIVE, but in my case, inside a Docker container, the client mode must be set to PASSIVE.
To do this, I needed to add one line of code to the DefaultFtpSessionFactory: set the client mode to 2, i.e. sf.setClientMode(2);.
Below is the final DefaultFtpSessionFactory bean.
@Bean
public SessionFactory<FTPFile> ftpSessionFactory() {
    DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
    sf.setHost(host);
    sf.setPort(port);
    sf.setUsername(userName);
    sf.setPassword(password);
    sf.setClientMode(2); // 2 = passive local data connection mode
    return sf;
}
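For readability, the magic number can be replaced with the Apache Commons Net constant behind it (a sketch; the effect is identical):

import org.apache.commons.net.ftp.FTPClient;

// FTPClient.PASSIVE_LOCAL_DATA_CONNECTION_MODE has the value 2
sf.setClientMode(FTPClient.PASSIVE_LOCAL_DATA_CONNECTION_MODE);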
I basically followed the steps described here: https://docs.spring.io/spring-data/neo4j/docs/current/reference/html/#configure-spring-boot-project
My application.properties contains the following:
spring.neo4j.uri=neo4j://localhost:7687
spring.neo4j.authentication.username=neo4j
spring.neo4j.authentication.password=verySecret357
I have a Neo4jConfiguration bean which only specifies the TransactionManager; the rest is (supposedly) taken care of by spring-boot-starter-data-neo4j:
@Configuration
public class Neo4jConfiguration {

    @Bean
    public ReactiveNeo4jTransactionManager reactiveTransactionManager(Driver driver,
            ReactiveDatabaseSelectionProvider databaseNameProvider) {
        return new ReactiveNeo4jTransactionManager(driver, databaseNameProvider);
    }
}
Neo4j (5.3.0) runs in a Docker container I started with
docker run -d --name neo4j -p 7474:7474 -p 7687:7687 -e 'NEO4J_AUTH=neo4j/verySecret357' neo4j:4.4.11-community
I can access it through HTTP on my localhost:7474 and can authenticate using the credentials above.
Now, when I run my Spring Boot app and try to create nodes in Neo4j, I keep getting the same exception:
org.neo4j.driver.exceptions.AuthenticationException: The client is unauthorized due to authentication failure.
Running in debug, however, the client authentication scheme does seem to be set correctly.
Any thoughts on what I might be doing wrong?
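One way to rule Spring out is to verify the credentials directly with the bare Java driver (a minimal sketch, using the same URI and credentials as in application.properties):

import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;

// Throws AuthenticationException on bad credentials, returns silently on success
try (Driver driver = GraphDatabase.driver("neo4j://localhost:7687",
        AuthTokens.basic("neo4j", "verySecret357"))) {
    driver.verifyConnectivity();
}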
Edit: one thing, though: I would assume the authToken would contain a Base64-encoded String (username:password), since the scheme is basic auth. That does not appear to be the case (using neo4j-java-driver:5.2.0).
Edit: seems to be related to the Docker image. A standalone neo4j instance works fine.
I have an application.properties file in Spring Boot v2.6.1 where I use the multi-document file notation, as below:
spring.profiles.active=@spring.profiles.active@
#---
spring.config.activate.on-profile=prod
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.Oracle12cDialect
#---
spring.config.activate.on-profile=dev
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL8Dialect
This works perfectly fine (i.e., the dialect is picked accordingly) when I run the application on the embedded server or from the IDE, passing spring.profiles.active as prod/dev in the VM arguments.
The same thing doesn't work when I deploy as a WAR in Tomcat and pass the profile in setenv.sh as
export CATALINA_OPTS="$CATALINA_OPTS -Dspring.profiles.active=prod"
It always picks "org.hibernate.dialect.MySQL8Dialect" instead of "org.hibernate.dialect.Oracle12cDialect".
Any help?
After a day of brainstorming, I finally found the solution for picking the respective dialect when using multiple datasources based on the profile.
In my case, the primary datasource is Oracle with UCP and the secondary is MySQL, for the prod and dev profiles respectively.
As per the question, the multi-document file notation in application.properties works fine in the IDE or embedded Tomcat, but not in external Tomcat when deployed as a WAR.
The solution below works for both (embedded and external Tomcat).
In the MySQL configuration class, I set custom JPA properties as a tweak.
@Configuration
@Profile(someConstants.ENV_DEV)
public class MySqlConfiguration {

    private static final Logger logger = LogManager.getLogger(MySqlConfiguration.class);

    @Bean(name = "mySQL")
    @Profile(someConstants.ENV_DEV)
    @ConfigurationProperties(prefix = "spring.mysql.datasource")
    public DataSource dataSource() {
        final String METHOD_NAME = ":: DataSource ::";
        logger.info(METHOD_NAME + "Initialising the MySQL Connection");
        return DataSourceBuilder.create().build();
    }

    @Bean
    @ConfigurationProperties(prefix = "spring.mysql.jpa")
    public JpaProperties jpaProperties() {
        return new JpaProperties();
    }
}
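For the custom JpaProperties to take effect, they typically have to be fed into the MySQL EntityManagerFactory; a hedged sketch (the bean name and entity package below are illustrative, not from my actual setup):

@Bean(name = "mySqlEntityManagerFactory")
public LocalContainerEntityManagerFactoryBean mySqlEntityManagerFactory(
        EntityManagerFactoryBuilder builder,
        @Qualifier("mySQL") DataSource dataSource,
        JpaProperties jpaProperties) {
    return builder
            .dataSource(dataSource)
            .packages("com.example.mysql.domain") // hypothetical entity package
            .properties(jpaProperties.getProperties()) // applies spring.mysql.jpa.*, including the dialect
            .build();
}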
In application.properties:
# this one is for MySQL
spring.mysql.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL8Dialect
# this one is for Oracle
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.Oracle12cDialect
Moreover, I have kept the same setting in the setenv.sh file for external Tomcat:
# for prod
export CATALINA_OPTS="$CATALINA_OPTS -Dspring.profiles.active=prod"
# for dev
#export CATALINA_OPTS="$CATALINA_OPTS -Dspring.profiles.active=dev"
I analysed the logs, and now each datasource picks up its respective properties and dialect based on the profile, perfectly fine.
Happy Coding..
I am facing an issue while using DynamoDB with Spring Boot for storing data.
It gives me the following error:
com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: Cannot do operations on a non-existent table (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: 7ffd4509-e444-4569-8c81-d4e7a1c218ef)
I started a local instance of DynamoDB on a Windows machine using the following command:
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -port 8001 -sharedDb
I created a DynamoDBMapper for interacting with the DB:
@Bean
public DynamoDBMapper mapper() {
    return new DynamoDBMapper(amazonDynamoDBConfig());
}

public AmazonDynamoDB amazonDynamoDBConfig() {
    return AmazonDynamoDBClientBuilder.standard()
            .withEndpointConfiguration(
                    new AwsClientBuilder.EndpointConfiguration(awsDynamoDBEndPoint, awsRegion))
            .withCredentials(new AWSStaticCredentialsProvider(
                    new BasicAWSCredentials(awsAccessKey, awsSecretKey)))
            .build();
}
And I call the mapper using @Autowired:
@Autowired
private DynamoDBMapper mapper;
When I try to add data using
mapper.save(person);
it gives an error saying Cannot do operations on a non-existent table
Please give me some idea where I am missing the trick here.
Thanks in advance.
The root cause might be that the aws-cli and the application are using different AWS profiles (credentials and region). DynamoDB Local will then create and use different DB files when the aws-cli and the application connect to it.
Please use the approach below to debug:
You must use -sharedDb when starting your Docker instance:
docker run -p 8000:8000 -v $(pwd)/local/dynamodb:/data/ amazon/dynamodb-local -jar DynamoDBLocal.jar -sharedDb -dbPath /data
Check the AWS profile you have created (aws_access_key_id and aws_secret_access_key), and use the same values in your application to connect to the Docker DynamoDB instance (see the verification sketch after this list).
In your Person.java (model class), check the table name; table names are case-sensitive in DynamoDB:
@DynamoDBTable(tableName = "Person")
or
@DynamoDBTable(tableName = "person")
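A quick way to verify which database file the application actually sees is to list the tables through the same client configuration (a sketch; the endpoint, region, and dummy credentials are assumptions):

AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
        .withEndpointConfiguration(
                new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-east-1")) // assumed local endpoint/region
        .withCredentials(new AWSStaticCredentialsProvider(
                new BasicAWSCredentials("dummy", "dummy"))) // assumed dummy keys
        .build();
// If "Person" does not show up here, the app and the aws-cli are hitting different DB files
client.listTables().getTableNames().forEach(System.out::println);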
I'm trying to create a Spring application without referring to any external files. It is supposed to be a module that you'd include as a dependency, configure, and use to plug the service into an existing ecosystem. This is how I'm doing that:
Map<String, Object> properties = new HashMap<>();
properties.put("server.address", "0.0.0.0");
properties.put("server.port", 8080);
properties.put("spring.profiles.active", "cloud");
properties.put("spring.application.name", "someApp");
properties.put("spring.cloud.config.failFast", true);
properties.put("spring.cloud.config.discovery.enabled", true);
properties.put("spring.cloud.config.discovery.serviceId", "config");
properties.put("eureka.instance.preferIpAddress", true);
properties.put("eureka.instance.statusPageUrlPath", "/health");

new SpringApplicationBuilder()
        .bannerMode(Banner.Mode.OFF)
        .properties(properties)
        .sources(SpringConfiguration.class)
        .web(false)
        .registerShutdownHook(true)
        .build();
I then go on to provide the Eureka default zone in the run command, via environment variables:
--env eureka_client_serviceUrl_defaultZone='http://some-host:8765/eureka/' --env SPRING_CLOUD_CONFIG_LABEL='dev' --env SPRING_CLOUD_INETUTILS_PREFERRED_NETWORKS='10.0'
The application registers successfully in Eureka, but unfortunately it tries to fetch the config prior to that, looking for it under the default URL (http://localhost:8888) instead of fetching the config server address from the registry. And yes, it does work if I put all of those properties in the bootstrap.yml file. Can I somehow make it work without using file resources?
You are passing the properties via SpringApplicationBuilder, which builds the SpringApplication and ApplicationContext instances.
Per the documentation, the properties provided here become part of the ApplicationContext, NOT the Bootstrap Context; the ApplicationContext is a child of the Bootstrap Context.
You can read more about the Bootstrap Context here -
http://cloud.spring.io/spring-cloud-commons/1.3.x/single/spring-cloud-commons.html#_the_bootstrap_application_context
bootstrap.yml/properties is used to configure the Bootstrap Context.
You can use these properties to change the name or location of the file:
spring.cloud.bootstrap.name (defaults to bootstrap)
spring.cloud.bootstrap.location
You will have to use a file resource (yml or properties).
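For example, a minimal bootstrap.properties carrying the discovery-first settings from your code could look like this (values copied from the question; a sketch):

spring.application.name=someApp
spring.cloud.config.failFast=true
spring.cloud.config.discovery.enabled=true
spring.cloud.config.discovery.serviceId=config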
I've created a personal Git repository where I keep my application.properties file.
I've created a Cloud Config server ('my-config-server') that uses the Git repository URL.
I have bound the Spring Boot application that is supposed to access the external properties file to that Git repository.
@javax.jws.WebService(
        serviceName = "myService",
        portName = "my_service",
        targetNamespace = "urn://vdc.com/xmlmessaging/SD",
        wsdlLocation = "classpath:myService.wsdl",
        endpointInterface = "com.my.service.SDType")
@PropertySource("application.properties")
@ConfigurationProperties
public class SDTypeImpl implements SDType {

    /* It has various service implementations that use the following method */
    private SDObj getObj(BigDecimal value) {
        // Note: this creates (and closes) a new Spring context on every call
        AnnotationConfigApplicationContext context =
                new AnnotationConfigApplicationContext(SDTypeImpl.class);
        SDObj obj = context.getBean(SDPropertiesUtil.class).getObj(value);
        context.close();
        return obj;
    }
}
Another class:
public class SDPropertiesUtil {

    @Autowired
    public Environment env;

    public SDObj getObj(BigDecimal value) {
        String valueStr = env.getProperty(value.toString());
        /* do logic */
    }
}
My application starts but fails to load the properties file from my Git repository.
I believe I should have an application.properties at src/main/resources in my application, but since I'm using
@PropertySource("application.properties")
@ConfigurationProperties
I assumed I was telling my application to use the application.properties from the external location and not the internal properties file. But this is not happening: my application is still using the internal properties file.
The source you included doesn't show your app's configuration settings for connecting to the Config server. Do you mind sharing them?
This is how the config server could be queried from a client app:
/{application}/{profile}[/{label}]
/{application}-{profile}.yml
/{label}/{application}-{profile}.yml
/{application}-{profile}.properties
/{label}/{application}-{profile}.properties
Let's say a Config server points to a Git repo which includes this file: demo-config-client-development.properties
You should be able to query the Config Server as:
curl http://localhost:8101/demo-config-client-development.properties
Assuming the Config server is running locally and listening on port 8101.
Let's also say you have a client app named demo-config-client that connects to the Config server and runs with the development Spring profile; this app would now be able to read remote properties hosted in a Git repo through the Config server.
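On the client side, a few bootstrap properties are typically enough to match that example (a sketch using the names above; spring.cloud.config.uri points at the Config server):

spring.application.name=demo-config-client
spring.profiles.active=development
spring.cloud.config.uri=http://localhost:8101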
A detailed tutorial can be found on my blog: http://tech.asimio.net/2016/12/09/Centralized-and-Versioned-Configuration-using-Spring-Cloud-Config-Server-and-Git.html