I'm facing an issue using DynamoDB with Spring Boot for storing data.
It gives me the following error:
com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: Cannot do operations on a non-existent table (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: 7ffd4509-e444-4569-8c81-d4e7a1c218ef)
I have started a local instance of DynamoDB using the following command on a Windows machine:
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -port 8001 -sharedDb
I created a DynamoDBMapper for interacting with the DB:
@Bean
public DynamoDBMapper mapper() {
    return new DynamoDBMapper(amazonDynamoDBConfig());
}

public AmazonDynamoDB amazonDynamoDBConfig() {
    return AmazonDynamoDBClientBuilder.standard()
            .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(awsDynamoDBEndPoint, awsRegion))
            .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(awsAccessKey, awsSecretKey)))
            .build();
}
And injected the mapper using @Autowired:

@Autowired
private DynamoDBMapper mapper;
When I try to add data using

mapper.save(person);

it gives an error saying Cannot do operations on a non-existent table.
Please give me some idea of where I am missing the trick here.
Thanks in advance.
The root cause might be that aws-cli and the application are using different AWS profiles (credentials and region). DynamoDB Local creates a separate database file per access key and region, so aws-cli and the application can end up reading different db files when they connect.
Please use the approach below to debug.
You must start your Docker instance with -sharedDb:
docker run -p 8000:8000 -v $(pwd)/local/dynamodb:/data/ amazon/dynamodb-local -jar DynamoDBLocal.jar -sharedDb -dbPath /data
Please check the AWS profile you have created (aws_access_key_id & aws_secret_access_key). Use these same values in your application to connect to the Docker DynamoDB instance.
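To confirm which tables each side actually sees, you can list tables through the local endpoint with the same profile the application uses (the endpoint below assumes the docker run port mapping above; match it to your instance):

aws dynamodb list-tables --endpoint-url http://localhost:8000

If the table your application expects is missing from this output, the two sides are using different db files.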
In your Person.java (model class), check the table name. Table names are case-sensitive in DynamoDB:
#DynamoDBTable(tableName = "Person")
Or
#DynamoDBTable(tableName = "person")
I basically followed the steps described here: https://docs.spring.io/spring-data/neo4j/docs/current/reference/html/#configure-spring-boot-project
My application.properties contains the following:
spring.neo4j.uri=neo4j://localhost:7687
spring.neo4j.authentication.username=neo4j
spring.neo4j.authentication.password=verySecret357
I have a Neo4jConfiguration bean which only specifies the TransactionManager; the rest is (supposedly) taken care of by spring-boot-starter-data-neo4j:
@Configuration
public class Neo4jConfiguration {

    @Bean
    public ReactiveNeo4jTransactionManager reactiveTransactionManager(Driver driver,
            ReactiveDatabaseSelectionProvider databaseNameProvider) {
        return new ReactiveNeo4jTransactionManager(driver, databaseNameProvider);
    }
}
Neo4j (5.3.0) runs in a Docker container I started with
docker run -d --name neo4j -p 7474:7474 -p 7687:7687 -e 'NEO4J_AUTH=neo4j/verySecret357' neo4j:4.4.11-community
I can access it through HTTP on my localhost:7474 and can authenticate using the credentials above.
Now, when I run my Spring Boot app and try to create nodes in Neo4j, I keep getting the same exception:
org.neo4j.driver.exceptions.AuthenticationException: The client is unauthorized due to authentication failure.
Running in debug, however, it seems the client authentication scheme is correctly set.
Any thoughts on what I might be doing wrong?
Edit: one thing, though: I would assume that the authToken would contain a Base64-encoded String (username:password), as the scheme is basic auth. It looks like that's not the case (using neo4j-java-driver:5.2.0).
Edit: seems to be related to the Docker image. A standalone neo4j instance works fine.
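As a debugging aid, Spring can be taken out of the picture by authenticating with the bare Java driver; a minimal sketch using the URI and credentials from above. (Note that Bolt sends the principal and credentials as plain token fields rather than a Base64 Authorization header, so the observation in the first edit is expected behavior.)

// org.neo4j.driver.{GraphDatabase, AuthTokens, Driver} from neo4j-java-driver
Driver driver = GraphDatabase.driver("neo4j://localhost:7687",
        AuthTokens.basic("neo4j", "verySecret357"));
driver.verifyConnectivity(); // throws AuthenticationException on bad credentials
driver.close();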
I am having quite a time figuring out where my issue is stemming from. I can run locally, and I have built my .jar and run that locally too.
I have my integration flow set up as follows:
@Bean
IntegrationFlow integrationFlow(final DataSource dataSource) {
    return IntegrationFlows.from(
            Ftp.inboundStreamingAdapter(template())
                    .remoteDirectory("/folder/")
                    .patternFilter("file_name.txt")
                    .filter(new FtpPersistentAcceptOnceFileListFilter(metadataStore(dataSource), "")),
            spec -> spec.poller(Pollers.fixedDelay(5, TimeUnit.SECONDS)))
            .transform(streamToBytes())
            .handle(handler())
            .get();
}
@Bean
FtpRemoteFileTemplate template() {
    return new FtpRemoteFileTemplate(ftpSessionFactory());
}

@Bean
public StreamTransformer streamToBytes() {
    return new StreamTransformer(); // transforms to byte[]
}

@Bean
public ConcurrentMetadataStore metadataStore(final DataSource dataSource) {
    return new JdbcMetadataStore(dataSource);
}

@Bean
public SessionFactory<FTPFile> ftpSessionFactory() {
    DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
    sf.setHost(host);
    sf.setPort(port);
    sf.setUsername(userName);
    sf.setPassword(password);
    return sf;
}
I have my datasource and my FTP information set in my application.yml.
When I run this locally, I have no problems. When I run gradle build and run my .jar with several different OpenJDK versions (8u181, 8u191, 11.04), I have no issues.
When I run inside of a Docker container using my .jar file, the problem arises.
My Dockerfile:
FROM openjdk:8u212-jdk-alpine
WORKDIR /app
COPY build/libs/app-1.0.jar .
RUN apk add --update ttf-dejavu && rm -rf /var/cache/apk/*
ENTRYPOINT ["java", "-jar", "app-1.0.jar"]
I turned DEBUG on and watched the output.
Running locally and running the built .jar, I can see that the poller is working and that it triggers the SQL queries to the metadataStore table that has been created in my remote db (PostgreSQL).
Running in the Docker container, I do not see the SQL queries being run, which tells me that somewhere therein lies the issue.
With debug on, my startup logs in the console show the same INFOs and WARNs regardless of running locally, running the built .jar, or running in the Docker container.
There is this info message that might be of some assistance:
Bean with key 'metadataStore' has been registered as an MBean but has no exposed attributes or operations
I checked for a hidden SessionFactory connection issue by trying to connect to an invalid host, and I do indeed get exceptions in my Docker container for the invalid host. So I can confidently say that the FTP connection is valid and running with the correct host and port.
I am thinking it has to do with either the poller or my datasource.
Inside of this application I am also running Spring Data REST using JDBC and JPA; could there be any issue with the usage of the datasource bean across the different libraries?
Any help or guidance would be greatly appreciated.
So, the default client mode for the DefaultFtpSessionFactory is "ACTIVE", but in my case, inside of a Docker container, the client mode must be set to "PASSIVE".
To do this, I needed to add one line of code to the DefaultFtpSessionFactory: you must set the client mode to 2, i.e. sf.setClientMode(2);
Below is the final DefaultFtpSessionFactory bean.
@Bean
public SessionFactory<FTPFile> ftpSessionFactory() {
    DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
    sf.setHost(host);
    sf.setPort(port);
    sf.setUsername(userName);
    sf.setPassword(password);
    sf.setClientMode(2);
    return sf;
}
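For readability, the magic number 2 can be replaced with the Apache Commons Net constant it corresponds to (commons-net comes in as a transitive dependency of spring-integration-ftp, so the import should already resolve):

import org.apache.commons.net.ftp.FTPClient;
...
sf.setClientMode(FTPClient.PASSIVE_LOCAL_DATA_CONNECTION_MODE); // equivalent to sf.setClientMode(2)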
I have tried a few ways to get SonarQube running in our AWS environment, all successfully. However, SonarQube is unstable: whenever Elastic Beanstalk recycles an instance, my SonarQube environment is wiped out.
Here is what I tried:
Attempt 1: EC2 instance. I created the EC2 instance off of a Bitnami AMI, imageId: ami-0f9cf81913a6dce27.
This seemed like a pretty simple process, but I prefer an Elastic Beanstalk environment to manage our SonarQube EC2 instances.
Attempt 2: Create an EB environment using a single Docker instance, with this Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "sonarqube:7.1"
  },
  "Ports": [{
    "ContainerPort": "9000"
  }]
}
This created the EB environment. It creates an RDS instance (with MySQL 5.x) to store the scan data (in a database called ebdb). The SonarQube server hosts an internal Elasticsearch instance locally for its search data.
I then have to add a few environment variables to support the RDS instance (JDBC username, password, URL endpoint, etc.).
I then have to configure the SonarQube security side.
No marketplace features are installed, so I add SonarJava, Groovy, and SonarJS.
I add a login user for scans. All good.
Except, occasionally Elastic Beanstalk will have a health issue, drop the current instance, and re-create a new instance.
In this case, everything is still intact (security: users, passwords, etc.), except the marketplace features are gone. So code scans will fail until I manually add them back.
The schema for the single-instance Docker container is pretty sparse; I did not see any way to customize further with the Dockerrun file.
Attempt 3: Use a multi-instance Docker container. The schema is more robust; perhaps I can configure SonarQube more explicitly, e.g. you can pass environment variables, MySQL settings, etc.
I was unable to get this to work. I did learn I needed to set the memory above 2 GB for Elasticsearch to start up, but I was unable to get the SonarQube environment to come up.
I might revisit this later.
Attempt 4: Use an AMI in Elastic Beanstalk (with the Terraform AWS provider).
main.tf
resource "aws_elastic_beanstalk_application" "sonarqube" {
name = "SonarQube"
description = "SonarQube for nano-services"
}
resource "aws_elastic_beanstalk_environment" "nonprod" {
name = "${var.application-name}"
application = "${aws_elastic_beanstalk_application.sonarqube.name}"
solution_stack_name = "64bit Amazon Linux 2018.03 v2.10.0 running Docker 17.12.1-ce"
wait_for_ready_timeout = "30m"
setting {
namespace = "aws:autoscaling:updatepolicy:rollingupdate"
name = "Timeout"
value = "PT1H"
}
setting {
namespace = "aws:elasticbeanstalk:environment"
name = "ServiceRole"
value = "aws-elasticbeanstalk-service-role"
}
setting {
namespace = "aws:elasticbeanstalk:command"
name = "DeploymentPolicy"
value = "Rolling"
}
setting {
namespace = "aws:elasticbeanstalk:command"
name = "BatchSizeType"
value = "Fixed"
}
setting {
namespace = "aws:elasticbeanstalk:command"
name = "BatchSize"
value = "1"
}
setting {
namespace = "aws:elasticbeanstalk:command"
name = "IgnoreHealthCheck"
value = "true"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "EC2KeyName"
value = "web-aws-key"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "IamInstanceProfile"
value = "arn:aws:iam::<redacted>:instance-profile/aws-elasticbeanstalk-ec2-role"
}
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "InstanceType"
    value     = "t2.xlarge"
  }
  setting {
    namespace = "aws:elb:listener:443"
    name      = "ListenerProtocol"
    value     = "SSL"
  }

  setting {
    namespace = "aws:elb:listener:443"
    name      = "InstanceProtocol"
    value     = "SSL"
  }

  setting {
    namespace = "aws:elb:listener:443"
    name      = "SSLCertificateId"
    value     = "arn:aws:acm:<redacted>"
  }

  setting {
    namespace = "aws:elb:listener:443"
    name      = "ListenerEnabled"
    value     = "true"
  }
}
Initially I included the SonarQube AMI:
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "ImageId"
    value     = "ami-0f9cf81913a6dce27"
  }
This does create everything. However, the EC2 instances respond too slowly, and EB goes to Grey status. Even though SonarQube is up and running, EB is unaware of it. So I commented this out and manually modified the image id as a one-off.
wait_for_ready_timeout does assist with this, as it simply keeps Terraform from timing out; e.g. it finishes in 22.5 minutes instead of a hard stop at 20 minutes.
In this case, it creates SonarQube with a local MySQL database (no RDS instance), with Elasticsearch being local as well.
SonarQube's marketplace features are also included, except for Groovy, which I added.
However, same issue as before: when EB drops an instance and re-creates it, the SonarQube environment is wiped out. This time, the credentials, marketplace features, and everything.
Has anyone run into this problem and figured it out?
I resolved the issue by using ECS (Fargate) instead of the Elastic Beanstalk container.
Steps:
Create an RDS MySQL instance in AWS for Sonar.
Open a MySQL shell for this instance and configure it for Sonar; see: Sonar setup with MySQL.
Create a Dockerfile with the plugins you care about, e.g.:
FROM sonarqube:latest

ENV SONARQUBE_JDBC_USERNAME=[YOUR-USERNAME] \
    SONARQUBE_JDBC_PASSWORD=[YOUR-PASSWORD] \
    SONARQUBE_JDBC_URL=jdbc:mysql://[YOUR-RDS-ENDPOINT]:3306/sonar?useSSL=false&useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance

RUN wget "https://sonarsource.bintray.com/Distribution/sonar-java-plugin/sonar-java-plugin-5.7.0.15470.jar" \
    && wget "https://sonarsource.bintray.com/Distribution/sonar-javascript-plugin/sonar-javascript-plugin-4.2.1.6529.jar" \
    && wget "https://sonarsource.bintray.com/Distribution/sonar-groovy-plugin/sonar-groovy-plugin-1.4.jar" \
    && mv *.jar $SONARQUBE_HOME/extensions/plugins \
    && ls -lah $SONARQUBE_HOME/extensions/plugins

EXPOSE 9000
EXPOSE 9092
I exposed 9092 in case I wanted to comment out the MySQL connection and test locally with the internal H2 database at some point.
Verify the docker image runs locally:
eval $(docker-machine env)
docker build -t sonar .
docker run -it -d --rm --name sonar -p 9000:9000 -p 9092:9092 sonar:latest
echo $DOCKER_HOST
Open a browser to this IP address, port 9000, e.g. http://192.x.x.x:9000.
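Baking the credentials into the image works, but the same values can also be supplied at run time so the image stays generic; a minimal sketch of the equivalent docker run (placeholders as in the Dockerfile above):

docker run -d --name sonar -p 9000:9000 -p 9092:9092 \
  -e SONARQUBE_JDBC_USERNAME=[YOUR-USERNAME] \
  -e SONARQUBE_JDBC_PASSWORD=[YOUR-PASSWORD] \
  -e "SONARQUBE_JDBC_URL=jdbc:mysql://[YOUR-RDS-ENDPOINT]:3306/sonar?useSSL=false&useUnicode=true&characterEncoding=utf8" \
  sonar:latest

This mirrors what the Fargate task definition does later with its environment variables.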
Create a new ECR repository called sonar to store the docker image.
The AWS interface actually tells you how to publish your docker image, so this should be self-evident.
Tag and push the docker image to the sonar repository:
$(aws ecr get-login --no-include-email --region [YOUR-AWS-REGION])
docker tag sonar:latest [YOUR-ECS-DOCKER-IMAGE-URI]/sonar:latest
docker push [YOUR-ECS-DOCKER-IMAGE-URI]/sonar:latest
Create a new Fargate cluster called sonar.
Create a new task definition.
For your container, use the ECR docker image URI. I gave mine 6 GB memory and 2 CPUs, with 1024 CPU units. Here I exposed ports 9000 and 9092. I added the environment vars from the Dockerfile here as well.
Create an ECS service and include the task. Run it, verify the logs in CloudWatch, hit the public endpoint on port 9000, and done.
I largely borrowed from this: https://www.infralovers.com/en/articles/2018/05/04/sonarqube-on-aws-fargate/
I hope this helps others.
I've created a personal Git repository where I keep my application.properties file.
I've created a cloud config server ('my-config-server') and used the Git repository URL.
I have bound my Spring Boot application, which is supposed to access the external properties file, to the Git repository.
@javax.jws.WebService(
        serviceName = "myService",
        portName = "my_service",
        targetNamespace = "urn://vdc.com/xmlmessaging/SD",
        wsdlLocation = "classpath:myService.wsdl",
        endpointInterface = "com.my.service.SDType")
@PropertySource("application.properties")
@ConfigurationProperties
public class SDTypeImpl implements SDType {

    /* It has various service implementations that use the following method */
    private SDObj getObj(BigDecimal value) {
        AnnotationConfigApplicationContext context =
                new AnnotationConfigApplicationContext(SDTypeImpl.class);
        SDObj obj = context.getBean(SDPropertiesUtil.class).getObj(value);
        context.close();
        return obj;
    }
}
Another class:

public class SDPropertiesUtil {

    @Autowired
    public Environment env;

    public SDObj getObj(BigDecimal value) {
        String valueStr = env.getProperty(value.toString());
        /* do logic */
    }
}
My application starts but fails to load the properties file from my Git repository.
I believe I should have an application.properties at src/main/resources in my application, but since I'm using

@PropertySource("application.properties")
@ConfigurationProperties

I'm telling my application to use the application.properties from an external location rather than the internal properties file. But this is not happening: my application is still using the internal properties file.
The source you included doesn't show your app's configuration settings for connecting to the Config server. Do you mind sharing them?
This is how the Config server can be queried from a client app:
/{application}/{profile}[/{label}]
/{application}-{profile}.yml
/{label}/{application}-{profile}.yml
/{application}-{profile}.properties
/{label}/{application}-{profile}.properties
Let's say a Config server points to a Git repo which includes this file: demo-config-client-development.properties
You should be able to query the Config server as:

curl http://localhost:8101/demo-config-client-development.properties

assuming the Config server is running locally and listening on port 8101.
Let's also say you have a client app named demo-config-client that connects to the Config server and runs using the development Spring profile; this app would then be able to read remote properties hosted in a Git repo through the Config server.
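For that to work, the client app needs to know where the Config server lives early in startup, typically via bootstrap.properties; a minimal sketch (the URI is an assumption matching the example above):

spring.application.name=demo-config-client
spring.profiles.active=development
spring.cloud.config.uri=http://localhost:8101

With this in place, @PropertySource("application.properties") is unnecessary: the remote properties are added to the Environment automatically by the Config client.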
A detailed tutorial can be found on my blog: http://tech.asimio.net/2016/12/09/Centralized-and-Versioned-Configuration-using-Spring-Cloud-Config-Server-and-Git.html
I'm trying to pass command line arguments to my Spring Boot YARN application and am having difficulties. I understand that I can set these in the yml document via spring.yarn.appmaster.launchcontext.arguments, but how can I set them from the command line, like java -jar MyYarnApp.jar {arg0} {arg1}, and get access to them from my @YarnContainer?
I've discovered that @YarnProperties maps to spring.yarn.appmaster.launchcontext.arguments, but I want to set them from the command line, not in the yml.
You are pretty close on this when you found spring.yarn.client.launchcontext.arguments and spring.yarn.appmaster.launchcontext.arguments. We don't have settings which would automatically pass all command-line arguments from a client into an appmaster, which would then pass them into a container launch context. I'm not sure we even want to do that, because you surely want to be in control of what happens with the YARN container launch context; a user using a client could otherwise pass rogue arguments along the food chain.
Having said that, let's see what we can do with our Simple Single Project YARN Application Guide.
We still need to use those launch context arguments to define our command line parameters, basically to map how things are passed from the client into the appmaster and into the container.
What I added in application.yml:
spring:
  yarn:
    client:
      launchcontext:
        arguments:
          --my.appmaster.arg1: ${my.client.arg1:notset1}
    appmaster:
      launchcontext:
        arguments:
          --my.container.arg1: ${my.appmaster.arg1:notset2}
Modified HelloPojo in the Application class:
@YarnComponent
@Profile("container")
public static class HelloPojo {

    private static final Log log = LogFactory.getLog(HelloPojo.class);

    @Autowired
    private Configuration configuration;

    @Value("${my.container.arg1}")
    private String arg1;

    @OnContainerStart
    public void onStart() throws Exception {
        log.info("Hello from HelloPojo");
        log.info("Container arg1 value is " + arg1);
        log.info("About to list from hdfs root content");
        FsShell shell = new FsShell(configuration);
        for (FileStatus s : shell.ls(false, "/")) {
            log.info(s);
        }
        shell.close();
    }
}
Notice how I added arg1 and used @Value to map it to my.container.arg1. We can use either @ConfigurationProperties or @Value, which are normal Spring and Spring Boot functionality; there's more in Boot's reference docs on how to use them.
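For completeness, the same mapping done with @ConfigurationProperties might look like the sketch below (not from the guide; the class name is made up, and it would need to be registered, e.g. via @EnableConfigurationProperties(ContainerProperties.class)):

@ConfigurationProperties(prefix = "my.container")
public static class ContainerProperties {

    private String arg1; // bound from --my.container.arg1

    public String getArg1() {
        return arg1;
    }

    public void setArg1(String arg1) {
        this.arg1 = arg1;
    }
}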
You could then modify the AppIT unit test:
ApplicationInfo info = submitApplicationAndWait(Application.class, new String[]{"--my.client.arg1=arg1value"});
and run the build with tests:

./gradlew clean build

or just build it without running tests:

./gradlew clean build -x test

and then submit it into a real Hadoop cluster with your my.client.arg1:
java -jar build/libs/gs-yarn-basic-single-0.1.0.jar --my.client.arg1=arg1value
Either way, you see arg1value logged in the container logs:
[2014-07-18 08:49:09.802] boot - 2003 INFO [main] --- ContainerLauncherRunner: Running YarnContainer with parameters [--spring.profiles.active=container,--my.container.arg1=arg1value]
[2014-07-18 08:49:09.806] boot - 2003 INFO [main] --- Application$HelloPojo: Container arg1 value is arg1value
Using the format ${my.client.arg1:notset1} also allows you to automatically define a default value, notset1, used if my.client.arg1 is omitted by the user. We're working with a Spring Application Context here, orchestrated by Spring Boot, so all the goodies from there are at your disposal.
If you need more precise control of those user-facing arguments (using args4j, jopt, etc.), then you'd need separate code/jars for the client/appmaster/container in order to create a custom client main method. All the other Spring YARN getting started guides pretty much use multi-project builds, so look at those; for example, if you just want to take the first and second argument values without needing the full --my.client.arg1=arg1value on the command line.
Let us know if this works for you and if you have any other ideas to make things simpler.