Client authentication failure on Neo4j - Spring Boot

I basically followed the steps described here: https://docs.spring.io/spring-data/neo4j/docs/current/reference/html/#configure-spring-boot-project
My application.properties contains the following:
spring.neo4j.uri=neo4j://localhost:7687
spring.neo4j.authentication.username=neo4j
spring.neo4j.authentication.password=verySecret357
I have a Neo4jConfiguration bean which only specifies the TransactionManager; the rest is (supposedly) taken care of by spring-boot-starter-data-neo4j:
@Configuration
public class Neo4jConfiguration {

    @Bean
    public ReactiveNeo4jTransactionManager reactiveTransactionManager(Driver driver,
            ReactiveDatabaseSelectionProvider databaseNameProvider) {
        return new ReactiveNeo4jTransactionManager(driver, databaseNameProvider);
    }
}
Neo4j (5.3.0) runs in a Docker container I started with
docker run -d --name neo4j -p 7474:7474 -p 7687:7687 -e 'NEO4J_AUTH=neo4j/verySecret357' neo4j:4.4.11-community
I can access it through HTTP on my localhost:7474 and can authenticate using the credentials above.
Now, when I run my Spring Boot app and try to create nodes in Neo4j, I keep getting the same exception:
org.neo4j.driver.exceptions.AuthenticationException: The client is unauthorized due to authentication failure.
Running in debug, however, it seems the client authentication scheme is correctly set.
Any thoughts on what I might be doing wrong?
Edit: one thing, though: I would have assumed that the "authToken" would contain a base64-encoded string (username:password), since the scheme is basic auth. That does not appear to be the case (using neo4j-java-driver:5.2.0).
Edit: seems to be related to the Docker image. A standalone neo4j instance works fine.
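On the base64 point: the Bolt protocol's basic auth token carries the principal and credentials as plain fields, unlike HTTP Basic auth, so no base64-encoded string is expected there. As a minimal sketch (assuming neo4j-java-driver 5.x), the credentials can be verified directly with the driver, outside Spring Boot:
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;

public class AuthCheck {
    public static void main(String[] args) {
        // Same URI and credentials as in application.properties.
        try (Driver driver = GraphDatabase.driver("neo4j://localhost:7687",
                AuthTokens.basic("neo4j", "verySecret357"))) {
            // Throws AuthenticationException if the credentials are rejected.
            driver.verifyConnectivity();
        }
    }
}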

How to debug quarkus lambda locally

I am a beginner with Quarkus Lambda, and whenever I look up how to debug a Quarkus Lambda, everything shows REST API endpoints. Is there any way to debug the Quarkus app through its Lambda handler?
I know how to start the app in dev mode, but I am struggling with invoking the handler method.
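(For concreteness, a handler of the kind being debugged might look like the following sketch; the class name and payload types are hypothetical, assuming the quarkus-amazon-lambda extension. In dev mode, Quarkus starts a mock event server, so the raw event can be POSTed to it directly.)
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Hypothetical handler: in dev mode it can be invoked by POSTing the event
// payload to the mock event server, e.g. curl -d '"world"' http://localhost:8080
public class GreetingHandler implements RequestHandler<String, String> {
    @Override
    public String handleRequest(String input, Context context) {
        return "Hello, " + input;
    }
}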
You can use the SAM CLI for local debugging and testing; see the official Quarkus documentation.
It's really important that you follow the sequence.
Step-1:
sam local start-api --template target/sam.jvm.yaml -d 5005
Step-2:
Hit your API using your favourite REST client.
Step-3:
Add a Remote JVM debug configuration in your IDE, set your breakpoints and start debugging.
You can actually just add a Main class and set up a usual Run Configuration.
import io.quarkus.runtime.Quarkus;
import io.quarkus.runtime.annotations.QuarkusMain;

@QuarkusMain
public class Main {
    public static void main(String... args) {
        System.out.println("Running main method");
        Quarkus.run(args);
    }
}
After that, just use curl or Postman to invoke the endpoint.
By default, the lambda handler starts on port 8080.
You can override it by passing
-Dquarkus.lambda.mock-event-server.dev-port=9999
So the curl will look like:
curl -XGET "localhost:9999/hello"
if the definition of the resource class looks like:
#Path("/hello")
public class GreetingResource {
#GET
#Produces(MediaType.TEXT_PLAIN)
public String hello() {
return "hello jaxrs";
}
}
Add a breakpoint in the resource class and start the Main class in debug mode; execution will pause at the breakpoint.
You can just run mvn quarkus:dev and connect a remote debugger to it on port 5005.
Once Quarkus is started in dev mode and the remote debugger is connected, you can use Postman to send a request; your breakpoints will be hit.

Spring Integration FTP in Docker container: not triggering flow

I am having quite a time figuring out where my issue stems from. The application runs fine locally, both from the IDE and from the built .jar.
I have my integration flow set as follows
@Bean
IntegrationFlow integrationFlow(final DataSource dataSource) {
    return IntegrationFlows.from(
            Ftp.inboundStreamingAdapter(template())
                    .remoteDirectory("/folder/")
                    .patternFilter("file_name.txt")
                    .filter(new FtpPersistentAcceptOnceFileListFilter(metadataStore(dataSource), "")),
            spec -> spec.poller(Pollers.fixedDelay(5, TimeUnit.SECONDS)))
        .transform(streamToBytes())
        .handle(handler())
        .get();
}
@Bean
FtpRemoteFileTemplate template() {
    return new FtpRemoteFileTemplate(ftpSessionFactory());
}

@Bean
public StreamTransformer streamToBytes() {
    return new StreamTransformer(); // transforms to byte[]
}

@Bean
public ConcurrentMetadataStore metadataStore(final DataSource dataSource) {
    return new JdbcMetadataStore(dataSource);
}

@Bean
public SessionFactory<FTPFile> ftpSessionFactory() {
    DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
    sf.setHost(host);
    sf.setPort(port);
    sf.setUsername(userName);
    sf.setPassword(password);
    return sf;
}
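(The flow above also calls a handler() bean that the question does not show; a hypothetical stand-in, just to make the snippet complete, could be as simple as:)
import org.springframework.messaging.MessageHandler;

// Hypothetical stand-in for the omitted handler() bean; any MessageHandler
// consuming the byte[] payload produced by the transformer would do here.
@Bean
public MessageHandler handler() {
    return message -> System.out.println(
            "Received " + ((byte[]) message.getPayload()).length + " bytes");
}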
I have my datasource and my ftp information set in my application.yml
When I run this locally, I have no problems. When I run gradle build and run the resulting .jar with several different OpenJDK versions (8u181, 8u191, 11.0.4), I have no issues either.
When I run inside of a docker container using my .jar file, the problem arises.
My Dockerfile:
FROM openjdk:8u212-jdk-alpine
WORKDIR /app
COPY build/libs/app-1.0.jar .
RUN apk add --update ttf-dejavu && rm -rf /var/cache/apk/*
ENTRYPOINT ["java", "-jar", "app-1.0.jar"]
I turned DEBUG on and watched the output.
Running locally and running the built .jar, I can see that the poller is working and triggers the SQL queries against the metadataStore table that has been created in my remote DB (PostgreSQL).
Running in the Docker container, I do not see the SQL queries being run, which tells me that somewhere therein lies the issue.
With debug on, the startup logs in the console show the same INFOs and WARNs regardless of whether I run locally, run the built .jar, or run in the Docker container.
There is this INFO message that might be of some assistance:
Bean with key 'metadataStore' has been registered as an MBean but has no exposed attributes or operations
I checked to see if there might be a hidden SessionFactory connection issue by trying to connect to an invalid host, but I indeed get exceptions in my docker container for the invalid host. So I can confidently say that the FTP connection is valid and running with the correct host and port.
I am thinking it has to do with either the poller or my datasource.
Inside of this application, I am also running Spring Data REST using JDBC and JPA; would there be any issue with the usage of the datasource bean across the different libraries?
Any help or guidance would be greatly appreciated.
It turns out the default client mode for the DefaultFtpSessionFactory is "ACTIVE", but in my case, inside a Docker container, the client mode must be set to "PASSIVE". (In active mode the FTP server opens the data connection back to the client, which fails behind the container's NAT; in passive mode the client opens both connections.)
To do this, I needed to add one line of code to the DefaultFtpSessionFactory: set the client mode to 2, i.e. sf.setClientMode(2), which is the value of FTPClient.PASSIVE_LOCAL_DATA_CONNECTION_MODE.
Below is the final DefaultFtpSessionFactory bean.
@Bean
public SessionFactory<FTPFile> ftpSessionFactory() {
    DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
    sf.setHost(host);
    sf.setPort(port);
    sf.setUsername(userName);
    sf.setPassword(password);
    sf.setClientMode(2); // FTPClient.PASSIVE_LOCAL_DATA_CONNECTION_MODE
    return sf;
}

Cannot do operations on a non-existent table

I'm facing an issue while using DynamoDB with Spring Boot for storing data.
It gives me the following error:
com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: Cannot do operations on a non-existent table (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: 7ffd4509-e444-4569-8c81-d4e7a1c218ef)
I have started a local instance of DynamoDB using the following command on a Windows machine:
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -port 8001 -sharedDb
I created a DynamoDBMapper for interacting with the DB:
@Bean
public DynamoDBMapper mapper() {
    return new DynamoDBMapper(amazonDynamoDBConfig());
}

public AmazonDynamoDB amazonDynamoDBConfig() {
    return AmazonDynamoDBClientBuilder.standard()
            .withEndpointConfiguration(
                    new AwsClientBuilder.EndpointConfiguration(awsDynamoDBEndPoint, awsRegion))
            .withCredentials(new AWSStaticCredentialsProvider(
                    new BasicAWSCredentials(awsAccessKey, awsSecretKey)))
            .build();
}
And wired the mapper in using @Autowired:
@Autowired
private DynamoDBMapper mapper;
When I try to add data using
mapper.save(person);
it gives the error saying "Cannot do operations on a non-existent table".
Please give me some idea of where I am missing the trick here.
Thanks in advance.
The root cause might be that the aws-cli and the application are using different AWS profiles (credentials and region). DynamoDB Local will then create, and read from, different db files for the aws-cli and for the application.
Please use the approach below to debug.
You must use -sharedDb when starting your Docker instance:
docker run -p 8000:8000 -v $(pwd)/local/dynamodb:/data/ amazon/dynamodb-local -jar DynamoDBLocal.jar -sharedDb -dbPath /data
Check the AWS profile you have created (aws_access_key_id & aws_secret_access_key) and use the same values in your application to connect to the Docker DynamoDB instance; for example, aws dynamodb list-tables --endpoint-url http://localhost:8000 should list the same tables the application sees.
In your Person.java (model class), check the table name; table names are case-sensitive in DynamoDB:
@DynamoDBTable(tableName = "Person")
or
@DynamoDBTable(tableName = "person")
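If the table genuinely does not exist yet, note that DynamoDB Local does not create tables automatically. A minimal sketch that creates the table from the mapped entity before the first save, reusing the beans from the question (Person is the assumed entity class):
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.CreateTableRequest;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
import com.amazonaws.services.dynamodbv2.util.TableUtils;

// Derive the table schema from the entity's annotations and create the
// table if it is missing; the throughput values here are arbitrary.
AmazonDynamoDB client = amazonDynamoDBConfig();
CreateTableRequest request = mapper()
        .generateCreateTableRequest(Person.class)
        .withProvisionedThroughput(new ProvisionedThroughput(5L, 5L));
TableUtils.createTableIfNotExists(client, request);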

REST Call to Resource in Dropwizard/Jersey and UTF-8 works only on dev machine

We're using Dropwizard 0.7.1, which comes with Jetty 9.0.7, and the Jersey HTTP client 1.18.1 (yes, it's old...). The OS is Linux and we're using Java 1.8.
I'm running an integration test locally inside Eclipse, which makes a REST call using Jersey to a Dropwizard application running inside a Vagrant box.
One of the tests should verify if I can send non-latin characters to the server.
I'm sending the string "Владимир Արման" inside a String field of a POJO using jersey:
req.type(MediaType.APPLICATION_JSON + "; charset=utf-8")
   .post(ClientResponse.class, user);
The resource I'm sending this to looks like this:
#Path("/users")
#Produces(MediaType.APPLICATION_JSON + "; charset=utf-8")
#Consumes(MediaType.APPLICATION_JSON + "; charset=utf-8")
public class UserServiceResource{
[...]
#POST
public Response createUser(UserCreate user) {...}
(You see we're enforcing the utf-8 charset both on the client and on the server side, which, in my opinion, should actually not be necessary?)
This works perfectly locally, the name arrives correctly.
On Jenkins, however, this does not work: I only receive "??????" in the service instead of the correct characters. In the log of the client that posts the POJO, I still see the correct characters.
The setup is quite similar: Jenkins builds a Vagrant box and then runs the tests using Maven against this box (the integration test code runs outside the Vagrant box, the service runs inside it).
Jenkins uses Maven, but when I run the integration test locally using Maven, it still works fine.
The Vagrant box I'm using locally is also built and provisioned by Jenkins, in a different job.
We are now trying to investigate whether the environment settings might be slightly different, and have already tried setting
LANG=en_US.UTF-8
LC_ALL=en_US.UTF-8
MAVEN_OPTS="-Dfile.encoding=UTF-8"
but that didn't help.
What mainly puzzles me is that I would expect this to work under any environment settings, as we explicitly enforce UTF-8.
Are there known issues where the environment overrides the encodings set in the client and the server?
This problem just ate my soul. The root issue ended up being with the JDBC configuration. Originally my config looked like this:
database:
  driverClass: com.mysql.jdbc.Driver
  user: ...
  password: ...
  url: jdbc:mysql://...
  properties:
    charSet: UTF-8
    hibernate.dialect: org.hibernate.dialect.MySQL5InnoDBDialect
  maxWaitForConnection: 1s
  validationQuery: "/* DropWizard Health Check */ SELECT 1"
  minSize: 5
  maxSize: 25
  checkConnectionWhileIdle: false
  checkConnectionOnBorrow: true
The solution was to update the properties block to define a characterEncoding and a useUnicode property, like this:
properties:
  charSet: UTF-8
  characterEncoding: UTF-8
  useUnicode: true
  hibernate.dialect: org.hibernate.dialect.MySQL5InnoDBDialect
Updating my YAML config, in addition to adding a charset parameter to the Content-Type header, fixed the issue for me.
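For what it's worth, the same Connector/J settings can alternatively be carried on the JDBC URL itself (host and database name below are placeholders):
url: jdbc:mysql://host:3306/mydb?useUnicode=true&characterEncoding=UTF-8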

Spring Boot Yarn - Passing Command line arguments

I'm trying to pass command line arguments to my Spring Boot YARN application and am having difficulties. I understand that I can set these in the yml document under spring.yarn.appmaster.launchcontext.arguments, but how can I set them from the command line, like java -jar MyYarnApp.jar {arg0} {arg1}, and get access to them from my @YarnContainer?
I've discovered that @YarnProperties maps to spring.yarn.appmaster.launchcontext.arguments, but I want to set them from the command line, not in the yml.
You are pretty close on this, having found spring.yarn.client.launchcontext.arguments and spring.yarn.appmaster.launchcontext.arguments. We don't have settings which would automatically pass all command-line arguments from a client into an appmaster, which would then pass them into a container launch context. I'm not sure we would even want that, because you surely want to be in control of what goes into the YARN container launch context; a user running the client could otherwise pass rogue arguments along the food chain.
Having said that, let's see what we can do with our Simple Single Project YARN Application Guide.
We still need to use those launch context arguments to define our command line parameters, essentially mapping how things are passed from the client into the appmaster and then into the container.
What I added in application.yml:
spring:
  yarn:
    client:
      launchcontext:
        arguments:
          --my.appmaster.arg1: ${my.client.arg1:notset1}
    appmaster:
      launchcontext:
        arguments:
          --my.container.arg1: ${my.appmaster.arg1:notset2}
Modified HelloPojo in the Application class:
@YarnComponent
@Profile("container")
public static class HelloPojo {

    private static final Log log = LogFactory.getLog(HelloPojo.class);

    @Autowired
    private Configuration configuration;

    @Value("${my.container.arg1}")
    private String arg1;

    @OnContainerStart
    public void onStart() throws Exception {
        log.info("Hello from HelloPojo");
        log.info("Container arg1 value is " + arg1);
        log.info("About to list from hdfs root content");
        FsShell shell = new FsShell(configuration);
        for (FileStatus s : shell.ls(false, "/")) {
            log.info(s);
        }
        shell.close();
    }
}
Notice how I added arg1 and used @Value to map it to my.container.arg1. We can use either @ConfigurationProperties or @Value, which are normal Spring and Spring Boot functionality; there's more on how to use them in Boot's reference docs.
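As a sketch of the @ConfigurationProperties alternative, binding all my.container.* arguments into one bean (the class name is hypothetical, and it would need to be registered, e.g. via @EnableConfigurationProperties):
import org.springframework.boot.context.properties.ConfigurationProperties;

// Hypothetical binding class: picks up --my.container.arg1=... from the
// container launch context arguments, just like the @Value field above.
@ConfigurationProperties(prefix = "my.container")
public class ContainerArgs {

    private String arg1;

    public String getArg1() {
        return arg1;
    }

    public void setArg1(String arg1) {
        this.arg1 = arg1;
    }
}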
You could then modify the AppIT unit test:
ApplicationInfo info = submitApplicationAndWait(Application.class, new String[]{"--my.client.arg1=arg1value"});
and run the build with tests:
./gradlew clean build
or just build it without running the tests:
./gradlew clean build -x test
and then submit it into a real Hadoop cluster with your my.client.arg1:
java -jar build/libs/gs-yarn-basic-single-0.1.0.jar --my.client.arg1=arg1value
Either way, you see arg1value logged in the container logs:
[2014-07-18 08:49:09.802] boot - 2003 INFO [main] --- ContainerLauncherRunner: Running YarnContainer with parameters [--spring.profiles.active=container,--my.container.arg1=arg1value]
[2014-07-18 08:49:09.806] boot - 2003 INFO [main] --- Application$HelloPojo: Container arg1 value is arg1value
Using the format ${my.client.arg1:notset1} also lets you define a default value, notset1, used automatically if my.client.arg1 is omitted by the user. We're working in a Spring application context orchestrated by Spring Boot here, so all the goodies from there are at your disposal.
If you need more precise control over those user-facing arguments (using args4j, jopt, etc.), then you'd need separate code/jars for the client, appmaster, and container in order to create a custom client main method. All the other Spring YARN getting started guides use multi-project builds, so look at those, for example if you just want to take the first and second argument values without needing the full --my.client.arg1=arg1value on the command line.
Let us know if this works for you and if you have any other ideas to make things simpler.
