Serializer for custom type 'janusgraph.RelationIdentifier' not found - spring-boot

Janus Server config
serializers:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1, config: { serializeResultToString: true }}
Java/Spring Boot config
@Bean
public Cluster cluster() {
    return Cluster.build()
            .addContactPoint(dbUrl)
            .port(dbPort)
            .serializer(new GraphBinaryMessageSerializerV1())
            .maxConnectionPoolSize(5)
            .maxInProcessPerConnection(1)
            .maxSimultaneousUsagePerConnection(10)
            .create();
}
Getting the following error,
Caused by: org.apache.tinkerpop.gremlin.driver.ser.SerializationException: java.io.IOException: Serializer for custom type 'janusgraph.RelationIdentifier' not found
at org.apache.tinkerpop.gremlin.driver.ser.binary.ResponseMessageSerializer.readValue(ResponseMessageSerializer.java:59) ~[gremlin-driver-3.6.1.jar:3.6.1]
at org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1.deserializeResponse(GraphBinaryMessageSerializerV1.java:180) ~[gremlin-driver-3.6.1.jar:3.6.1]
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketGremlinResponseDecoder.decode(WebSocketGremlinResponseDecoder.java:47) ~[gremlin-driver-3.6.1.jar:3.6.1]
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketGremlinResponseDecoder.decode(WebSocketGremlinResponseDecoder.java:35) ~[gremlin-driver-3.6.1.jar:3.6.1]
Note: I didn't define the schema. I'm migrating code from AWS Neptune (where it works) to JanusGraph.
Any help on why I am getting the above error?
Get queries are working, and a few mutation queries are also working,...

It looks like you have only defined the serializer for JanusGraph types on the server, but not on the client side. You also need to add the JanusGraphIoRegistry on the client side.
This can be done like this:
TypeSerializerRegistry typeSerializerRegistry = TypeSerializerRegistry.build()
        .addRegistry(JanusGraphIoRegistry.instance())
        .create();

Cluster cluster = Cluster.build()
        .addContactPoint(dbUrl)
        .port(dbPort)
        .serializer(new GraphBinaryMessageSerializerV1(typeSerializerRegistry))
        .maxConnectionPoolSize(5)
        .maxInProcessPerConnection(1)
        .maxSimultaneousUsagePerConnection(10)
        .create();
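For completeness, here is a minimal sketch of how such a Cluster is then typically turned into a remote traversal source (the source name "g" and the surrounding wiring are assumptions based on standard TinkerPop usage, not taken from the question):

import org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import static org.apache.tinkerpop.gremlin.process.traversal.AnonymousTraversalSource.traversal;

// With the JanusGraphIoRegistry registered on both client and server, results
// containing JanusGraph types such as RelationIdentifier (e.g. from g.E().id())
// can now be deserialized by the client.
GraphTraversalSource g = traversal().withRemote(DriverRemoteConnection.using(cluster, "g"));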
Alternatively, you can use a config file, which simplifies the code down to:
import static org.apache.tinkerpop.gremlin.process.traversal.AnonymousTraversalSource.traversal;
GraphTraversalSource g = traversal().withRemote("conf/remote-graph.properties");
(I have already created the GraphTraversalSource here because the client is directly created internally by withRemote().)
This is also described in the JanusGraph documentation under Connecting from Java. Note that I've linked to the docs for the upcoming 1.0.0 release because the documentation for the latest released version still uses Gryo instead of GraphBinary. But you can already use this approach with JanusGraph 0.6, and it also makes sense to prefer GraphBinary over Gryo, because support for Gryo will be dropped in version 1.0.0.
The config file conf/remote-graph.properties then looks like this (also taken from the JanusGraph documentation):
hosts: [localhost]
port: 8182
serializer: {
  className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1,
  config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
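(Strictly speaking, this hosts/port/serializer block is the driver's cluster configuration in YAML; in the JanusGraph docs, remote-graph.properties itself is a properties file that points at a YAML file with these contents, along the lines of:)

gremlin.remote.remoteConnectionClass=org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection
gremlin.remote.driver.clusterFile=conf/remote-objects.yaml
gremlin.remote.driver.sourceName=g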
You can also specify the various options that you are currently specifying via the builder. This configuration is documented in the TinkerPop reference docs.
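For example, the pool settings configured via the builder above map onto that cluster YAML roughly like this (a sketch; the full list of keys is in the TinkerPop driver settings documentation):

connectionPool: {
  maxSize: 5,
  maxInProcessPerConnection: 1,
  maxSimultaneousUsagePerConnection: 10 }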

Related

Lambda Layers not installing with Serverless

Currently getting the following error with MongoDB:
no saslprep library specified. Passwords will not be sanitized
We are using Webpack, so simply installing the module doesn't work (Webpack just ignores it). I found this thread, which talks about how to exclude it from Webpack compilations, but then I have to manually load it into every Lambda function, which led me to Lambda Layers.
Following the Serverless guide on using Lambda layers allowed me to get my layer published to AWS and included in all of my functions, but for some reason, it doesn't install the modules. If I download the layer using the AWS GUI, I get a folder with just the package.json and package-lock.json files.
My file structure is:
my-project
|_ layers
   |_ saslprep
      |_ package.json
and my serverless.yml is:
layers:
  saslprep:
    path: layers/saslprep
    compatibleRuntimes:
      - nodejs14.x
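(For context, each function then attaches the layer through the CloudFormation reference the Serverless Framework generates from the layer name; the function name below is hypothetical:)

functions:
  connect:
    handler: handler.main
    layers:
      - { Ref: SaslprepLambdaLayer }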
This is not my preferred solution, as I'd like to keep using SHA-256, but the way I got around this error/warning was by changing the authMechanism in the connection string from SCRAM-SHA-256 to SCRAM-SHA-1. The serverless-bundle package most likely needs to add the saslprep dependency to enable support for MongoDB 4.0's SCRAM-SHA-256 (my best guess!).
You can select this authentication mechanism by setting the authMechanism parameter to SCRAM-SHA-1 in the connection string, as shown in the following sample code.
const { MongoClient } = require("mongodb");

// Replace the following with values for your environment.
const username = encodeURIComponent("<username>");
const password = encodeURIComponent("<password>");
const clusterUrl = "<MongoDB cluster url>";
const authMechanism = "SCRAM-SHA-1";

// Replace the following with your MongoDB deployment's connection string.
const uri =
  `mongodb+srv://${username}:${password}@${clusterUrl}/?authMechanism=${authMechanism}`;

// Create a new MongoClient
const client = new MongoClient(uri);

// Function to connect to the server
async function run() {
  try {
    // Connect the client to the server
    await client.connect();
    // Establish and verify connection
    await client.db("admin").command({ ping: 1 });
    console.log("Connected successfully to server");
  } finally {
    // Ensures that the client will close when you finish/error
    await client.close();
  }
}

run().catch(console.dir);

Using SocketIo Manager with a default URL

My goal is to add a token to the socket.io reconnection from the client (it works fine on the first connection, but the query is null on reconnection if the server restarted while the client stayed up).
The documentation indicates I need to use the Manager to customize the reconnection behavior (and add a query parameter).
However, I'm having trouble figuring out how to use this Manager: I can't find a way to connect to the server.
What I was using without Manager (works fine):
this.socket = io({
  query: {
    token: 'abc',
  }
});
Version with the Manager:
const manager = new Manager(window.location, {
  hostname: "localhost",
  path: "/socket.io",
  port: "8080",
  query: {
    auth: "123"
  }
});
So I tried many things as the URL (nothing, '', 'http://localhost:8080', 'http://localhost:8080/socket.io'), and also tried adding these lines to the options:
hostname: "localhost",
path: "/socket.io",
port: "8080"
But I couldn't connect.
The documentation indicates the default URL is:
url (String) (defaults to window.location)
For some reason, using window.location as the URL refreshes the page infinitely, no matter whether I pass it to the io() factory or to new Manager().
I am using socket.io-client 3.0.3.
Could someone explain what I'm doing wrong?
Thanks
Updating to 3.0.4 solved the initial problem, which was to be able to send the token in the initial query.
I also found this code in the doc, which solves the problem:
this.socket.on('reconnect_attempt', () => {
  this.socket.io.opts.query = {
    token: 'fgh'
  }
});
However, it doesn't solve the problem of the Manager, which just doesn't work for me; I feel like it should be removed from the docs. I illustrated the problem in this repo:
https://github.com/Yvanovitch/socket.io/blob/master/examples/chat/public/main.js
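Two notes on socket.io-client 3.x that may explain the Manager behavior (based on my reading of the v3 docs, not on code from the question): a Manager never connects on its own, you have to request a Socket from it for a namespace, and the reconnection events are emitted by the Manager (socket.io) rather than the Socket. A sketch:

// Get a Socket for the default namespace from the Manager.
const socket = manager.socket("/");

// In v3, reconnection events live on the Manager, not the Socket.
socket.io.on("reconnect_attempt", () => {
  socket.io.opts.query = { token: "fgh" };
});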

Health Endpoint Metrics not being exported after Spring Boot 2 migration

My team migrated our microservices from Spring Boot 1 to version 2, and since the Actuator changed, the health endpoint metrics we export via the Prometheus JMX exporter no longer work.
The usual /actuator/health endpoint is working as expected, but the prometheus-jmx-exporter won't pick it up, although I have tried several things:
I changed the metadata in the exporter-config.yaml to reflect the name change in Spring Boot 2
I added io.micrometer:micrometer-registry-prometheus to our build.gradle to see if that was the issue
I exposed the web and jmx endpoints according to the Spring Boot 2 documentation
So now I have run out of ideas and would appreciate any hints you might be able to give me.
old prometheus-jmx-exporter exporter-config.yaml:
---
lowercaseOutputName: true
lowercaseOutputLabelNames: true
whitelistObjectNames: ["org.springframework.boot:type=Endpoint,name=healthEndpoint"]
rules:
- pattern: 'org.springframework.boot<type=Endpoint, name=healthEndpoint><(.*, )?(.*)>(.*):'
name: health_endpoint_$1$3
attrNameSnakeCase: true
new prometheus-jmx-exporter exporter-config.yaml:
---
lowercaseOutputName: true
lowercaseOutputLabelNames: true
whitelistObjectNames: ["org.springframework.boot:type=Endpoint,name=Health"]
rules:
- pattern: 'org.springframework.boot<type=Endpoint, name=Health>'
name: health_endpoint_$1$3
attrNameSnakeCase: true
current application properties about actuator endpoints:
management.endpoints.web.exposure.include=info, health, refresh, metrics, prometheus
management.endpoints.jmx.exposure.include=health, metrics, prometheus
in Spring Boot 1 with the old exporter-config.yaml I get results like this:
# HELP health_endpoint_hystrix_status Invoke the underlying endpoint (org.springframework.boot<type=Endpoint, name=healthEndpoint><hystrix, status>status)
# TYPE health_endpoint_hystrix_status untyped
health_endpoint_hystrix_status 1.0
# HELP health_endpoint_status Invoke the underlying endpoint (org.springframework.boot<type=Endpoint, name=healthEndpoint><status>status)
# TYPE health_endpoint_status untyped
health_endpoint_status 1.0
But with all these changes in place on Spring Boot 2, I get nothing out of it.
As far as I can tell, the JMX exporter can't scrape the Spring Boot 2 endpoint MBeans the old way, because they expose the health check as an operation rather than a readable attribute. What you can do instead is configure your own health value and add it to the Prometheus metrics endpoint:
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.boot.actuate.autoconfigure.metrics.MeterRegistryCustomizer;
import org.springframework.boot.actuate.health.HealthEndpoint;
import org.springframework.boot.actuate.health.Status;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HealthMetricsConfiguration {

    @Bean
    public MeterRegistryCustomizer<MeterRegistry> prometheusHealthCheck(HealthEndpoint healthEndpoint) {
        // Register a "health" gauge derived from the overall actuator health status.
        return registry -> registry.gauge("health", healthEndpoint, HealthMetricsConfiguration::healthToCode);
    }

    public static int healthToCode(HealthEndpoint ep) {
        Status status = ep.health().getStatus();
        return status.equals(Status.UP) ? 1 : 0;
    }
}
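If this is wired up correctly, the gauge then shows up on the /actuator/prometheus scrape output along these lines (the metric name "health" is simply what the gauge above registers):

# TYPE health gauge
health 1.0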

Configuring catalog items through manifest.yml file

Using spring-cloud-cloudfoundry-service-broker we developed a service broker.
Initially we defined the catalog items within the application.yml file, which gets bundled inside the jar, and this all works great.
Instead of bundling the catalog items within the jar file, we thought of supplying them through the manifest.yml file while pushing the service to Cloud Foundry.
But unfortunately the application is not getting the catalog items specified in the manifest.yml file. Could you please let us know how we can supply catalog items through the manifest.yml file?
I have copied my code snippet here.
CatalogConfig.java
@ConfigurationProperties(prefix = "catalog")
@Component
public class CatalogConfig {

    private List<ServiceDefinitionProxy> services;

    public CatalogConfig() {
        super();
    }

    @Bean
    Catalog catalog() {
        return new Catalog(services.stream().map(s -> s.unproxy())
                .collect(Collectors.toList()));
    }

    public CatalogConfig(List<ServiceDefinitionProxy> services) {
        super();
        this.services = services;
    }

    public List<ServiceDefinitionProxy> getServices() {
        return services;
    }

    public void setServices(List<ServiceDefinitionProxy> services) {
        this.services = services;
    }

    public ServiceDefinitionProxy findServiceDefinition(String serviceId) {
        return services.stream().filter(s -> s.getId().equals(serviceId))
                .findFirst().get();
    }
}
Manifest.yml
---
applications:
- name: my-service-broker
  memory: 512M
  instances: 1
  host: my-service-broker
  path: target/my-service-broker-1.0.0-SNAPSHOT.jar
  env:
    SPRING_PROFILES_DEFAULT: cloud
catalog:
  services:
  - id: f1478faa-d980-11e5-b5d2-0a1d41d68578
    name: api-marketpace
    description: API Marketplace
    bindable: true
    planUpdatable: true
    head-type: api
    tags:
    - api
    - Manage API Marketplace
    metadata:
      displayName: API Marketplace
      imageUrl: https://my-service-broker.cf.com/images/logo.PNG
      longDescription: API Marketplace.
      providerDisplayName: API Team
      documentationUrl: https://wikihub.com/display/ASC/Training
      supportUrl: https://wikihub.com/display/ASC/Training
    plans:
    - id: f1478faa-d980-11e5-b5d2-0a1d41d68579
      name: unlimited
      description: free
      metadata:
        costs:
        - amount:
            usd: 0.00
          unit: PER MONTH
        bullets:
        - Basic Unlimited
    dashboardClient:
      id: api-marketpace
      secret: secret
      redirectUrl: https://api.cf.com/
That won't work.
The manifest.yml file is used exclusively by the cf CLI to provide options when pushing apps to CF. Deployed applications never see this file or any of its contents. In fact the CF platform itself never sees the file or its contents - it is purely processed by the CLI on the client side.
The application.yml file is used by Spring Boot, and its contents are provided to the app via @ConfigurationProperties and other means.
These are two completely separate concepts and mechanisms, both of which happen to use the YAML data format.
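That said, environment variables declared under env: in the manifest do reach the application, so one way to inject the catalog at push time (a sketch on my part, abbreviated to a single service with a couple of fields) is Spring Boot's SPRING_APPLICATION_JSON variable:

env:
  SPRING_PROFILES_DEFAULT: cloud
  SPRING_APPLICATION_JSON: '{"catalog":{"services":[{"id":"f1478faa-d980-11e5-b5d2-0a1d41d68578","name":"api-marketpace","description":"API Marketplace","bindable":true}]}}'

Spring Boot merges SPRING_APPLICATION_JSON into its environment, so the same catalog.services binding used for application.yml applies.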

'apiKey.id is required' error thrown when using express-stormpath with node.js

I am using express-stormpath with node.js to set up a backend server. This is the snippet of my server.js code where an error gets thrown -
app.use(stormpath.init(app, {
  apiKeyFile: './config/.stormpath/apikey.properties',
  application: '<API_HREF>',
  secretKey: security.stormpath_secret_key
}));
This is the error -
$ node server.js
../webservices/node_modules/express-stormpath/node_modules/stormpath/lib/authc/RequestAuthenticator.js:8
throw new Error('apiKey.id is required.');
How do I fix this?
I'm assuming you're using the latest version of the express-stormpath library, which is why you're probably having issues. As of the 2.0.0 release, the library uses new configuration options.
Here's an example of the same thing using the new options:
app.use(stormpath.init(app, {
  client: {
    apiKey: {
      file: './config/.stormpath/apikey.properties'
    }
  },
  application: {
    href: '<API_HREF>'
  }
}));
NOTE: No secretKey is required, as this is generated automatically from your Stormpath API key secret =)
We've made many new changes in the latest library releases that enable all sorts of new, cool stuff! <3
