Configuring catalog items through manifest.yml file - spring-boot

We developed a service broker using spring-cloud-cloudfoundry-service-broker.
Initially we defined the catalog items in the application.yml file, which gets bundled inside the jar, and this all works great.
Instead of bundling the catalog items inside the jar, we thought of supplying them through the manifest.yml file while pushing the service to Cloud Foundry.
But unfortunately the application is not picking up the catalog items specified in the manifest.yml file. Could you please let us know how to supply catalog items through the manifest.yml file?
I have copied my code snippets here.
CatalogConfig.java
import java.util.List;
import java.util.stream.Collectors;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;

@ConfigurationProperties(prefix = "catalog")
@Component
public class CatalogConfig {

    private List<ServiceDefinitionProxy> services;

    public CatalogConfig() {
        super();
    }

    public CatalogConfig(List<ServiceDefinitionProxy> services) {
        super();
        this.services = services;
    }

    @Bean
    Catalog catalog() {
        return new Catalog(services.stream().map(s -> s.unproxy())
                .collect(Collectors.toList()));
    }

    public List<ServiceDefinitionProxy> getServices() {
        return services;
    }

    public void setServices(List<ServiceDefinitionProxy> services) {
        this.services = services;
    }

    public ServiceDefinitionProxy findServiceDefinition(String serviceId) {
        return services.stream().filter(s -> s.getId().equals(serviceId))
                .findFirst().get();
    }
}
Manifest.yml
---
applications:
- name: my-service-broker
  memory: 512M
  instances: 1
  host: my-service-broker
  path: target/my-service-broker-1.0.0-SNAPSHOT.jar
  env:
    SPRING_PROFILES_DEFAULT: cloud
    catalog:
      services:
      - id: f1478faa-d980-11e5-b5d2-0a1d41d68578
        name: api-marketpace
        description: API Marketplace
        bindable: true
        planUpdatable: true
        head-type: api
        tags:
        - api
        - Manage API Marketplace
        metadata:
          displayName: API Marketplace
          imageUrl: https://my-service-broker.cf.com/images/logo.PNG
          longDescription: API Marketplace.
          providerDisplayName: API Team
          documentationUrl: https://wikihub.com/display/ASC/Training
          supportUrl: https://wikihub.com/display/ASC/Training
        plans:
        - id: f1478faa-d980-11e5-b5d2-0a1d41d68579
          name: unlimited
          description: free
          metadata:
            costs:
            - amount:
                usd: 0.00
              unit: PER MONTH
            bullets:
            - Basic Unlimited
        dashboardClient:
          id: api-marketpace
          secret: secret
          redirectUrl: https://api.cf.com/

That won't work.
The manifest.yml file is used exclusively by the cf CLI to provide options when pushing apps to CF. Deployed applications never see this file or any of its contents. In fact the CF platform itself never sees the file or its contents - it is purely processed by the CLI on the client side.
The application.yml file is used by Spring Boot and its contents are provided to the app via @ConfigurationProperties and other means.
These are two completely separate concepts and mechanisms, both of which happen to use the YAML data format.
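If the goal is still to keep the catalog out of the jar, one option that does reach the app is an environment variable set in the manifest's env block, since cf push passes those to the container and Spring Boot reads SPRING_APPLICATION_JSON into its environment. A minimal sketch, assuming the same catalog prefix as above (the trimmed-down service definition is purely illustrative):

---
applications:
- name: my-service-broker
  path: target/my-service-broker-1.0.0-SNAPSHOT.jar
  env:
    SPRING_PROFILES_DEFAULT: cloud
    # Spring Boot parses SPRING_APPLICATION_JSON at startup, so
    # @ConfigurationProperties(prefix = "catalog") can bind these values.
    SPRING_APPLICATION_JSON: >
      {"catalog":{"services":[{"id":"f1478faa-d980-11e5-b5d2-0a1d41d68578",
      "name":"api-marketpace","description":"API Marketplace","bindable":true}]}}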

Serializer for custom type 'janusgraph.RelationIdentifier' not found

Janus Server config:
serializers:
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1, config: { serializeResultToString: true }}
Java/Spring Boot config:
@Bean
public Cluster cluster() {
    return Cluster.build()
            .addContactPoint(dbUrl)
            .port(dbPort)
            .serializer(new GraphBinaryMessageSerializerV1())
            .maxConnectionPoolSize(5)
            .maxInProcessPerConnection(1)
            .maxSimultaneousUsagePerConnection(10)
            .create();
}
I am getting the following error:
Caused by: org.apache.tinkerpop.gremlin.driver.ser.SerializationException: java.io.IOException: Serializer for custom type 'janusgraph.RelationIdentifier' not found
at org.apache.tinkerpop.gremlin.driver.ser.binary.ResponseMessageSerializer.readValue(ResponseMessageSerializer.java:59) ~[gremlin-driver-3.6.1.jar:3.6.1]
at org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1.deserializeResponse(GraphBinaryMessageSerializerV1.java:180) ~[gremlin-driver-3.6.1.jar:3.6.1]
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketGremlinResponseDecoder.decode(WebSocketGremlinResponseDecoder.java:47) ~[gremlin-driver-3.6.1.jar:3.6.1]
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketGremlinResponseDecoder.decode(WebSocketGremlinResponseDecoder.java:35) ~[gremlin-driver-3.6.1.jar:3.6.1]
Note: I didn't define the schema. I am migrating the code from AWS Neptune (working code) to JanusGraph.
Any help on why I am getting the above error?
Get queries are working, and a few mutation queries are working as well...
It looks like you have only defined the serializer for JanusGraph types on the server, but not on the client side. You also need to add the JanusGraphIoRegistry on the client side.
This can be done like this:
TypeSerializerRegistry typeSerializerRegistry = TypeSerializerRegistry.build()
        .addRegistry(JanusGraphIoRegistry.instance())
        .create();

Cluster cluster = Cluster.build()
        .addContactPoint(dbUrl)
        .port(dbPort)
        .serializer(new GraphBinaryMessageSerializerV1(typeSerializerRegistry))
        .maxConnectionPoolSize(5)
        .maxInProcessPerConnection(1)
        .maxSimultaneousUsagePerConnection(10)
        .create();
Alternatively, you can use a config file, which simplifies the code down to:
import static org.apache.tinkerpop.gremlin.process.traversal.AnonymousTraversalSource.traversal;
GraphTraversalSource g = traversal().withRemote("conf/remote-graph.properties");
(I have already created the GraphTraversalSource here because the client is directly created internally by withRemote().)
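A quick smoke test against the remote graph could then look like this (a hypothetical traversal, just to verify the serializer round-trip; JanusGraph edge ids are exactly the RelationIdentifier type from the error):

// Edge ids force RelationIdentifier values through GraphBinary,
// so this fails fast if the registry is missing on the client.
List<Object> edgeIds = g.E().limit(5).id().toList();
System.out.println(edgeIds);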
This is also described in the JanusGraph documentation under Connecting from Java. Note that I've linked to a version of the docs for the upcoming 1.0.0 release, because the documentation for the latest released version still uses Gryo instead of GraphBinary. But you can already use this with JanusGraph 0.6, and it also makes sense to use GraphBinary instead of Gryo, because support for Gryo will be dropped in version 1.0.0.
The remote-graph.properties file points the driver at a cluster settings file (conf/remote-objects.yaml in the JanusGraph documentation), which looks like this:
hosts: [localhost]
port: 8182
serializer: {
  className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1,
  config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
You can also specify the various options that you are currently specifying via the builder. This configuration is documented in the TinkerPop reference docs.
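For example, the builder options from the snippet in the question should map onto the connectionPool section of that settings file. A hedged sketch (key names taken from the TinkerPop driver Settings; verify them against the reference docs):

hosts: [localhost]
port: 8182
serializer: {
  className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1,
  config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
# Equivalents of maxConnectionPoolSize / maxInProcessPerConnection /
# maxSimultaneousUsagePerConnection from the builder-based code.
connectionPool: {
  maxSize: 5,
  maxInProcessPerConnection: 1,
  maxSimultaneousUsagePerConnection: 10 }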

Elsa workflow designer errors

I have completed the tutorial below to configure a working Elsa server:
Part 2 of Building Workflow Driven .NET Applications with Elsa 2
I made modifications to run it with docker-compose along with the dependent services.
Everything works as expected except the IntelliSense in the designer window.
I've noticed a couple of errors in the browser console, as below.
This is my startup class:
public class Startup
{
    public Startup(IConfiguration configuration, IWebHostEnvironment environment)
    {
        Configuration = configuration;
        Environment = environment;
    }

    private IConfiguration Configuration { get; }
    private IWebHostEnvironment Environment { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        var dbConnectionString = Configuration.GetConnectionString("Sqlite");

        // Razor Pages (for UI).
        services.AddRazorPages();

        // Hangfire (for background tasks).
        AddHangfire(services, dbConnectionString);

        // Elsa (workflows engine).
        AddWorkflowServices(services, dbConnectionString);

        // Allow arbitrary client browser apps to access the API for demo purposes only.
        // In a production environment, make sure to allow only origins you trust.
        services.AddCors(cors => cors.AddDefaultPolicy(policy => policy.AllowAnyHeader().AllowAnyMethod().AllowAnyOrigin().WithExposedHeaders("Content-Disposition")));
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }
        else
        {
            app.UseExceptionHandler("/Error");
            app.UseHsts();
        }

        app
            .UseStaticFiles()
            .UseCors(cors => cors
                .AllowAnyHeader()
                .AllowAnyMethod()
                .SetIsOriginAllowed(_ => true)
                .AllowCredentials())
            .UseRouting()
            .UseHttpActivities() // Install middleware for triggering HTTP Endpoint activities.
            .UseEndpoints(endpoints =>
            {
                endpoints.MapRazorPages();
                endpoints.MapControllers(); // Elsa API Endpoints are implemented as ASP.NET API controllers.
            });
    }

    private void AddHangfire(IServiceCollection services, string dbConnectionString)
    {
        services
            .AddHangfire(config => config
                // Use same SQLite database as Elsa for storing jobs.
                .UseSQLiteStorage(dbConnectionString)
                .UseSimpleAssemblyNameTypeSerializer()
                // Elsa uses NodaTime primitives, so Hangfire needs to be able to serialize them.
                .UseRecommendedSerializerSettings(settings => settings.ConfigureForNodaTime(DateTimeZoneProviders.Tzdb)))
            .AddHangfireServer((sp, options) =>
            {
                // Bind settings from configuration.
                Configuration.GetSection("Hangfire").Bind(options);

                // Configure queues for Elsa workflow dispatchers.
                options.ConfigureForElsaDispatchers(sp);
            });
    }

    private void AddWorkflowServices(IServiceCollection services, string dbConnectionString)
    {
        services.AddWorkflowServices(dbContext => dbContext.UseSqlite(dbConnectionString));

        // Configure SMTP.
        services.Configure<SmtpOptions>(options => Configuration.GetSection("Elsa:Smtp").Bind(options));

        // Configure HTTP activities.
        services.Configure<HttpActivityOptions>(options => Configuration.GetSection("Elsa:Server").Bind(options));

        // Elsa API (to allow Elsa Dashboard to connect for checking workflow instances).
        services.AddElsaApiEndpoints();
    }
}
This is my docker-compose file:
version: '3.4'

services:
  workflow.web:
    image: ${DOCKER_REGISTRY-}workflowweb
    ports:
      - 5000:80
      - 5001:443
    build:
      context: .
      dockerfile: src/Workflow.Web/Dockerfile
    networks:
      - testnet
  email.service:
    image: rnwood/smtp4dev:linux-amd64-3.1.0-ci0856
    ports:
      - 3000:80
      - "2525:25"
    networks:
      - testnet
  elsa.dashboard:
    image: elsaworkflows/elsa-dashboard:latest
    ports:
      - "14000:80"
    environment:
      ELSA__SERVER__BASEADDRESS: "http://localhost:5000"
    networks:
      - testnet

networks:
  testnet:
    driver: bridge
Any ideas?
Most likely the issue is that the docker image for the dashboard is not compatible with the workflow server hosted by your application.
The cause of this mismatch is that the blog post references Elsa 2.3 NuGet packages, while the dashboard docker image is built from the latest source code in the master branch (which is something that should be fixed to avoid confusion like you're experiencing).
To make the dashboard work (which is built against the latest source code), you need to update your workflow server app to reference the latest Elsa preview packages from MyGet (which are also built against the latest source code from the master branch).
The following documentation describes how to reference the MyGet feed: https://elsa-workflows.github.io/elsa-core/docs/next/installation/installing-feeds#myget
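A minimal NuGet.config sketch for referencing that feed (the feed URL below is an assumption based on the linked docs, so verify it there):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Assumed Elsa preview feed; confirm the URL in the installation docs linked above. -->
    <add key="elsa-preview" value="https://www.myget.org/F/elsa/api/v3/index.json" />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>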

Health Endpoint Metrics not being exported after Spring Boot 2 migration

My team migrated our microservices from Spring Boot 1 to version 2, and since the Actuator changed, our health endpoint metrics exported via the prometheus-jmx-exporter do not work anymore.
The usual /actuator/health is working as expected, but the prometheus-jmx-exporter won't pick it up, although I tried several things:
I changed the meta-information in the exporter-config.yaml to reflect the name change in Spring Boot 2
I added io.micrometer:micrometer-registry-prometheus to our build.gradle to see if this is the issue
I exposed the web and jmx endpoints according to the Spring Boot 2 documentation
So now I have run out of ideas and would appreciate any hints you might be able to give me.
Old prometheus-jmx-exporter exporter-config.yaml:
---
lowercaseOutputName: true
lowercaseOutputLabelNames: true
whitelistObjectNames: ["org.springframework.boot:type=Endpoint,name=healthEndpoint"]
rules:
  - pattern: 'org.springframework.boot<type=Endpoint, name=healthEndpoint><(.*, )?(.*)>(.*):'
    name: health_endpoint_$1$3
    attrNameSnakeCase: true
New prometheus-jmx-exporter exporter-config.yaml:
---
lowercaseOutputName: true
lowercaseOutputLabelNames: true
whitelistObjectNames: ["org.springframework.boot:type=Endpoint,name=Health"]
rules:
  - pattern: 'org.springframework.boot<type=Endpoint, name=Health>'
    name: health_endpoint_$1$3
    attrNameSnakeCase: true
Current application properties for the actuator endpoints:
management.endpoints.web.exposure.include=info, health, refresh, metrics, prometheus
management.endpoints.jmx.exposure.include=health, metrics, prometheus
In Spring Boot 1 with the old exporter-config.yaml I get results like this:
# HELP health_endpoint_hystrix_status Invoke the underlying endpoint (org.springframework.boot<type=Endpoint, name=healthEndpoint><hystrix, status>status)
# TYPE health_endpoint_hystrix_status untyped
health_endpoint_hystrix_status 1.0
# HELP health_endpoint_status Invoke the underlying endpoint (org.springframework.boot<type=Endpoint, name=healthEndpoint><status>status)
# TYPE health_endpoint_status untyped
health_endpoint_status 1.0
But with all these changes in Spring Boot 2, I get nothing out of it.
You can configure your own health value and add it to the Prometheus metrics endpoint:
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.boot.actuate.autoconfigure.metrics.MeterRegistryCustomizer;
import org.springframework.boot.actuate.health.HealthEndpoint;
import org.springframework.boot.actuate.health.Status;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HealthMetricsConfiguration {

    @Bean
    public MeterRegistryCustomizer<MeterRegistry> prometheusHealthCheck(HealthEndpoint healthEndpoint) {
        // Register a gauge named "health" that reports 1 while the aggregated status is UP.
        return registry -> registry.gauge("health", healthEndpoint, HealthMetricsConfiguration::healthToCode);
    }

    public static int healthToCode(HealthEndpoint ep) {
        Status status = ep.health().getStatus();
        return status.equals(Status.UP) ? 1 : 0;
    }
}
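Assuming micrometer-registry-prometheus is on the classpath and the prometheus endpoint is exposed (as in the properties above), the gauge should then show up in a /actuator/prometheus scrape roughly like this:

# HELP health
# TYPE health gauge
health 1.0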

How to fetch properties from config-server consisting of more than one repo

I wanted to fetch properties from two git repos. One is https://username@bitbucket.my.domain.com/share.git, which has a property file containing some common key-value pairs, and the other one is https://username@bitbucket.my.domain.com/service.git, which has the property files of all the microservices.
While I am deploying the service, only one yml file (the one in the https://username@bitbucket.my.domain.com/share.git repo) is read by the config server. What am I missing? How do I read the property file from the other repo, i.e. https://username@bitbucket.my.domain.com/service.git, too?
I wanted to deploy the service in PCF, so I configured the config-server in PCF with the following JSON:
{
  "count": 1,
  "git": {
    "label": "feature",
    "uri": "https://username@bitbucket.my.domain.com/share.git",
    "username": "username",
    "password": "password",
    "repos": {
      "configserver": {
        "password": "password",
        "label": "feature",
        "uri": "https://username@bitbucket.my.domain.com/service.git",
        "username": "username"
      }
    }
  }
}
The name of my service is LogDemo and the active Spring profile is "active". I have created two yml files and placed them in the corresponding repos (I have given the same name to both files: LogDemo-active.yml). While I am deploying the service, only one yml file (the one in the https://username@bitbucket.my.domain.com/share.git repo) is read by the config server. /env gives me the following:
{
  "profiles": [
    "active",
    "cloud"
  ],
  "server.ports": {
    "local.server.port": 8080
  },
  "configService:configClient": {
    "config.client.version": "234e59d4a9f80f035f00fdf07e6f9f16e5560a55"
  },
  "configService:https://username@bitbucket.my.domain.com/share.git/LogDemo-active.yml": {
    "key1": "value1",
    "key2": "value2"
  },
  ...................
  ...................
Below is my bootstrap.yml:
spring:
  application:
    name: LogDemo
  mvc:
    view:
      prefix: /
      suffix: .jsp
Here is my manifest file:
---
inherit: baseManifest.yml
applications:
- name: LogDemo
  host: LogDemo
  env:
    LOG_LEVEL: INFO
    spring.profiles.active: active
    TZ: America/New_York
  memory: 1024M
  domain: my.domain.com
  services:
    - config-server-comp
When using multiple repos, the repos that get applied depend on the patterns defined for those repos. The default pattern is <repo-name>/*. Thus changing the repo name to LogDemo will activate the repo for your app, because the app name, spring.application.name, is LogDemo.
If one or more patterns match, then the repos for the matched patterns will be used. If no pattern matches, the default is used.
Full details are described in the docs here.
https://cloud.spring.io/spring-cloud-config/single/spring-cloud-config.html#_pattern_matching_and_multiple_repositories
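Applied to your PCF config-server JSON, that would mean naming the nested repo after the app, or giving it an explicit pattern. A hedged sketch (same credentials as your original config; the pattern key is taken from the Spring Cloud Config docs linked above):

{
  "count": 1,
  "git": {
    "label": "feature",
    "uri": "https://username@bitbucket.my.domain.com/share.git",
    "username": "username",
    "password": "password",
    "repos": {
      "LogDemo": {
        "pattern": "LogDemo*",
        "label": "feature",
        "uri": "https://username@bitbucket.my.domain.com/service.git",
        "username": "username",
        "password": "password"
      }
    }
  }
}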
If you don't need or want the pattern-matching feature, you can use the composite backend (https://docs.pivotal.io/spring-cloud-services/2-0/common/config-server/composite-backends.html). The composite backend allows you to define multiple Git repositories. See the first config example here:
https://docs.pivotal.io/spring-cloud-services/2-0/common/config-server/composite-backends.html#general-configuration
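With the composite backend, both repos are listed explicitly and both always apply. A sketch based on the general-configuration example in the linked docs (not verified against your SCS version):

{
  "composite": [
    {
      "type": "git",
      "label": "feature",
      "uri": "https://username@bitbucket.my.domain.com/share.git",
      "username": "username",
      "password": "password"
    },
    {
      "type": "git",
      "label": "feature",
      "uri": "https://username@bitbucket.my.domain.com/service.git",
      "username": "username",
      "password": "password"
    }
  ]
}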

SonarQube - specify location of sonar.properties

I'm trying to deploy SonarQube on Kubernetes using configMaps.
The latest 7.1 image I use has its config in sonar.properties embedded in $SONARQUBE_HOME/conf/. The directory is not empty and also contains a wrapper.conf file.
I would like to mount the configMap inside my container at a location other than /opt/sonar/conf/ and tell SonarQube the new path to read the properties from.
Is there a way to do that? (environment variable? JVM argument? ...)
It is not recommended to modify this standard configuration in any way, but we can have a look at the SonarQube source code. In AppSettingsLoaderImpl you can find this code for reading the configuration file:
private static Properties loadPropertiesFile(File homeDir) {
    Properties p = new Properties();
    File propsFile = new File(homeDir, "conf/sonar.properties");
    if (propsFile.exists()) {
        ...
    } else {
        LoggerFactory.getLogger(AppSettingsLoaderImpl.class).warn("Configuration file not found: {}", propsFile);
    }
    return p;
}
So the conf path and file name are hard-coded, and you get a warning if the file does not exist. The home directory is found this way:
private static File detectHomeDir() {
    try {
        File appJar = new File(Class.forName("org.sonar.application.App").getProtectionDomain().getCodeSource().getLocation().toURI());
        return appJar.getParentFile().getParentFile();
    } catch (...) {
        ...
    }
}
So this cannot be changed either. The code above is used here:
@Override
public AppSettings load() {
    Properties p = loadPropertiesFile(homeDir);
    p.putAll(CommandLineParser.parseArguments(cliArguments));
    p.setProperty(PATH_HOME.getKey(), homeDir.getAbsolutePath());
    p = ConfigurationUtils.interpolateVariables(p, System.getenv());
    ....
}
This suggests that you can use command-line parameters or environment variables to change your settings.
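For the command-line route, the load() method above pulls in CommandLineParser.parseArguments, so -D style arguments passed to the sonar-application jar should end up in the settings. A hedged example (the jar path is a placeholder):

java -jar lib/sonar-application-<version>.jar -Dsonar.jdbc.username=sonarqube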
For my problem, I defined environment variables to configure the database settings in my Kubernetes deployment:
env:
  - name: SONARQUBE_JDBC_URL
    value: jdbc:sqlserver://mydb:1433;databaseName=sonarqube
  - name: SONARQUBE_JDBC_USERNAME
    value: sonarqube
  - name: SONARQUBE_JDBC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: sonarsecret
        key: dbpassword
I also needed the LDAP plugin, but it was not possible to configure it via environment variables. As /opt/sonarqube/conf/ is not empty, I can't use a configMap to decouple the configuration from the image content. So I built my own SonarQube image, adding the LDAP plugin jar and the LDAP settings in sonar.properties (a Dockerfile sketch follows after the properties):
# General Configuration
sonar.security.realm=LDAP
ldap.url=ldap://myldap:389
ldap.bindDn=CN=mysa=_ServicesAccounts,OU=Users,OU=SVC,DC=net
ldap.bindPassword=****
# User Configuration
ldap.user.baseDn=OU=Users,OU=SVC,DC=net
ldap.user.request=(&(sAMAccountName={0})(objectclass=user))
ldap.user.realNameAttribute=cn
ldap.user.emailAttribute=mail
# Group Configuration
ldap.group.baseDn=OU=Users,OU=SVC,DC=net
ldap.group.request=(&(objectClass=group)(member={dn}))
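A Dockerfile sketch for that custom image (the plugin jar name is a placeholder; pick the LDAP plugin release matching your SonarQube version):

FROM sonarqube:7.1
# Hypothetical plugin jar name; download the release that matches SonarQube 7.1.
COPY sonar-ldap-plugin-<version>.jar /opt/sonarqube/extensions/plugins/
# Replace the default config with one that includes the LDAP settings above.
COPY sonar.properties /opt/sonarqube/conf/sonar.properties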
