Accessing VCAP properties on PCF Spring Boot app - spring

I have a simple Spring Boot app running on PCF. I am wondering if there is a better way to access VCAP environment variables from PCF. Specifically, I am trying to access service credentials for a RabbitMQ service instance running on PCF and bound to my application.
At the moment, I access the credentials using System.getenv:
JSONObject vcapServices = new JSONObject(System.getenv("VCAP_SERVICES"));
JSONArray rabbitmq = (JSONArray) vcapServices.get("p-rabbitmq");
JSONObject serviceInfo = (JSONObject) rabbitmq.get(0);
JSONObject credentials = (JSONObject) serviceInfo.get("credentials");
hostname = credentials.getString("hostname");
virtualHost = credentials.getString("vhost");
username = credentials.getString("username");
password = credentials.getString("password");
I was trying to get it working with the @Value annotation to access the VCAP environment variables like this:
@Value("${vcap.services.p-rabbitmq.credentials.hostname}")
private String hostname;
but I haven't been able to grab the values yet.
Is there any way I can access VCAP variables via the @Value annotation? Or is there a better way, other than System.getenv, to get these credentials from PCF once I deploy my application?

Is there any way I can access VCAP variables via the @Value annotation?
Yes, I think you need to change your format slightly. It's vcap.services.<service-instance-name>.credentials.hostname. You have the name of the service provider itself, not the name of your service instance. I can't see that name in the example above, but I've found the Javadoc to be helpful. It lists the following example.
VCAP_SERVICES: {
  "rds-mysql-1.0": [
    {
      "credentials": {
        "host": "mysql-service-public.clqg2e2w3ecf.us-east-1.rds.amazonaws.com",
        "hostname": "mysql-service-public.clqg2e2w3ecf.us-east-1.rds.amazonaws.com",
        "name": "d04fb13d27d964c62b267bbba1cffb9da",
        "password": "pxLsGVpsC9A5S",
        "port": 3306,
        "user": "urpRuqTf8Cpe6",
        "username": "urpRuqTf8Cpe6"
      },
      "label": "rds-mysql-1.0",
      "name": "mysql",
      "plan": "10mb"
    }
  ]
}
In this example, the service provider name is "rds-mysql-1.0", which corresponds to "p-rabbitmq" in your example. The service instance name is "mysql", and that is what you'd use in your property placeholder.
Ex: vcap.services.mysql.credentials.host
https://docs.spring.io/spring-boot/docs/current/api/org/springframework/boot/cloud/CloudFoundryVcapEnvironmentPostProcessor.html
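As a minimal sketch, assuming your RabbitMQ service instance is named my-rabbitmq (substitute the name shown by cf services), the corrected placeholders would look like this:
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class RabbitCredentials {

    // "my-rabbitmq" is a hypothetical instance name -- use your actual service instance name
    @Value("${vcap.services.my-rabbitmq.credentials.hostname}")
    private String hostname;

    @Value("${vcap.services.my-rabbitmq.credentials.vhost}")
    private String virtualHost;

    @Value("${vcap.services.my-rabbitmq.credentials.username}")
    private String username;

    @Value("${vcap.services.my-rabbitmq.credentials.password}")
    private String password;
}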
Having explained all that, it can still be a fair bit of work to wire up all of your properties, especially if you're using multiple services or, as in your case, RabbitMQ, which passes a lot of information through VCAP_SERVICES.
UPDATE
Spring Cloud Connectors is still an option and is left below for historical context, but java-cfenv is the recommended way to access service properties going forward. It plays more nicely with Spring Boot and is every bit as capable.
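A rough sketch of what a java-cfenv lookup might look like (the label "p-rabbitmq" is taken from your example; the exact credential keys depend on your broker):
import io.pivotal.cfenv.core.CfCredentials;
import io.pivotal.cfenv.core.CfEnv;

public class RabbitCredentialsReader {

    // Sketch only: look up the bound service by its label, then read individual credential fields
    public void readCredentials() {
        CfEnv cfEnv = new CfEnv();
        CfCredentials credentials = cfEnv.findCredentialsByLabel("p-rabbitmq");

        String hostname = credentials.getHost();
        String virtualHost = credentials.getString("vhost");
        String username = credentials.getUsername();
        String password = credentials.getPassword();
    }
}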
DEPRECATED
Another option to consider is Spring Cloud Connectors. This is a library that will basically handle all of this for you. You ask for a DataSource or ConnectionFactory and it makes sure one is wired into your application.
Ex:
@Bean
public ConnectionFactory rabbitFactory() {
    return connectionFactory().rabbitConnectionFactory();
}
See Getting Started if you're interested in checking this out.
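For completeness, a hedged sketch of how that connection factory might then be consumed (a RabbitTemplate is just one common consumer, not something prescribed above):
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MessagingConfig {

    // Builds a RabbitTemplate from the ConnectionFactory the connector exposes
    @Bean
    public RabbitTemplate rabbitTemplate(ConnectionFactory rabbitFactory) {
        return new RabbitTemplate(rabbitFactory);
    }
}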

Related

Integrate Spring Boot Actuator with New Relic

I am trying to integrate New Relic with the Spring Boot actuator. Most tutorials and responses on StackOverflow suggest using the New Relic Java agent, but per the Spring Boot documentation installing the Java agent is not mandatory (unless I misunderstood something); I also checked this. So, here is my application.properties currently:
management.metrics.export.newrelic.enabled = true
management.metrics.export.newrelic.api-key = <API_KEY>
management.metrics.export.newrelic.account-id = <ACCOUNT_ID>
logging.level.io.micrometer.newrelic=TRACE
management.metrics.export.newrelic.step=30s
and in the log I am seeing
2021-01-11 12:05:18.315 DEBUG 44635 --- [trics-publisher] i.m.n.NewRelicInsightsApiClientProvider : successfully sent 73 metrics to New Relic.
Based on these logs it looks like metrics are being sent, but I have no idea where to see them. Ideally I would like to pass the app name as well so that I can differentiate metrics by app name, and preferably by env as well later. Any suggestions?
To add "app name" and "env" to your metrics, you just need to configure the MeterFilter with the common tags:
@Configuration
public class MetricsConfig {

    @Bean
    public MeterFilter commonTagsMeterFilter(@Value("...") String appName, @Value("...") String env) {
        return MeterFilter.commonTags(Tags.of(Tag.of("app name", appName), Tag.of("env", env)));
    }
}
By setting the following property you should be able to see which metrics are being sent to New Relic:
logging.level.io.micrometer.newrelic=TRACE

Spring boot application properties load process change programatically to improve security

I have a Spring Boot microservice with database credentials defined in the application properties.
spring.datasource.url=<<url>>
spring.datasource.username=<<username>>
spring.datasource.password=<<password>>
We do not create the data source manually; Spring creates the database connection through JPA (org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration).
We only provide the application properties, and Spring creates the connections automatically for use with the database connection pool.
Our requirement is to enhance security without keeping the DB properties in clear text. Two possible methods:
Encrypt the database credentials
Use AWS Secrets Manager (then fetch the credentials when the application loads)
For option 1, Jasypt can be used. Since we only provide the properties and do not want to create the data source manually, the problem is how to make the encrypted values understood by the Spring framework. A working sample or approach would help.
Regarding option 2:
First we need to define the secretName.
Use the secretName to get the database credentials from AWS Secrets Manager.
Update the application properties programmatically so they are picked up by the Spring framework. (I need to know this step.)
I need to use either option 1 or option 2. The issues with each option are mentioned above.
What you could do is use environment variables for your properties. You can use them like this:
spring.datasource.url=${SECRET_URL}
You could then retrieve these values and start your Spring process using a ProcessBuilder (or set the variables any other way).
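A minimal sketch of that launcher approach (the jar name and the way the secret is fetched are placeholder assumptions, not part of the original answer):
import java.io.IOException;

public class SecureLauncher {

    public static void main(String[] args) throws IOException {
        // Placeholder: fetch the secret from wherever you keep it (vault, secrets manager, etc.)
        String secretUrl = fetchSecretUrlSomehow();

        // Start the Spring Boot jar with the secret exposed only as an environment variable
        ProcessBuilder builder = new ProcessBuilder("java", "-jar", "my-service.jar");
        builder.environment().put("SECRET_URL", secretUrl);
        builder.inheritIO();
        builder.start();
    }

    private static String fetchSecretUrlSomehow() {
        // Hypothetical helper for illustration only
        return "jdbc:sqlserver://localhost:1433;databaseName=mydb";
    }
}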
I have found the solution to my problem.
We need to define org.springframework.context.ApplicationListener in the spring.factories file. It should declare the required application context listener like below:
org.springframework.context.ApplicationListener=com.sample.PropsLoader
The PropsLoader class looks like this:
public class PropsLoader implements ApplicationListener<ApplicationEnvironmentPreparedEvent> {

    @Override
    public void onApplicationEvent(ApplicationEnvironmentPreparedEvent event) {
        ConfigurableEnvironment environment = event.getEnvironment();
        String appEnv = environment.getProperty("application.env");

        // set new properties based on the application environment,
        // calling other methods that resolve the required values for that environment
        Properties props = new Properties();
        props.put("new_property", "value");
        environment.getPropertySources().addFirst(new PropertiesPropertySource("props", props));
    }
}
The spring.factories file should be defined under the resources folder, in a META-INF directory.
This sets the new properties on the application environment before any other beans are loaded.
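If you go the AWS Secrets Manager route, a hedged sketch of what such a listener might do (this variation of the PropsLoader above uses the AWS SDK v1 Secrets Manager client; the secret name and JSON keys are assumptions for illustration):
import com.amazonaws.services.secretsmanager.AWSSecretsManager;
import com.amazonaws.services.secretsmanager.AWSSecretsManagerClientBuilder;
import com.amazonaws.services.secretsmanager.model.GetSecretValueRequest;
import org.json.JSONObject;
import org.springframework.boot.context.event.ApplicationEnvironmentPreparedEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.PropertiesPropertySource;

import java.util.Properties;

public class SecretsManagerPropsLoader
        implements ApplicationListener<ApplicationEnvironmentPreparedEvent> {

    @Override
    public void onApplicationEvent(ApplicationEnvironmentPreparedEvent event) {
        ConfigurableEnvironment environment = event.getEnvironment();

        // Assumed secret name and JSON structure -- adjust to match your secret
        AWSSecretsManager client = AWSSecretsManagerClientBuilder.defaultClient();
        String secretJson = client.getSecretValue(
                new GetSecretValueRequest().withSecretId("my-service/db-credentials"))
                .getSecretString();

        JSONObject secret = new JSONObject(secretJson);
        Properties props = new Properties();
        props.put("spring.datasource.username", secret.getString("username"));
        props.put("spring.datasource.password", secret.getString("password"));

        environment.getPropertySources().addFirst(new PropertiesPropertySource("dbSecrets", props));
    }
}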

How to find all DataSources binding to an application

I want to use spring-boot and spring-cloud to get all DataSources bound to the Cloud Foundry application.
Is there a way to get the list?
If I can get service names, I can also use
AbstractCloudConfig.connectionFactory().dataSource(serviceId)
to create the DataSource.
You can do something like this to enumerate over the list of database services and get a DataSource for each one:
Cloud cloud = abstractCloudConfig.cloud();
List<ServiceInfo> serviceInfos = cloud.getServiceInfos(DataSource.class);
List<DataSource> dataSources = new ArrayList<>();

for (ServiceInfo serviceInfo : serviceInfos) {
    dataSources.add(cloud.getServiceConnector(serviceInfo.getId(), DataSource.class, null));
}
DataSource configuration is set in the container environment inside the VCAP_SERVICES variable in Cloud Foundry. System.getenv("VCAP_SERVICES") should list all the data sources in your case.
Refer to:
https://docs.run.pivotal.io/devguide/deploy-apps/environment-variable.html#VCAP-SERVICES
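As a rough sketch of that approach (reusing the org.json style from the first question; the service label "p-mysql" is an assumption and depends on which database broker you use):
import org.json.JSONArray;
import org.json.JSONObject;

public class VcapServicesLister {

    public static void main(String[] args) {
        JSONObject vcapServices = new JSONObject(System.getenv("VCAP_SERVICES"));

        // "p-mysql" is only an example label; iterate over vcapServices.keySet()
        // if you want every bound service regardless of broker
        JSONArray instances = vcapServices.getJSONArray("p-mysql");
        for (int i = 0; i < instances.length(); i++) {
            JSONObject serviceInfo = instances.getJSONObject(i);
            JSONObject credentials = serviceInfo.getJSONObject("credentials");
            System.out.println(serviceInfo.getString("name") + " -> "
                    + credentials.optString("jdbcUrl", credentials.optString("uri")));
        }
    }
}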

How to store application.properties values using manifest.yml to contain passwords?

So my application.properties would look like:
spring.datasource.url=jdbc:sqlserver://localhost:1433;databaseName=mydb
spring.datasource.username=user
spring.datasource.password=123456
spring.jpa.database-platform=org.hibernate.dialect.SQLServer2012Dialect
I don't want others to be able to see my user and password when they go into my application.properties file.
Is there an alternative way to push values to cloud foundry? Something like manifest.yml?
Attempt to create manifest.yml
I tried to make a manifest file so I can bind it to my application on Cloud Foundry.
VCAP_SERVICES =
{
  "oraclesql": [
    {
      "name": "OrcaleDb",
      "label": "oraclesql",
      "tags": [
        "oracledb",
        "oracle",
        "relational"
      ],
      "plan": "free",
      "credentials": {
        "uri": "jdbc:sqlserver://localhost:1433;databaseName=mydb",
        "username": "user",
        "password": "123456"
      }
    }
  ]
}
Created application.yml
# this works
spring:
  application:
    name: tester
  datasource:
    driverClassName: jdbc:sqlserver://localhost:1433;databaseName=mydb
    username: user
    password: 123456
    initialize: false
  jpa:
    databasePlatform: org.hiberate.dialect.Oracle10gDialect
Yes and no. Unfortunately the manifest.yml doesn't work quite like that. The file provides the settings that will be used for cf push. It's application agnostic and doesn't know anything about Spring, Java, your programming language or your application framework of choice. Thus it cannot make changes to your application.properties file or other framework specific configuration.
Fortunately, there are many ways to configure Spring Boot (see here) and one of them is via environment variables. What's fortunate about this is that you can set environment variables via manifest.yml. Thus if you set the proper environment variables in manifest.yml, you can configure your application.
For reference, here are instructions for setting env variables in manifest.yml.
https://docs.cloudfoundry.org/devguide/deploy-apps/manifest.html#env-block
It looks roughly like this:
---
...
env:
  ENV_1: val_1
  ENV_2: val_2
Spring Boot will map environment variables to properties using the rules explained here. A basic example would be SPRING_DATASOURCE_USERNAME maps to spring.datasource.username in application.properties.
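As a small illustrative sketch (the class itself is hypothetical; the property names follow the relaxed-binding rules above), values set in the manifest's env: block resolve like any other Spring property:
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class DataSourceCredentials {

    // Resolved from the SPRING_DATASOURCE_USERNAME / SPRING_DATASOURCE_PASSWORD
    // environment variables set in manifest.yml
    @Value("${spring.datasource.username}")
    private String username;

    @Value("${spring.datasource.password}")
    private String password;
}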
What you actually want to do here is have your database connection as a cloud foundry service rather than defined in your application.properties. As Daniel's answer suggests, Spring can pick up ENV variables, but setting connection details in the manifest is not the idiomatic way to do this.
By having a service, the connection details are stored in the VCAP_SERVICES variable when bound to the application - spring can read from there. The spring-music app shows the prototypical way of doing this.

How to use Spring Cloud with ElasticMQ

We want to use Spring Cloud AWS for SQS, but it seems to only allow us to specify a region. Can we fake it so that it uses ElasticMQ (on localhost:9320, for instance)? I didn't find an easy way to do this without editing the hosts file and putting certificates on localhost.
I found a way after some research.
You should set the endpoint after the AmazonSQS instance is injected, in order to override the endpoint that is already set, like so:
@Autowired
public void setAmazonSqs(AmazonSQS amazonSqs) {
    this.amazonSqs = amazonSqs;

    // use ElasticMQ if the Spring default profile is used (no active profiles)
    if (environment.getActiveProfiles().length == 0) {
        amazonSqs.setEndpoint("http://localhost:9320");
    }
}
Whether you use QueueMessagingTemplate is up to you; either way, you should modify the injected AmazonSQS instance.
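If you do go the QueueMessagingTemplate route, a possible sketch (note that QueueMessagingTemplate takes an AmazonSQSAsync client; the localhost endpoint is the ElasticMQ address from above):
import com.amazonaws.services.sqs.AmazonSQSAsync;
import org.springframework.cloud.aws.messaging.core.QueueMessagingTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.Environment;

@Configuration
public class SqsConfig {

    @Bean
    public QueueMessagingTemplate queueMessagingTemplate(AmazonSQSAsync amazonSqsAsync,
                                                         Environment environment) {
        // Point the client at ElasticMQ when no profile is active, mirroring the answer above
        if (environment.getActiveProfiles().length == 0) {
            amazonSqsAsync.setEndpoint("http://localhost:9320");
        }
        return new QueueMessagingTemplate(amazonSqsAsync);
    }
}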
