Config Server: native property source is ignored - spring

This is my bootstrap.yml file content:
server.port: 8888
spring:
  application:
    name: configserver
  profiles:
    active: native, git, vault
  cloud:
    config:
      enabled: false
      server:
        native:
          searchLocations: classpath:config/
          # searchLocations: file://${native_location}
          order: 3
        git:
          uri: file:///home/jcabre/projects/wsec-sccs/server/repo
          order: 2
        vault:
          host: ${vault_server_host:localhost}
          port: ${vault_server_port:8200}
          scheme: ${vault_server_scheme:https}
          backend: ${vault_backend:configserver}
          profileSeparator: /
          order: 1
As you can see, I've set up three backends: native, git, and vault.
So the classpath:/config/application.yml content is:
foo: FROM NATIVE APPLICATION
/home/jcabre/projects/wsec-sccs/server/repo/application.yml content:
foo: FROM GIT
And Vault:
$ vault kv get configserver/configclient/
=== Data ===
Key    Value
---    -----
foo    FROM VAULT

$ vault kv get configserver/configclient/dev
=== Data ===
Key    Value
---    -----
foo    FROM DEV VAULT
When I try to get the foo config key using curl:
$ curl -sS -X GET http://localhost:8888/configclient/default -H "X-Config-Token: ${vault_token}" | jq .
{
  "name": "configclient",
  "profiles": [
    "default"
  ],
  "label": null,
  "version": null,
  "state": null,
  "propertySources": [
    {
      "name": "vault:configclient",
      "source": {
        "foo": "FROM VAULT"
      }
    },
    {
      "name": "file:///home/jcabre/projects/wsec-sccs/server/repo/application.yml",
      "source": {
        "foo": "FROM GIT APPLICATION"
      }
    }
  ]
}
I only get the git and vault property sources; the native one is never returned.
How can this be happening?
Any ideas?

Not sure if you ever got an answer to this, but I had a similar problem (no native profile when Vault was enabled) so I looked through the code (latest in GitHub).
It would appear that the NativeEnvironmentRepository is only enabled if the native profile is present AND no other environment repositories are configured. So it doesn't look like you are able to do what you want in the question.
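For reference, a minimal native-only sketch of the poster's bootstrap.yml that the answer implies would serve the classpath files (git and vault sections removed; untested against the original setup):

server.port: 8888
spring:
  application:
    name: configserver
  profiles:
    active: native
  cloud:
    config:
      server:
        native:
          searchLocations: classpath:config/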

Related

Properties with same name are getting over written in spring vault 3.0.0

I have configs in HashiCorp Vault with the same names under different paths. But when I try to access them, the config1 prop1 value always ends up being overridden by config2's prop1.
Vault paths:
  path/stage/config1
    prop1
  path/stage/config2
    prop1
Spring Vault version: spring-cloud-starter-config-3.1.1
Spring Boot starter version: 2.7.1
properties.yaml
spring:
  application:
    name: my-app
  cloud:
    kubernetes:
      enabled: false
  cloud.vault:
    uri: https://vaulturi
    connection-timeout: 5000
    read-timeout: 15000
    authentication: token
    token: ${keeper.token}
    namespace: name1/name2
    fail-fast: true
    kv:
      enabled: true
      backend: path/stage
      default-context: config1
  config:
    import: vault://path/stage/config1,vault://path/stage/config2
app:
  prop1: {$(prop1)}
Can I access prop1 as prop1: {$(config1.prop1)} instead of prop1: {$(prop1)}?
When I check the value in actuator/env, I get the following response:
{
  "name": "path/stage/config1",
  "properties": {
    "prop1": {
      "value": "test1"
    }
  }
},
{
  "name": "path/stage/config2",
  "properties": {
    "prop1": {
      "value": "test2"
    }
  }
}
Can someone help me fix this?
Thanks,
Arun
Try importing vault://path/stage instead; then you have config1.prop1 and config2.prop1.
You can then map them to specific properties.
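A minimal sketch of that suggestion applied to the question's properties.yaml; the app.* property names are made up, and it assumes the keys really do come back prefixed with config1./config2. as the answer states:

spring:
  config:
    import: vault://path/stage
app:
  prop-from-config1: ${config1.prop1}
  prop-from-config2: ${config2.prop1}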

How to use AppRole authentication for Vault using Spring Boot?

In my application we are making two calls to get secrets from Vault, as shown below:
Login to Vault: POST call to https::/v1/auth/approle/login -- it takes role_id and secret_id as the payload, and the response is a client_token.
Fetch secrets: GET call to https::/v1/secret/data/abc/dev/xyz.json -- it takes X-Vault-Token and X-Vault-Namespace as headers and returns the response below:
{
  "request_id": "......",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "data": {
      "name": "ABC"
    },
    "metadata": {
      "created_time": "...",
      "deletion_time": "",
      "destroyed": false,
      "version": 1
    }
  }
}
Now I want to use the Spring Cloud Vault dependency to make this work instead. Please can you show me how to set this up?
Assuming you are running Spring Boot and have a working Vault server configured for your app:
Add the Spring Cloud Vault Maven dependency:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-vault-config</artifactId>
</dependency>
Add the Vault configuration to bootstrap.yaml:
spring:
  application:
    name: abc
  cloud:
    vault:
      host: <vault-server-hostname>
      port: <vault-server-port>
      scheme: HTTPS
      namespace: <name-of-vault-namespace>
      authentication: APPROLE
      app-role:
        role-id: <your-application-role-id>
        secret-id: <your-application-secret-id>
        role: <your-application-role>
If you run your app with a Spring profile, such as dev, it will be picked up and appended to the Vault path.
Now you should be able to inject secrets stored under the path secret/data/abc/dev with @Value("${<name-of-property>}").
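To illustrate that last step, a minimal sketch of injecting such a value; the class name is made up, and the name key comes from the question's secret response:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

// Hypothetical component: reads the "name" key from the secret stored
// under secret/data/abc/dev once spring-cloud-starter-vault-config is
// on the classpath and bootstrap.yaml is configured as above.
@Component
public class VaultSecretHolder {

    // "name" is the key from the question's Vault response ({"name": "ABC"})
    @Value("${name}")
    private String name;

    public String getName() {
        return name;
    }
}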

Getting "internal server error" on passing binary data to AWS Lambda function deployed using serverless framework and apigw-binary plugin

What I'm trying
Passing binary data via Lambda integration in API Gateway; the Lambda returns text.
Issue
The function returns the desired output when the API Gateway is configured from the console. To implement it with the Serverless Framework I installed the serverless-apigw-binary plugin. The required binary types show up under API Gateway > Settings > Binary Media Types. However, on calling the API I get "internal server error". The function works properly on application/json input. After enabling and disabling Lambda proxy integration and adding mappings via the console, I get the correct output.
serverless.yml file
org: ------
app: ---------
service: ---------
frameworkVersion: ">=1.34.0 <2.0.0"
plugins:
  - serverless-python-requirements
  - serverless-offline
  - serverless-apigw-binary
provider:
  name: aws
  runtime: python3.7 # fixed with pipenv
  region: us-east-1
  memorySize: 128
  timeout: 60
  profile: ----
custom:
  pythonRequirements:
    usePipenv: true
    useDownloadCache: true
    useStaticCache: true
  apigwBinary:
    types: # list of mime-types
      - 'application/octet-stream'
      - 'application/zip'
functions:
  main:
    handler: handler.main
    events:
      - http:
          path: ocr
          method: post
          integration: lambda
          request:
            passThrough: WHEN_NO_TEMPLATES
            template:
              application/zip: '
                {
                  "type": "zip",
                  "zip": "$input.body",
                  "lang": "$input.params(''lang'')",
                  "config": "$input.params(''config'')",
                  "output_type": "$input.params(''output_type'')"
                }'
              application/json: '
                {
                  "type": "json",
                  "image": $input.json(''$.image''),
                  "lang": "$input.params(''lang'')",
                  "config": "$input.params(''config'')",
                  "output_type": "$input.params(''output_type'')"
                }'
              application/octet-stream: '
                {
                  "type": "img_file",
                  "image": "$input.body",
                  "lang": "$input.params(''lang'')",
                  "config": "$input.params(''config'')",
                  "output_type": "$input.params(''output_type'')"
                }'
handler.py
def main(event, context):
    # do something on event and get txt
    return txt
Edit
I compared the Swagger definitions and found this:
1. API generated from the console (working):
paths:
  /ocr:
    post:
      consumes:
        - "application/octet-stream"
      produces:
        - "application/json"
      responses:
2. API generated from the Serverless Framework:
paths:
  /ocr:
    post:
      consumes:
        - "application/x-www-form-urlencoded"
        - "application/zip"
        - "application/octet-stream"
        - "application/json"
      responses:
produces: - "application/json" is missing. How do I add it in serverless?

How to get or set the clustered database username and password in Jelastic JPS

I am trying to set up a Jelastic clustered database as described in Setting Up Auto-Clusterization with Cloud Scripting, but I don't see documentation there that describes how to either set or retrieve the cluster username and password.
I did try passing db_user and db_pass to the cluster (names I found in some of the sample JPS files), as well as having those as settings, but the credentials were still just the Jelastic-generated ones.
Here is the JPS I am trying to use; it includes a simple Debian container that requires the database credentials as environment variables. In this case the Docker container includes just the MariaDB client for testing purposes; the real environment is a bit more complex than that, running scripts on startup that need the database connection.
{
  "version": "1.5",
  "type": "install",
  "name": "Database test",
  "skipNodeEmails": true,
  "globals":
  {
    "MYSQL_ROOT_USERNAME": "root",
    "MYSQL_ROOT_PASSWORD": "${fn.password(20)}",
    "MYSQL_USERNAME": "username",
    "MYSQL_PASSWORD": "${fn.password(20)}",
    "MYSQL_DATABASE": "database",
    "MYSQL_HOSTNAME": "ProxySQL"
  },
  "nodes":
  [
    {
      "image": "mireiawen/debian-sql",
      "count": 1,
      "cloudlets": 8,
      "nodeGroup": "vds",
      "displayName": "SQL worker",
      "env":
      {
        "MYSQL_ROOT_USERNAME": "${globals.MYSQL_ROOT_USERNAME}",
        "MYSQL_ROOT_PASSWORD": "${globals.MYSQL_ROOT_PASSWORD}",
        "MYSQL_USERNAME": "${globals.MYSQL_USERNAME}",
        "MYSQL_PASSWORD": "${globals.MYSQL_PASSWORD}",
        "MYSQL_DATABASE": "${globals.MYSQL_DATABASE}",
        "MYSQL_HOSTNAME": "${globals.MYSQL_HOSTNAME}"
      }
    },
    {
      "nodeType": "mariadb-dockerized",
      "nodeGroup": "sqldb",
      "count": "2",
      "cloudlets": 16,
      "cluster":
      {
        "scheme": "master"
      }
    }
  ]
}
This JPS seems to launch the MariaDB master-master cluster correctly with ProxySQL included; I am just lacking documentation on how to either provide the database credentials to the database cluster, or retrieve the generated ones as variables in the JPS so I can pass them to the containers.
The mechanism has been improved, so now you can pass custom credentials to the cluster using either environment variables or cluster settings:
type: install
name: env. variables
nodes:
  nodeType: mariadb-dockerized
  nodeGroup: sqldb
  count: 2
  cloudlets: 8
  env:
    DB_USER: customuser
    DB_PASS: custompass
  cluster:
    scheme: master
or
type: install
name: cluster settings
nodes:
  nodeType: mariadb-dockerized
  nodeGroup: sqldb
  count: 2
  cloudlets: 8
  cluster:
    scheme: master
    db_user: customuser
    db_pass: custompass
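To feed the same credentials to the worker container from the question, a sketch combining the question's globals with the cluster settings above (untested; it assumes the cluster settings accept global references just like the env block does):

type: install
name: cluster settings with shared globals
globals:
  MYSQL_USERNAME: customuser
  MYSQL_PASSWORD: ${fn.password(20)}
nodes:
  - image: mireiawen/debian-sql
    count: 1
    cloudlets: 8
    nodeGroup: vds
    env:
      MYSQL_USERNAME: ${globals.MYSQL_USERNAME}
      MYSQL_PASSWORD: ${globals.MYSQL_PASSWORD}
  - nodeType: mariadb-dockerized
    nodeGroup: sqldb
    count: 2
    cloudlets: 8
    cluster:
      scheme: master
      db_user: ${globals.MYSQL_USERNAME}
      db_pass: ${globals.MYSQL_PASSWORD}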
Thank you for the good question. The mechanism for passing custom credentials should be, and will be, improved soon. At the moment you can use the example below: in short, we disable automated clustering and enable it again with a custom username and password.
---
version: 1.5
type: install
name: Database test
skipNodeEmails: true
baseUrl: https://raw.githubusercontent.com/jelastic-jps/mysql-cluster/master
globals:
  logic_jps: ${baseUrl}/addons/auto-clustering/scripts/auto-cluster-logic.jps
  MYSQL_USERNAME: username
  MYSQL_PASSWORD: ${fn.password(20)}
nodes:
  - image: mireiawen/debian-sql
    count: 1
    cloudlets: 8
    nodeGroup: extra
    displayName: SQL worker
    env:
      MYSQL_USERNAME: ${globals.MYSQL_USERNAME}
      MYSQL_PASSWORD: ${globals.MYSQL_PASSWORD}
  - nodeType: mariadb-dockerized
    nodeGroup: sqldb
    count: 2
    cloudlets: 16
    cluster: false
onInstall:
  install:
    jps: ${globals.logic_jps}
    envName: ${env.envName}
    nodeGroup: sqldb
    settings:
      path: ${baseUrl}
      scheme: master
      logic_jps: ${globals.logic_jps}
      db_user: ${globals.MYSQL_USERNAME}
      db_pass: ${globals.MYSQL_PASSWORD}
      repl_user: repl-${fn.random}
      repl_pass: ${fn.password(20)}
After the environment is ready, you can test the connection by executing the following command in your Docker image:
mysql -h proxy -u $MYSQL_USERNAME -p$MYSQL_PASSWORD

spring cloud config server - No such label: master

My Cloud Config server was returning the property files, but now I am seeing the error below. Can you please let me know how this can be fixed?
This is deployed in pivotal cloud foundry environment.
{
  "timestamp": 1464375520539,
  "status": 404,
  "error": "Not Found",
  "exception": "org.springframework.cloud.config.server.environment.NoSuchLabelException",
  "message": "No such label: master",
  "path": "/couchbase-data/dev"
}
application.yml
---
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.company.com/username/ordering-properties
          username: username
          password: "{cipher}03f0ac5cc43d913bbd45155f852d1e5c88542878491a1fc89185feea93a40084"
          search-paths: couchbase-data
security:
  basic:
    enabled: true
  user:
    name: ordering_config
    password: "{cipher}dc56acf65f93b5485c87de1a9965e76a2d0b642a0839027deffdbc35f922746f"
manifest.yml
---
name: orderingconfigserver
memory: 2048M
instances: 1
timeout: 180
env:
  ENCRYPT_KEY: ORDERING
After I deploy the app, the first hit to the endpoint returns the error below:
{
  "timestamp": 1464377154415,
  "status": 500,
  "error": "Internal Server Error",
  "exception": "java.lang.IllegalStateException",
  "message": "Cannot clone or checkout repository",
  "path": "/couchbase-data/dev"
}
If your Git repo's main branch is named "main" instead of "master", I would recommend adding a property to change the default label, as below:
spring.cloud.config.server.git.default-label=main
Check this link for additional info.
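For completeness, the same setting in the application.yml style used in the question (assuming the branch is literally named main):

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.company.com/username/ordering-properties
          default-label: main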
