Add JBP_CONFIG option to scdf task - spring

I've updated the Java version in my application and am now receiving the following error:
[ERR] Exception in thread "main" java.lang.UnsupportedClassVersionError: com/spatial/batch/BatchApplication has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0
I added JBP_CONFIG_OPEN_JDK_JRE: '{jre: { version: 11.+ }}' to fix this problem and it helped. Now I want to do the same for the SCDF task. I've tried it like this, but it doesn't work:
spring:
  cloud:
    deployer:
      cloudfoundry:
        domain: ****
        freeDiskSpacePercentage: 15
        org: ****
        password: "****"
        space: ****
        url: "****"
        username: ****
        api-timeout: 360
        javaOpts: -Xms512m -Xmx768m
        skip:
          ssl:
            validation: false
        stream:
          buildpack: java_buildpack_offline
          services: "p-rbt"
        task:
          buildpack: java_buildpack_offline
          services: "config-server,config-server-keystore,p-rbt,logstash-syslog,elastic-apm"
          disk: 4096
          memory: 4096
          taskTimeout: 540
          health:
            check: none
          env:
            JBP_CONFIG_OPEN_JDK_JRE: '{jre: { version: 11.+ }}'
What is the right way to set environment variables for an SCDF task?

The location of the env block is not correct. It should still go in the manifest file, not in the application YAML:
env:
  JBP_CONFIG_OPEN_JDK_JRE: '{jre: { version: 11.+ }}'
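For illustration, a minimal sketch of that in the Data Flow server's manifest.yml (the application name below is just a placeholder, not something from this thread):
applications:
- name: scdf-server          # placeholder name
  env:
    JBP_CONFIG_OPEN_JDK_JRE: '{jre: { version: 11.+ }}'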

Related

Secrets are not read from the vault after migrating to Spring Boot 3 - Getting an error

We are in the process of migrating to Spring Boot 3 from 2.7.7 (we did an incremental upgrade from 2.6.8 to 2.7.7 and then to 3.0.0). We have almost got our application working, except that secrets are not read from Vault after migrating to Spring Boot 3. We get the error: This method requires either a Token (spring.cloud.vault.token) or a token file at ~/.vault-token. The Vault integration worked perfectly fine in the previous version, 2.6.8.
Specifications:
JDK - 17
Spring Boot - 3.0.0
Spring Cloud - 2022.0.0
spring-cloud-starter-vault-config - 4.0.0
bootstrap.yml:
bootstrap.yml: |-
  spring:
    cloud:
      vault:
        enabled: true
        host: pvault.dummy.local
        port: 8200
        uri: https://localhost:8200
        scheme: https
        namespace: rpp
        authentication: KUBERNETES
        generic:
          enabled: false
        kv:
          enabled: true
          backend: kv
          profile-separator: '/'
          application-name: path1/couchbase
        ssl:
          trust-store: classpath:config/vault-truststore.p12
          trust-store-password: password
          #trust-store-type: JKS
        kubernetes:
          role: b2c-isp-bss-role
          kubernetes-path: path1
          service-account-token-file: /var/run/secrets/kubernetes.io/serviceaccount/token
application.yml:
application.yml: |-
  spring:
    cloud:
      bootstrap:
        enabled: true
The migration guide does not suggest any change with respect to Vault. I'm a bit clueless as to where to start the changes.
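A possibly relevant detail: since Spring Cloud 2020.x the bootstrap context is only processed when spring-cloud-starter-bootstrap is on the classpath (or bootstrap is explicitly enabled), and Spring Cloud Vault can alternatively be imported straight from application.yml via spring.config.import. A rough, unverified sketch with values copied from the bootstrap.yml above:
# hypothetical application.yml variant that skips the bootstrap context entirely
spring:
  config:
    import: vault://
  cloud:
    vault:
      host: pvault.dummy.local
      port: 8200
      scheme: https
      namespace: rpp
      authentication: KUBERNETES
      kv:
        enabled: true
        backend: kv
        profile-separator: '/'
        application-name: path1/couchbase
      ssl:
        trust-store: classpath:config/vault-truststore.p12
        trust-store-password: password
      kubernetes:
        role: b2c-isp-bss-role
        kubernetes-path: path1
        service-account-token-file: /var/run/secrets/kubernetes.io/serviceaccount/token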

AWS::CloudFormation::Stack creation through serverless framework failed on localstack

I'm deploying a Lambda function on LocalStack using the Serverless Framework. I've configured AWS credentials in ~/.aws/credentials. When I run the deploy command, I get the following error and couldn't figure out the cause of the failure.
Command: serverless deploy --stage local --aws-profile default
Output:
✖ Stack lambda-api-local failed to deploy (12s)
Environment: darwin, node 16.14.0, framework 3.17.0 (local) 3.17.0v (global), plugin 6.2.2, SDK 4.3.2
Credentials: Local, "default" profile
Docs: docs.serverless.com
Support: forum.serverless.com
Bugs: github.com/serverless/serverless/issues
Error:
CREATE_FAILED: lambda-api-local (AWS::CloudFormation::Stack)
undefined
This is my ~/.aws/credentials
[default]
aws_access_key_id = test
aws_secret_access_key = test
This is my serverless.yml
service: lambda-api
plugins:
- serverless-localstack
provider:
name: aws
stage: local
runtime: go1.x
profile: localstack
package:
patterns:
- '!./**'
- './bin/**'
functions:
hello:
handler: bin/lambda-practice
events:
- http:
path: /hello
method: get
custom:
localstack:
debug: true
endpointFile: localstack_endpoints.json
stages:
# Stages for which the plugin should be enabled
- local
host: http://localhost
edgePort: 4567
autostart: true
lambda:
mountCode: true
docker:
sudo: false
I'm trying to deploy and run a Lambda function on LocalStack through the Serverless Framework.
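The endpointFile referenced in the config (localstack_endpoints.json) isn't shown in the question. For illustration only, such a file is typically a flat JSON map of AWS service names to endpoint URLs; the service names and port below are placeholders that have to match the actual LocalStack setup:
{
  "CloudFormation": "http://localhost:4566",
  "CloudWatch": "http://localhost:4566",
  "Lambda": "http://localhost:4566",
  "S3": "http://localhost:4566",
  "APIGateway": "http://localhost:4566"
}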

Openshift secret in Spring Boot bootstrap.yml

This is how my bootstrap.yml looks:
spring:
  cloud:
    config:
      uri: http://xxxx.com
      username: ****
      password: ****
    vault:
      host: vault-server
      port: 8200
      scheme: http
      authentication: token
      token: ${VAULT_ROOT_TOKEN}
  application:
    name: service-name
management:
  security:
    enabled: false
The application starts when I configure the secret as an ENV variable in the DeploymentConfig (OSE), as below.
name: VAULT_ROOT_TOKEN
value: *********
But configuring the ENV variable to fetch its value from an OSE secret is not working:
name: VAULT_ROOT_TOKEN
valueFrom:
  secretKeyRef:
    name: vault-token
    key: roottoken
The error that I am getting is:
org.springframework.vault.VaultException: Status 400 secret/service-name/default: 400 Bad Request: missing required Host header
Surprisingly, the ENV variable is available inside the container/pod, but somehow it cannot be resolved during the bootstrap procedure.
env | grep TOKEN
VAULT_ROOT_TOKEN=********
My secret configuration in OSE
oc describe secret vault-token
Name: vault-token
Namespace: ****
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
roottoken: 37 bytes
What is missing in my DeploymentConfig or secrets in OSE? How do I configure the secret to be fetched as an ENV variable and injected into the bootstrap.yml file?
NOTE: I can't move the Vault configuration out of bootstrap.yml.
Openshift Enterprise info:
Version:
OpenShift Master:v3.2.1.31
Kubernetes Master:v1.2.0-36-g4a3f9c5
Finally I was able to achieve this. This is what I did.
Provide the token as an argument:
java $JAVA_OPTS -jar -Dspring.cloud.vault.token=${SPRING_CLOUD_VAULT_TOKEN} service-name.jar
This is how my configuration looks:
Deployment Config:
- name: SPRING_CLOUD_VAULT_TOKEN
  valueFrom:
    secretKeyRef:
      name: vault-token
      key: roottoken
Bootstrap file:
spring:
  cloud:
    config:
      uri: http://xxxx.com
      username: ****
      password: ****
    vault:
      host: vault-server
      port: 8200
      scheme: http
      authentication: token
      token: ${SPRING_CLOUD_VAULT_TOKEN}
  application:
    name: service-name
management:
  security:
    enabled: false
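To tie the pieces together, a hypothetical excerpt of the container spec in the DeploymentConfig (the container name, image, and jar path below are placeholders of mine; only the env entry and the -D flag come from the configuration above):
containers:
- name: service-name
  image: registry.example.com/service-name:latest   # placeholder image
  env:
  - name: SPRING_CLOUD_VAULT_TOKEN
    valueFrom:
      secretKeyRef:
        name: vault-token
        key: roottoken
  # a shell wrapper so ${SPRING_CLOUD_VAULT_TOKEN} is expanded before the JVM starts
  command: ["sh", "-c"]
  args: ["java $JAVA_OPTS -Dspring.cloud.vault.token=${SPRING_CLOUD_VAULT_TOKEN} -jar /deployments/service-name.jar"]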
Thanks to my colleagues who provided the inputs.

consul-boshrelease deployment errors

I followed their readme:
https://github.com/cloudfoundry-community/consul-boshrelease/
Result:
Deploying
---------
Director task 1990
Deprecation: Ignoring cloud config. Manifest contains 'networks' section.
Started preparing deployment > Preparing deployment. Done (00:00:00)
Started preparing package compilation > Finding packages to compile. Done (00:00:00)
Started compiling packages
Started compiling packages > envconsul/90d4cc3b4e290c3833cf5e32d0b5c99f4a63c0be
Started compiling packages > consul-template/561a4a5d99c375822876d5482ed24f790a0e216b
Started compiling packages > consul/30f12d1e70d89f28b34a433d2b885a03ae41adae
Failed compiling packages > consul-template/561a4a5d99c375822876d5482ed24f790a0e216b: Unknown CPI error 'InvalidCall' with message 'Arguments are not correct, details: 'expected params[:filters][0][:values][0] to be a String, got value nil (class: NilClass) instead.'' in 'create_vm' CPI method (00:00:12)
Failed compiling packages > envconsul/90d4cc3b4e290c3833cf5e32d0b5c99f4a63c0be: Unknown CPI error 'InvalidCall' with message 'Arguments are not correct, details: 'expected params[:filters][0][:values][0] to be a String, got value nil (class: NilClass) instead.'' in 'create_vm' CPI method (00:00:12)
Failed compiling packages > consul/30f12d1e70d89f28b34a433d2b885a03ae41adae: Unknown CPI error 'InvalidCall' with message 'Arguments are not correct, details: 'expected params[:filters][0][:values][0] to be a String, got value nil (class: NilClass) instead.'' in 'create_vm' CPI method (00:00:12)
Failed compiling packages (00:00:12)
Error 100: Unknown CPI error 'InvalidCall' with message 'Arguments are not correct, details: 'expected params[:filters][0][:values][0] to be a String, got value nil (class: NilClass) instead.'' in 'create_vm' CPI method
Task 1990 error
I've tried to track down this Unknown CPI error, to no avail.
I had the same error message while I was deploying BOSH to AWS. The reason was a mistake in my bosh.yml manifest.
Instead of
cloud_properties: {subnet: subnet-6b54e7f1}
I had written
cloud_properties: {subnet-6b54e7f1}
The other reason was that my instance type was m3; it has to be m4.
After my corrections this error message disappeared.
In the end, the problem was errors in their generated manifest file, which their readme tells you to generate by running templates/make_manifest aws-ec2. Aside from the manifest being wrong, the proper command there is actually just templates/make_manifest aws (without the "ec2" portion).
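For reference, the rough sequence that implies (a sketch; I'm assuming the make_manifest script writes the manifest out and that you point the old BOSH CLI at it before deploying, and the path will differ):
templates/make_manifest aws
bosh deployment deployments/consul-aws.yml   # hypothetical path to the generated manifest
bosh deploy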
Anyways, here's the manifest file that got this deployed for me. Mind you, the Consul cluster wasn't actually working (500 errors on the dashboard), but that's a story for another post. Look for the redacted $FOO items to replace with your own.
compilation:
  cloud_properties:
    instance_type: m3.medium
    availability_zone: us-east-1d
  network: consul1
  reuse_compilation_vms: true
  workers: 6
director_uuid: $YOUR_ID
jobs:
- instances: 3
  name: consul
  networks:
  - name: consul1
  persistent_disk: 4096
  properties:
    consul:
      join_host: 0.consul-z1.consul1.consul-aws.microbosh
      services:
        example: {}
    networks:
      apps: consul1
  resource_pool: small_z1
  templates:
  - consumes:
      consul_servers:
        from: consul_leaders
    name: consul
    provides:
      consul_servers:
        as: consul_leaders
    release: consul
  update:
    canaries: 0
    max_in_flight: 50
name: consul-aws
networks:
- cloud_properties: {}
  name: floating
  type: vip
- cloud_properties:
    subnet: $YOUR_SUBNET
    security_groups:
    - default
    availability_zone: us-east-1d
  name: consul1
  type: dynamic
properties: {}
releases:
- name: consul
  version: latest
resource_pools:
- cloud_properties:
    instance_type: m3.medium
    availability_zone: us-east-1d
  name: small_z1
  network: consul1
  stemcell:
    name: bosh-aws-xen-hvm-ubuntu-trusty-go_agent
    version: latest
update:
  canaries: 0
  canary_watch_time: 1000-60000
  max_in_flight: 50
  serial: true
  update_watch_time: 1000-60000

ERROR: for ansible-container Container command '/usr/local/bin/builder.sh' not found or does not exist

I am getting the error below when I run
$ ansible-container build
ERROR: for ansible-container Container command '/usr/local/bin/builder.sh' not found or does not exist.
ansible/container.yml
version: "1"
services:
  web:
    image: busybox:latest
registries: {}
ansible/main.yml
- hosts:
  tasks:
    - name: Copy something
      copy: src=start_here.sh dest=/etc/start_here.sh
You are missing the settings section for your conductor:
version: "2"
settings:
  conductor:
    base: "centos:7"
Then you can start the services section:
services:
  mongo:
    from: "centos:7"
