Why can't I pass a URL from my Jelastic manifest settings to the Let's Encrypt manifest?

I am trying to do something very simple. I have a Jelastic environment with an nginx load balancer. On that balancer, I want to install the Let's Encrypt add-on with the following manifest:
type: update
name: load balancer
targetNodes:
  nodeGroup:
    - bl
settings:
  fields:
    - name: externalDomains
      caption: External domain names (;-separated list)
      type: string
      vtype: domainlist
      required: true
onInstall:
  - installAddon:
      id: letsencrypt
addons:
  - id: letsencrypt
    name: letsencrypt
    onInstall:
      - install [bl]:
          envName: ${env.envName}
          jps: https://github.com/jelastic-jps/lets-encrypt/blob/master/manifest.jps
          settings:
            customDomains: ${settings.externalDomains}
When I run that manifest, I need to provide an external domain:
The installation then runs successfully, at least in appearance. I then click the add-on's "Configure" button:
And I see that, unfortunately, the "External Domain(s)" field is empty:
That is surprising, because I set it to ${settings.externalDomains}.
If I, however, install the following manifest, then everything is fine:
type: update
name: load balancer
targetNodes:
  nodeGroup:
    - bl
onInstall:
  - installAddon:
      id: letsencrypt
addons:
  - id: letsencrypt
    name: letsencrypt
    onInstall:
      - install [bl]:
          envName: ${env.envName}
          jps: https://github.com/jelastic-jps/lets-encrypt/blob/master/manifest.jps
          settings:
            customDomains: ${env.envName}.my-provider.com
As long as I write something manually in the add-on's customDomains field, it is fine. As soon as I put a value from the settings there, the value gets discarded. What am I doing wrong?

The parameter that ends up in customDomains must first be passed to the add-on itself:
onInstall:
  - installAddon:
      id: letsencrypt
      settings:
        externalDomains: ${settings.externalDomains}
Then it can be used in the add-on body.
The full add-on manifest:
type: update
name: load balancer
targetNodes:
  nodeGroup:
    - bl
settings:
  fields:
    - name: externalDomains
      caption: External domain names (;-separated list)
      type: string
      vtype: domainlist
      required: true
onInstall:
  - installAddon:
      id: letsencrypt
      settings:
        externalDomains: ${settings.externalDomains}
addons:
  - id: letsencrypt
    name: letsencrypt
    onInstall:
      - install:
          envName: ${env.envName}
          nodeGroup: bl
          jps: https://github.com/jelastic-jps/lets-encrypt/blob/master/manifest.jps
          settings:
            customDomains: ${settings.externalDomains}
JPS behavior can be checked in the console tab:
{DOMAIN_URL}/console
The test manifest console logs:
[07:33:10 letsencrypt]: BEGIN INSTALLATION: letsencrypt
[07:33:11 letsencrypt]: BEGIN HANDLE EVENT: {"topic":"application/install","envAppid":"c5b959b2a936d56a23daa6964b15dc19"}
[07:33:11 letsencrypt:1]: install [bl]: {"envName":"env-sup","nodeGroup":"bl","settings":{"customDomains":"domain8.com"}}
[07:33:11]: BEGIN MIXINS INITIALIZATION: Let's Encrypt Free SSL
[07:33:11]: loading mixin [configs/vers.yaml].response: {"result":0}
[07:33:11]: END MIXINS INITIALIZATION: Let's Encrypt Free SSL
[07:33:12 Let's.SSL]: BEGIN INSTALLATION: Let's Encrypt Free SSL
[07:33:12 Let's.SSL]: BEGIN HANDLE EVENT: {"topic":"application/install","envAppid":"c5b959b2a936d56a23daa6964b15dc19"}
[07:33:12 Let's.SSL:1]: setGlobals [bl]: {"nodeId":"","nodeGroup":"bl","withExtIp":"true","webroot":"","webrootPath":"","fallbackToX1":"","deployHook":"","deployHookType":"","undeployHook":"","undeployHookType":"","test":""}
Here you can see whether the parameters were passed and displayed successfully.

Related

serverless offline start hot reload not working

I am unable to hot reload when using serverless offline start.
Here is my serverless.yml file
service: function-with-environment-variables
frameworkVersion: ">=3.0.0 <4.0.0"
provider:
  name: aws
  runtime: nodejs16.x
plugins:
  - serverless-offline
functions:
  register:
    handler: handler.register
    events:
      - http:
          path: /register
          method: post
  login:
    handler: handler.login
    events:
      - http:
          path: /login
          method: post
  verify:
    handler: handler.verify
    events:
      - http:
          path: /verify
          method: post
I have also tried sls offline start but still face the same problem.
Here is the output of serverless --version:
Running "serverless" from node_modules
Framework Core: 3.24.1 (local) 3.24.1 (global)
Plugin: 6.2.2
SDK: 4.3.2
Try using this command to start your server:
serverless offline start --reloadHandler
The --reloadHandler flag reloads the handler with each request. More info here:
https://github.com/dherault/serverless-offline/issues/864#issuecomment-1190950178
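If you'd rather not pass the flag on every run, serverless-offline options can also be set in serverless.yml under the plugin's custom block; a minimal sketch, equivalent to the flag above:
custom:
  serverless-offline:
    # same effect as passing --reloadHandler on the CLI
    reloadHandler: true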

kubernetes liveness probe is unauthorized

I am trying to define a livenessProbe by passing the value of an httpHeader from a secret, but I am getting 401 Unauthorized.
- name: mycontainer
  image: myimage
  env:
    - name: MY_SECRET
      valueFrom:
        secretKeyRef:
          name: actuatortoken
          key: token
  livenessProbe:
    httpGet:
      path: /test/actuator/health
      port: 9001
      httpHeaders:
        - name: Authorization
          value: $MY_SECRET
My secret is as follows:
apiVersion: v1
kind: Secret
metadata:
  name: actuatortoken
type: Opaque
stringData:
  token: Bearer <token>
If I pass the same header with the actual value, as below, it works as expected:
- name: mycontainer
  image: myimage
  livenessProbe:
    httpGet:
      path: /test/actuator/health
      port: 9001
      httpHeaders:
        - name: Authorization
          value: Bearer <token>
Any help is highly appreciated.
What you have will put the literal string $MY_SECRET in the Authorization header, which won't work; probe httpHeaders values are not expanded from environment variables.
You don't want to put the actual value of the secret in your Pod/Deployment/whatever YAML since you don't want plaintext credentials in there.
Three options I can think of:
a) change your app to not require authentication for the /test/actuator/health endpoint;
b) change your app to not require authentication when the requested host is 127.0.0.1, and update the probe configuration to use that as the host;
c) switch from an HTTP probe to a command probe and write the curl/wget command yourself (see the sketch below).
This answer is posted as Community wiki, as it is based on Amit Kumar Gupta's comments.
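A minimal sketch of option (c), assuming curl is available in the container image and reusing the MY_SECRET env var and endpoint from the question:
livenessProbe:
  exec:
    command:
      - sh
      - -c
      # the shell expands $MY_SECRET here, which httpGet's httpHeaders never does
      - 'curl -fsS -H "Authorization: $MY_SECRET" http://127.0.0.1:9001/test/actuator/health'
The probe fails whenever curl exits non-zero, which -f guarantees on any HTTP error status.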

aws serverless - exporting output value for cognito authorizer

I'm trying to share a Cognito authorizer between my stacks. For this, I'm exporting my authorizer, but when I try to reference it in another service I get the error:
Trying to request a non exported variable from CloudFormation. Stack name: "myApp-services-test" Requested variable: "ExtApiGatewayAuthorizer-test".
Here is my stack where I have authorizer defined and exported:
CognitoUserPool:
  Type: AWS::Cognito::UserPool
  Properties:
    # Generate a name based on the stage
    UserPoolName: ${self:provider.stage}-user-pool
    # Set email as an alias
    UsernameAttributes:
      - email
    AutoVerifiedAttributes:
      - email
ApiGatewayAuthorizer:
  Type: AWS::ApiGateway::Authorizer
  Properties:
    Name: CognitoAuthorizer
    Type: COGNITO_USER_POOLS
    IdentitySource: method.request.header.Authorization
    RestApiId: { "Ref": "ProxyApi" }
    ProviderARNs:
      - Fn::GetAtt:
          - CognitoUserPool
          - Arn
ApiGatewayAuthorizerId:
  Value:
    Ref: ApiGatewayAuthorizer
  Export:
    Name: ExtApiGatewayAuthorizer-${self:provider.stage}
This is successfully exported, as I can see it in the stack exports list in my AWS console.
I try to reference it in another stack like this:
myFunction:
  handler: handler.myFunction
  events:
    - http:
        path: /{userID}
        method: put
        cors: true
        authorizer:
          type: COGNITO_USER_POOLS
          authorizerId: ${myApp-services-${self:provider.stage}.ExtApiGatewayAuthorizer-${self:provider.stage}}
My environment info:
Your Environment Information ---------------------------
Operating System: darwin
Node Version: 12.13.1
Framework Version: 1.60.5
Plugin Version: 3.2.7
SDK Version: 2.2.1
Components Core Version: 1.1.2
Components CLI Version: 1.4.0
Answering my own question: it looks like I should have imported by output name, not by output export name. That is a bit weird, since all the docs I have seen point to the export name, but this is how I was able to make it work.
I replaced this:
authorizerId: ${myApp-services-${self:provider.stage}.ExtApiGatewayAuthorizer-${self:provider.stage}}
with this:
authorizerId: ${myApp-services-${self:provider.stage}.ApiGatewayAuthorizerId}
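For comparison, the Framework's cf variable source performs the same output-name lookup explicitly; a hypothetical equivalent of the working line above (assuming the producing stack is named myApp-services-<stage>) would be:
authorizerId: ${cf:myApp-services-${self:provider.stage}.ApiGatewayAuthorizerId}
Either way the reference resolves the output's logical name; only Fn::ImportValue consumes the Export name.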
If you come across Trying to request a non exported variable from CloudFormation. Stack name: "myApp-services-test" Requested variable: "ExtApiGatewayAuthorizer-test"., note that exporting your profile, i.e.
export AWS_PROFILE=your_profile
must be done in the terminal window where you run sls deploy, not in another terminal window. It is a silly mistake, but I don't want anyone else to waste their time on it.

Openshift/Kubernetes ssh Secret doesn't work with Camel SFTP component

Long story short:
While passing an SSH key retrieved from an OpenShift secret to the Apache Camel SFTP component, the component is not able to connect to the server, whereas if I directly pass the path of the actual SSH key file to the same component, without creating a secret, it works just fine. The exception is "invalid key". I also tried to read the key file in Java and pass it as a byte array via the privateKey parameter, but no luck; passing the key as bytes does not seem to work by any means.
SFTP component properties:
sftp:
  host: my.sftp.server
  port: 22
  fileDirectory: /to
  fileName: /app/home/file.txt
  username: sftp-user
  privateKeyFilePath: /var/run/secret/secret-volume/ssh-privatekey  # also tried the privateKey param with a byte array
  knownHostsFile: resource:classpath:keys/known_hosts
  binary: true
Application details:
I am using OpenShift 3.11, developing Camel Spring Boot micro-integration services configured with the fabric8 and spring-cloud-kubernetes plugins for deployment.
I am creating the secret with:
oc secrets new-sshauth sshsecret --ssh-privatekey=$HOME/.ssh/id_rsa
I have tried to reference the secret from deployment.yml and from bootstrap.yml.
Using it as an env variable with secretKeyRef:
deployment.yml:
- name: SSH_SECRET
  valueFrom:
    secretKeyRef:
      name: sshsecret
      key: ssh-privatekey
bootstrap.yml:
spring:
  cloud:
    kubernetes:
      secrets:
        enabled: true
        enableApi: true
        name: sshsecret
Using it as a mounted volume:
deployment.yml:
volumeMounts:
  - mountPath: /var/run/secret/secret-volume
    name: secret-volume
volumes:
  - name: secret-volume
    secret:
      secretName: sshsecret
bootstrap.yml:
spring:
  cloud:
    kubernetes:
      secrets:
        enabled: true
        paths: /var/run/secret/secret-volume
Note: once the service is deployed, I can see that the mounted volume is attached to the container; I can even bash into the pod, go to the same directory, and locate the private key, which is completely intact.
Any help will be appreciated. Ask me whatever you need to know to solve this.
It was a very bad mistake on my side: I was using privateKeyUri in the Camel SFTP component instead of privateKeyFile. I had not noticed it because I was always changing those SFTP parameters directly in the config map.
By the way, for those trying to implement a similar use case: use the second option, i.e. mount the secret into a volume and then refer to the volume path inside Camel (a sketch of the resulting endpoint follows). Don't use the secret as an env variable; that way you need not enable the secrets API inside bootstrap.yml.
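For illustration, a minimal sketch of the corrected endpoint as a Camel URI, reusing the host, user, and mounted secret path from above (the exact option name can vary between Camel versions; privateKeyFile is assumed here):
sftp://sftp-user@my.sftp.server:22/to?privateKeyFile=/var/run/secret/secret-volume/ssh-privatekey&knownHostsFile=resource:classpath:keys/known_hosts&binary=true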
Thanks anyway, cheers!
Rito

How do I configure a Google Compute Instance to allow http traffic when deploying from a YAML config file?

As the title suggests, I'm trying to configure a deployment in GCP. At the moment the deployment consists of just a single Compute instance, but I am having trouble adding the http-server and https-server tags to the config file. The instance is created fine without the tags. Here is the top of my YAML file:
resources:
- type: compute.v1.instance
  name: [redacted]
  properties:
    zone: europe-west1-b
    # Allow http and https traffic
    tags:
      - http-server
      - https-server
    machineType: https://www.googleapis.com/compute/v1/projects/[redacted]/zones/europe-west1-b/machineTypes/f1-micro
    .......etc
The error I get is:
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation [operation-1548860751491-580ae3ee63331-467fd040-1f00fce0]: errors:
- code: CONDITION_NOT_MET
  location: /deployments/[redacted]/resources/[redacted]>$.properties
  message: '"/tags": domain: validation; keyword: type; message: instance does not
    match any allowed primitive type; allowed: ["object"]; found: "array"'
This is my first attempt at writing a YAML config file, so there could be some simple context issues.
I managed to fix it myself:
tags:
  items:
    - http-server
    - https-server
Add the tag to the compute instance properties:
tags:
  items: ["http-server"]
Add a new resource (below the compute instance's network interface):
- type: compute.v1.firewall
  name: default-allow-http
  properties:
    targetTags: ["http-server"]
    sourceRanges: ["0.0.0.0/0"]
    allowed:
    - IPProtocol: TCP
      ports: ["80"]
