Error: File not found /home/vcap/app/xs-app.json - s4sdk

I am currently trying to install the approuter, following this tutorial:
https://blogs.sap.com/2017/07/18/step-7-with-sap-s4hana-cloud-sdk-secure-your-application-on-sap-cloud-platform-cloudfoundry/
When pushing the approuter to CF, I get an error:
2019-04-29T08:39:34.43+0200 [APP/PROC/WEB/0] ERR /home/vcap/app/approuter/lib/environment.js:19
2019-04-29T08:39:34.43+0200 [APP/PROC/WEB/0] ERR throw new Error('File not found ' + xsappFile);
2019-04-29T08:39:34.43+0200 [APP/PROC/WEB/0] ERR ^
2019-04-29T08:39:34.43+0200 [APP/PROC/WEB/0] ERR Error: File not found /home/vcap/app/xs-app.json
This is my manifest.yml:
---
applications:
- name: xyz
  command: 'node approuter/approuter.js'
  host: xyz-93deb1cd-7b72-4060-94e7-30baef85d259
  path: approuter
  memory: 128M
  buildpack: https://github.com/cloudfoundry/nodejs-buildpack
  env:
    TENANT_HOST_PATTERN: 'xyz(.*).cfapps.eu10.hana.ondemand.com'
    destinations: '[{"name":"service-destination", "url": "https://gfuowb4ett234agtuthorizations-srv.cfapps.eu10.hana.ondemand.com", "forwardAuthToken": true}]'
    SAP_JWT_TRUST_ACL: '[{"clientid" : "*", "identityzone" : "*"}]'
  services:
    - my-xsuaa
    - service-destination
This is my xs-app.json, which is located in my "approuter" folder.
{
  "routes": [{
    "source": "/",
    "target": "/",
    "destination": "service-destination"
  }]
}
This is my folder structure
When I move the xs-app.json to the root folder, where it seems to be expected, I get the following error message:
xs-app.json/routes/0: Format validation failed (Route references unknown destination "service-destination")

I guess that the approuter is looking into the bound destination service and not at the destinations environment variable.

I noticed that there is a destination service instance with the name 'service-destination'; however, you have already defined a destination with the same name in the destinations environment variable. Only one of the two should be present.
I would suggest either maintaining the destination in the environment variable and removing the entry from the services section:
---
applications:
- name: xyz
  command: 'node approuter/approuter.js'
  host: xyz-93deb1cd-7b72-4060-94e7-30baef85d259
  path: approuter
  memory: 128M
  buildpack: https://github.com/cloudfoundry/nodejs-buildpack
  env:
    TENANT_HOST_PATTERN: 'xyz(.*).cfapps.eu10.hana.ondemand.com'
    destinations: '[{"name":"service-destination", "url": "https://gfuowb4ett234agtuthorizations-srv.cfapps.eu10.hana.ondemand.com", "forwardAuthToken": true}]'
    SAP_JWT_TRUST_ACL: '[{"clientid" : "*", "identityzone" : "*"}]'
  # remove service-destination from here and unbind any destination service if already bound
  services:
    - my-xsuaa
Or don't maintain the destinations environment variable at all, and instead maintain the destination in the destination service instance UI (from the cockpit):
---
applications:
- name: xyz
  command: 'node approuter/approuter.js'
  host: xyz-93deb1cd-7b72-4060-94e7-30baef85d259
  path: approuter
  memory: 128M
  buildpack: https://github.com/cloudfoundry/nodejs-buildpack
  env:
    TENANT_HOST_PATTERN: 'xyz(.*).cfapps.eu10.hana.ondemand.com'
    SAP_JWT_TRUST_ACL: '[{"clientid" : "*", "identityzone" : "*"}]'
  services:
    - my-xsuaa
    - service-destination
Note: keeping the destination name and the destination service instance name the same might lead to a lot of confusion. If you follow the first approach, manually unbind the service-destination / destination instance; removing the entry from the services section does not unbind it automatically.
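For reference, unbinding and restaging can be done with the cf CLI (a minimal sketch, assuming the app is named xyz as in the manifest above):

cf unbind-service xyz service-destination
cf restage xyz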

Related

Serverless wsgi unable to import werkzeug

I'm having issues deploying my serverless application to AWS. In AWS the logs show:
Unable to import module 'wsgi_handler': No module named 'werkzeug'
I have explicitly specified werkzeug in my requirements.txt but it seems that when I run sls deploy the packages specified are not being put inside the zip file that is uploaded to my S3 bucket.
Below is a copy of my serverless.yml file:
service: serverless-flask

plugins:
  - serverless-python-requirements
  - serverless-wsgi
  - serverless-dynamodb-local

custom:
  tableName: 'transactions-table-${self:provider.stage}'
  wsgi:
    app: app.app # entrypoint is app.app, which means the app object in the app.py module
    packRequirements: false
  pythonRequirements:
    dockerizePip: true
  dynamodb:
    stages:
      - test
      - dev
    start:
      migrate: true

provider:
  name: aws
  runtime: python3.6
  stage: dev
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
      Resource:
        - { "Fn::GetAtt": ["TransactionsDynamoDBTable", "Arn"] }
  environment:
    TRANSACTIONS_TABLE: ${self:custom.tableName}

functions:
  app:
    handler: wsgi_handler.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'

resources:
  Resources:
    TransactionsDynamoDBTable:
      Type: 'AWS::DynamoDB::Table'
      Properties:
        AttributeDefinitions:
          - AttributeName: transactionId
            AttributeType: S
          - AttributeName: timestamp
            AttributeType: S
        KeySchema:
          - AttributeName: transactionId
            KeyType: HASH
          - AttributeName: timestamp
            KeyType: RANGE
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        TableName: ${self:custom.tableName}
Here is my requirements.txt:
boto3==1.11.17
botocore==1.14.17
Click==7.0
docutils==0.15.2
Flask==1.1.1
itsdangerous==1.1.0
Jinja2==2.11.1
jmespath==0.9.4
MarkupSafe==1.1.1
python-dateutil==2.8.1
s3transfer==0.3.3
six==1.14.0
urllib3==1.25.8
Werkzeug==1.0.0
As far as I know, using the serverless-wsgi plugin should handle the installation of this package automatically, but I see no .requirements folder being created in the .serverless folder or in the zip file that serverless creates.
The requirements.txt file contained inside the zip file only contains the following:
-i https://pypi.org/simple
I'm not sure what I'm doing wrong, but the only solution so far has been to tear down the infrastructure and redeploy with a new URL, which isn't ideal.
Adding a reference to the lambda layer did the trick for me (see the layers section):
api:
  timeout: 30
  handler: wsgi_handler.handler
  layers:
    - { Ref: PythonRequirementsLambdaLayer }
  events:
    - http: ANY /
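Note that { Ref: PythonRequirementsLambdaLayer } only resolves if serverless-python-requirements is told to build the dependencies as a layer. A minimal sketch of the matching custom section (the layer option is provided by that plugin):

custom:
  pythonRequirements:
    dockerizePip: true
    layer: true # packages requirements.txt into a Lambda layer exposed as PythonRequirementsLambdaLayer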
You need to add your files manually to the package.
In your serverless.yml, add this:
package:
  individually: true
  exclude:
    - ./**
  include:
    - requirements.txt
    - <other files>
Once you deploy, you can download the packaged zip from AWS and verify if your files are there.
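You can also check this before deploying by building the artifact locally and listing the zip contents (a sketch, assuming the default .serverless output directory and the service name from above):

sls package
unzip -l .serverless/serverless-flask.zip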

Call service from existing api gateway using base path mappings

Our API has the following endpoints:
POST /users - create a user
GET /users/{userId} - get a particular user
GET /posts/{postId} - get a particular post
GET /posts/{postId}/users - get the users who contributed to this post
I have defined two services: users-service and posts-service. In these two services I define the lambdas like so. I'm using the serverless-domain-manager plugin to create base path mappings:
/users-service/serverless.yaml:
service: users-service

provider:
  name: aws
  runtime: nodejs10.x
  stage: dev

plugins:
  - serverless-domain-manager

custom:
  customDomain:
    domainName: 'serverlesstesting.example.com'
    basePath: 'users'
    stage: ${self:provider.stage}
    createRoute53Record: true

functions:
  create:
    name: userCreate
    handler: src/create.handler
    events:
      - http:
          path: /
          method: post
  get:
    name: userGet
    handler: src/get.handler
    events:
      - http:
          path: /{userId}
          method: get
/posts-service/serverless.yaml:
service: posts-service

provider:
  name: aws
  runtime: nodejs10.x
  stage: dev

plugins:
  - serverless-domain-manager

custom:
  customDomain:
    domainName: 'serverlesstesting.example.com'
    basePath: 'posts'
    stage: ${self:provider.stage}
    createRoute53Record: true

functions:
  get:
    name: postsGet
    handler: src/get.handler
    events:
      - http:
          path: /{postId}
          method: get
  getUsersForPost:
    handler: userGet ?
    events: ??
The problem is that the GET /posts/{postId}/users actually calls the same userGet lambda from the users-service. But the source for that lambda lives in the users-service, not the posts-service.
So my question becomes:
How do I reference a service from another service using base path mappings? In other words, is it possible for the posts service to actually make a call to the parent custom domain and into the users base path mapping and its service?
Consider the approach described in this chapter on sharing an API endpoint between services:
https://serverless-stack.com/chapters/share-an-api-endpoint-between-services.html
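The gist of that chapter: one service owns the API Gateway and exports its REST API ID and root resource ID; the other services import those values and attach their routes to the shared gateway instead of creating their own. A sketch for the posts-service, where the export names are illustrative and must match whatever the users-service actually exports:

# posts-service/serverless.yaml (sketch)
provider:
  name: aws
  runtime: nodejs10.x
  apiGateway:
    restApiId:
      'Fn::ImportValue': UsersApiGatewayRestApiId
    restApiRootResourceId:
      'Fn::ImportValue': UsersApiGatewayRootResourceId

With this in place, a function whose source lives in the users-service can register an event path like posts/{postId}/users on the same shared API.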

How to access service destination from approuter?

It seems that I configured my approuter successfully, and I gave a destination to my service in the SCP Cockpit.
I maintained the destination in the xs-app.json:
{
  "welcomeFile": "/webapp/index.html",
  "authenticationMethod": "route",
  "logout": {
    "logoutEndpoint": "/do/logout"
  },
  "routes": [
    {
      "source": "/destination",
      "target": "/",
      "destination": "service-destination"
    }
  ]
}
My question now is: how can I access my service destination via the approuter?
Shouldn't it be something like this:
https://qfrrz1oj5pilzrw8zations-approuter.cfapps.eu10.hana.ondemand.com/webapp/index.html/destination
...it returns Not found.
Any idea what I'm doing wrong here?
This is my mta.yaml (if relevant):
ID: oDataAuthorizations
_schema-version: '2.1'
version: 0.0.1
modules:
  - name: oDataAuthorizations-db
    type: hdb
    path: db
    parameters:
      memory: 256M
      disk-quota: 256M
    requires:
      - name: oDataAuthorizations-hdi-container
  - name: oDataAuthorizations-srv
    type: java
    path: srv
    parameters:
      memory: 1024M
    provides:
      - name: srv_api
        properties:
          url: '${default-url}'
    requires:
      - name: oDataAuthorizations-hdi-container
        properties:
          JBP_CONFIG_RESOURCE_CONFIGURATION: '[tomcat/webapps/ROOT/META-INF/context.xml: {"service_name_for_DefaultDB" : "~{hdi-container-name}"}]'
      - name: xsuaa-auto
  - name: approuter
    type: html5
    path: approuter
    parameters:
      disk-quota: 256M
      memory: 256M
    build-parameters:
      builder: grunt
    requires:
      - name: dest_oDataAuthorizations
      - name: srv_api
        group: destinations
        properties:
          name: service-destination
          url: '~{url}'
          forwardAuthToken: true
      - name: xsuaa-auto
resources:
  - name: oDataAuthorizations-hdi-container
    type: com.sap.xs.hdi-container
    properties:
      hdi-container-name: '${service-name}'
  - name: xsuaa-auto
    type: org.cloudfoundry.managed-service
    parameters:
      path: ./cds-security.json
      service-plan: application
      service: xsuaa
      config:
        xsappname: xsuaa-auto
        tenant-mode: dedicated
  - name: dest_oDataAuthorizations
    parameters:
      service-plan: lite
      service: destination
    type: org.cloudfoundry.managed-service
You have two hosts:
approuter
srv
The problem:
https://approuter/destination/ will proxy to https://srv/
Notice the root path in the URL: the path segment of your destination URL is ignored by the approuter. Instead, it looks at the routes[0].target declaration in your xs-app.json file.
The symptom:
https://srv/ redirects (307) to /odata/v2. So does https://approuter/destination/
https://approuter/odata/v2/ does not exist (404), because no route is defined for it in your xs-app.json
https://approuter/destination/odata/v2/ gives you the expected response.
The solution:
Adapt your xs-app.json to correctly reference the target endpoint path:
"routes": [
{
"source": "/destination",
"target": "/odata/v2",
"destination": "service-destination"
}
Follow up:
Since your srv application statically references links with the absolute path /odata/v2, you would either have to update each link in srv to use relative paths, or use "/odata/v2/" as your approuter route source to mirror the target (see the sketch below). In the latter case you would lose the "/destination" path.
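A sketch of the latter option, mirroring the backend path in the route; as far as I know, the approuter accepts regular expressions with capture groups in source, replayed via $1 in target:

"routes": [
  {
    "source": "^/odata/v2/(.*)$",
    "target": "/odata/v2/$1",
    "destination": "service-destination"
  }
]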

Cannot access Kibana dashboard

I am trying to deploy Kibana in my Kubernetes cluster which is on AWS. To access the Kibana dashboard I have created an ingress which is mapped to xyz.com. Here is my Kibana deployment file.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: kibana
  labels:
    component: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      component: kibana
  template:
    metadata:
      labels:
        component: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana-oss:6.3.2
          env:
            - name: CLUSTER_NAME
              value: myesdb
            - name: SERVER_BASEPATH
              value: /
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          ports:
            - containerPort: 5601
              name: http
          readinessProbe:
            httpGet:
              path: /api/status
              port: http
            initialDelaySeconds: 20
            timeoutSeconds: 5
          volumeMounts:
            - name: config
              mountPath: /usr/share/kibana/config
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: kibana-config
Whenever I deploy it, it gives me the following error. What should my SERVER_BASEPATH be in order for it to work? I know it defaults to /app/kibana.
FATAL { ValidationError: child "server" fails because [child "basePath" fails because ["basePath" with value "/" fails to match the start with a slash, don't end with one pattern]]
at Object.exports.process (/usr/share/kibana/node_modules/joi/lib/errors.js:181:19)
at internals.Object._validateWithOptions (/usr/share/kibana/node_modules/joi/lib/any.js:651:31)
at module.exports.internals.Any.root.validate (/usr/share/kibana/node_modules/joi/lib/index.js:121:23)
at Config._commit (/usr/share/kibana/src/server/config/config.js:119:35)
at Config.set (/usr/share/kibana/src/server/config/config.js:89:10)
at Config.extendSchema (/usr/share/kibana/src/server/config/config.js:62:10)
at _lodash2.default.each.child (/usr/share/kibana/src/server/config/config.js:51:14)
at arrayEach (/usr/share/kibana/node_modules/lodash/index.js:1289:13)
at Function.<anonymous> (/usr/share/kibana/node_modules/lodash/index.js:3345:13)
at Config.extendSchema (/usr/share/kibana/src/server/config/config.js:50:31)
at new Config (/usr/share/kibana/src/server/config/config.js:41:10)
at Function.withDefaultSchema (/usr/share/kibana/src/server/config/config.js:34:12)
at KbnServer.exports.default (/usr/share/kibana/src/server/config/setup.js:9:37)
at KbnServer.mixin (/usr/share/kibana/src/server/kbn_server.js:136:16)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
isJoi: true,
name: 'ValidationError',
details:
[ { message: '"basePath" with value "/" fails to match the start with a slash, don\'t end with one pattern',
path: 'server.basePath',
type: 'string.regex.name',
context: [Object] } ],
_object:
{ pkg:
{ version: '6.3.2',
branch: '6.3',
buildNum: 17307,
buildSha: '53d0c6758ac3fb38a3a1df198c1d4c87765e63f7' },
dev: { basePathProxyTarget: 5603 },
pid: { exclusive: false },
cpu: { cgroup: [Object] },
cpuacct: { cgroup: [Object] },
server: { name: 'kibana', host: '0', basePath: '/' } },
annotate: [Function] }
I followed this guide https://github.com/pires/kubernetes-elasticsearch-cluster
Any idea what might be the issue?
I believe that the example config in the official Kibana repository gives a hint about the cause of this problem; here's the server.basePath setting:
# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""
Since server.basePath cannot end in a slash, Kibana presumably interprets your value "/" as ending in a slash. I've not dug deeper into this, though.
This error message is interesting:
message: '"basePath" with value "/" fails to match the start with a slash, don\'t end with one pattern'
So the error message complements the documentation: the value must start with a slash but must not end with one.
I reproduced this in minikube using your Deployment manifest, but I removed the volume mount parts at the end. Changing SERVER_BASEPATH to /<SOMETHING> works fine, so basically I think you just need to set a proper basepath (see the sketch below).
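A minimal sketch of the env section, assuming Kibana should be served under /kibana and should strip the prefix itself; SERVER_REWRITEBASEPATH is my assumption of the env-var form of server.rewriteBasePath, which is available as of Kibana 6.3:

env:
  - name: SERVER_BASEPATH
    value: /kibana
  - name: SERVER_REWRITEBASEPATH # assumed mapping to server.rewriteBasePath
    value: "true"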

Runtime config variable Google Deployment manager

Cannot create a google deployment manager runtime config variable
resources:
  - name: star-config
    type: runtimeconfig.v1beta1.config
    properties:
      name: star-config
  - name: igurl_variable
    type: runtimeconfig.v1beta1.variable
    properties:
      name: igurl_variable
      value: 'trek'
      parent: $(ref.star-config.name)
I checked the logs and I see that the status is set to bad_request when I create the above deployment.
Audit log
status: {
  message: "BAD_REQUEST"
}
What could be the reason for the error ?
You should try it with the properties fields as described in the official documentation for both the config and variable resources.
The resource file should be something like:
resources:
  - name: star-config
    type: runtimeconfig.v1beta1.config
    properties:
      config: star-config
  - name: igurl_variable
    type: runtimeconfig.v1beta1.variable
    properties:
      variable: igurl_variable
      text: 'trek'
      parent: $(ref.star-config.name)
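After the deployment succeeds, you can verify the config and variable with the gcloud CLI (the deployment name star-deployment is illustrative):

gcloud deployment-manager deployments create star-deployment --config config.yaml
gcloud beta runtime-config configs variables list --config-name star-config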
