It seems that I configured my approuter successfully:
[screenshot: approuter]
I gave a destination to my service in SCP Cockpit:
[screenshot: destination config in SCP Cockpit]
And I maintained the destination in the xs-app.json:
{
  "welcomeFile": "/webapp/index.html",
  "authenticationMethod": "route",
  "logout": {
    "logoutEndpoint": "/do/logout"
  },
  "routes": [
    {
      "source": "/destination",
      "target": "/",
      "destination": "service-destination"
    }
  ]
}
My question now is: how can I access my service destination via the approuter?
Shouldn't it be something like this:
https://qfrrz1oj5pilzrw8zations-approuter.cfapps.eu10.hana.ondemand.com/webapp/index.html/destination
[screenshot: accessing the service via approuter]
...it returns Not found.
Any idea what I'm doing wrong here?
This is my mta.yaml (if relevant):
ID: oDataAuthorizations
_schema-version: '2.1'
version: 0.0.1
modules:
  - name: oDataAuthorizations-db
    type: hdb
    path: db
    parameters:
      memory: 256M
      disk-quota: 256M
    requires:
      - name: oDataAuthorizations-hdi-container
  - name: oDataAuthorizations-srv
    type: java
    path: srv
    parameters:
      memory: 1024M
    provides:
      - name: srv_api
        properties:
          url: '${default-url}'
    requires:
      - name: oDataAuthorizations-hdi-container
        properties:
          JBP_CONFIG_RESOURCE_CONFIGURATION: '[tomcat/webapps/ROOT/META-INF/context.xml: {"service_name_for_DefaultDB" : "~{hdi-container-name}"}]'
      - name: xsuaa-auto
  - name: approuter
    type: html5
    path: approuter
    parameters:
      disk-quota: 256M
      memory: 256M
    build-parameters:
      builder: grunt
    requires:
      - name: dest_oDataAuthorizations
      - name: srv_api
        group: destinations
        properties:
          name: service-destination
          url: '~{url}'
          forwardAuthToken: true
      - name: xsuaa-auto
resources:
  - name: oDataAuthorizations-hdi-container
    type: com.sap.xs.hdi-container
    properties:
      hdi-container-name: '${service-name}'
  - name: xsuaa-auto
    type: org.cloudfoundry.managed-service
    parameters:
      path: ./cds-security.json
      service-plan: application
      service: xsuaa
      config:
        xsappname: xsuaa-auto
        tenant-mode: dedicated
  - name: dest_oDataAuthorizations
    parameters:
      service-plan: lite
      service: destination
    type: org.cloudfoundry.managed-service
You have two hosts:
approuter
srv
The problem:
https://approuter/destination/ will proxy to https://srv/
Notice the root path in the URL: the path segment of your destination URL is ignored by the approuter. Instead, it uses the routes[0].target declaration from your xs-app.json file.
The symptom:
https://srv/ redirects (307) to /odata/v2. So does https://approuter/destination/
https://approuter/odata/v2/ does not exist (404), no route was defined in your xs-app.json
https://approuter/destination/odata/v2/ gives you the expected response.
The solution:
Adapt your xs-app.json to correctly reference the target endpoint path:
"routes": [
{
"source": "/destination",
"target": "/odata/v2",
"destination": "service-destination"
}
Follow-up
Since your srv application statically references links to the absolute path /odata/v2, you would either have to update each link in srv to use relative paths, or use "/odata/v2" as your approuter route source to mirror the target. In the latter case you would lose the "/destination" path prefix; a sketch of this variant follows.
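For illustration, the mirrored-path variant could look like the snippet below. This is only a sketch: the regex source with a capture group and the $1 back-reference in the target are standard approuter routing features, but verify the behavior against your approuter version.

"routes": [
  {
    "source": "^/odata/v2/(.*)$",
    "target": "/odata/v2/$1",
    "destination": "service-destination"
  }
]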
Our API has the following endpoints:
POST /users - create a user
GET /users/{userId} - get a particular user
GET /posts/{postId} - get a particular post
GET /posts/{postId}/users - get the users who contributed to this post
I have defined two services: users-service and posts-service. In these two services I define the lambdas like so. I'm using the serverless-domain-manager plugin to create base path mappings:
/users-service/serverless.yaml:
service: users-service
provider:
  name: aws
  runtime: nodejs10.x
  stage: dev
plugins:
  - serverless-domain-manager
custom:
  customDomain:
    domainName: 'serverlesstesting.example.com'
    basePath: 'users'
    stage: ${self:provider.stage}
    createRoute53Record: true
functions:
  create:
    name: userCreate
    handler: src/create.handler
    events:
      - http:
          path: /
          method: post
  get:
    name: userGet
    handler: src/get.handler
    events:
      - http:
          path: /{userId}
          method: get
/posts-service/serverless.yaml:
service: posts-service
provider:
  name: aws
  runtime: nodejs10.x
  stage: dev
plugins:
  - serverless-domain-manager
custom:
  customDomain:
    domainName: 'serverlesstesting.example.com'
    basePath: 'posts'
    stage: ${self:provider.stage}
    createRoute53Record: true
functions:
  get:
    name: postsGet
    handler: src/get.handler
    events:
      - http:
          path: /{postId}
          method: get
  getUsersForPost:
    handler: userGet ?
    events: ??
The problem is that the GET /posts/{postId}/users actually calls the same userGet lambda from the users-service. But the source for that lambda lives in the users-service, not the posts-service.
So my question becomes:
How do I reference a service from another service using base path mappings? In other words, is it possible for the posts service to actually make a call to the parent custom domain and into the users base path mapping and its service?
Consider the approach described in this chapter on sharing an API endpoint between services:
https://serverless-stack.com/chapters/share-an-api-endpoint-between-services.html
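The gist of that chapter is that one service owns the API Gateway REST API and exports its IDs, and the other services attach their routes to it. A minimal sketch, assuming the hypothetical export names UsersApiGatewayRestApiId and UsersApiGatewayRestApiRootResourceId:

# users-service/serverless.yaml: export the REST API for other services
resources:
  Outputs:
    UsersApiGatewayRestApiId:
      Value:
        Ref: ApiGatewayRestApi   # logical ID the Serverless Framework assigns to the REST API
      Export:
        Name: UsersApiGatewayRestApiId
    UsersApiGatewayRestApiRootResourceId:
      Value:
        Fn::GetAtt: [ApiGatewayRestApi, RootResourceId]
      Export:
        Name: UsersApiGatewayRestApiRootResourceId

# posts-service/serverless.yaml: attach to the exported API instead of creating a new one
provider:
  name: aws
  runtime: nodejs10.x
  apiGateway:
    restApiId:
      Fn::ImportValue: UsersApiGatewayRestApiId
    restApiRootResourceId:
      Fn::ImportValue: UsersApiGatewayRestApiRootResourceId

With both services sharing one REST API, the function behind GET /posts/{postId}/users can then be declared in whichever service owns the handler source, using the full resource path.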
I'm currently trying to install the approuter, following this tutorial:
https://blogs.sap.com/2017/07/18/step-7-with-sap-s4hana-cloud-sdk-secure-your-application-on-sap-cloud-platform-cloudfoundry/
When pushing the approuter to CF, I get an error:
2019-04-29T08:39:34.43+0200 [APP/PROC/WEB/0] ERR /home/vcap/app/approuter/lib/environment.js:19
2019-04-29T08:39:34.43+0200 [APP/PROC/WEB/0] ERR throw new Error('File not found ' + xsappFile);
2019-04-29T08:39:34.43+0200 [APP/PROC/WEB/0] ERR ^
2019-04-29T08:39:34.43+0200 [APP/PROC/WEB/0] ERR Error: File not found /home/vcap/app/xs-app.json
This is my manifest.yml:
---
applications:
  - name: xyz
    command: 'node approuter/approuter.js'
    host: xyz-93deb1cd-7b72-4060-94e7-30baef85d259
    path: approuter
    memory: 128M
    buildpack: https://github.com/cloudfoundry/nodejs-buildpack
    env:
      TENANT_HOST_PATTERN: 'xyz(.*).cfapps.eu10.hana.ondemand.com'
      destinations: '[{"name":"service-destination", "url": "https://gfuowb4ett234agtuthorizations-srv.cfapps.eu10.hana.ondemand.com", "forwardAuthToken": true}]'
      SAP_JWT_TRUST_ACL: '[{"clientid" : "*", "identityzone" : "*"}]'
    services:
      - my-xsuaa
      - service-destination
This is my xs-app.json, which is located in my "approuter" folder.
{
  "routes": [{
    "source": "/",
    "target": "/",
    "destination": "service-destination"
  }]
}
[screenshot: folder structure]
When I move the xs-app.json to the root folder, where it seems to be expected, I get the following error message:
xs-app.json/routes/0: Format validation failed (Route references unknown destination "service-destination")
I guess that the approuter is looking into the bound destination service and not the destination environment variable.
I notice that there is a destination service instance with the name 'service-destination'; however, you have already defined a destination in the environment variables with the same name.
Only one of the two should be present.
I would suggest either maintaining the destination in the environment variables and removing the entry from the services section:
---
applications:
  - name: xyz
    command: 'node approuter/approuter.js'
    host: xyz-93deb1cd-7b72-4060-94e7-30baef85d259
    path: approuter
    memory: 128M
    buildpack: https://github.com/cloudfoundry/nodejs-buildpack
    env:
      TENANT_HOST_PATTERN: 'xyz(.*).cfapps.eu10.hana.ondemand.com'
      destinations: '[{"name":"service-destination", "url": "https://gfuowb4ett234agtuthorizations-srv.cfapps.eu10.hana.ondemand.com", "forwardAuthToken": true}]'
      SAP_JWT_TRUST_ACL: '[{"clientid" : "*", "identityzone" : "*"}]'
    # remove service-destination from here and unbind any destination service if already bound
    services:
      - my-xsuaa
or not maintaining the destinations environment variable at all, and instead maintaining the destination in the destination service instance UI (from the cockpit):
---
applications:
  - name: xyz
    command: 'node approuter/approuter.js'
    host: xyz-93deb1cd-7b72-4060-94e7-30baef85d259
    path: approuter
    memory: 128M
    buildpack: https://github.com/cloudfoundry/nodejs-buildpack
    env:
      TENANT_HOST_PATTERN: 'xyz(.*).cfapps.eu10.hana.ondemand.com'
      SAP_JWT_TRUST_ACL: '[{"clientid" : "*", "identityzone" : "*"}]'
    services:
      - my-xsuaa
      - service-destination
Note: keeping the destination name and the destination service instance name the same might lead to a lot of confusion. If you follow the first approach, manually unbind the service-destination instance; removing the entry from the services section does not unbind it automatically.
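For reference, unbinding can be done with the cf CLI (assuming the app is named xyz, as in the manifest above):

cf unbind-service xyz service-destination
cf restage xyz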
I am trying to deploy Kibana in my Kubernetes cluster which is on AWS. To access the Kibana dashboard I have created an ingress which is mapped to xyz.com. Here is my Kibana deployment file.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: kibana
  labels:
    component: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      component: kibana
  template:
    metadata:
      labels:
        component: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana-oss:6.3.2
          env:
            - name: CLUSTER_NAME
              value: myesdb
            - name: SERVER_BASEPATH
              value: /
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          ports:
            - containerPort: 5601
              name: http
          readinessProbe:
            httpGet:
              path: /api/status
              port: http
            initialDelaySeconds: 20
            timeoutSeconds: 5
          volumeMounts:
            - name: config
              mountPath: /usr/share/kibana/config
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: kibana-config
Whenever I deploy it, it gives me the following error. What should my SERVER_BASEPATH be in order for it to work? I know it defaults to /app/kibana.
FATAL { ValidationError: child "server" fails because [child "basePath" fails because ["basePath" with value "/" fails to match the start with a slash, don't end with one pattern]]
at Object.exports.process (/usr/share/kibana/node_modules/joi/lib/errors.js:181:19)
at internals.Object._validateWithOptions (/usr/share/kibana/node_modules/joi/lib/any.js:651:31)
at module.exports.internals.Any.root.validate (/usr/share/kibana/node_modules/joi/lib/index.js:121:23)
at Config._commit (/usr/share/kibana/src/server/config/config.js:119:35)
at Config.set (/usr/share/kibana/src/server/config/config.js:89:10)
at Config.extendSchema (/usr/share/kibana/src/server/config/config.js:62:10)
at _lodash2.default.each.child (/usr/share/kibana/src/server/config/config.js:51:14)
at arrayEach (/usr/share/kibana/node_modules/lodash/index.js:1289:13)
at Function.<anonymous> (/usr/share/kibana/node_modules/lodash/index.js:3345:13)
at Config.extendSchema (/usr/share/kibana/src/server/config/config.js:50:31)
at new Config (/usr/share/kibana/src/server/config/config.js:41:10)
at Function.withDefaultSchema (/usr/share/kibana/src/server/config/config.js:34:12)
at KbnServer.exports.default (/usr/share/kibana/src/server/config/setup.js:9:37)
at KbnServer.mixin (/usr/share/kibana/src/server/kbn_server.js:136:16)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
isJoi: true,
name: 'ValidationError',
details:
[ { message: '"basePath" with value "/" fails to match the start with a slash, don\'t end with one pattern',
path: 'server.basePath',
type: 'string.regex.name',
context: [Object] } ],
_object:
{ pkg:
{ version: '6.3.2',
branch: '6.3',
buildNum: 17307,
buildSha: '53d0c6758ac3fb38a3a1df198c1d4c87765e63f7' },
dev: { basePathProxyTarget: 5603 },
pid: { exclusive: false },
cpu: { cgroup: [Object] },
cpuacct: { cgroup: [Object] },
server: { name: 'kibana', host: '0', basePath: '/' } },
annotate: [Function] }
I followed this guide: https://github.com/pires/kubernetes-elasticsearch-cluster
Any idea what might be the issue?
I believe the example config in the official Kibana repository gives a hint about the cause of this problem; here's the server.basePath setting:
# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""
Since the value "/" both starts and ends with a slash, Kibana evidently interprets your setting as ending in a slash. I've not dug deeper into this, though.
This error message is interesting:
message: '"basePath" with value "/" fails to match the start with a slash, don\'t end with one pattern'
So the error message complements the documentation: the value must start with a slash and must not end with one.
I reproduced this in minikube using your Deployment manifest, but removed the volume mount parts at the end. Changing SERVER_BASEPATH to /<SOMETHING> works fine, so I think you just need to set a proper base path.
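As a sketch, the env entry would then look like this (the prefix /kibana is hypothetical; use whatever path your ingress exposes):

env:
  - name: SERVER_BASEPATH
    value: /kibana   # must start with a slash and must not end with one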
I am trying to create an SNS alarm for my EMR cluster so that when the EMR cluster fails, I get notified.
My issue is that I am not able to pass the cluster ID as JobFlowId in the CloudWatch alarm.
I am creating all resources using a CloudFormation template.
When I use !Ref to refer to the cluster ID, I get the error below.
Error
Template validation error: Template format error: Unresolved resource dependencies [FinancialLineItemEmrCluster] in the Resources block of the template
Here is my template. The problem is specifically at JobFlowId while creating the EMRAlarm resource.
AWSTemplateFormatVersion: '2010-09-09'
Description: TRF SDI Optmization Full File Creation
Parameters:
  AppName:
    Default: trfsdioptmization
    Description: trfsdioptmization.
    Type: String
  Environment:
    Type: String
    Default: nonprod
  FinancialIdentifier:
    Type: String
    Default: 123456789
  ApplicationAssetInsightId:
    Type: String
    Default: 12345678
  EnvironmentType:
    Type: String
    AllowedValues:
      - "prod"
      - "PRE-PRODUCTION"
      - "QUALITY ASSURANCE"
      - "nonprod"
    Default: nonprod
  ResourceOwner:
    Type: String
    Default: sudarshan.kumar#abcd.com
  EnvironmentPhase:
    Type: String
    Default: nonprod
  RegionAbbreviation:
    Default: us-east-1
    Description: Region Abbreviation e.g. us-east-1 for us-east
    Type: String
Resources:
  TRFSDIFullfileGeneration:
    Type: "AWS::DataPipeline::Pipeline"
    #DeletionPolicy: Retain
    Properties:
      Name: !Sub "${ApplicationAssetInsightId}-tr-fr-${EnvironmentPhase}-${RegionAbbreviation}-${AppName}-DataPipeline"
      Description: "Pipeline to create full file for TRFSDI full file Optmization"
      Activate: false
      PipelineObjects:
        - Id: "FinancialLineItemActivity"
          Name: "FinancialLineItemActivity"
          Fields:
            - Key: "type"
              StringValue: "EmrActivity"
            - Key: "runsOn"
              RefValue: "FinancialLineItemEmrCluster"
            - Key: "step"
              StringValue: "command-runner.jar,spark-submit,--master,yarn-cluster,--deploy-mode,cluster,--class,start.EntryFileCreation,s3://205147-trf-fr-nonprdo-us-east-1-trfsdioptmization/AJAR/SparkJob-0.1-jar-with-dependencies.jar,FinancialLineItem"
        - Id: "Default"
          Name: "Default"
          Fields:
            - Key: "type"
              StringValue: "Default"
            - Key: "scheduleType"
              StringValue: "ONDEMAND"
            - Key: "failureAndRerunMode"
              StringValue: "CASCADE"
            - Key: "role"
              StringValue: "DataPipelineDefaultRole"
            - Key: "resourceRole"
              StringValue: "DataPipelineDefaultResourceRole"
            - Key: "pipelineLogUri"
              StringValue: "s3://205147-tr-fr-nonprod-us-east-1-trfsdioptmization/EMRLOGS"
        - Id: "FinancialLineItemEmrCluster"
          Name: "FinancialLineItemEmrCluster"
          Fields:
            - Key: "terminateAfter"
              StringValue: "30 Minutes"
            - Key: "releaseLabel"
              StringValue: "emr-5.9.0"
            - Key: "masterInstanceType"
              StringValue: "m3.xlarge"
            - Key: "coreInstanceType"
              StringValue: "m3.2xlarge"
            - Key: "coreInstanceCount"
              StringValue: "2"
            - Key: "type"
              StringValue: "EmrCluster"
            - Key: "applications"
              StringValue: "spark"
            - Key: "subnetId"
              StringValue: "subnet-86febcab"
            - Key: "onSuccess"
              RefValue: "FinancialLineItem_Success"
            - Key: "onFail"
              RefValue: "FinancialLineItem_Fail"
  EMRAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: "Raise alarm if apps running on EMR cluster is killed"
      Namespace: AWS/ElasticMapReduce
      MetricName: AppsKilled
      Dimensions:
        - Name: 205147-TRFSDIOPTmization
          JobFlowId: !Ref FinancialLineItemEmrCluster
      Statistic: Average
      Period: 15
      EvaluationPeriods: '1'
      ComparisonOperator: GreaterThanOrEqualToThreshold
      Threshold: 1
      AlarmActions:
        - "AWSTRF_HEALTH"
There is no resource of type AWS::EMR::Cluster in your template.
You are referencing something called FinancialLineItemEmrCluster in your CloudWatch alarm. From the context, I am assuming you are trying to reference an EMR cluster. However, since you have no parameter or resource in your template named FinancialLineItemEmrCluster, you cannot access it.
EMR cluster IDs generally look like this: j-1ABCD123AB1A. You have several options:
If this cluster is in another template, you could create a CloudFormation export in that stack and use !ImportValue in your template to import it (see the sketch after these options).
Alternatively, you could use a Parameter in your template and pass the cluster ID that way. Add this as a parameter:
Example:
FinancialLineItemEmrCluster:
  Description: 'Your EMR cluster id. Example: j-1ABCD123AB1A'
  Type: String
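That parameter can then be supplied at deploy time, for example (the stack and file names are placeholders):

aws cloudformation deploy \
  --template-file template.yaml \
  --stack-name my-stack \
  --parameter-overrides FinancialLineItemEmrCluster=j-1ABCD123AB1A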
A third alternative is to simply hardcode it into your template.
In any case, you cannot refer directly to a resource in another stack.
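As a sketch of the export/import option (the export name FinancialLineItemEmrClusterId is hypothetical):

# In the stack that owns the cluster:
Outputs:
  EmrClusterId:
    Value: !Ref FinancialLineItemEmrCluster
    Export:
      Name: FinancialLineItemEmrClusterId

# In this template, wherever the cluster ID is needed:
Value: !ImportValue FinancialLineItemEmrClusterId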
If you have not created an EMR::Cluster at all and have no cluster ID, you need to create one first. You could add one to your template using the AWS::EMR::Cluster resource:
FinancialLineItemEmrCluster:
  Type: AWS::EMR::Cluster
  Properties:
    Instances:
      MasterInstanceGroup:
        InstanceCount: 1
        InstanceType: "m3.xlarge"
        Market: "ON_DEMAND"
        Name: "Master"
      CoreInstanceGroup:
        InstanceCount: 2
        InstanceType: "m3.xlarge"
        Market: "ON_DEMAND"
        Name: "Core"
      TerminationProtected: true
    Name: "TestCluster"
    JobFlowRole: "EMR_EC2_DefaultRole"
    ServiceRole: "EMR_DefaultRole"
    ReleaseLabel: "emr-4.2.0"
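Once the cluster is defined in the same template, the alarm can reference it directly. Note that CloudWatch alarm dimensions are Name/Value pairs, so the JobFlowId dimension would look roughly like this (a sketch, untested against your stack):

Dimensions:
  - Name: JobFlowId
    Value: !Ref FinancialLineItemEmrCluster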
I cannot create a Google Deployment Manager runtime config variable:
resources:
  - name: star-config
    type: runtimeconfig.v1beta1.config
    properties:
      name: star-config
  - name: igurl_variable
    type: runtimeconfig.v1beta1.variable
    properties:
      name: igurl_variable
      value: 'trek'
      parent: $(ref.star-config.name)
I checked the logs and I see that the status is set to bad_request when I create the above deployment.
Audit log:
status: {
  message: "BAD_REQUEST"
}
What could be the reason for the error ?
You should use the properties fields described in the official documentation for both the config and variable resources.
The resource file should be something like:
resources:
  - name: star-config
    type: runtimeconfig.v1beta1.config
    properties:
      config: star-config
  - name: igurl_variable
    type: runtimeconfig.v1beta1.variable
    properties:
      variable: igurl_variable
      text: 'trek'
      parent: $(ref.star-config.name)
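After deployment, the variable can be checked with the gcloud CLI (names taken from the config above):

gcloud beta runtime-config configs variables get-value igurl_variable --config-name star-config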