Hi Experts,
I am following the tutorial at https://open.sap.com/courses/s4h13/items/258qEhXx5kdG8b4SXMSJYp. After deploying the app I am getting 404 for my servlets in the approuter application, while the same servlets give me HTTP 401 in 'address-manager', as expected.
Has anyone done this successfully? If so, please guide me in the right direction.
I have gone through everything I could think of, but I can't get past this issue.
xs-app.json file content
{
"welcomeFile": "index.html",
"routes": [
{
"source": "^/api/(.*)",
"target": "/api/$1",
"destination": "app-destination"
},
{
"source": "^/address-manager/(.*)",
"target": "/address-manager/$1",
"destination": "app-destination"
}],
"logout" : {
"logoutEndpoint": "/logout",
"logoutPage": "/logout.html"
}
}
The destinations environment variable of the approuter on SAP Cloud Platform, Cloud Foundry needs to reference the URL(s) at which you reach the application(s) that you want to access via the route(s) defined in the approuter. (Not to be confused with the destinations environment variable that you may be using as a placeholder in the backend application built with the SAP S/4HANA Cloud SDK.)
In your case, this should probably be some URL pointing to the address-manager, your target application. In the example value mentioned in your comment, you point to the mock server instead, which is probably not what you want.
Change the destinations environment variable to the following and push / restart the application again. (Insert the URL that points to your address manager application deployment.)
[{"name":"app-destination", "url" :"address-manager-<random text>.cfapps.eu10.hana.ondemand.com/", "forwardAuthToken": true}]
The fact that you can log in and log out despite the misconfigured destination is expected, because those paths are actually served by the approuter itself.
I have a Spring Boot application (2.7.1) "App" behind an Apache server that acts as a proxy on a separate path "/moni", using RewriteRules in .htaccess files. The actuator runs on a separate port from the application. On another server, "monitor", I run spring-boot-admin in Docker. The application "App" registers successfully on "monitor" and I see the basic data: it is up, has free disk space, and uses a database. But all detailed info is missing and a red error is shown:
Request failed with status code 502
The log in my docker container shows this warning:
Couldn't retrieve info for Instance(
id=aa1909dd6bb9,
version=2,
registration=Registration(
name=ppsb,
managementUrl=https://www.my-app-server/moni/actuator,
healthUrl=https://www.my-app-server/moni/actuator/health,
serviceUrl=https://www.my-app-server/moni,
source=http-api
),
registered=true,
statusInfo=StatusInfo(
status=UP, details={
diskSpace={
status=UP,
details={
total=449085710336,
free=270759964672,
threshold=10485760,
exists=true
}
},
ping={
status=UP
},
livenessState={
status=UP
},
readinessState={
status=UP
},
db={
status=UP,
details={
database=MySQL,
validationQuery=isValid()
}
},
redis={
status=UP,
details={
version=7.0.2
}
}
}
),
statusTimestamp=2022-10-14T20:14:09.754168Z,
info=Info(
values={}
),
endpoints=Endpoints(
endpoints={
sessions=Endpoint(
id=sessions,
url=https://localhost:8880/actuator/sessions
),
caches=Endpoint(
id=caches,
url=https://localhost:8880/actuator/caches
),
loggers=Endpoint(
id=loggers,
url=https://localhost:8880/actuator/loggers
),
health=Endpoint(
id=health,
url=https://www.my-app-server/moni/actuator/health
),
env=Endpoint(
id=env,
url=https://localhost:8880/actuator/env
),
...
Why are the URLs in Registration correct, but not those in Endpoints? Only health is OK again.
Problem:
Actually "localhost:8880" is the management host and port I defined for the actuator in my Spring Boot application, which is different from the port of the application itself. The only place where this is set is the application.properties file of the monitored application, via:
management.server.port=8880
Now I have searched for many hours for a way to make these endpoints either use "https://www.my-app-server/moni" with SSL and an .htaccess file containing:
RewriteRule ^(.*) http://localhost:8880/$1 [P,L]
or use "http://localhost:8880" directly without SSL.
But I don't want "https://localhost:8880".
I already have set
server.forward-headers-strategy=NATIVE
and have tried many permutations of the spring.boot.admin.* settings. I also tried to read through the Spring Boot Actuator and Spring Boot Admin code, but could not work out how to force one URL or the other.
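To summarise the moving parts, the relevant configuration boils down to this (values exactly as above, everything else omitted):
# application.properties of the monitored application "App"
management.server.port=8880
server.forward-headers-strategy=NATIVE
# .htaccess on the Apache proxy for the path /moni
RewriteRule ^(.*) http://localhost:8880/$1 [P,L]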
Any idea on how to solve this is welcome.
Thanks! It's my first question here, so let's see how well I wrote it...
Situation:
I've created a website that allows users to create their own simple sub-sites. Initially these are on subdomains (e.g. newsite.websitecreator.com). These have SSL applied to them via a wildcard certificate for *.websitecreator.com. All works fine!
I've also created a means for users to purchase a custom domain via an API, or to route their own domain to point to their subdomain. To achieve this a CNAME is created and pointed to the subdomain. This routing works fine using an include line in the nginx config, which pulls in all the custom domains:
include /home/forge/websitecreator.com/public/content/websitecreator-customer-domains.conf;
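(For context, each entry in that include file is essentially a small server block along these lines; the block is simplified and the domain is just the example from the error further down:)
server {
    listen 80;
    server_name newcustomdomain.co.uk;
    root /home/forge/websitecreator.com/public;
    # ... same location / PHP blocks as the main websitecreator.com site
}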
Issue
The main issue is applying SSL to the custom domains. Obviously the SSL certificate needs to be installed on the server, which has been tried through Forge's Let's Encrypt SSL option within the dashboard, with a view to using the Forge API once this is automated. However, this is giving me the following error:
Cloning into 'letsencrypt15721234230'...
ERROR: Challenge is invalid! (returned: invalid) (result: {
"type": "http-01",
"status": "invalid",
"error": {
"type": "urn:ietf:params:acme:error:unauthorized",
"detail": "Invalid response from http://newcustomdomain.co.uk/.well-known/acme-challenge/Da6CtvOTJnQVHQyENujDSih81TKuejKuaAWCWXsJKus [88.123.456.9]: \"\u003c!DOCTYPE HTML PUBLIC \\\"-//W3C//DTD HTML 4.01 Frameset//EN\\\" \\\"http://www.w3.org/TR/html4/frameset.dtd\\\"\u003e\u003chtml\u003e\u003chead\u003e\u003cmeta http-eq\"",
"status": 403
},
"url": "https://acme-v02.api.letsencrypt.org/acme/chall-v3/995722342/n7xg9g",
"token": "Da6CtvOTJnQVhh2h3hn2jbSih81TKuejKuaAWCWXsJKus",
"validationRecord": [
{
"url": "http://newcustomdomain.co.uk/.well-known/acme-challenge/Da6CtvOTJnQVHQyENujDSih81TKuejKuaAWCWXsJKus",
"hostname": "newcustomdomain.co.uk",
"port": "80",
"addressesResolved": [
"88.123.123.9"
],
"addressUsed": "66.343.234.9"
}
]
})
Status code 403 tells me that this is unauthorised for some reason.
Question
Despite the approaches tried above, my question to the SO community is: given the current setup (Forge, Laravel, nginx etc.), how would you approach this? Any sample code / examples would be greatly appreciated.
I've created a Spring Boot application that is deployed on Heroku.
Everything works fine. Now I am trying to use the Text-to-Speech API from Google Cloud. This works fine locally, but when I want to use it on Heroku I get the following warning:
Error reading credential file from environment variable
GOOGLE_APPLICATION_CREDENTIALS, value 'config/keyFile.json': File does
not exist.
I've set the following in Heroku:
heroku config:set GOOGLE_APPLICATION_CREDENTIALS='config/keyFile.json'
No matter where I put the file, I cannot get it to work.
Can anyone help?
I got this to work by setting a Heroku config variable (say GOOGLE_APPLICATION_CREDENTIALS) to the contents of the GOOGLE_APPLICATION_CREDENTIALS JSON file and reading process.env.GOOGLE_APPLICATION_CREDENTIALS where the client needs to be instantiated.
In any case, it is not best practice to save key files on a remote server (such as Heroku); it is safer to supply the key through an environment variable.
// Where you need to instantiate the Google client:
const textToSpeech = require('@google-cloud/text-to-speech');
// parse the JSON stored in the environment variable...
const keyValue = JSON.parse(process.env.GOOGLE_APPLICATION_CREDENTIALS);
// ...and set the 'credentials' parameter with keyValue
const client = new textToSpeech.TextToSpeechClient({ credentials: keyValue });
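One way to load the whole key file into that config variable from a local checkout is something along these lines (assuming the file sits at config/keyFile.json locally; the surrounding quotes keep the JSON intact):
heroku config:set GOOGLE_APPLICATION_CREDENTIALS="$(cat config/keyFile.json)"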
I had a similar issue with Vercel, where it was unable to find the serviceaccount.json. I finally got it to work by passing the credentials directly to the TextToSpeechClient constructor as an object. Just remember to use environment variables for the sensitive properties.
const client = new textToSpeech.TextToSpeechClient({
credentials: {
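// fill these from environment variables, e.g. private_key: process.env.GCP_PRIVATE_KEY,
// client_email: process.env.GCP_CLIENT_EMAIL (the variable names are just placeholders)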
type: "service_account",
project_id: "",
private_key_id: "",
private_key:"",
client_email: "",
client_id: "",
auth_uri: "",
token_uri: "",
auth_provider_x509_cert_url: "",
client_x509_cert_url:"",
},
});
You can see the Google documentation here: https://cloud.google.com/nodejs/docs/reference/text-to-speech/latest/text-to-speech/v1.texttospeechclient
The documentation is not very clear as to how the ClientOptions object should look. I just took a guess at the credentials part ({credentials:{}}) and it worked.
Hope this helps those who want to deploy to the cloud and are struggling to get the right path to the serviceaccount.json.
I was in the process of extending the tutorial mentioned in Step 9 with a NodeJS microservice. However, I am having a strange issue with the communication to the backend.
The flow I have is an App Router that directs to an HTML5 microservice (static buildpack), and this consumes either a Java or a NodeJS microservice. The Java part works fine along with the authentication scopes, but for NodeJS I always get a 404 (not found) error when I call the respective path /node/hello (hello should return a function output from the server).
This is the xs-app.json I am using for routing
{
"welcomeFile": "index.html",
"authenticationMethod": "route",
"websockets": {
"enabled": true
},
"routes": [
{
"source": "/odata/v4/(.*)",
"target": "/odata/v4/$1",
"destination": "business-partner-api"
},
{
"source": "/",
"target": "/",
"destination": "business-partner-frontend"
},
{
"source": "/node/(.*)",
"target": "/$1",
"destination": "business-partner-node"
}
]
}
The issue is only with the /node block; the others work fine. I have also noticed another strange thing: if I change the default destination (/) from business-partner-frontend to business-partner-node, the app router successfully calls the Node.js server with the authentication being propagated. So the issue appears to be related to the xs-app file and not to the destination itself.
I have also tried adding the port to the destination and adding a staticfile mapping to the HTML5 project, but without success.
Is there anything I might be missing in the Node part of the config?
Best Regards,
The issue is probably the order of your routes, which matters for the routing: the first route whose source matches the current path determines the destination. In your case, the / of the second route matches all paths, including /node/....
Reorder your routes so that the node destination comes before the frontend destination.
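For example, the same routes with the node route moved ahead of the catch-all:
{
  "welcomeFile": "index.html",
  "authenticationMethod": "route",
  "websockets": {
    "enabled": true
  },
  "routes": [
    {
      "source": "/odata/v4/(.*)",
      "target": "/odata/v4/$1",
      "destination": "business-partner-api"
    },
    {
      "source": "/node/(.*)",
      "target": "/$1",
      "destination": "business-partner-node"
    },
    {
      "source": "/",
      "target": "/",
      "destination": "business-partner-frontend"
    }
  ]
}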
I followed the step-by-step guide here.
I made a simple app that posts a message to the rooms the Integration is installed on, based on a regex (as described in the tutorial above).
When I initially add the Integration to a HipChat room, it works fine. However, after a period of time it stops working.
The following error appears in my Heroku logs:
JWT verification error: 400 Request can't be verified without an OAuth secret
I assume something is wrong with my configuration or with my lack of use of OAuth, but after googling around I can't find any specific answers on what it should look like.
My config.json looks like this:
"production": {
"usePublicKey": true,
"port": "$PORT",
"store": {
"adapter": "jugglingdb",
"type": "sqlite3",
"database": "store.db"
},
"whitelist": [
"*.hipchat.com"
]
},
And my request handler looks like this:
app.post('/foo',
addon.authenticate(),
function (req, res) {
hipchat.sendMessage(req.clientInfo, req.identity.roomId, 'bar')
.then(function (data) {
res.sendStatus(200);
});
}
);
Any specific direction on the configuration and use of OAuth for HipChat and Heroku would be amazing!
I personally haven't used the jugglingdb adapter with Heroku and don't know if you can actually look into the database, but it seems like somewhere along the way clientInfo disappears from the store.
My suggestion is to start testing locally with ngrok and redis, so that you can troubleshoot locally and then push the working code to Heroku.
Three things I needed to do in order to fix my problem:
Install the Heroku Redis add-on for my Heroku app. (Confirm that the environment variable $REDIS_URL was added to your app settings; see the CLI sketch after this list.)
Add this line to my app.js file:
ac.store.register('redis', require('atlassian-connect-express-redis'));
Change the production.store object in the config.json to be the following:
"store": {
"adapter": "redis",
"url": "$REDIS_URL"
},
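If you prefer the CLI, the add-on install and the check from step 1 can be done roughly like this (the default Redis plan is used here, and <your-app> is a placeholder for your Heroku app name):
heroku addons:create heroku-redis -a <your-app>
heroku config:get REDIS_URL -a <your-app>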