How to configure Application Logging Service for SCP application - s4sdk

I have created the hello world application from the SAP Cloud SDK archetypes and pushed it to the Cloud Foundry environment, binding it to an application logging service instance. My understanding is that this alone should let me analyze all logs in the cloud platform's Kibana dashboard, and it previously worked this way.
However, this time the Kibana dashboard remains empty, so I am wondering if I missed a step or configuration. Looking at the documentation of the service and the respective tutorial blog, I was not able to identify any additional required steps. In the Logs view on the SCP cockpit I can definitely see the entries, but they are not replicated to the ELK stack in the background.

The problem was not SDK related; it seems to have been an incident on the SCP side. Logging now works correctly without any changes on my end.
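For reference, the binding itself usually only needs the standard Cloud Foundry steps sketched below; the service offering and plan names (application-logs, lite) and the app/instance names are assumptions and may differ per landscape.

    # Create an Application Logging service instance (plan name may differ per landscape)
    cf create-service application-logs lite my-app-logs
    # Bind it to the app and restage so the binding takes effect
    cf bind-service my-hello-world my-app-logs
    cf restage my-hello-world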

Related

How to easily publish a multi-container ASP.NET Core web app and web API to a remote Kubernetes cluster

So I recently got into Docker and Kubernetes, and I have a Kubernetes cluster set up on a remote VM (Linux, kubeadm). I'm wondering if there is a production-suitable solution that I can easily use to deploy my multi-container ASP.NET Core web application. I have been trying to solve this for the past week and have found nothing that suits my needs. I tried Bridge to Kubernetes, but I can only get that to work locally on my Windows machine, not against my remote Linux VM. This is the layout of my application:
Ask me if you need any additional information as I'm still new to this stuff.
Thanks for your help.
I found that Jenkins is just what I needed!
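For anyone landing here later, here is a minimal sketch of the kind of steps such a Jenkins job can run against a remote cluster. The registry address, image names, kubeconfig path, and manifest folder are all placeholders, not part of the original setup.

    # Build and push one of the application images (registry, name, and tag are placeholders)
    docker build -t registry.example.com/myapp/webapi:1.0.0 -f WebApi/Dockerfile .
    docker push registry.example.com/myapp/webapi:1.0.0
    # Apply the Kubernetes manifests against the remote cluster using its kubeconfig
    kubectl --kubeconfig ~/.kube/remote-config apply -f k8s/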

Swagger UI not updated when deploying to AKS

I have a Spring Boot application that exposes multiple APIs and uses swagger for documentation. This service is then deployed to AKS using Helm through Azure DevOps.
When running locally, the Swagger documentation is up to date; however, when I deploy, the documentation reverts to an outdated version. I'm not sure what is happening during deployment, and I have been unable to find any help on the forums.
As far as I know, there is no caching taking place, but again I'm not sure.
It sounds like you suspect an incorrect version of your application is running in the cluster following a build and deployment.
Assuming things like local browser caching have been eliminated from the equation, review the state of deployments and/or pods in your cluster using CLI tools.
Run kubectl describe deployment <deployment-name>; the output includes the pod template, which defines the image tag the pods should use. This should match the tag your Azure DevOps pipeline is publishing.
List the pods and describe them to confirm the expected image tag is actually running in the cluster after a deployment. If not, check the pods for failures: when describing a pod, pay attention to the lastState object if it exists. Use kubectl logs <podname> to troubleshoot at the application layer.
It can take a few minutes for the new pods to become available depending on configuration.
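A minimal sketch of the checks described above; the deployment and pod names are placeholders to fill in from your cluster.

    # Check which image tag the deployment's pod template references
    kubectl describe deployment <deployment-name>
    # Confirm the running pods actually use that image and look for restart reasons
    kubectl get pods -o wide
    kubectl describe pod <pod-name>
    # Inspect application output if the pod is running but still serving stale docs
    kubectl logs <pod-name>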

Can files be created in Pivotal Cloud Foundry environment

I have deployed an application to Pivotal Cloud Foundry using Spring Integration. It should read a file, create more files in another folder based on custom logic, and then FTP those output files to a remote directory. The scenario works perfectly fine on my local machine, but in the cloud it doesn't behave as expected. Any insights are welcome, thanks!
My doubts are: since it has to create files in the cloud, is that even possible? Are any configurations needed?
You have to use Volume Services:
This topic describes how Pivotal Cloud Foundry (PCF) app developers can read and write to a mounted file system from their apps. In PCF, a volume service provides a volume so your app can read or write to a reliable, non-ephemeral file system
Before you can use a volume service with your app, your Cloud Foundry administrator must add a volume service to your deployment. See the Enabling NFS Volume Services topic for more information.
Here: https://docs.pivotal.io/pivotalcf/1-10/devguide/services/using-vol-services.html
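Once your administrator has enabled NFS volume services, creating and binding one typically looks like the sketch below; the nfs offering, Existing plan, share path, mount point, and app name are illustrative assumptions.

    # Create a volume service instance pointing at an existing NFS share (share path is a placeholder)
    cf create-service nfs Existing my-volume -c '{"share": "nfs-server.example.com/export/files"}'
    # Bind it to the app with a mount path the app can write to, then restage
    cf bind-service my-app my-volume -c '{"mount": "/var/app-files"}'
    cf restage my-app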
You can stand up an S3-compatible object storage such as MinIO.
Then create an s3-service CUPS (user-provided service) and use it in your app. Here's a sample that can help with it: https://github.com/cloudfoundry-samples/cf-s3-demo.
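A rough sketch of creating such a user-provided service from the CLI; the service name, endpoint, and credential values are all placeholders.

    # Create a user-provided service carrying the S3/MinIO credentials (values are placeholders)
    cf create-user-provided-service s3-service -p '{"endpoint": "https://minio.example.com", "accessKey": "ACCESS_KEY", "secretKey": "SECRET_KEY", "bucket": "my-bucket"}'
    # Bind it and restage; the app reads the credentials from VCAP_SERVICES
    cf bind-service my-app s3-service
    cf restage my-app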

Getting started with Fabric8, AWS using stackpoint

I have historically used a lot of manual chaining to get a CI pipeline in place for microservice development so am excited to try Fabric8 as it seems that it will make life a lot easier. Running into some early issues though.
I did manage to get Fabric8 running locally, but I want to get things running on AWS so I can present a more real-world flow to stakeholders. Following the notes on the Fabric8 on AWS page, I was able to get a three-server cluster running using Stackpoint. However, I cannot connect to that cluster to start administering the services. The page references this link (http://fabric8.default.replace.me.io), but it is not working for me. I tried hitting each of the AWS instances by public IP, but that failed as well. What would be my next steps here?
Yeah, the getting started guides don't really explain this in great detail. There's a similar issue on the Fabric8 issue tracker where we've tried to explain how to access the console.
TL;DR: using the AWS load balancer can add expense, so we deploy an NGINX reverse proxy so you can set up a wildcard DNS entry. We use and recommend Cloudflare for that, as it's free for this type of use and fast to set up.
We also wrote a blog post explaining the different options for accessing apps on Kubernetes.
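As an interim workaround while DNS is being sorted out, port-forwarding to the console service is one of those access options. This is only a sketch: the service name (fabric8) and namespace (default) are assumptions and may differ in your install.

    # Locate the console service, then forward a local port to it (names are assumptions)
    kubectl get svc --all-namespaces
    kubectl port-forward svc/fabric8 8080:80 -n default
    # Then open http://localhost:8080 in a browser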
Hope that helps!

Nodered boilerplate and Analytics for Apache Hadoop binding issue

I have created the Node-RED boilerplate and bound the Analytics for Apache Hadoop service to it.
It clearly appears as a bound service in the dashboard.
But when I launch the Nodered app and add a HDFS node, I get the following message:
"Unbounded Service: Big Insights service not bound. This node wont work"
Any idea what I am doing wrong? It used to work for me a few weeks ago.
You will need to attach the BigInsights for Apache Hadoop service to your app.
Please attach the service and restage your app.
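From the CLI, the attach-and-restage step typically looks like the sketch below; the app and service instance names are placeholders for your own.

    # Bind the BigInsights for Apache Hadoop service instance to the Node-RED app (names are placeholders)
    cf bind-service my-nodered-app my-biginsights-service
    # Restage so the binding shows up in VCAP_SERVICES
    cf restage my-nodered-app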
