Spring Cloud Data Flow shell: stuck on "The stream is being deployed"

I successfully registered three apps named appSource, appProcessor, and appSink as follows:
dataflow:>app register --name appSource --type source --uri maven://com.example:source:jar:0.0.1-SNAPSHOT --force
Successfully registered application 'source:appSource'
dataflow:>app register --name appProcessor --type processor --uri maven://com.example:processor:jar:0.0.1-SNAPSHOT --force
Successfully registered application 'processor:appProcessor'
dataflow:>app register --name appSink --type sink --uri maven://com.example:sink:jar:0.0.1-SNAPSHOT --force
Successfully registered application 'sink:appSink'
dataflow:>app list
╔═════════╤════════════╤═══════╤════╗
║ source  │ processor  │ sink  │task║
╠═════════╪════════════╪═══════╪════╣
║appSource│appProcessor│appSink│    ║
╚═════════╧════════════╧═══════╧════╝
I then created and deployed a stream as follows:
dataflow:>stream create --name myStream --definition 'appSource | appProcessor | appSink'
Created new stream 'myStream'
dataflow:>stream deploy --name myStream
I get the message
Deployment request has been sent for stream 'myStream'
In the streams list I see
║myStream1 │source-app | processor-app | sink-app│The stream is being deployed. ║
It seems the deployment never finishes. The Data Flow server logs are stuck on this:
o.s.c.d.spi.local.LocalAppDeployer : Deploying app with deploymentId myStream1.source-app instance 0.
Why is my stream not deploying successfully?

Do you see any Java processes running locally that correspond to the applications being deployed?
You can try remote debugging the application deployment using this doc: https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#_remote_debugging
You can also try inheriting the apps' logging using:
https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#_log_redirect
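For example, to confirm the deployed apps actually spawned JVM processes, and to redeploy with the apps' logs redirected into the server console (a sketch; deployer.*.local.inheritLogging is the local-deployer property the log-redirect doc describes):
jps -l
dataflow:>stream deploy --name myStream --properties "deployer.*.local.inheritLogging=true"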

I am seeing this same problem. I inherited the logging as you suggested. The UI never moves off the Deploying status, there are no errors in the logs, and the stream works when I test it.

Add the Spring Boot Actuator dependency to your project; Data Flow calls /health and /info to determine whether the app has been deployed.
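For example, once an app instance is up you could probe those endpoints by hand (a sketch; port 8080 is a placeholder for whatever port the deployed app actually bound to, and on Spring Boot 2.x the paths move under /actuator):
curl http://localhost:8080/health
curl http://localhost:8080/info
If either returns 404, the spring-boot-starter-actuator dependency is likely missing.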

Related

How do I view my app logs in Azure Spring Apps?

I am using Azure Spring Apps and deploying a very simple Spring app. I deployed it to Azure like so:
az spring app deploy --resource-group myResourceGroup --service myService --name myName --artifact-path target/myApp-0.1.0.jar
On the Azure portal it says that the deployment has "failed".
I would like to view my app logs to see what went wrong, since everything works fine for me locally.
Is there a straightforward way to view my Spring Boot logs in Azure?
You can use the Azure CLI to get the logs:
az spring app log tail -n xxx -s xxx -g xxx --subscription xxx --lines 200
Here -n is the app name, -s the service name, and -g the resource group. Ref: https://learn.microsoft.com/en-us/cli/azure/spring-cloud/app/log?view=azure-cli-latest
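For example, plugging in the resource group, service, and app name from the question (assuming the default subscription is already set):
az spring app log tail -n myName -s myService -g myResourceGroup --lines 200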

How can I make sure that Cloud Run waits for my Spring Boot application to start before denying the health check?

I am deploying my Spring Boot application as a compiled jar file running in a Docker container on GCP, and my pipeline deploys it through the gcloud CLI:
gcloud beta run deploy $SERVICE_NAME --image $IMAGE_NAME --region europe-north1 --project
This works and gives me the correct response when the application starts successfully. However, when there is an error and the application fails to start, I get:
Cloud Run error: The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable.
The next time the pipeline runs (with the errors fixed), the gcloud beta run deploy command fails with the same error as above, even though the actual application runs without issues in Cloud Run. How can I solve this?
Currently I have to check Cloud Run manually because I cannot trust my pipeline, and I have to run it twice to make it succeed. Any help will be appreciated. Let me know if you want any extra information.
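One stopgap would be to script the manual check; a sketch using gcloud (the region comes from the deploy command above, and status.conditions reports whether the latest revision became Ready):
gcloud run services describe $SERVICE_NAME --region europe-north1 --format="value(status.conditions)"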

Show logs of the application deployed on Cloud Run

Is there any way to see logs? I can see logs in the Logs section in Cloud Run, but it only shows me the HTTP logs and response codes (like 403), not application-level error messages (like "invalid current password").
I see there is a --log-driver gcplogs option, but I don't know where to configure it; this is a serverless container, so I am not running any docker run command.
Google Cloud Logging captures stdout and stderr of services (containers) deployed to Google Cloud Run.
You should be able to view these logs either through the Cloud Console's Logs Viewer (https://console.cloud.google.com/logs/query) or using gcloud.
If you use gcloud, you can read the last 15 minutes of logs (--freshness=15m) for all Cloud Run services in a project (${PROJECT}) with:
PROJECT="[[YOUR-PROJECT-ID]]"
gcloud logging read \
"resource.type=\"cloud_run_revision\"" \
--project=${PROJECT} \
--freshness=15m
For a specific service's logs:
PROJECT=...
SERVICE=...
gcloud logging read \
"resource.type=\"cloud_run_revision\" resource.labels.service_name=\"${SERVICE}\"" \
--project=${PROJECT} \
--freshness=15m
To read only that service's text payload:
gcloud logging read \
"resource.type=\"cloud_run_revision\" resource.labels.service_name=\"${SERVICE}\"" \
--project=${PROJECT} \
--freshness=15m \
--format="value(textPayload)"
It's a powerful tool.
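For instance, to pull only error-level entries for that service (a sketch; severity is a standard Cloud Logging field):
gcloud logging read \
"resource.type=\"cloud_run_revision\" resource.labels.service_name=\"${SERVICE}\" severity>=ERROR" \
--project=${PROJECT} \
--freshness=15m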
Check out the full logs by clicking the popout icon in the LOGS pane. This will show all the logs for your Cloud Run service.

RabbitMQ as Spring Cloud Bus in Kubernetes for Spring Boot Applications

I have developed Spring Boot applications. I have set up admin and RabbitMQ, as well as Spring Cloud Bus. When I hit an application's refresh endpoint, its properties are refreshed.
Can anyone please help me with how to set up RabbitMQ in Kubernetes? I did some research and found in a few articles that it needs to be deployed as a "StatefulSet" rather than a "Deployment" (https://notallaboutcode.blogspot.de/2017/09/rabbitmq-on-kubernetes-container.html), but I could not work out exactly why. Any useful link on deploying RabbitMQ in Kubernetes would also help.
It depends on what you're looking to do and what tools you have available. I'd guess your current setup is much like the one described in http://www.baeldung.com/spring-cloud-bus. One approach to porting that to Kubernetes might be to get your setup working with docker-compose first, and then port the docker-compose file to Kubernetes deployment descriptors.
A simple way to deploy RabbitMQ in Kubernetes would be to set up a Deployment using a rabbitmq Docker image. An example of this is https://github.com/Activiti/activiti-cloud-examples/blob/fe732096b5a19de0ad44879a399053f6ae02b095/kubernetes/kubectl/infrastructure.yml#L17. (Notice that file isn't radically different from a docker-compose file, so you could port from one to the other.) But that won't persist data outside of the Pods, so if the cluster or the Pods were to go down you'd lose message data; the storage is ephemeral.
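A minimal sketch of that ephemeral approach using kubectl directly (the rabbitmq:3-management image tag is an assumption; it is the stock Docker Hub image with the management UI enabled):
kubectl create deployment rabbitmq --image=rabbitmq:3-management
kubectl expose deployment rabbitmq --port=5672 --target-port=5672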
So to have non-ephemeral persistence you could instead use a StatefulSet as in the example you point to. Another example is https://wesmorgan.svbtle.com/rabbitmq-cluster-on-kubernetes-with-statefulsets
If you are using helm (or can use helm) then you could use the rabbitmq helm chart, which uses a StatefulSet.
But if your only reason for needing the bus is to trigger refreshes when property changes happen, there are alternative paths available with Kubernetes. If you need the hot reloads, you could look at https://github.com/fabric8io/spring-cloud-kubernetes#propertysource-reload. If you need the config to come from git specifically, you could look at http://fabric8.io/guide/develop/configuration.html. (If you didn't need hot reloads or git, you could consider versioning your ConfigMaps and upgrading them along with your application upgrades, as in https://dzone.com/articles/configuring-java-apps-with-kubernetes-configmaps-a )
If you have installed Helm in your cluster:
helm install stable/rabbitmq
This will install a RabbitMQ server on your cluster. The following commands obtain the password and Erlang cookie; replace prodding-wombat-rabbitmq with whatever name Kubernetes gives the pod.
kubectl get secret --namespace default prodding-wombat-rabbitmq -o jsonpath="{.data.rabbitmq-password}" | base64 --decode
kubectl get secret --namespace default prodding-wombat-rabbitmq -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 --decode
To connect to the pod:
export POD_NAME=$(kubectl get pods --namespace default -l "app=prodding-wombat-rabbitmq" -o jsonpath="{.items[0].metadata.name}")
Then port-forward to localhost so you can connect in your browser:
kubectl port-forward $POD_NAME 5672:5672 15672:15672
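With the port-forward in place, the broker should be reachable at localhost:5672 and, assuming the chart's image enables the management plugin, the management UI at http://localhost:15672, logging in with the password decoded above.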

Unable to write streamed data to sink file using Spring cloud dataflow

I am trying to create a data flow pipeline with Spring Cloud Data Flow using the shell (not the UI), with twitterstream as the source and file as the sink. Here is what I did to configure the file sink:
dataflow:>stream create demo --definition "twitterstream --credentials | file --dir=/opt/datastream --mode=APPEND --filename=tweets.txt"
I can consume data from the Kafka topic, but I am unable to write to the sink location above; the file is not even created. There is no error in the logs while deploying the stream. Eventually I will switch from the local file system to HDFS. Is there anything missing?
PS: I tried the default file sink (without a definition), which is supposed to create a default file inside /tmp/xd/output; that didn't happen either.
On the latest 1.0.0.RELEASE (GA) release, the following stream definition works.
dataflow:>stream create demo --definition "twitterstream | file --directory=/someFolder --mode=APPEND --name=demo.txt"
A couple of things to point out:
1) The twitterstream source does not support --credentials as an OOTB property. See here.
2) The file sink does not support --filename as an OOTB property; you'd have to use --name instead. See here.
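To verify the corrected definition is actually writing, you could deploy the stream and tail the output file (a sketch; the directory and file name come from the definition above):
dataflow:>stream deploy --name demo
tail -f /someFolder/demo.txt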
