Show logs of the application deployed on Cloud Run - laravel-5

Is there any way to see the logs? I mean, I am able to see logs in the Logs section in Cloud Run, but it only shows me the HTTP request logs and their responses (like 403 etc.); it does not show me the application error messages (like "invalid current password" etc.).
I see there is a --log-driver gcplogs option, but I don't know where to configure it, since this is a serverless container and I'm not running any docker run command.

Google Cloud Logging captures stdout and stderr of services (containers) deployed to Google Cloud Run.
You should be able to view these logs either through the Cloud Console's Logs Viewer (https://console.cloud.google.com/logs/query) or using gcloud.
If you use gcloud, you can read the last 15-minutes' (--freshness=15m) logs for all Cloud Run services in a project (${PROJECT}) with:
PROJECT="[[YOUR-PROJECT-ID]]
gcloud logging read \
"resource.type=\"cloud_run_revision\"" \
--project=${PROJECT} \
--freshness=15m
For a specific service's stderr:
PROJECT=...
SERVICE=...
gcloud logging read \
"resource.type=\"cloud_run_revision\" resource.labels.service_name=\"${SERVICE}\"" \
--project=${PROJECT} \
--freshness=15m
To show only the text payload of that service's stderr:
gcloud logging read \
"resource.type=\"cloud_run_revision\" resource.labels.service_name=\"${SERVICE}\"" \
--project=${PROJECT} \
--freshness=15m \
--format="value(textPayload)"
It's a powerful tool.
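Note that Cloud Logging only captures what the container writes to stdout/stderr, so the Laravel application itself has to log there rather than to a file on disk. One way to do that, assuming your Laravel version ships the built-in stderr log channel in config/logging.php (5.6+ does), is to set the LOG_CHANNEL environment variable on the service, for example:
gcloud run services update ${SERVICE} \
--project=${PROJECT} \
--region=[[YOUR-REGION]] \
--update-env-vars=LOG_CHANNEL=stderr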

Check out the full logs by clicking the pop-out icon in the LOGS pane. This will show all the logs for your Cloud Run service.

Related

mc: <error> while trying to run bitnami/minio-client the container is exiting within seconds

docker run -it --name mc3 dockerhub:5000/bitnami/minio-client
08:05:31.13
08:05:31.14 Welcome to the Bitnami minio-client container
08:05:31.14 Subscribe to project updates by watching https://github.com/bitnami/containers
08:05:31.14 Submit issues and feature requests at https://github.com/bitnami/containers/issues
08:05:31.15
08:05:31.15 INFO  ==> ** Starting MinIO Client setup **
08:05:31.16 INFO  ==> ** MinIO Client setup finished! ** mc: Configuration written to /.mc/config.json. Please update your access credentials.
mc: Successfully created /.mc/share.
mc: Initialized share uploads /.mc/share/uploads.json file.
mc: Initialized share downloads /.mc/share/downloads.json file.
mc: /opt/bitnami/scripts/minio-client/run.sh is not a recognized command. Get help using --help flag.
dockerhub:5000/bitnami/minio-client - name of the image
It would be great if someone could help me figure out how to solve this issue, as I've been stuck on it for more than 2 days.
MinIO has two components:
Server
Client
The Server runs continuously, as it should, so it can serve the data.
On the other hand, the client, which you are trying to run, is used to perform operations on a running server. So it's expected for it to run and then immediately exit, as it's not a daemon and isn't meant to run forever.
What you want to do is to first create a user-defined Docker network, so that the client container can later reach the server by its container name, and then launch the server container in the background (using the -d flag).
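For example (the network name minio-net is just an illustration; any name works):
$ docker network create minio-net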
$ docker run -d --name minio-server \
--network minio-net \
--env MINIO_ROOT_USER="minio-root-user" \
--env MINIO_ROOT_PASSWORD="minio-root-password" \
minio/minio:latest server /data
Then launch the client container, on the same network, to perform some operation, for example making/creating a bucket. The bitnami/minio-client image from your question reads the MINIO_SERVER_* variables, performs the operation against the server, and exits immediately, after which the container is cleaned up (thanks to the --rm flag).
$ docker run --rm --name minio-client \
--network minio-net \
--env MINIO_SERVER_HOST="minio-server" \
--env MINIO_SERVER_ACCESS_KEY="minio-root-user" \
--env MINIO_SERVER_SECRET_KEY="minio-root-password" \
bitnami/minio-client \
mb minio/my-bucket
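To verify that the bucket was created, you can run the client again with an ls command (same assumptions as above: the network name, credentials and bucket name are just the example values used here):
$ docker run --rm --name minio-client \
--network minio-net \
--env MINIO_SERVER_HOST="minio-server" \
--env MINIO_SERVER_ACCESS_KEY="minio-root-user" \
--env MINIO_SERVER_SECRET_KEY="minio-root-password" \
bitnami/minio-client \
ls minio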
For more information, please check out the docs:
Server: https://min.io/docs/minio/container/operations/installation.html
Client: https://min.io/docs/minio/linux/reference/minio-mc.html

How can I make sure that Cloud Run waits for my Spring Boot application to start before denying the health check?

I am deploying my Spring Boot application as a compiled jar file running in a Docker container on GCP, and I deploy it through the gcloud CLI in my pipeline:
gcloud beta run deploy $SERVICE_NAME --image $IMAGE_NAME --region europe-north1 --project
This works and gives me the correct response when the application starts successfully. However, when there's an error and the application fails to start:
Cloud Run error: The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable.
The next time the pipeline runs (with the errors fixed), the gcloud beta run deploy command fails and gives the same error as seen above, while the actual application runs without issues in Cloud Run. How can I solve this?
Currently I have to check Cloud Run manually as I cannot trust my pipeline, and I have to run it twice to make it succeed. Any help will be appreciated. Let me know if you want any extra information.

Access k8s pod logs generated from ssh exec

I have a filebeat configured to send my k8s cluster logs to Elasticsearch.
When I connect to the pod directly (kubectl exec -it <pod> -- sh -c bash),
the generated output logs aren't being sent to the destination.
Digging through the k8s docs, I couldn't find how k8s handles STDOUT from a running shell.
How can I configure k8s to send live shell logs?
Kubernetes has (mostly) nothing to do with this, as logging is handled by the container runtime that backs Kubernetes, which is usually Docker.
Depending on the Docker version, container logs can be written via json-file, journald, or other drivers, with json-file being the default. You can run docker info | grep -i logging to check which Logging Driver Docker is using. If the result is json-file, logs are written to a file in JSON format. If there's another value, logs are handled in another way (and as there are various logging drivers, I suggest checking their documentation).
If the logs are being written to a file, then by using docker inspect container-id | grep -i logpath you should be able to see the path on the node.
Filebeat simply harvests the logs from those files; it's Docker, through its logging driver, that handles the redirection between the application's STDOUT inside the container and one of those files.
Regarding exec'd commands not appearing in the logs: this is an open proposal (https://github.com/moby/moby/issues/8662), as not everything is redirected, only the output of the processes started by the entrypoint itself.
There's a suggested workaround (https://github.com/moby/moby/issues/8662#issuecomment-277396232):
In the mean time you can try this little hack....
echo hello > /proc/1/fd/1
Redirect your output into PID 1's (the docker container's) file descriptor for STDOUT
Which works just fine but has the problem of requiring a manual redirect.
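For example, you could apply the same redirect yourself when exec'ing into the pod (a sketch; <pod> and the echoed text are placeholders):
kubectl exec -it <pod> -- sh -c 'echo "hello from exec" > /proc/1/fd/1'
The output then ends up in the container's normal log file and is picked up by Filebeat like any other log line.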
Use the following process:
Make changes in your application to push logs to STDOUT. You may configure this in your logging configuration file.
Configure Filebeat to read those STDOUT logs (which eventually end up in a Docker log file location, like /var/log etc.).
Run Filebeat as a DaemonSet, so that logs from new pods and nodes are automatically pushed to ES.
For better readability of the logs, make sure you push them in JSON format.

Grafana SigV4 Authentication - Docker image?

We have a problem setting up AWS SigV4 and connecting to an AWS AMP workspace via Docker images.
TAG: grafana/grafana:7.4.5
The main problem is that the SigV4 configuration screen does not appear in the UI.
Installing grafana:7.4.5 locally via Standalone Linux Binaries works.
Just setting the environment variables,
export AWS_SDK_LOAD_CONFIG=true
export GF_AUTH_SIGV4_AUTH_ENABLED=true
the configuration screen appears.
Connecting to AMP and querying data via the corresponding IAM instance role works flawlessly.
Doing the same in the Docker image via ENV variables does NOT work.
When using grafana/grafana:sigv4-web-identity it works, but it seems to me that this is just a "test image".
How do I configure the default Grafana image in order to enable SigV4 authentication?
It works for me:
$ docker run -d \
-p 3000:3000 \
--name=grafana \
-e "GF_AUTH_SIGV4_AUTH_ENABLED=true" \
-e "AWS_SDK_LOAD_CONFIG=true" \
grafana/grafana:7.4.5
You didn't provide a minimal reproducible example, so it's hard to say what the problem is in your case.
Use the variable GF_AWS_SDK_LOAD_CONFIG instead of AWS_SDK_LOAD_CONFIG.
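Combining that suggestion with the run command above would look something like this (an untested sketch; same image tag and port mapping as before):
$ docker run -d \
-p 3000:3000 \
--name=grafana \
-e "GF_AUTH_SIGV4_AUTH_ENABLED=true" \
-e "GF_AWS_SDK_LOAD_CONFIG=true" \
grafana/grafana:7.4.5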

How can I push .Net Framework API to PCF?

I am trying to push my .Net Native API to Pivotal Cloud Foundry. Below is the command I am giving for pushing my API.
cf push API-Name -s windows2012R2 -b binary_buildpack -c "start" -m 1G -p C:/Path
While running, it says "No start command detected", but when I ran -c ? it showed me that start was a valid command. Then when I look at the log file it shows me:
ERR Could not determine a start command. Use the -c flag to 'cf push' to specify a custom start command.
and at the end it will say:
ERR Failed to create container
"reason"=>"CRASHED", "exit_description"=>"failed to initialize container"
Am I running the command wrong or is there something I need to do to my API to make it compatible?
I figured out that I had to turn the health check off, and my app and all instances are started now.
cf set-health-check NAME none
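A restart of the app is typically required for the health-check change to take effect, e.g.:
cf restart NAME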
