How to enable log rotation in envoy file based logging? - api-gateway

I am using envoy as api gateway. For logging, I am using a file-based logging approach (schema - type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog).
The logs are accumulated in a file but how does log rollover work here? Will the log file get bigger and bigger every day?

There is no log rotation available out-of-the-box with Envoy (see issue #1109).
However, you can use a tool like logrotate to handle your access logs file rotation. The following config can be used to rotate logs daily and keep 7 days of logs:
/var/log/envoy/access.log {
    daily
    rotate 7
    missingok
    compress
    notifempty
    nocreate
    sharedscripts
    copytruncate
}
Just put that config in a file named /etc/logrotate.d/envoy, and check that logrotate is run daily (/etc/cron.daily/logrotate).
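If you don't want to wait for the daily cron run to see it work, you can exercise the config directly with logrotate's standard dry-run and force flags (path taken from the config above):
# Dry run: show what logrotate would do without touching any files
logrotate -d /etc/logrotate.d/envoy
# Force an immediate rotation, then check that the rotated/compressed file appears
logrotate -f /etc/logrotate.d/envoy
ls -l /var/log/envoy/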
You can test this using a container:
FROM envoyproxy/envoy:v1.22.2
RUN apt update && apt install -y logrotate
RUN install -d -m 0755 -o envoy -g envoy /var/log/envoy
# logrotate_envoy.conf is the logrotate config file shown above
COPY logrotate_envoy.conf /etc/logrotate.d/envoy
And use this envoy config:
access_log:
- name: envoy.access_loggers.file
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
    path: /var/log/envoy/access.log
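For context, that access_log block sits under the HTTP connection manager inside a listener. A minimal bootstrap sketch could look like the following; the listener name, port, and the direct_response route are placeholders for illustration only:
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          access_log:
          - name: envoy.access_loggers.file
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
              path: /var/log/envoy/access.log
          route_config:
            name: local_route
            virtual_hosts:
            - name: all
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                direct_response: { status: 200, body: { inline_string: "ok" } }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router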

Related

Having problems setting up Logstash

I've successfully been able to set up Elasticsearch, Kibana, etc., and when I run 'sudo systemctl status elasticsearch' it is all running fine.
However, when I execute 'sudo systemctl status logstash' this is the output:
It fails to start Logstash. I've read numerous articles online saying it may be something to do with the path or the config, but I've had no luck finding a working solution.
I have the JDK downloaded and followed the guide on the Logstash documentation site, so I'm unsure as to why Logstash is not being allowed to run.
This is the output when I try to find out the logstash version.
The error message is
No configuration found in the configured sources
This means that you don't have any pipeline configuration in /etc/logstash/conf.d that Logstash can run, so it stops.
You have two options:
1. Run Logstash as a service. Logstash reads pipelines.yml to find your .conf location; by default, pipelines.yml points at /etc/logstash/conf.d/. Move your configuration file to that path so Logstash can find it.
2. Run Logstash directly against a specific file. This ignores pipelines.yml and Logstash goes straight to your .conf:
/usr/share/logstash/bin/logstash -f yourconf.conf
I suggest you do option 1, but option 2 is good for debugging your configuration file.
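For reference, the pipelines.yml shipped with the Logstash packages points at the conf.d directory, and any .conf file you place there gets picked up. The example.conf below is a hypothetical minimal pipeline (the beats input and port 5044 are just a common example) to verify the service starts:
# /etc/logstash/pipelines.yml (package default, approximately)
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"

# /etc/logstash/conf.d/example.conf (hypothetical minimal pipeline)
input {
  beats {
    port => 5044
  }
}
output {
  stdout { codec => rubydebug }
}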

How to get syslogd running on DDEV

I installed Drupal syslog module and I want to make sure that syslog configuration is working correctly. But checking inside the web container, I don't know if syslogd is running, and I haven't found where the logs might be.
As Randy said, you can add the syslog daemon as a post-start hook in your config.yaml. We use it as follows:
hooks:
  post-start:
    - exec: sudo syslogd
    - exec: printf "\n\n#Drupal logs\n#local0.* /var/log/drupal.log\n" | sudo tee -a /etc/syslog.conf
This starts syslogd and configures it to write everything matching "local0.*" (the syslog facility selector, I believe) to /var/log/drupal.log.
To see what's in there, simply ssh into DDEV (ddev ssh) and run tail -f /var/log/syslog.
After enabling the syslog module you need to go to admin/config/development/logging and select "LOG_LOCAL0" as syslog facility and set a syslog format.
You have to install syslogd in your web container by adding webimage_extra_packages: [inetutils-syslogd] to your config.yaml and then restart. This will add the necessary package.
After starting your project, you should ddev ssh and run syslogd. That will manually start syslogd. If you want this to always start up, you can add running syslogd to a post-start exec hook.
The logs from syslogd go into /var/log/syslog
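Putting the package install and the hook together, the relevant part of .ddev/config.yaml (as described in the answers above) would look roughly like this:
# .ddev/config.yaml (relevant lines only)
webimage_extra_packages: [inetutils-syslogd]
hooks:
  post-start:
    - exec: sudo syslogd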
It probably makes more sense to use a separate syslog server: you could run one of the many syslog images on hub.docker.com as a third-party service in ddev, run syslogd on your workstation, or use a hosted service such as https://www.papertrail.com/.

Docker image to be reread by kubernetes google cloud

As I am new to this subject, could you please help?
I deploy a Docker image to Google Cloud Kubernetes (GKE).
What do I need to do to make the cluster pick up the Docker image again when a new one appears?
My code is:
sudo docker build -t gcr.io/${PROJECT_ID}/sf:$ENV .
sudo docker push gcr.io/${PROJECT_ID}/sf:$ENV
sudo gcloud container clusters create sf:$ENV --num-nodes=3
sudo kubectl run sfmill-web$ENV --image=gcr.io/${PROJECT_ID}/sf:$ENV --port 8088
sudo kubectl expose deployment sfmill-web$ENV --type=LoadBalancer --port 8088 --target-port 8088
kubectl set image deployment/sfmill-web$ENV sf=sf:$ENV
I encourage you to explore using Kubernetes configuration files to define your resources.
You can explore the YAML for your deployment with:
kubectl get deployment/sfmill-web$ENV --output=yaml > ${PWD}/sfmill-web$ENV.yaml
You could then tweak the value of the image property and then reapply this to your cluster using:
kubectl apply --filename=${PWD}/sfmill-web$ENV.yaml
The main benefit of the configuration file approach is that you're effectively creating code to manage your infrastructure: each time you change it, you can check it into source control and know exactly what you did at each stage.
Using imperative kubectl commands is fine, but it makes it more challenging to recreate the cluster from scratch ("which kubectl command did I run next?"). You could script all your kubectl commands in bash, which would help, but configuration files remain the better solution.
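To make that concrete, the exported YAML will contain an image field like the one below (deployment and container names loosely based on the question; the tag is whatever you last pushed). Editing that field and re-applying triggers a rolling update that pulls the new image:
# sfmill-web.yaml (illustrative excerpt of the exported deployment)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sfmill-web
spec:
  replicas: 1
  selector:
    matchLabels:
      run: sfmill-web
  template:
    metadata:
      labels:
        run: sfmill-web
    spec:
      containers:
      - name: sfmill-web
        image: gcr.io/PROJECT_ID/sf:NEW_TAG   # edit this line, then: kubectl apply -f sfmill-web.yaml
        ports:
        - containerPort: 8088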
HTH

Codedeploy setup

I am trying to deploy Laravel on AWS using CodeDeploy; I have attached a sample yml file as well.
The BeforeInstall hook configures PHP, MySQL, and the other things needed to run the Laravel application. Will that hook trigger on every deployment? I don't want to install PHP and MySQL each time; the setup should run only on the first deployment, and subsequent deployments should not install the configuration again.
version: 0.0
os: linux
files:
  - source: /*
    destination: /var/www/html/my/directory
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies
      timeout: 300
      runas: root
    - location: scripts/start_server
      timeout: 300
      runas: root
You can write one shell script that installs PHP and MySQL for the first deployment, and a different shell script for subsequent deployments, so that each deployment calls a different shell script file. You can refer to this template, which contains the appspec yaml file and a scripts folder with the .sh files:
https://github.com/enzyme-ops/laravel-code-deploy-template
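A common way to get the "install only on the first deployment" behaviour inside a single BeforeInstall script is a marker-file guard. The sketch below is only an illustration; the marker path and package names are assumptions, not taken from the template above:
#!/bin/bash
# scripts/install_dependencies - hypothetical sketch with a first-run guard
set -e

MARKER=/var/local/.laravel_stack_installed

if [ -f "$MARKER" ]; then
    echo "PHP/MySQL already installed, skipping BeforeInstall setup."
    exit 0
fi

apt-get update -y
apt-get install -y php php-mysql mysql-server   # adjust to your actual stack
touch "$MARKER"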

Bitbucket Pipelines - is it possible to download additional file to project via curl?

We have separate builds for the frontend and backend of the application, and we need to pull the frontend's dist build into the backend project during the build. During the build, curl cannot write to the desired location.
In detail, we are using Spring Boot as the backend serving the Angular 2 frontend, so we need to pull the frontend files into the src/main/resources/static folder.
image: maven:3.3.9
pipelines:
  default:
    - step:
        script:
          - curl -s -L -v --user xxx:XXXX https://api.bitbucket.org/2.0/repositories/apprentit/rent-it/downloads/release_latest.tar.gz -o src/main/resources/static/release_latest.tar.gz
          - tar -xf -C src/main/resources/static --directory src/main/resources/static release_latest.tar.gz
          - mvn package -X
As a result, the build fails with the following output from curl.
* Failed writing body (0 != 16360)
Note: I've tried the same with the maven-exec-plugin, with the same result. The same commands work on a local machine, naturally.
I would try running these commands from a local docker run of the image you're specifying (maven:3.3.9). I found that to be the most helpful way to debug things that behave differently in Pipelines vs. in my local environment.
To your specific question: yes, you can download external content during a Pipelines run. I have a Pipeline that clones other repos from Bitbucket via HTTP into the running container.
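For the local-reproduction approach, something along these lines lets you iterate on the script without pushing commits; the /project mount path is an arbitrary choice, and the curl command is copied from the pipeline above:
# Start the same image the pipeline uses, with the repo checkout mounted in
docker run -it --rm -v "$PWD":/project -w /project maven:3.3.9 bash

# Inside the container, run the pipeline's script lines one at a time, e.g.
mkdir -p src/main/resources/static   # worth confirming the target directory actually exists
curl -s -L -v --user xxx:XXXX \
  https://api.bitbucket.org/2.0/repositories/apprentit/rent-it/downloads/release_latest.tar.gz \
  -o src/main/resources/static/release_latest.tar.gz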
