How to enable GZip compression for GitLab Pages?

I'm using GitLab Pages to render my site, and it scores poorly in page-speed rankings. I can't find any solution for how to do the following in GitLab (non-Enterprise version):
Specify HTTP cache headers for individual page resources, e.g. an image, so that they can be cached.
Enable GZip compression, since the page-speed report says compression is disabled on gitlab.io.

It looks like specifying HTTP cache headers is still not possible, but at least they have hardcoded "max-age=600" for all resources here.
You can compress the contents of your public folder via .gitlab-ci.yml:
script:
  - npm install
  - npm run build
  - gzip -k -6 -r public

GitLab Pages supports serving compressed assets if you pre-compress them in the pages CI job. See the documentation.
Note that you can, and should, also use Brotli compression, as it is optimized for web content and supported by most modern browsers.
There is also a suggested snippet for your .gitlab-ci.yml:
pages:
  # Other directives
  script:
    # Build the public/ directory first
    - find public -type f -regex '.*\.\(htm\|html\|txt\|text\|js\|css\)$' -exec gzip -f -k {} \;
    - find public -type f -regex '.*\.\(htm\|html\|txt\|text\|js\|css\)$' -exec brotli -f -k {} \;
I haven't found a way of influencing the cache behavior; I am also looking for this.

Enable GZip compression for GitLab Pages
If you add pre-compressed .gz versions of your static site's files, then nginx can serve them instead of the regular ones. Add this job to your .gitlab-ci.yml file:
image: alpine:latest
pages:
  stage: deploy
  script:
    - mkdir .temp
    - cp -r * .temp
    - mv .temp public
    - gzip -k -9 $(find public -type f)
  artifacts:
    paths:
      - public
  only:
    - master
The gzip command compresses every file found in the public directory at the maximum compression level (-9); -k keeps the originals, so nginx can still serve clients that don't accept gzip.
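Once deployed, you can check whether the pre-compressed variants are actually being served. A quick sketch, using a hypothetical site URL at username.gitlab.io:
# Ask for gzip (or brotli) explicitly and inspect the Content-Encoding response header
curl -sI -H 'Accept-Encoding: gzip' https://username.gitlab.io/index.html | grep -i 'content-encoding'
curl -sI -H 'Accept-Encoding: br' https://username.gitlab.io/index.html | grep -i 'content-encoding'
If Pages picked up the .gz/.br artifacts, the header should report gzip or br respectively.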

Related

Kong custom golang plugin not working in kubernetes/helm setup

I have written a custom golang Kong plugin called go-wait, following the example from the GitHub repo https://github.com/redhwannacef/youtube-tutorials/tree/main/kong-gateway-custom-plugin
The only difference is that I created a custom Docker image so Kong would have the mentioned plugin by default in its /usr/local/bin directory.
Here's the Dockerfile:
FROM golang:1.18.3-alpine as pluginbuild
COPY ./charts/custom-plugins/ /app/custom-plugins
RUN cd /app/custom-plugins && \
    for d in ./*/ ; do (cd "$d" && go mod tidy && GOOS=linux GOARCH=amd64 go build .); done
RUN mkdir /app/all-plugin-execs && cd /app/custom-plugins && \
    find . -type f -not -name "*.*" | xargs -i cp {} /app/all-plugin-execs/
FROM kong:2.8
COPY --from=pluginbuild /app/all-plugin-execs/ /usr/local/bin/
COPY --from=pluginbuild /app/all-plugin-execs/ /usr/local/bin/plugin-ref/
# Loop through the plugin-ref directory and create an entry for all of them in
# both KONG_PLUGINS and KONG_PLUGINSERVER_NAMES env vars respectively
# Additionally append `bundled` to the KONG_PLUGINS list, as without it any unused plugin will cause Kong to error out
#### Example Env vars for a plugin named `go-wait`
# ENV KONG_PLUGINS=go-wait
# ENV KONG_PLUGINSERVER_NAMES=go-wait
# ENV KONG_PLUGINSERVER_GO_WAIT_QUERY_CMD="/usr/local/bin/go-wait -dump"
####
RUN cd /usr/local/bin/plugin-ref/ && \
    PLUGINS=$(ls | tr '\n' ',') && PLUGINS=${PLUGINS::-1} && \
    echo -e "KONG_PLUGINS=bundled,$PLUGINS\nKONG_PLUGINSERVER_NAMES=$PLUGINS" >> ~/.bashrc
# Loop through the plugin-ref directory and create the QUERY_CMD entries needed to load each plugin,
# in the format KONG_PLUGINSERVER_EG_PLUGIN_QUERY_CMD for a plugin named `eg-plugin`; each should point to the
# plugin binary followed by the `-dump` argument
RUN cd /usr/local/bin/plugin-ref/ && \
    for f in *; do echo "$f" | tr "[:lower:]" "[:upper:]" | tr '-' '_' | \
    xargs -I {} sh -c "echo 'KONG_PLUGINSERVER_{}_QUERY_CMD=' && echo '\"/usr/local/bin/{} -dump\"' | tr [:upper:] [:lower:] | tr '_' '-'" | \
    sed -e '$!N;s/\n//' | xargs -i echo "{}" >> ~/.bashrc; done
This works fine with docker-compose and in a plain Docker container. But when I tried to use the same image in a Kubernetes environment along with kong-ingress-controller, I started running into errors like "failed to fill-in defaults for plugin: go-wait" and/or a bunch of others, including "plugin 'go-wait' enabled but not installed" in the logs, and I ended up not being able to enable it.
Has anyone tried including Go plugins in their Kubernetes/Helm Kong setup? If so, please shed some light on this.
Update: I found the answer I was looking for. Along with setting the environment variables generated by the image, modifications are needed in the _helpers.tpl file of the Kong Helm chart itself.
The reason is that in the deployment charts, the configuration expects plugins to be declared in the values-custom.yml used to override the default settings.
But the Helm chart expects values and plugins to be loaded via ConfigMaps, which turned out to be a huge bottleneck: any binary plugin you generate in golang for Kong is going to exceed the 1 MiB size limit of ConfigMaps in Kubernetes.
That's the whole reason I had set out on this endeavor to make the plugins part of my image.
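For context, a quick sketch of why the ConfigMap route fails; the plugin path is hypothetical, but any Go binary will show the same order of magnitude:
# ConfigMaps cap their payload at 1 MiB, while a statically linked Go plugin binary
# is typically several megabytes, so it cannot ship via a ConfigMap
du -h charts/custom-plugins/go-wait/go-wait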
TL;DR
I was able to clone the repo to my local system and make the changes in the following patch, which loads the plugins from values without having to lump them in with the Lua plugins. (Credit: answer by thatbenguy in the discussion https://discuss.konghq.com/t/how-to-load-go-plugins-using-kong-helm-chart/5717/10)
--- a/charts/kong/templates/_helpers.tpl
+++ b/charts/kong/templates/_helpers.tpl
@@ -530,6 +530,9 @@ The name of the service used for the ingress controller's validation webhook
{{- define "kong.plugins" -}}
{{ $myList := list "bundled" }}
+{{- range .Values.plugins.goPlugins -}}
+{{- $myList = append $myList .pluginName -}}
+{{- end -}}
{{- range .Values.plugins.configMaps -}}
{{- $myList = append $myList .pluginName -}}
{{- end -}}
Then I added the following block to my values-custom.yml and I was good to go.
Hopefully this helps anyone else trying to write custom plugins for Kong in golang for use in Helm charts.
env:
  database: "off"
  plugins: bundled,go-wait
  pluginserver_names: go-wait
  pluginserver_go_wait_query_cmd: "/usr/local/bin/go-wait -dump"
plugins:
  goPlugins:
    - pluginName: "go-wait"
NOTE: Please remember that all of this still depends on having the prebuilt custom Kong plugins in your image. In my case I built an image from the above Dockerfile contents (in the question), pushed it to my own Docker Hub repo, and replaced the image in values-custom.yml using the following block:
image:
  repository: chalukyaj/kong-custom-image
  tag: "1.0.1"
PS: As you might have noticed, the only disappointment I have with this is that the environment variables couldn't just be picked up from the Docker image's ~/.bashrc, which would have made this awesome. Nonetheless, this works, and I couldn't find a single post showing how to use the new go-pdk (instead of the older go-pluginserver) to build Go plugins and use them in Helm.
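If you want to see which entries the image generated (so you can mirror them into values-custom.yml by hand), a small sketch; the image tag is the one from the block above, and it assumes the ~/.bashrc writes from the Dockerfile succeeded:
# Print the KONG_* lines the Dockerfile appended to ~/.bashrc;
# Kong itself never sources .bashrc, which is why these must be copied into values-custom.yml
docker run --rm --entrypoint sh chalukyaj/kong-custom-image:1.0.1 -c 'grep "^KONG_" ~/.bashrc'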

how to download file VIA wget while target path include Wildcards

Here is an elegant example of how to download a file and copy it to the
/etc/yum.repos.d folder.
Example:
REPOSITORY_SERVER=master_machine01
wget -nd -r -P /etc/yum.repos.d/ -A ".repo" "http://$REPOSITORY_SERVER/ambari/centos7/2.6.2.2-1/ambari.repo"
After the above command, the ambari.repo file will be copied to /etc/yum.repos.d/.
Note: the ambari.repo file's path on the server is:
ls -ltr /var/www/html/ambari/centos7/2.6.2.2-1/ambari.repo
-rw-r--r-- 1 root users 304 Jun 11 2018 /var/www/html/ambari/centos7/2.6.2.2-1/ambari.repo
So this is the simple case.
Now, what about when the path can differ, e.g.:
$REPOSITORY_SERVER/ambari/centos7/2.6.2.3-1/ambari.repo
or
$REPOSITORY_SERVER/ambari/centos7/2.6.2.2-4/ambari.repo
How can we use the CLI with wildcards?
We tried the following:
wget -nd -r -P /etc/yum.repos.d/ -A ".repo" "http://$REPOSITORY_SERVER/ambari/centos7/*/ambari.repo"
but we get:
HTTP request sent, awaiting response... 404 Not Found
2021-11-28 18:40:07 ERROR 404: Not Found.
or even with a backslash:
wget -nd -r -P /etc/yum.repos.d/ -A ".repo" "http://$REPOSITORY_SERVER/ambari/centos7/\*/ambari.repo"
but with the same error.
Any idea how to resolve this issue?
how to use the cli with Wildcards
It is not possible to perform glob expansion with the HTTP protocol; these are unrelated technologies.
how to resolve this issue?
Devise and implement a method of getting the available files under a certain path from the HTTP server. For example, contact the server administrator and ask about it. Potentially, if the HTTP server supports serving a directory listing, recursively filter the listing to find all matching paths (see the sketch below). Or find and query some other page that contains all the links, and extract the matching links from the response.
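If directory indexes are enabled on the server, wget's own recursion can stand in for the wildcard. A sketch, reusing the question's $REPOSITORY_SERVER and assuming /ambari/centos7/ serves an index page:
# Recurse two levels below centos7/ and keep only files named ambari.repo
wget -nd -r -l 2 -P /etc/yum.repos.d/ -A "ambari.repo" "http://$REPOSITORY_SERVER/ambari/centos7/"
Note that -A filters after download (the intermediate index.html pages are fetched and deleted along the way), and this only works when the server actually lists directory contents.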

Gitlab CI cache update

On GitLab CI I have caching set up, and it is working properly:
cache:
  key: gradle
  paths:
    - .gradle/caches
before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle
In order to speed up the cache upload (<20 s) and take advantage of it, I delete the "extra" files which were updated during the build:
after_script:
  - rm -rf .gradle/caches/$GRADLE_VERSION/
  - rm .gradle/caches/journal-1/file-access.bin
  - find .gradle/caches/ -name "*.lock" -type f -delete
I expect CI to skip uploading the cache, since none of the files have been updated any more. That is, the result of
find .gradle/caches/ -mmin -5 -exec ls -la {} +
is an empty list as well.
But that is not the case and my cache is uploaded on every job.
Am I missing something else? Has anyone else run into this?
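If the goal is simply to prevent the upload on jobs that only consume the cache, GitLab CI's pull-only cache policy may help; a sketch, independent of the Gradle specifics above:
cache:
  key: gradle
  paths:
    - .gradle/caches
  # policy: pull downloads the cache but never uploads it back,
  # regardless of which files changed during the build
  policy: pull
Jobs that should refresh the cache can keep the default pull-push policy.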

Magento media url not working

I want to make clear that I have already gone through this post:
Magento media url - get rid of 403 Forbidden
but I am still not able to get an image URL from the media library. I have tried all these URLs:
localhost/magento/media/customer/pic1.jpg (access forbidden error)
localhost/magento/media/index.html/customer/pic1.jpg (object not found error but image is present)
localhost/magento/media/index.html (working but of no use)
The correct link is localhost/magento/media/customer/pic1.jpg.
Try setting the correct file ownership and permissions.
For the default apache2 server configuration:
chown -R www-data:www-data /var/www/html/magento
find /var/www/html/magento/media/ -type f -exec chmod 600 {} \;
find /var/www/html/magento/media/ -type d -exec chmod 700 {} \;
I had to create an index.html and an .htaccess file, the same as in the media folder (as in the post I referenced), and place them in media/customer too. It has been fixed now and works with the URL localhost/magento/media/customer/pic1.jpg.

Amazon S3 Command Line Copy all objects to themselves setting Cache control

I have an Amazon S3 bucket with about 300K objects in it and need to set the Cache-Control header on all of them. Unfortunately, it seems like the only way to do this, besides one object at a time, is by copying the objects onto themselves and setting the cache-control header that way:
http://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
That is the documentation for the Amazon S3 CLI copy command, but I have been unsuccessful in setting the cache-control header with it. Does anyone have an example command that would work for this? I am trying to set Cache-Control to max-age=1814400.
Some background material:
Set cache-control for entire S3 bucket automatically (using bucket policies?)
https://forums.aws.amazon.com/thread.jspa?messageID=567440
By default, aws-cli only copies a file's current metadata, EVEN IF YOU SPECIFY NEW METADATA.
To use the metadata that is specified on the command line, you need to add the '--metadata-directive REPLACE' flag. Here are some examples.
For a single file:
aws s3 cp s3://mybucket/file.txt s3://mybucket/file.txt --metadata-directive REPLACE \
--expires 2100-01-01T00:00:00Z --acl public-read --cache-control max-age=2592000,public
For an entire bucket:
aws s3 cp s3://mybucket/ s3://mybucket/ --recursive --metadata-directive REPLACE \
--expires 2100-01-01T00:00:00Z --acl public-read --cache-control max-age=2592000,public
A little gotcha I found: if you only want to apply it to a specific file type, you need to exclude all files first, then include the ones you want.
Only JPGs and PNGs:
aws s3 cp s3://mybucket/ s3://mybucket/ --exclude "*" --include "*.jpg" --include "*.png" \
--recursive --metadata-directive REPLACE --expires 2100-01-01T00:00:00Z --acl public-read \
--cache-control max-age=2592000,public
Here are some links to the manual if you need more info:
http://docs.aws.amazon.com/cli/latest/userguide/using-s3-commands.html
http://docs.aws.amazon.com/cli/latest/reference/s3/cp.html#options
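Adapting the pattern above to the value from the question (max-age=1814400, i.e. three weeks); the bucket name is a placeholder:
aws s3 cp s3://mybucket/ s3://mybucket/ --recursive --metadata-directive REPLACE \
  --cache-control max-age=1814400
Keep in mind that with REPLACE each copy rewrites the object's metadata, so anything you don't re-specify on the command line may be lost, and on ~300K objects the run will take a while.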
