Google's documentation mentions that Cloud CDN can enable dynamic compression, and an example is provided below, but I can't successfully set the dynamic compression strategy.
(link to the Cloud CDN dynamic compression documentation)
The error I see when I execute the command is shown below. Has the flag been changed to something else? Also, from the logs, the size of the transferred image is not being compressed.
(screenshot of the error output)
It looks like there are two problems. First, you appear to have an old version of gcloud that was released prior to the dynamic compression Beta. The right way to upgrade gcloud will depend on your platform, but it's typically gcloud components update or sudo apt-get upgrade google-cloud-sdk.
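For example, depending on how the SDK was installed:
$ gcloud components update
or, for a Debian/Ubuntu package installation:
$ sudo apt-get update && sudo apt-get upgrade google-cloud-sdk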
Once you have an up-to-date gcloud installation, you'll likely run into the second problem. Google's documentation incorrectly advises you to use gcloud compute, but you need to use gcloud beta compute while the feature remains in Beta:
$ gcloud compute backend-services update YOUR_BACKEND_SERVICE_NAME --compression-mode=AUTOMATIC
ERROR: (gcloud.compute.backend-services.update) unrecognized arguments:
--compression-mode flag is available in one or more alternate release tracks. Try:
gcloud alpha compute backend-services update --compression-mode
gcloud beta compute backend-services update --compression-mode
--compression-mode=AUTOMATIC (did you mean '--custom-response-header'?)
To search the help text of gcloud commands, run:
gcloud help -- SEARCH_TERMS
$
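With an up-to-date gcloud, the Beta track command should then be accepted (substitute your own backend service name):
$ gcloud beta compute backend-services update YOUR_BACKEND_SERVICE_NAME --compression-mode=AUTOMATIC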
You can now enable Dynamic Compression from within the UI, under the Backend Service/Bucket settings.
(screenshot: enabling Dynamic Compression in the console)
You may need to purge the old content in order for the new compressed content to be served. Cloud CDN will not evict a valid object just because you changed the configuration.
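If you do need to force the old objects out, you can invalidate the cache. A minimal example, assuming your load balancer's URL map is named my-url-map (substitute your own):
$ gcloud compute url-maps invalidate-cdn-cache my-url-map --path "/*"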
We have built a few microservices (MS) which have been deployed to our company's K8s clusters.
Currently, each of our MSs is built as a Docker image and deployed manually using the following steps, and it works fine:
Create a ConfigMap
Install a Service.yaml
Install a Deployment.yaml
Install an Ingress.yaml
I'm now looking at Helm v3 to simplify and encapsulate these deployments. I've read a lot of the Helm v3 documentation, but I still haven't found the answer to some simple questions, and I hope to get an answer here before absorbing the entire doc along with Go and Sprig, only to find out it won't fit our needs.
Our Spring MS has 5 separate application.properties files, one for each of our 5 environments. These properties files are in a simple multi-line key=value format with some comments preceded by #.
# environment based values
key1=value1
key2=value2
Using helm create, I created a chart called ./deploy in the root directory, which auto-generated ./templates and a values.yaml.
The problem is that I need to access the application.properties files outside of the Chart's ./deploy directory.
From helm, I'd like to reference these 2 files from within my configmap.yaml's Data: section.
./src/main/resources/dev/application.properties
./src/main/resources/logback.xml
And I want to keep these files in their current format, not rewrite them to JSON/YAML format.
Does Helm v3 allow this?
Putting this as an answer as there's not enough space in the comments!
Check the 12-factor app link I shared above, in particular the section on configuration... The explanation there is not great, but the idea behind it is to build one container and deploy that container in any environment without having to modify it, plus to have the ability to change the configuration without the need to create a new release (the latter cannot be done if the config is baked into the container). This allows you, for example, to change a DB connection pool size (or any other config parameter) without a release. It's also good from a security point of view, as you might not want the container running in your lower environments (dev/test/whatnot) to have production configuration (passwords, API keys, etc.). This approach is similar to the Continuous Delivery principle of build once, deploy anywhere.
I assume that when you run the app locally you only need access to one set of configuration, so you can keep that in a separate file (e.g. application.dev.properties) and put the parameters that change between environments in Helm environment variables. I know you mentioned you don't want to do this, but it is considered good practice nowadays (though it might be considered otherwise in the future...).
I also think it's important to be pragmatic: if in your case you don't feel the need to have the configuration outside of the container, then don't do it, and the suggestion I gave of changing a command-line parameter to pick the config file probably works well. At the same time, keep the 12-factor app approach in mind in case you find out you do need it in the future.
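To address the mechanical part of the question: Helm's .Files.Get can only read files that live inside the chart directory, so the properties files would first need to be copied (for example by a build script) under the chart, say into ./deploy/config/. A minimal sketch of the ConfigMap template under that assumption (the config/ layout and the "environment" value name are my assumptions, not Helm conventions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  application.properties: |-
{{ .Files.Get (printf "config/%s/application.properties" .Values.environment) | indent 4 }}
  logback.xml: |-
{{ .Files.Get "config/logback.xml" | indent 4 }}

The environment could then be picked at install time, e.g. helm install my-release ./deploy --set environment=dev. Note that the files stay in their original key=value format; Helm just embeds them verbatim.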
I have a single repository on GitHub that hosts my Lambda functions. I would like to be able to deploy new versions whenever new logic is pushed to master.
I did a lot of research and found a few different approaches, but nothing really clear. I would like to know what others feel would be the best way to go about this, and maybe some detail (if possible) on how that pipeline is set up.
Thanks
Welcome to Stack Overflow. You can improve your question by reading this page.
You can set up a CI/CD pipeline using CircleCI with its GitHub integration (it's an online service, so you don't need to maintain anything yourself, like a Jenkins server, for example).
Upon every commit to your repository, a CircleCI build will be triggered. Once the build process is over, you can run sls deploy or sam deploy, use Terraform, or even create a script to upload the .zip file from your GitHub repo to an S3 bucket and then, within your script, invoke the create-function command. There's an example of how to deploy serverless applications using CircleCI along with the Serverless Framework here.
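As an illustration of that scripted approach, a minimal sketch might look like this (bucket name, function name, runtime, handler, and role ARN are all placeholders):
$ zip -r function.zip .
$ aws s3 cp function.zip s3://my-deploy-bucket/function.zip
$ aws lambda create-function --function-name my-function --runtime nodejs18.x --handler index.handler --role arn:aws:iam::123456789012:role/lambda-exec-role --code S3Bucket=my-deploy-bucket,S3Key=function.zip
For subsequent deployments you would update the existing function instead:
$ aws lambda update-function-code --function-name my-function --s3-bucket my-deploy-bucket --s3-key function.zip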
Other options include TravisCI, AWS CodeDeploy, or even maintaining your own CI/CD server. The same logic applies to all of these tools, though: commit -> build -> deploy (using the tool you've chosen).
EDIT: After @Matt's answer, it clicked that the OP never mentioned the Serverless Framework (I somehow thought he was already using it, so I had pointed the OP to tutorials using the Serverless Framework). I then decided to update my answer with a few other options for serverless deployment.
I know that this isn't exactly what you asked for, but I use the Serverless Framework (https://serverless.com) for deployment and I love it. I don't do my deployments when I push to my repo. Instead, I push to my repo after I've deployed. I like this flow because a deployment can fail due to so many things, and pushing to GitHub is much less likely to fail. This way, I prevent code that failed to deploy from reaching my master branch.
I don't know if you're familiar with the framework, but it is super simple. The website describes the simple steps to create and deploy a function like this:
# Step 1. Install serverless globally
$ npm install serverless -g

# Step 2. Create a serverless function
$ serverless create --template hello-world

# Step 3. Deploy to cloud provider
$ serverless deploy

# Your function is deployed!
$ http://xyz.amazonaws.com/hello-world
There are also a number of plugins you can use to integrate easily with custom domains on API Gateway, prune older versions of Lambda functions that might be filling up your limits, etc...
Overall, I've found it to be the easiest way to manage and deploy my lambdas. Hope it helps!
Given that you're using AWS Lambda, you may want to consider CodePipeline to automate your release process. SAM (https://docs.aws.amazon.com/lambda/latest/dg/serverless_app.html) may also be interesting.
I too had the same problem. I wanted to manage 12 Lambdas with 1 Git repository. I solved it by introducing Travis CI. Travis CI saved me time and is really useful in many ways. You can check the logs whenever you want, and you can share the logs with anyone by sharing the URL. Sample documentation of all the steps can be found here. You can go through it. 👍
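For a rough idea of the shape this takes, a minimal .travis.yml could look like the following (the deploy script name is hypothetical; the packaging steps inside it will differ per project):

language: node_js
node_js:
  - "12"
install:
  - npm ci
script:
  - npm test
deploy:
  provider: script
  script: ./deploy-lambdas.sh
  on:
    branch: master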
Is it possible to specify that a Google Cloud Build should run in a specific region and zone?
The documentation outlines how to run kubectl against a specific region/zone for deploying containers, but it doesn't seem to document where the build itself runs. I've also not found such a setting in the automatic build trigger configuration.
The latest public doc on regionalization can be found here: https://cloud.google.com/build/docs/locations
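With regional builds available per that doc, the region can be chosen at submit time. For example (assuming a recent gcloud and a region that Cloud Build supports):
$ gcloud builds submit --region=us-west2 --tag gcr.io/PROJECT_ID/my-image .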
(Earlier answer on this topic follows)
This is not yet an available feature; however, at Google Next we announced an EAP / soon-to-be-Alpha feature called "Custom Workers" which will enable this functionality for you. You can watch the demo here: https://www.youtube.com/watch?v=IUKCbq1WNWc&feature=youtu.be
I spent a week trying to set up Search Guard and OpenShift in a Docker container and am completely torn apart...
I am working on a project where I plan to have clients who can be given access to only specific indices. X-Pack and Search Guard Enterprise work perfectly; unfortunately, until I have any clients I cannot pay yearly fees of several thousands of dollars.
I tried to set up Search Guard, turn off enterprise mode, and then install the openshift-elasticsearch-plugin.
If I install them both, after much tuning I get an error saying that you cannot enable functionality in OpenShift that is already enabled by Search Guard.
When I install only the openshift-elasticsearch-plugin and configure all the settings, it says "Failed authentication for null".
Here is the repository https://github.com/SvitlanaShepitsena/Lana
I have a small issue (somehow sleep does not work), so in order to start the cluster you need to run:
docker-compose up
docker ps
docker exec -it [container-id] /bin/bash
./sgadmin.sh
After 1 week of work I am desperate and beg for help :-).
The openshift-elasticsearch-plugin is designed to add specific features to the OpenShift logging stack. Among other things, it provides dynamic ACLs for users based on their OpenShift permissions. I would suggest containerizing an Elasticsearch image and adding the Search Guard plugins directly. Alternatively, versions of Elasticsearch later than the one the plugin is designed for (2.4.4) are able to utilize X-Pack, which provides similar security.
It's preinstalled (https://hub.docker.com/r/elastic/elasticsearch) and can be configured as described at https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html.
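As a quick, hedged starting point (the version tag and settings below are illustrative only):
$ docker run -p 9200:9200 -e "discovery.type=single-node" -e "xpack.security.enabled=true" -e "ELASTIC_PASSWORD=changeme" docker.elastic.co/elasticsearch/elasticsearch:7.10.1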
I have several projects in gcloud, call them e.g. "staging-project" and "production-project". I have created an image, call it "staging-image-1", in "staging-project", which I use for new instances. And I would like to use this image in "production-project" as well.
As far as I know, it is possible to do this using the gcloud command-line tool: you log in with your private Google account, which has access to both projects, and run:
gcloud config set project production-project
gcloud compute instances create production-instance-from-staging-image --image staging-image-1 --image-project staging-project
This works fine for me, but I have a few colleagues who don't like the command line so much. So is there a way to achieve this in the gcloud web console? When I list images in production-project, I simply do not see staging-image-1, and I found no way to select it. :(
--image-project is not currently supported in the developers console.
An issue has been filed to fix that. Thanks.
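In the meantime, one possible workaround for console users (a sketch using the names from the question) is to copy the image into the production project so it shows up in that project's own image list:
$ gcloud compute images create staging-image-1 --source-image staging-image-1 --source-image-project staging-project --project production-project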