Is there a way we can import a file into YAML in GCP Deployment Manager - google-deployment-manager

I am trying to create a configuration file in GCP Deployment Manager, and I have a metadata file which needs to be imported as text.
I know how to do it in a .py file, but I am wondering how to do it in YAML.
I tried different approaches, but none seem to work.

Although Deployment Manager can use the imports statement to import Jinja2 or Python templates into the root configuration file, plain YAML cannot be imported. This is a limitation of YAML itself: it has no "import" or "include" functionality.
A similar question has been discussed here: https://stackoverflow.com/a/15437697/11602913.
In a pure YAML deployment file, metadata can be provided literally, as described in the document
Google Cloud Platform for AWS Professionals: Infrastructure Deployment Tools:
resources:
- name: my-first-vm-template
  type: compute.v1.instance
  properties:
    ...
    metadata:
      items:
      - key: startup-script
        value: "STARTUP-SCRIPT-CONTENTS"
If metadata should be loaded from a file, you have to use Jinja2 templates. There is an example at codelabs.developers.google.com:
Deploy Your Infrastructure Using Deployment Manager > Creating your deployment configuration
imports:
- path: instance.jinja
- path: ../startup-script.sh
  name: startup-script.sh

resources:
- name: my-instance
  type: instance.jinja
  properties:
    metadata-from-file:
      startup-script: startup-script.sh
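Inside the template, imported files are exposed to Jinja through the imports dictionary, keyed by the name given in the config. A rough sketch of the relevant part of instance.jinja (illustrative, not the codelab's exact template):
metadata:
  items:
  - key: startup-script
    value: |
      {{ imports["startup-script.sh"] | indent(6) }}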

Related

Envoy dynamic configuration from filesystem, multiple path for config

I would like to define multiple paths for dynamic resources, like this:
node:
  id: id_1
  cluster: test
dynamic_resources:
  cds_config:
  - path: /root/envoy/dynamic_fs/cds_ssh.yaml
  - path: /root/envoy/dynamic_fs/cds_https.yaml
  - path: /root/envoy/dynamic_fs/cds_db.yaml
  lds_config:
  - path: /root/envoy/dynamic_fs/lds_ssh.yaml
  - path: /root/envoy/dynamic_fs/lds_https.yaml
  - path: /root/envoy/dynamic_fs/lds_db.yaml
I read the official Envoy documentation but didn't find a solution. Is it possible to achieve something like this? If the config is big, keeping it all in just two YAML files is hard to read.
Thanks in advance!
No, you cannot. You can read in the docs that CDS, LDS, and ADS only accept a single source of config.
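For reference, a minimal sketch of the single-source form Envoy does accept: one file per discovery service, so all clusters and all listeners have to be combined into those files. The file names are illustrative, and newer Envoy versions spell the field path_config_source.path:
dynamic_resources:
  cds_config:
    path: /root/envoy/dynamic_fs/cds.yaml   # all clusters in one file
  lds_config:
    path: /root/envoy/dynamic_fs/lds.yaml   # all listeners in one file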

How to print/debug all jobs of includes in GitLab CI?

In GitLab CI it is possible to include one or more files in a .gitlab-ci.yml file.
It is even possible to nest these includes.
See https://docs.gitlab.com/ee/ci/yaml/includes.html#using-nested-includes.
How can I see the resulting CI file all at once?
Right now, when I debug a CI cycle, I open every single included file and
combine the resulting configuration myself. There has to be a better way.
Example
Content of https://company.com/autodevops-template.yml:
variables:
  POSTGRES_USER: user
  POSTGRES_PASSWORD: testing_password
  POSTGRES_DB: $CI_ENVIRONMENT_SLUG

production:
  stage: production
  script:
    - install_dependencies
    - deploy
  environment:
    name: production
    url: https://$CI_PROJECT_PATH_SLUG.$KUBE_INGRESS_BASE_DOMAIN
  only:
    - master
Content of .gitlab-ci.yml:
include: 'https://company.com/autodevops-template.yml'

image: alpine:latest

variables:
  POSTGRES_USER: root
  POSTGRES_PASSWORD: secure_password

stages:
  - build
  - test
  - production

production:
  environment:
    url: https://example.com
This should result in the following file structure:
image: alpine:latest

variables:
  POSTGRES_USER: root
  POSTGRES_PASSWORD: secure_password
  POSTGRES_DB: $CI_ENVIRONMENT_SLUG

stages:
  - build
  - test
  - production

production:
  stage: production
  script:
    - install_dependencies
    - deploy
  environment:
    name: production
    url: https://example.com
  only:
    - master
→ How can I see this output somewhere?
Environment
Self-Hosted GitLab 13.9.1
As I continued to search for a solution, I read about a "Pipeline Editor" that was released in GitLab 13.8. It turns out that the feature I am looking for was added to this editor just a few days ago:
Version 13.9.0 (2021-02-22), "View an expanded version of the CI/CD configuration" → see the Release Notes for a feature description and introduction video.
To view the fully expanded CI/CD configuration as one combined file,
go to the pipeline editor’s »View merged YAML« tab. This tab displays an
expanded configuration where:
Configuration imported with include is copied into the view.
Jobs that use extends display with the extended configuration merged into the job.
YAML anchors are replaced with the linked configuration.
Usage: Open project → Module »CI / CD« → Submodule »Editor« → Tab »View merged YAML«
GitLab 15.1 offers an alternative:
Link to included CI/CD configuration from the pipeline editor
A typical CI/CD configuration uses the include keyword to import configuration stored in other files or CI/CD templates. When editing or troubleshooting your configuration, though, it can be difficult to understand how all the configuration works together, because the included configuration is not visible in your .gitlab-ci.yml; you only see the include entry.
In this release, we added links to all included configuration files and templates to the pipeline editor.
Now you can easily access and view all the CI/CD configuration your pipeline uses, making it much easier to manage large and complex pipelines.
See Documentation and Issue.

google deployment manager, can you import files in jinja template that you call directly with --template?

https://cloud.google.com/deployment-manager/docs/configuration/templates/create-basic-template
I can deploy a template directly like this: gcloud deployment-manager deployments create a-single-vm --template vm_template.jinja
But what if that template depends on other files that need to be imported? When using a --config file you can define imports in that file and call the template as a resource, but you can't pass parameters/properties to a config file. I want to call a template directly so I can pass --properties via the command line, but that template also needs to import other files.
EDIT: What I needed was a top-level Jinja template instead of a config. My confusion was that you can't use imports in a Jinja template without a schema file; it was failing and I thought it wasn't supported. So the solution was just to swap out the config for a Jinja template (with a schema file), and then I can use --properties.
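To make that concrete, here is a rough sketch of what such a schema could look like (the file and property names are only examples). Placing imports in vm_template.jinja.schema lets the directly deployed template pull in other files, and --properties can then override the declared properties:
# vm_template.jinja.schema (sketch)
info:
  title: Single VM template
  description: Declares the template's imports and configurable properties

imports:
- path: startup-script.sh   # illustrative file the template depends on

properties:
  zone:
    type: string
    default: us-central1-a
The deployment would then be something like gcloud deployment-manager deployments create a-single-vm --template vm_template.jinja --properties zone:us-central1-b.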
Maybe you can try importing the dependent files into your config file as follows:
imports:
- path: vm-template.jinja
- path: vm-template-2.jinja

# In the resources section below, the properties of the resources are replaced
# with the names of the templates.
resources:
- name: vm-1
  type: vm-template.jinja
- name: vm-2
  type: vm-template-2.jinja
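A config assembled this way would then be deployed with something like gcloud deployment-manager deployments create my-deployment --config config.yaml (the deployment and file names here are illustrative).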
You can also use Set Arbitrary Metadata in the schema to create a special variable that you can pass and use in other applications outside of Deployment Manager:
properties:
  size:
    type: integer
    default: 2
    description: Number of Mongo Slaves

  variable-x: ultra-secret-sauce
More info about the optional flags of gcloud deployment-manager deployments create, with an example, can be found here.
More info about passing properties using a schema can be found here.
Hope it helps.

Serverless - Lambda Layers "Cannot find module 'request'"

When I deploy my serverless api using:
serverless deploy
The lambda layer gets created but when I go to run the function is gives me this error:
"Cannot find module 'request'"
But if I upload the .zip file manually through the console (the exact same file that's uploaded when I deploy), it works fine.
Anyone have any idea why this is happening?
environment:
  SLS_DEBUG: "*"

provider:
  name: aws
  runtime: nodejs8.10
  stage: ${opt:api-type, 'uat'}-${opt:api, 'payment'}
  region: ca-central-1
  timeout: 30
  memorySize: 128
  role: ${file(config/prod.env.json):ROLE}
  vpc:
    securityGroupIds:
      - ${file(config/prod.env.json):SECURITY_GROUP}
    subnetIds:
      - ${file(config/prod.env.json):SUBNET}
  apiGateway:
    apiKeySourceType: HEADER
  apiKeys:
    - ${file(config/${opt:api-type, 'uat'}.env.json):${opt:api, "payment"}-APIKEY}

functions:
  - '${file(src/handlers/${opt:api, "payment"}.serverless.yml)}'

package:
  # individually: true
  exclude:
    - node_modules/**
    - nodejs/**

plugins:
  - serverless-offline
  - serverless-plugin-warmup
  - serverless-content-encoding

custom:
  contentEncoding:
    minimumCompressionSize: 0 # Minimum body size required for compression in bytes

layers:
  nodejs:
    package:
      artifact: nodejs.zip
    compatibleRuntimes:
      - nodejs8.10
    allowedAccounts:
      - "*"
That's what my serverless.yml looks like.
I was having a similar error while using the explicit layers keys that you are using to define a Lambda layer.
My error (for the sake of web searches) was this:
Runtime.ImportModuleError: Error: Cannot find module <package name>
I feel this is a temporary solution, because I wanted to explicitly define my layers like you were doing, but it wasn't working, so it seemed like a bug.
I created a bug report in Serverless for this issue. If anyone else is having this same issue they can track it there.
SOLUTION
I followed this post in the Serverless forums, based on these docs from AWS.
I zipped up my node_modules under the folder nodejs so that it looks like this when unzipped: nodejs/node_modules/<various packages>.
Then instead of using the explicit definition of layers I used the package and artifact keys like so:
layers:
  test:
    package:
      artifact: test.zip
In the function, the layer is referred to like this:
functions:
  function1:
    handler: index.handler
    layers:
      - { Ref: TestLambdaLayer }
The name TestLambdaLayer follows the convention <your name of layer>LambdaLayer, as documented here.
Make sure you run npm install inside your layers before deploying, i.e.:
cd ~/repos/repo-name/layers/utilityLayer/nodejs && npm install
Otherwise your layers will get deployed without a node_modules folder. You can download the .zip of your layer from the Lambda UI to confirm the contents of that layer.
If anyone faces a similar Runtime.ImportModuleError, it is fair to say that another cause of this issue could be a package exclude statement in the serverless.yml file.
Be aware that if you have this statement:
package:
  exclude:
    - './**'
    - '!node_modules/**'
    - '!dist/**'
    - '.git/**'
It will cause exactly the same error at runtime once you've deployed your Lambda function (with the Serverless Framework). Just make sure to remove the entries that could create a conflict across your dependencies.
I am using TypeScript with the serverless-plugin-typescript plugin and I was having the same error, too.
When I switched from
const myModule = require('./src/myModule');
to
import myModule from './src/myModule';
the error disappeared. It seems like the files were not included in the zip file by Serverless when I was using require.
PS: Removing the serverless-plugin-typescript plugin and switching back to JavaScript also solved the problem.

task config not found

I am getting a
task config 'get-snapshot-jar/infra/hw.yml' not found
error. I have written a very simple pipeline YAML; it connects to an Artifactory resource and runs another YAML file which is defined in the task section.
My pipeline.yml looks like this:
resources:
- name: get-snapshot-jar
  type: docker-image
  source: # <artifactory source>
    repository: <artifactory repo>
    username: {{artifactory-username}}
    password: {{artifactory-password}}

jobs:
- name: create-artifact
  plan:
  - get: get-snapshot-jar
    trigger: true
  - task: copy-artifact-from-artifact-repo
    file: get-snapshot-jar/infra/hw.yml
Artifactory is working fine, but after that I am getting this error:
copy-artifact-from-artifact-repo
task config 'get-snapshot-jar/infra/hw.yml' not found
You need to specify an input for your copy-artifact-from-artifact-repo task which passes the get-snapshot-jar resource to the task's Docker container. Take a look at this post where someone runs into a similar problem: Trigger events in Concourse.
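A rough sketch of what such a task config could declare, assuming infra/hw.yml is fetched from a source that actually contains it; the inputs entry is what makes the fetched resource available inside the task container (the image and command are illustrative):
platform: linux

image_resource:
  type: docker-image
  source: {repository: busybox}

inputs:
- name: get-snapshot-jar

run:
  path: sh
  args: ["-c", "ls get-snapshot-jar"]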
Also, your file variable looks weird. You are referencing a docker-image resource which, according to the official Concourse resource GitHub repo, has no .yml files inside.
Generally I would keep my task definitions as close as possible to the pipeline code. If you have to reach out to different repos, you might lose the overview as your pipeline keeps growing.
cheers,
