Can Google Cloud Build recurse through directories of artifacts?

My workspace looks like this:
|
|--> web-app
     |
     |--> src
     |--> build
          |
          |--> fonts
          |--> static
My cloudbuild.json looks like this:
{
  "steps": [
    {
      ...
    }
  ],
  "artifacts": {
    "objects": {
      "location": "gs://my_bucket/",
      "paths": [
        "web-app/build/**"
      ]
    }
  }
}
What I'm hoping for is that Google Cloud Build will recurse through the contents of the build/ folder and copy the files and directories to my storage bucket. Instead, it copies only the files rooted directly in the build/ directory, skips the subdirectories, and gives a warning about using the -r option of gsutil cp.
Here is the build output:
...
Artifacts will be uploaded to gs://my_bucket using gsutil cp
web-app/build/**: Uploading path....
Omitting directory "file://web-app/build/fonts". (Did you mean to do cp -r?)
Omitting directory "file://web-app/build/static". (Did you mean to do cp -r?)
Copying file://web-app/build/index.html [Content-Type=text/html]...
Copying file://web-app/build/asset-manifest.json [Content-Type=application/json]...
Copying file://web-app/build/favicon.ico [Content-Type=image/vnd.microsoft.icon]...
Copying file://web-app/build/manifest.json [Content-Type=application/json]...
Copying file://web-app/build/service-worker.js [Content-Type=application/javascript]...
/ [5/5 files][ 28.4 KiB/ 28.4 KiB] 100% Done
Operation completed over 5 objects/28.4 KiB.
web-app/build/**: 5 matching files uploaded
5 total artifacts uploaded to gs://my_bucket/
Uploading manifest artifacts-d4a2b3e4-97ba-4eb0-b226-e0c914ac4f61.json
Artifact manifest located at gs://my_bucket/artifacts-d4a2b3e4-97ba-4eb0-b226-e0c914ac4f61.json
DONE
The documentation https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames#directory-by-directory-vs-recursive-wildcards suggests that this shouldn't be the case.
I guess I could use the gsutil cloud builder, but my suspicion is that I don't need to and that I'm doing something wrong here.

There's currently (2018-11) no way to copy an artifacts directory recursively one-to-one. Your best bet is to use a gsutil step in your cloudbuild.yaml file (as you mentioned already), similar to:
steps:
- ....
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'cp', '-r', 'web-app/build*', 'gs://my_bucket/$BUILD_ID']
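After the build, you can check that the directory structure landed intact (a sketch; requires gsutil locally, and substitute the actual build id for BUILD_ID):
# recursively list everything uploaded under the build's prefix
gsutil ls -r gs://my_bucket/BUILD_ID/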

At the end of 2022, gcloud storage was released; according to the official blog post, it is a lot faster at copying files than gsutil.
steps:
- ....
- name: "gcr.io/cloud-builders/gcloud-slim"
args: [
"storage",
"cp",
"--recursive",
"build",
"gs://my_bucket/$BUILD_ID",
]
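To sanity-check the command outside Cloud Build, the same copy can be run locally with the gcloud CLI (a sketch; $BUILD_ID only exists inside a build, so use any prefix):
# recursively copy the build directory into the bucket under some prefix
gcloud storage cp --recursive build gs://my_bucket/some-prefix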

Related

install a node red module from a git repo

For Node-RED, how do you install a node?
I downloaded some code from GitHub that is for Node-RED and placed the contents in this directory:
~/.node-red/node_modules/volttron
How do I install it, so I can pull the module out of the Node-RED palette?
The repository you link to includes a readme with instructions for how to install it. Nowhere does it say to copy anything into the node_modules directory.
Step one says:
Copy all files from volttron/examples/NodeRed to your .node-red/nodes
directory.
The instructions included in that directory say to place the files in the ~/.node-red/nodes/volttron directory (you will need to create the nodes dir), not ~/.node-red/node_modules/volttron. But even then it won't work out of the box, as it requires the python-shell npm module to also be installed.
A slightly better approach would be the following:
Copy the files to ~/.node-red/node_modules/volttron.
For Node-RED to locate nodes in the node_modules directory, there must be a package.json file. This also needs to include a node-red section listing the nodes.
The package.json also needs to list the required prerequisite modules, in this case python-shell.
As a short-term workaround you can create a package.json in the ~/.node-red/node_modules/volttron directory alongside the other files, containing the following:
{
  "name": "volttron",
  "version": "0.0.1",
  "description": "A sample node for node-red",
  "dependencies": {
    "python-shell": "^3.0.1"
  },
  "keywords": ["node-red"],
  "node-red": {
    "nodes": {
      "volttron": "volttron.js"
    }
  }
}
Then run npm install while in the volttron directory. You will need to restart Node-RED for the node to be discovered.
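A minimal sketch of those two steps, assuming the default ~/.node-red user directory:
# install the dependencies declared in the package.json above
cd ~/.node-red/node_modules/volttron
npm install
# then restart Node-RED so the new node is discovered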

Serverless.yml AWS Lambda in Windows 10: The symlinked folder is not in final package

My folder structure like this:
Root
|-- common
|   |-- search
|       |-- elastic_client
|       |   |-- elastic_client.py
|       |-- elastic_delete
|           |-- elastic_delete.py
|           |-- requirements.txt
|           |-- elastic_client (symlink -> ../elastic_client)
|-- function1
    |-- elastic_delete (symlink -> ../common/search/elastic_delete)
    |-- serverless.yml
serverless.yml:
functions:
  elastic-delete:
    handler: elastic_delete.lambda_handler
    module: elastic_delete
    package:
      include:
        - elastic_delete/**
When I do "sls deploy", the elastic_client folder is not getting deployed/not in the final .zip file, that means the elastic_client.py is not getting packed. This issue is only in Windows 10. In Mac, I don't see this issue.
I created symlinks with the command mklink.
I don't have a Windows machine, but typically the way I do this is with the packaging functionality of the framework: https://www.serverless.com/framework/docs/providers/google/guide/packaging/
At least on macOS, I just include the directories I want (relative to where the serverless.yml file is), and they are included in the deployed package.
Hope that helps.
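One way to inspect what actually lands in the artifact without deploying (a sketch, assuming the serverless CLI and a macOS/Linux shell; the grep pattern is just an example):
# build the package locally, then list the zip contents to check
# whether the symlinked elastic_client files were packed
sls package
unzip -l .serverless/*.zip | grep elastic_client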

Got "ZIP does not support timestamps before 1980" while deploying a Go Cloud Function on GCP via Triggers

Problem:
I am trying to deploy a function with this step in a second-level compilation (second-level-compilation.yaml):
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'functions',
         'deploy', '${_FUNCTION_NAME}',
         '--source', 'path/to/function',
         '--runtime', 'go111',
         '--region', '${_GCP_CLOUD_FUNCTION_REGION}',
         '--entry-point', '${_ENTRYPOINT}',
         '--env-vars-file', '${_FUNCTION_PATH}/.env.${_DEPLOY_ENV}.yaml',
         '--trigger-topic', '${_TRIGGER_TOPIC_NAME}',
         '--timeout', '${_FUNCTION_TIMEOUT}',
         '--service-account', '${_SERVICE_ACCOUNT}']
I get this error from Cloud Build in the Console:
Step #1: Step #11: ERROR: (gcloud.beta.functions.deploy) Error creating a ZIP archive with the source code for directory path/to/function: ZIP does not support timestamps before 1980
Here is the global flow:
The following step is in a first-level compilation (first-level-compilation.yaml), which is triggered by Cloud Build from a GitHub repository (via the GitHub application for Cloud Build):
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['-c', 'launch-second-level-compilation.sh ${_MY_VAR}']
The script "launch-second-level-compilation.sh" does specific operations based on ${_MY_VAR} and then launches a second-level compilation passing a lot of substitutions variables with "gcloud builds submit --config=second-level-compilation.yaml --substitutions=_FUNCTION_NAME=val,_GCP_CLOUD_FUNCTION_REGION=val,....."
Then, the "second-level-compilation.yaml" described at the beginning of this question is executed, using the substitutions values generated and passed through the launch-second-level-compilation.sh script.
The main idea here is to have a generic first-level-compilation.yaml in charge of calling a second-level compilation with specific dynamically generated substitutions.
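For reference, here is that submit call from the script written out as a command (the substitution values are placeholders, exactly as in the description above):
gcloud builds submit --config=second-level-compilation.yaml \
  --substitutions=_FUNCTION_NAME=val,_GCP_CLOUD_FUNCTION_REGION=val,.....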
Attempts / Investigations
As described in the issue "Cloud Container Builder, ZIP does not support timestamps before 1980", I tried to ls the files in the /workspace directory, but none of the files at the /workspace root have a strange date.
I changed the path/to/function from a relative path to /workspace/path/to/function, but with no success; unsurprisingly, since it resolves to the same directory.
Please make sure you don't have folders without files. For example:
|--dir
   |--subdir1
   |  |--file1
   |--subdir2
      |--file2
In this example dir doesn't directly contain any file, only subdirectories. During local deployment, the gcloud SDK puts dir into the tarball without copying its last-modified field, so the timestamp defaults to 1 Jan 1970, which causes problems with ZIP (the ZIP format only supports timestamps from 1980 onward).
As a possible workaround, just make sure every directory contains at least one file.
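A minimal sketch of that workaround, assuming a POSIX shell and that path/to/function is the --source directory from the deploy step:
# give every empty directory under the function source a placeholder file,
# so the packaging step records a valid (post-1980) timestamp for it
find path/to/function -type d -empty | while read -r d; do
  touch "$d/.keep"
done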

Include file from node_modules to nyc

I want to use nyc to generate code-coverage. I am building some of my projects into node_modules to use them in other projects. When writing tests I want to test the files inside node_modules and therefore I want to include files from node_modules.
Project-Example-Structure
1. foo (directory)
1.1 bar (directory)
1.1.1 node_modules (directory)
1.1.1.1 someFile.js // I want to include this!
1.1.2 foobar
1.1.2.1 foobar.js // this file works
1.1.3 .nycrc
.nycrc
{
  "reporter": [
    "html",
    "text"
  ],
  "all": true,
  "cwd": "../",
  "report-dir": "./bar/test-run/coverage",
  "include": [
    "./bar/**/foobar/*.js",
    "./bar/**/node_modules/*.js"
  ]
}
Execute in terminal
nyc mocha
Explanation
nyc uses the .nycrc. cwd means change working directory: I want to be able to include files from the parent directory. Sadly, include does not seem to be able to use "../".
Inside the include flag I am specifying which files should be included:
"./bar/foobar/foobar.js" somehow does not work.
But: "./bar/**/foobar/foobar.js" includes foobar.js.
Expected behaviour
someFile.js should be included. foobar.js should be included.
Observed behaviour
someFile.js is not included. foobar.js is included.
Environment
MacOS Sierra
nyc 11.8.0
You have to modify your config file with:
{
  "include": [
    "node_modules/**/<fileName>.js"
  ],
  "excludeNodeModules": false
}
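With excludeNodeModules set to false and the file listed under include, re-running the command from the question should now pick up the node_modules file:
nyc mocha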

qooxdoo: Building scss using scss.py

I tried to compile scss to css with scss.py.
I finally found that I had to create the folder structure qooxdoo-3.0-sdk/tool/pylib/scss/sass/frameworks and copy qooxdoo-3.0-sdk/framework/source/resource/qx/mobile/scss/* into it.
Do I have to add some path reference?
"compile-css" :
{
"let" :
{
"SCSS_CMD" : "${PYTHON_CMD} ${QOOXDOO_PATH}/tool/bin/scss.py"
},
"shell" :
{
"command" :
[
"${SCSS_CMD} --output=${QOOXDOO_PATH}/framework/source/qx/mobile/css/ios.css ${QOOXDOO_PATH}/framework/source/resource/qx/mobile/scss/ios.scss"
]
}
}
I'm not sure what you mean by "path reference". Normally you won't have to create the sass/frameworks directory path and copy files manually. You would be working and generating files in your app directories only.
Can you provide more context and what you are trying to achieve? :)
I assume you created a mobile app (./qooxdoo-3.0-sdk/create-application.py -n myApp -t mobile). This already provides you with a watch job for scss (watch-scss) [1] in your config.json, so there you can see how we use tool/bin/scss.py [2]. This is also covered in the dedicated manual page, which you might have found already [3].
[1] http://manual.qooxdoo.org/3.0/pages/tool/generator/default_jobs_actions.html#watch-scss
[2] https://github.com/qooxdoo/qooxdoo/blob/master/component/skeleton/mobile/config.tmpl.json
[3] http://manual.qooxdoo.org/3.0/pages/mobile/theming.html
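As a sketch of that flow (myApp is just an example name): create the mobile skeleton, then run the generator's watch-scss job from the application directory, which invokes tool/bin/scss.py for you.
# create a mobile skeleton app; its config.json ships with a watch-scss job
./qooxdoo-3.0-sdk/create-application.py -n myApp -t mobile
cd myApp
# watch and compile the app's scss via tool/bin/scss.py
./generate.py watch-scss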
