I have written Terraform to create a Lambda function in AWS.
This includes specifying my zipped Python code.
Running from the command line on my own machine, all goes well:
the terraform apply action uploads my zip to AWS and uses it to create the Lambda.
Key section of code:
resource "aws_lambda_function" "meta_lambda" {
  filename         = "get_resources.zip"
  source_code_hash = filebase64sha256("get_resources.zip")
  .....
Now, to get this into other environments, I have to push my Terraform via Azure DevOps.
However, when I try to build in DevOps, I get the following:
Error: Error in function call

  on main.tf line 140, in resource "aws_lambda_function" "meta_lambda":
 140:   source_code_hash = filebase64sha256("get_resources.zip")

Call to function "filebase64sha256" failed: no file exists at
get_resources.zip.
I have a feeling that I am missing a key concept here, as I can see the .zip in the repo - so I do not understand how it is not found by the build.
Any hints or clues as to what I am doing wrong are gratefully welcome.
Chaps, I'm afraid that I may have just been in over my head here - new to Terraform & DevOps!
I had a word with our (more) tech folks and they have sorted this.
The reason I think yours is failing is because the "Tar Terraform" step
needs to use a different command line so that the zip file gets included
in the artifacts. The original step only archived the Terraform files:
tar -cvpf terraform.tar .terraform .tf tfplan
and it was changed to archive the whole working directory, zip included:
tar --recursion -cvpf terraform.tar --exclude='/.git' --exclude='.gitignore' .
.. if that means anything to you!
Whatever they did, it works!
As there is a bounty on this, I'm still going to assign it, as I am grateful for the input!
Sorry if this was a bit of a newbie error.
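If your pipeline copies files into the artifact instead of using tar, the same idea should apply - make sure the zip matches the copy patterns. A hypothetical sketch (the task inputs and patterns here are assumptions, not what our pipeline actually uses):
# Copy the Terraform files AND the Lambda zip into the artifact staging folder.
- task: CopyFiles@2
  inputs:
    contents: |
      **/*.tf
      **/*.zip
    targetFolder: '$(Build.ArtifactStagingDirectory)'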
You can try building your package with the Terraform AWS Lambda build module, as it has been very useful for this process: Terraform Lambda build module.
According to the documentation example, the filebase64sha256("get_resources.zip") call in the source_code_hash argument needs to be enclosed in double quotes.
You can refer to this document for details.
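If you go the module route, a minimal sketch of what that can look like (argument names per the terraform-aws-modules/lambda/aws README; the function name, runtime, handler and source directory are assumptions):
module "meta_lambda" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "meta_lambda"            # assumed name
  handler       = "get_resources.handler"  # assumed module.function entry point
  runtime       = "python3.8"              # assumed runtime

  # The module builds the zip from this directory for you, so there is
  # no pre-built zip to lose between pipeline stages.
  source_path = "${path.module}/src/get_resources"
}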
I am a newbie in Go and go-swagger. I am following the steps in the Simple Server tutorial on goswagger.io.
I am using Ubuntu 18.04, swagger v0.25.0 and go 1.15.6.
Following the same steps, there are a few differences in the files generated. For instance, goswagger.io's version has find_todos_okbody.go and get_okbody.go in models, but mine does not. Why is that so?
Link to screenshot of my generated files
Link to screenshot of generated files by goswagger.io
Starting the server as written in the tutorial, go install ./cmd/todo-list-server/, gives me the following error. Can anyone please help with this?
# my_folder/swagger-todo-list/restapi
restapi/configure_todo_list.go:41:8: api.TodosGetHandler undefined (type *operations.TodoListAPI has no field or method TodosGetHandler)
restapi/configure_todo_list.go:42:6: api.TodosGetHandler undefined (type *operations.TodoListAPI has no field or method TodosGetHandler)
The first step in the goswagger.io todo-list is swagger init spec .... Which directory should I run this command in? I ran it in a newly created folder in my home directory. However, the page shows the path as ~/go/src/github.com/go-swagger/go-swagger/examples/tutorials/todo-list. I am not sure whether I should use go get ..., git clone ..., or create those folders. Can someone advise me?
Thanks.
This is likely the documentation lagging behind the version of the code that you are running. As long as it compiles, the specific files the tool generates aren't so crucial.
This is a compilation error. When you do go install foo, it will try to build the foo package as an executable and then move that to your GOPATH/bin directory. It seems that the generated code in restapi/configure_todo_list.go doesn't match the generated operations code.
All you need to run this tutorial yourself is an empty directory and the swagger tool (not its source code). You run the commands from the root of this empty project. In order not to run into GOPATH problems I would initialise a module with go mod init todo-list-example before doing anything else.
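Something like this (a minimal sketch; the directory and module names are just the ones suggested above):
mkdir todo-list-example && cd todo-list-example
go mod init todo-list-example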
Note that while the todo-list example code exists inside the go-swagger source, it's there just for documenting example usage and output.
What I would advise for #2 is to make sure you're using a properly released version of go-swagger, rather than installing from the latest commit (which happens when you just do a go get), as I have found that to be occasionally unstable.
Next, re-generate the entire server, but make sure you also regenerate restapi/configure_todo_list.go by passing --regenerate-configureapi to your swagger generate call. This file isn't always refreshed because you're meant to modify it to configure your app, and if you changed versions of the tool it may be different and incompatible.
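Concretely, the regeneration would look something like this (the -A flag and spec path are taken from the tutorial commands elsewhere in this thread):
swagger generate server -A todo-list -f ./swagger.yml --regenerate-configureapi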
If after that you still get the compilation error, it may be worth submitting a bug report at https://github.com/go-swagger/go-swagger/issues.
Thanks @EzequielMuns. The errors in #2 went away after I ran go get -u -f ./... as stated in
...
For this generation to compile you need to have some packages in your GOPATH:
* github.com/go-openapi/runtime
* github.com/jessevdk/go-flags
You can get these now with: go get -u -f ./...
I think it's an error in the swagger code generation. You can do the following to fix it:
delete the file configure_todo_list.go;
regenerate the code:
# swagger generate server -A todo-list -f ./swagger.yml
Then you can run go install ./cmd/todo-list-server/ and it will succeed.
I have a boilerplate template saved locally. How do I create a new service using it? I tried the command below, but it did not work:
serverless create --template-path '.\Boiler plate\' --name UserRegistration
I got the following error:
TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string. Received undefined
at validateString (internal/validators.js:120:11)
at Object.join (path.js:375:7).....
.........
None of the solutions I found online worked for me.
The error says the path argument of the serverless command is undefined. On the serverless create documentation page, there is an example that says:
serverless create --template-path path/to/my/template/folder --path path/to/my/service --name my-new-service
This will copy the path/to/my/template/folder folder into path/to/my/service and rename the service to my-new-service.
In order to solve your problem, you need to provide a valid template-path pointing to a local Serverless template, and provide a target path using --path into which your template will be copied; note that --path should point to a directory for the new service, not a .yml file. So your command will probably look like this:
serverless create --template-path '.\Boiler plate' --path path/to/my/service --name UserRegistration
Note: I haven't adjusted '.\Boiler plate\' in this command other than dropping the trailing backslash - are you sure that backslash before the closing quote was correct?
I have added versioned: true in the catalog.yml file of the hello_world tutorial.
example_iris_data:
  type: pandas.CSVDataSet
  filepath: data/01_raw/iris.csv
  versioned: true
Then when I used kedro run to run the tutorial, it gave the error below:
VersionNotFoundError: Did not find any versions for CSVDataSet
May I know the right way to do versioning for the iris.csv file? Thanks!
Try versioning one of the downstream outputs. For example, add this entry in your catalog.yml, and run kedro run:
example_train_x:
  type: pandas.CSVDataSet
  filepath: data/02_intermediate/example_iris_data.csv
  versioned: true
And you will see an example_iris_data.csv directory (not a file) under data/02_intermediate. The reason example_iris_data gives you an error is that it's the starting data and there's already an iris.csv in data/01_raw, so Kedro cannot create the data/01_raw/iris.csv/ directory because of the name conflict with the existing iris.csv file.
Hope this helps :)
The reason for the error is that when Kedro tries to load the dataset, it looks for a file in data/01_raw/iris.csv/<load_version>/iris.csv and, of course, cannot find such a path. So if you really want to enable versioning for your input data, you can move iris.csv like:
mv data/01_raw/iris.csv data/01_raw/iris.csv_tmp
mkdir -p data/01_raw/iris.csv/<put_some_timestamp_here>
mv data/01_raw/iris.csv_tmp data/01_raw/iris.csv/<put_some_timestamp_here>/iris.csv
You wouldn't need to do that for any intermediate data, as these path manipulations are done by Kedro automatically when it saves a dataset (but not on load).
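The version directory name has to match the timestamp format Kedro generates; a hypothetical example with a made-up timestamp (check a directory produced by a versioned save for the exact format your Kedro version uses):
mv data/01_raw/iris.csv data/01_raw/iris.csv_tmp
mkdir -p "data/01_raw/iris.csv/2021-01-20T00.00.00.000Z"
mv data/01_raw/iris.csv_tmp "data/01_raw/iris.csv/2021-01-20T00.00.00.000Z/iris.csv"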
I need to run Stanza NER on a platform without any access to the external network. The code stanza.download('en') fails. Running without the download function gives me an exception:
Exception: Resources file not found at: \home\stanza_resources\resources.json. Try to download the model again
Is there a way to download and cache all the required modules in a resources directory and point the Stanza pipeline to that directory?
Thanks
It looks like both download and the Pipeline class take a directory argument, dir.
So the code below works:
stanza.download('en', dir='resources/', processors={ner_processor: package})
nlp_pipeline = stanza.Pipeline('en', dir='resources/', processors={ner_processor: package})
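For an end-to-end offline workflow, a minimal sketch (the processor list and the resources/ path are assumptions; run the download step on a machine with network access, then copy resources/ to the offline host):
import stanza

# On a machine WITH network access: cache the English models locally.
stanza.download('en', dir='resources/')

# On the offline machine, after copying resources/ over: point the
# pipeline at the local cache instead of the default ~/stanza_resources.
nlp = stanza.Pipeline('en', dir='resources/', processors='tokenize,ner')
doc = nlp("Stanza was built at Stanford University.")
print([(ent.text, ent.type) for ent in doc.ents])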
Problem:
I am trying to deploy a function with this step in a second-level compilation (second-level-compilation.yaml):
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'functions',
         'deploy', '${_FUNCTION_NAME}',
         '--source', 'path/to/function',
         '--runtime', 'go111',
         '--region', '${_GCP_CLOUD_FUNCTION_REGION}',
         '--entry-point', '${_ENTRYPOINT}',
         '--env-vars-file', '${_FUNCTION_PATH}/.env.${_DEPLOY_ENV}.yaml',
         '--trigger-topic', '${_TRIGGER_TOPIC_NAME}',
         '--timeout', '${_FUNCTION_TIMEOUT}',
         '--service-account', '${_SERVICE_ACCOUNT}']
I get this error from Cloud Build using the Console.
Step #1: Step #11: ERROR: (gcloud.beta.functions.deploy) Error creating a ZIP archive with the source code for directory path/to/function: ZIP does not support timestamps before 1980
Here is the global flow:
The following step is in the first-level compilation (first-level-compilation.yaml), which is triggered by Cloud Build from a GitHub repository (via the Cloud Build GitHub App):
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['-c', 'launch-second-level-compilation.sh ${_MY_VAR}']
The script launch-second-level-compilation.sh does specific operations based on ${_MY_VAR} and then launches the second-level compilation, passing a lot of substitution variables:
gcloud builds submit --config=second-level-compilation.yaml --substitutions=_FUNCTION_NAME=val,_GCP_CLOUD_FUNCTION_REGION=val,.....
Then the second-level-compilation.yaml described at the beginning of this question is executed, using the substitution values generated and passed through the launch-second-level-compilation.sh script.
The main idea here is to have a generic first-level-compilation.yaml in charge of calling a second-level compilation with specific dynamically generated substitutions.
Attempts / Investigations
As described in the issue "Cloud Container Builder, ZIP does not support timestamps before 1980", I tried to ls the files in the /workspace directory, but none of the files at the /workspace root have a strange date.
I changed path/to/function from a relative path to /workspace/path/to/function, but with no success - unsurprisingly, as it ends up being the same directory.
Please make sure you don't have folders without files. For example:
|--dir
|  |--subdir1
|  |  |--file1
|  |--subdir2
|     |--file2
In this example, dir doesn't directly contain any files, only subdirectories. During local deployment, the gcloud SDK puts dir into the tarball without copying its last-modified field, so it is set to 1 Jan 1970, which causes problems with ZIP.
As a possible workaround, just make sure every directory contains at least one file.
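A quick way to guarantee that before deploying (a sketch; path/to/function is the source directory from the question, and .gitkeep is just a conventional placeholder name):
# Drop an empty placeholder file into every directory under the source
# tree, so each directory directly contains at least one file.
find path/to/function -type d -exec touch {}/.gitkeep \;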