Why does terraformer not find the plugin? - Windows

I am new to the world of Terraform. I am trying to use terraformer on a GCP project, but keep getting a "plugin not found" error:
terraformer import google --resources=gcs,forwardingRules,httpHealthChecks --connect=true --regions=europe-west1,europe-west4 --projects=myproj
2021/02/12 08:17:03 google importing project myproj region europe-west1
2021/02/12 08:17:04 google importing... gcs
2021/02/12 08:17:05 open \.terraform.d/plugins/windows_amd64: The system cannot find the path specified.
Prior to the above, I installed terraformer with choco install terraformer. I downloaded the Google provider, terraform-provider-google_v3.56.0_x5.exe, placed it in my terraform directory, and ran terraform init. I checked %APPDATA%\terraform.d\plugins\windows_amd64; the plugins\windows_amd64 directory did not exist, so I created it, dropped in the Google exe, and re-ran the init.
My main.tf was:
provider "google" {
  project = "myproj"
  region  = "europe-west2"
  zone    = "europe-west2-a"
}
Could it be the chocolatey install causing this issue? I have never used it before but the alternative install looks daunting!

If you install from Chocolatey, it seems terraformer looks for the provider files at C:\.terraform.d\plugins\windows_amd64. If you create those directories and drop the terraform-provider-xxx.exe file in there, it will be picked up.
I would also suggest running it with the --verbose flag.
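As a concrete sketch of that fix (POSIX shell is used here for illustration; on Windows with the Chocolatey install the root would be C:\.terraform.d\plugins\windows_amd64, and the provider filename is the one from the question):

```shell
# Create the plugin directory terraformer expects, then drop the provider
# binary into it. PLUGIN_DIR is a stand-in for the platform-specific path.
PLUGIN_DIR="${PLUGIN_DIR:-$HOME/.terraform.d/plugins/windows_amd64}"
mkdir -p "$PLUGIN_DIR"
# cp terraform-provider-google_v3.56.0_x5.exe "$PLUGIN_DIR/"  # then re-run terraformer
ls -d "$PLUGIN_DIR"
```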

Remember to execute terraform init.
Once you have installed terraformer, check the following:
Create the versions.tf file. Example versions.tf:
terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
    }
  }
  required_version = ">= 0.13"
}
Execute terraform init
Start the import process, something like:
terraformer import aws --profile [my-aws-profile] --resources="*" --regions=eu-west-1
Here --resources="*" will get everything from aws, but you could specify route53 or other resources:
terraformer import aws --profile [my-aws-profile] --resources="route53" --regions=eu-west-1
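Scripted for the asker's GCP case, the steps above look roughly like this (the project and region values are placeholders, and the terraform/terraformer invocations are left as comments):

```shell
# Write a minimal versions.tf pinning the google provider, then init and import.
cat > versions.tf <<'EOF'
terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
    }
  }
  required_version = ">= 0.13"
}
EOF
# terraform init
# terraformer import google --resources=gcs --regions=europe-west1 --projects=myproj
echo "wrote versions.tf"
```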

The daunting instructions worked!
Run git clone <terraformer repo>
Run go mod download
Run go build -v to build with all providers, OR build with a single provider: go run build/main.go {google,aws,azure,kubernetes, etc.}
Run terraform init against a versions.tf file to install the plugins required for your platform. For example, if you need plugins for the google provider, versions.tf should contain:
terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
    }
  }
  required_version = ">= 0.13"
}
making sure to run that freshly built binary and not the Chocolatey one.
A note on go mod download: the -x flag gives output while that command runs; otherwise you might think nothing is happening.

Related

Golang google cloud functions using go modules fail build when importing subdirectories

I am trying to deploy a Google Cloud Function using Go modules, and I am having a problem with it picking up code in subdirectories when using Go modules.
It appears that inside the gcloud functions deploy build, it is duplicating the package name. So, for instance, locally (and what I expect) is this import:
import "github.com/lwaddicor/gofunctionmodules/demoa"
However, it will only work in the gcloud functions deploy if set to:
import "github.com/lwaddicor/gofunctionmodules/gofunctionmodules/demoa"
If anyone has any ideas what is happening here, it would be much appreciated. Thanks
Structure:
demoa/demo.go
func.go
go.mod

go.mod:
module github.com/lwaddicor/gofunctionmodules
go 1.13

func.go:
package gofunctionmodules
import (
    "net/http"

    // This is what I would expect, and it builds locally, but it won't deploy
    "github.com/lwaddicor/gofunctionmodules/demoa"
    // With this one it deploys, but locally it won't build
    //"github.com/lwaddicor/gofunctionmodules/gocloudfunctions/demoa"
)

// Demo demonstrates the issue with package management in cloud functions
func Demo(w http.ResponseWriter, r *http.Request) {
    w.Write(demoa.Demo())
}
demoa/demo.go:
package demoa

// Demo returns a demo string
func Demo() []byte {
    return []byte("demo")
}
Then deploying a new cloud function using:
gcloud functions deploy demofunction --entry-point Demo --project my-demo-project --runtime go113 --trigger-http --allow-unauthenticated
Deploying function (may take a while - up to 2 minutes)...failed.
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Build failed: {"error":{"errorType":2,"canonicalCode":2,"errorId":"32633c29","errorMessage":"
serverless_function_app imports
github.com/lwaddicor/gofunctionmodules/gofunctionmodules imports
github.com/lwaddicor/gofunctionmodules/gocloudfunctions/demo: git ls-remote -q origin in /layers/google.go.gomod/gopath/pkg/mod/cache/vcs/48848540be122d75f1d2e6549925c34394ce402803530dc7962ab1442de3b180: exit status 128:
fatal: could not read Username for 'https://github.com': terminal prompts disabled
Confirm the import path was entered correctly.
If this is a private repository, see https://golang.org/doc/faq#git_https for additional information."}}
Edit: removed the GitHub link, as it just downloaded the dependency from the public GitHub repo. Changed the demo package to demoa to prevent using a cached GitHub copy.

DevOps: AWS Lambda .zip with Terraform

I have written Terraform to create a Lambda function in AWS.
This includes specifying my Python code, zipped.
Running from the command line on my tech box, all goes well.
The terraform apply action sees my zip moved into AWS and used to create the Lambda.
Key section of code :
resource "aws_lambda_function" "meta_lambda" {
filename = "get_resources.zip"
source_code_hash = filebase64sha256("get_resources.zip")
.....
Now, to get this into other environments, I have to push my Terraform via Azure DevOps.
However, when I try to build in DevOps, I get the following :
Error: Error in function call
  on main.tf line 140, in resource "aws_lambda_function" "meta_lambda":
  140:   source_code_hash = filebase64sha256("get_resources.zip")
Call to function "filebase64sha256" failed: no file exists at get_resources.zip.
I have a feeling that I am missing a key concept here, as I can see the .zip in the repo, so I do not understand how it is not found by the build?
Any hints/clues as to what I am doing wrong are gratefully welcome.
Chaps, I'm afraid that I may have been in over my head here: new to Terraform & DevOps!
I had a word with our (more) tech folks and they have sorted this.
The reason I think yours is failing is because the "Tar Terraform" step needs to use a different command line so it gets the zip file included in the artifacts. Instead of
tar -cvpf terraform.tar .terraform .tf tfplan
it should be
tar --recursion -cvpf terraform.tar --exclude='/.git' --exclude='.gitignore' .
...if that means anything to you!
Whatever they did, it works!
As there is a bounty on this, I am still going to assign it, as I am grateful for the input!
Sorry if this was a bit of a newbie error.
You can try building your package with the Terraform AWS Lambda build module, as it has been very useful for this process: Terraform Lambda build module.
According to the document example, in the source_code_hash argument, filebase64sha256("get_resources.zip") needs to be enclosed in double quotes.
You can refer to this document for details.
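One common cause (an assumption here, not necessarily the asker's exact issue) is that the CI pipeline runs terraform from a different working directory than the checkout, so relative paths stop resolving. Anchoring the zip path to path.module sidesteps that; a sketch, reusing the resource and filename from the question:

```shell
# Write the anchored variant of the resource; path.module resolves relative
# to the directory containing the .tf file, regardless of where terraform runs.
cat > lambda.tf <<'EOF'
resource "aws_lambda_function" "meta_lambda" {
  filename         = "${path.module}/get_resources.zip"
  source_code_hash = filebase64sha256("${path.module}/get_resources.zip")
  # ...
}
EOF
grep -c 'path.module' lambda.tf
```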

How can I deploy beego app to Heroku

I'm creating a web app in beego that I need to have running on Heroku.
I can get it running normally if I just specify the binary for the app in the Procfile. But I want to have swagger available in the app, so I need to use bee to start the app, like:
bee run -downdoc=true -gendoc=true
so that it automatically creates and downloads all the swagger-related icons, html, etc.
But if I specify that as the command in the Procfile (even after adding bee to vendor so that it is available), it fails because the app doesn't have the go command available at runtime. The exact error is this:
0001 There was an error running 'go version' command: exec: "go": executable file not found in $PATH
How can I bypass this without adding the whole swagger specification to heroku (and github, since it is a repository)?
You cannot run the bee command on Heroku, because it is an executable program.
But you can run a beego app on Heroku by adding the project dependencies. To do that, you should use a tool like https://github.com/kardianos/govendor.
1. After installing govendor, try the following steps in your project folder:
$ govendor init
This command will create the ./vendor/vendor.json file in the current directory.
{
  "comment": "https://github.com/ismailakbudak/beego-api-example",
  "heroku": {
    "install": [ "./..." ],
    "goVersion": "go1.11"
  },
  "ignore": "test",
  "package": [],
  "rootPath": "reporting-api"
}
Add the heroku tag as in the example above. There is information about this configuration in the Heroku docs here: https://devcenter.heroku.com/articles/go-dependencies-via-govendor#build-configuration
2. After this, add the beego package dependency with this command:
$ govendor fetch github.com/astaxie/beego
It will download the beego packages to the ./vendor directory.
3. Lastly, you should configure the listen ports and set the beego runmode to prod for Heroku in the main.go file.
To deploy your app without problems, the default config must be runmode = prod in conf/app.conf. I tried setting the default to dev and changing it from Heroku config vars as below, but it compiles the packages before the runmode is set and fails with the message panic: you are running in dev mode.
func main() {
    log.Println("Env $PORT :", os.Getenv("PORT"))
    if os.Getenv("PORT") != "" {
        port, err := strconv.Atoi(os.Getenv("PORT"))
        if err != nil {
            log.Fatal("$PORT must be set to a number: ", err)
        }
        log.Println("port : ", port)
        beego.BConfig.Listen.HTTPPort = port
        beego.BConfig.Listen.HTTPSPort = port
    }
    if os.Getenv("BEEGO_ENV") != "" {
        log.Println("Env $BEEGO_ENV :", os.Getenv("BEEGO_ENV"))
        beego.BConfig.RunMode = os.Getenv("BEEGO_ENV")
    }
    beego.BConfig.WebConfig.DirectoryIndex = true
    beego.BConfig.WebConfig.StaticDir["/"] = "swagger"
    beego.Run()
}
You can use BEEGO_ENV=dev bee run to continue developing your app without changing it again.
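With the dependencies vendored, the Procfile then just runs the compiled binary rather than bee; the binary name here is an assumption based on the rootPath in the vendor.json example above:

```
web: reporting-api
```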

K8s Go client library fails to find package on go get

We wrote some Go code to talk to our Kubernetes cluster and fetch the IP of an exposed Service. We do it like so:
(import "gopkg.in/kubernetes/kubernetes.v1/pkg/client/restclient")
(import kubectl "gopkg.in/kubernetes/kubernetes.v1/pkg/client/unversioned")
svc, err := c.Services(k8sNS).Get(svcName)
if err != nil {
panic(l.Errorf("Could not retrieve svc details. %s", err.Error()))
}
svcIP := svc.Status.LoadBalancer.Ingress[0].IP
go get works fine, and our script executes when we do go run ..., and everybody is happy. Now, as of yesterday (at the time this question was posted), on the same script, go get fails. The error is like so:
[09.07.2016 10:56 AM]$ go get
package k8s.io/kubernetes/pkg/apis/authentication.k8s.io/install: cannot find package "k8s.io/kubernetes/pkg/apis/authentication.k8s.io/install" in any of:
/usr/local/go/src/k8s.io/kubernetes/pkg/apis/authentication.k8s.io/install (from $GOROOT)
/home/ckotha/godir/src/k8s.io/kubernetes/pkg/apis/authentication.k8s.io/install (from $GOPATH)
We have not specifically used the authentication package in our code. Are we importing the kubernetes libraries correctly? Is there another way to do this?
I ran ls on $GOPATH/src/k8s.io/kubernetes/pkg/apis/ and found this:
:~/godir/src/k8s.io/kubernetes/pkg/apis
[09.07.2016 10:53 AM]$ ls
abac apps authentication authorization autoscaling batch certificates componentconfig extensions imagepolicy OWNERS policy rbac storage
It looks like a package you imported has changed.
You can update existing repositories:
go get -u
The -u flag instructs get to use the network to update the named
packages and their dependencies. By default, get uses the network to
check out missing packages but does not use it to look for updates to
existing packages.
You do use gopkg.in to pin the version to v1, but I think you want to be more specific, e.g., v1.3.6 (EDIT: this won't work, because gopkg.in doesn't permit package selectors more specific than the major version).
Alternatively, a good way to ensure code stays the same is to compile your binary and execute that, instead of using go run.

Boxen: download archives from an S3 bucket

I am attempting to download a file from a private S3 bucket through Boxen puppet scripts. However, I haven't found any examples of how to do so. All I found were readmes discussing the environment variables (which I have set).
But how can I download an archive from S3 and install it locally? Any good examples? Is this done through homebrew or a puppet script?
Thanks
puppet-minecraft does this with a publicly reachable AWS S3 bucket. Perhaps it can help you.
Check out the manifest init.pp, where you'll find this snippet showing the URL that grabs the item from AWS S3.
package { 'Minecraft':
  source   => 'https://s3.amazonaws.com/Minecraft.Download/launcher/Minecraft.dmg',
  provider => 'appdmg',
}
Minecraft isn't the only example. I found others in my repo by running this on my Mac:
mdfind -onlyin /opt/boxen/repo/shared s3 | grep manifest
/opt/boxen/repo/shared/vmware_fusion/manifests/init.pp
/opt/boxen/repo/shared/ruby/manifests/version.pp
/opt/boxen/repo/shared/minecraft/manifests/init.pp
/opt/boxen/repo/shared/java/manifests/init.pp
/opt/boxen/repo/shared/istatmenus4/manifests/init.pp
/opt/boxen/repo/shared/heroku/manifests/init.pp
/opt/boxen/repo/shared/github_for_mac/manifests/init.pp
