Goland IDE build and run with aws-vault - go

Trying to google anything for GoLand vs. Golang is proving to be quite hard. Everything I search for comes back about code or switching profiles, and that is all already handled.
I had a project that was taking in JSON and processing the data. I was able to use the Run and Debug buttons to build and debug my Go code with the default configuration.
That changed: I am now pulling data files from S3, which requires authenticating to AWS, and we use aws-vault for that.
The issue I am running into is that this configuration has no additional settings. There is a checkbox for Run after build, but no way for me to say "run with aws-vault".
Now I have to uncheck Run after build, add the flags
-gcflags="-N -l" -o app
and then attach to that process with Shift + Option + fn + F5.
What I am looking for is a way to run aws-vault exec user -- go ... within the IDE, so that I do not have a build step, a run step, and then a manual attach to the process.

I figured out what I feel is at least a better solution, one that allows you to run any code (including CLI tools) that uses an AWS SDK.
I am on a Mac, so osascript works for me, but the prompt can be whatever your OS supports; or, if you have a YubiKey, you can use --prompt=ykman.
In ~/.aws there are two files, config and credentials, which tell the SDK how to authenticate.
To start, in ~/.aws/config there is a profile for each role that is needed. default is the role that you assume; all the others are roles that the code escalates to.
[default]
output=json
region=<your region>
mfa_serial=arn:aws:iam::<you>
[profile dev-base]
source_profile=default
role_arn=arn:aws:iam::<account to escalate to>
[profile staging-base]
source_profile = default
role_arn = arn:aws:iam::<account to escalate to>
[dev]
region = <your region>
[staging]
region = <your region>
Note: one oddity is that I had to add the dev and staging entries to this file with a region so that those profile names exist.
This may not be needed if you are not using Java; you could put the full role in the previous file (but I also use Java, so this is my setup). Then, in ~/.aws/credentials:
[dev]
ca_bundle = /Users/<username>/.aws/cert.pem
credential_process=aws-vault exec dev-base -j --prompt=osascript
[staging]
ca_bundle = /Users/<username>/.aws/cert.pem
credential_process=aws-vault exec staging-base -j --prompt=osascript
Note: an oddity here is that ca_bundle is specified. Something in Go was not happy with the AWS_CA_BUNDLE environment variable, and this appears to work.
Now when the code is run, a pop-up appears asking for an MFA token.
Also, when running any AWS CLI command you can pass the profile you want with --profile (e.g. aws s3 ls --profile dev) and the pop-up will appear.
Editing these files manually when using aws-vault might not be the best way to do it, but at the moment this is how we manage them, and it seems to give the best workflow.
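With the profiles wired up this way, nothing aws-vault specific is needed in the Go code or the GoLand run configuration: pointing the SDK at the profile is enough, either by setting AWS_PROFILE=dev in the run configuration's environment or explicitly in code. A minimal sketch, assuming the AWS SDK for Go v2 and the dev profile from the files above:
package main

import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    // Loading the shared "dev" profile triggers its credential_process entry,
    // which runs aws-vault and shows the osascript MFA prompt when needed.
    cfg, err := config.LoadDefaultConfig(context.TODO(),
        config.WithSharedConfigProfile("dev"),
    )
    if err != nil {
        log.Fatal(err)
    }

    // Any client built from this config authenticates through aws-vault.
    client := s3.NewFromConfig(cfg)
    out, err := client.ListBuckets(context.TODO(), &s3.ListBucketsInput{})
    if err != nil {
        log.Fatal(err)
    }
    for _, b := range out.Buckets {
        log.Println(*b.Name)
    }
}
With that in place the normal Run and Debug buttons work again; there is no separate build step or manual attach.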

Related

Google cloud app engine - How to edit code using SSH and debug-mode

I am trying to debug an application I have deployed to google cloud app engine. Reading the docs, I figured out that in order to do so I have to enter the debug mode using
gcloud app --project [Project ID] instances enable-debug
Afterwards I am able to SSH into my instance and get root access. Now I would like to edit some of the files. However, trying to use vim or nano does not seem to work.
Is there a way to edit those files without re-deploying the entire app?
Once you SSH into the App Engine instance and open a shell into the Docker container, you'll need to download the package list before installing nano or vim:
apt-get update && apt-get install nano
Then you can edit your app's files (which are in /app):
nano composer.json
The deployed app runs live code, so it is not generally feasible to edit it. Moreover, changes made to the running container are not permanent; in fact they are lost at the first restart.
You may find some information on the Debugging an Instance page.
Unrelated to the above, an actual command-line editor is offered in the cloud shell.

Parse Server S3 Adapter Deprecated

The Parse S3 Adapter's requirement of S3_ACCESS_KEY and S3_SECRET_KEY is now deprecated. It says to use the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. We have set up an AWS user with an access key ID, and we have our secret key as well. We have updated to the latest version of the adapter and removed our old S3_X_Key variables. Unfortunately, as soon as we do this we are unable to access, upload, or change files in our S3 bucket. The user does have access to our bucket's properties, and if we change it back to use the explicit S3_ACCESS_KEY and secret, everything works.
We are hosting on Heroku and haven't had any issues until now.
What else needs to be done to set this up?
This deprecation notice is very vague on how to fix this.
(link to notice: https://github.com/parse-server-modules/parse-server-s3-adapter#deprecation-notice----aws-credentials)
I did the following steps and it's working now:
Installed Amazon's CLI
http://docs.aws.amazon.com/cli/latest/userguide/installing.html
Configured CLI by creating a user and then creating key id and secret
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
Set the S3_BUCKET env variable
export S3_BUCKET=
Installed files adapter using command
npm install --save @parse/s3-files-adapter
In my parse-server's index.js added the files adapter
var S3Adapter = require('@parse/s3-files-adapter');
var s3Adapter = new S3Adapter();
var api = new ParseServer({
  appId: 'my_app',
  masterKey: 'master_key',
  filesAdapter: s3Adapter
});
Arjav Dave's answer below is best if you are using AWS or a hosting solution where you can log in to the server and run aws configure there, or if you are running everything locally.
However, I was asking about Heroku, and this applies to any server environment where you can set environment variables.
Really it comes down to just a few steps. If you have a previous version set up, you are going to switch your files adapter to just read:
filesAdapter: 'parse-server-s3-adapter',
(or whatever your npm-installed package is called; some are using the @parse/... one)
Take out the require statement and don't create any instance variables of S3Adapter or anything like that in your index.js.
Then create the config vars on Heroku, either in the dashboard or with the CLI: heroku config:set AWS_ACCESS_KEY_ID=abc and heroku config:set AWS_SECRET_ACCESS_KEY=abc
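As in the earlier answer, the bucket name can also come from the S3_BUCKET variable, which on Heroku is just another config var (the value below is a placeholder):
heroku config:set S3_BUCKET=<your-bucket-name>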
Now run and test your uploading. All should be good.
The new adapter uses the environment variables for access and you just have to tell it what file adapter is installed in the index.js file. It will handle the rest. If this isn't working it'll be worth testing the IAM profile setup and making sure it's all working before coming back to this part. See below:
Still not working? Try running this example (edit sample.js to be your bucket when testing):
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/getting-started-nodejs.html
Completely lost and no idea where to start?
1 Get Your AWS Credentials:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/getting-your-credentials.html
2 Setup Your Bucket
https://transloadit.com/docs/faq/how-to-set-up-an-amazon-s3-bucket/
(follow the part on IAM users as well)
3 Follow IAM Best Practices
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
Then go back to the top of this posting.
Hope that helps anyone else that was confused by this.

How to push container to Google Container Registry (unable to create repository)

EDIT: I'm just going to blame this on platform inconsistencies. I have given up on pushing to the Google Cloud Container Registry for now, and have created an Ubuntu VM where I'm doing it instead. I have voted to close this question as well, for the reasons stated previously, and also as this should probably have been asked on Server Fault in the first place. Thanks for everyone's help!
Running $ gcloud docker push gcr.io/kubernetes-test-1367/myapp results in:
The push refers to a repository [gcr.io/kubernetes-test-1367/myapp]
595e622f9b8f: Preparing
219bf89d98c1: Preparing
53cad0e0f952: Preparing
765e7b2efe23: Preparing
5f2f91b41de9: Preparing
ec0200a19d76: Preparing
338cb8e0e9ed: Preparing
d1c800db26c7: Preparing
42755cf4ee95: Preparing
ec0200a19d76: Waiting
338cb8e0e9ed: Waiting
d1c800db26c7: Waiting
42755cf4ee95: Waiting
denied: Unable to create the repository, please check that you have access to do so.
$ gcloud init results in:
Welcome! This command will take you through the configuration of gcloud.
Settings from your current configuration [default] are:
[core]
account = <my_email>@gmail.com
disable_usage_reporting = True
project = kubernetes-test-1367
Your active configuration is: [default]
Note: this is a duplicate of Kubernetes: Unable to create repository, but I tried his solution and it did not help me. I've tried appending :v1, /v1, and using us.gcr.io
Edit: Additional Info
$ gcloud --version
Google Cloud SDK 116.0.0
bq 2.0.24
bq-win 2.0.18
core 2016.06.24
core-win 2016.02.05
gcloud
gsutil 4.19
gsutil-win 4.16
kubectl
kubectl-windows-x86_64 1.2.4
windows-ssh-tools 2016.05.13
$ gcloud components update
All components are up to date.
$ docker -v
Docker version 1.12.0-rc3, build 91e29e8, experimental
The first image push requires admin rights for the project. I had the same problem trying to push a new container to GCR for a team project, which I could resolve by updating my permissions.
You might also want to have a look at docker-credential-gcr. Hope that helps.
What version of gcloud and Docker are you using?
Looking at your requests, it seems as though the Docker client is not attaching credentials, which would explain the access denial.
I would recommend running gcloud components update and seeing if the issue reproduces. If it still does, feel free to reach out to us on gcr-contact at google.com so we can help you debug the issue and get your issue resolved.
I am still not able to push a docker image from my local machine, but authorizing a compute instance with my account and pushing an image from there works. If you run into this issue, I recommend creating a Compute Engine instance (for yourself), authorizing an account with gcloud auth that can push containers, and pushing from there. I have my source code in a Git repository that I can just pull from to get the code.
Thanks for adding your Docker version info. Does downgrading Docker to a more stable release (e.g. 1.11.2) help at all? Have you run 'docker-machine upgrade'?
It seems like you're trying to run gcloud docker push from a Google Compute Engine instance without a proper security scope of read/write access to Google Cloud Storage (which is where Google Container Registry stores your container images behind the scenes).
Try to create another instance, but this time with proper access scopes, i.e.:
gcloud compute --project "kubernetes-test-1367" instances create "test" --zone "us-east1-b" --machine-type "n1-standard-1" --network "default" --scopes default="https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management","https://www.googleapis.com/auth/devstorage.full_control" --image "/debian-cloud/debian-8-jessie-v20160629" --boot-disk-size "10" --boot-disk-type "pd-standard" --boot-disk-device-name "test-1"
Once you create the new instance, SSH into it and then try to re-run the gcloud docker push gcr.io/kubernetes-test-1367/myapp command.
I checked with
gcloud auth list
and saw that my application was the active account rather than my personal Google account. After setting
gcloud config set account example@gmail.com
I was able to push:
gcloud docker -- push eu.gcr.io/$PROJECT_ID/my-docker:v1
So I could continue with http://kubernetes.io/docs/hellonode/
I had a similar issue and it turned out that I had to enable billing for the project. When you have a new Google Cloud account you can enable only so many projects with billing. Once I did that it worked.
Also, this could be the cause of the problem (it was in my case):
Important: make sure the Compute Engine API is enabled for your project.
Source: https://pinrojas.com/2016/09/12/your-personal-kubernetes-image-repo-in-a-few-steps-gcr-io/
If anyone is still having this problem while trying to push a docker image to gcr, even though they've authenticated an account that should have the permission to do so, try running gcloud auth configure-docker and pushing again.
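For reference, gcloud auth configure-docker registers gcloud as a Docker credential helper, so afterwards a plain docker push works (the project and image names below are placeholders):
gcloud auth configure-docker
docker push gcr.io/<project-id>/<image>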

Running docker as non-root user OR running jenkins on tomcat as root user

I am trying to build a docker image using docker-maven plugin, and plan to execute the mvn command using jenkins. I have jenkins.war deployed on a tomcat instance instead of a standalone app, which runs as a non-root user.
The problem is that docker needs to be run as the root user, so the maven commands need to be run as root, and hence jenkins/tomcat needs to run as root, which is not good practice (although my non-root user is also a sudoer, so I guess it won't matter much).
So, bottom line, I see two solutions: either run docker as a non-root user (and I need help on how to do that)
OR
run jenkins as root (and I am not sure how to achieve that, as I changed the environment variable/config and it is still not switching to root).
Any advice on which solution to choose and how to implement it ?
The problem is that docker needs to be run as root user, so maven commands need to be run as root user,
No, a docker run can be done with a -u (--user) parameter in order to use a non-root user inside the container.
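For illustration, this runs the container's main process as uid/gid 1000 instead of root (alpine is just an arbitrary small image here):
docker run --rm --user 1000:1000 alpine id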
Either run docker as non-root user
Your user (on the host) needs to be part of the docker group; then you can run docker commands as that user.
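On most Linux hosts that is something like the following (log out and back in, or run newgrp docker, for it to take effect):
sudo usermod -aG docker $USER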
As commented, this is not very secure.
See:
"chrisfosterelli/dockerrootplease"
"Understanding how uid and gid work in Docker containers"
That last link ends with the following findings:
If there’s a known uid that the process inside the container is executing as, it could be as simple as restricting access to the host system so that the uid from the container has limited access.
The better solution is to start containers with a known uid using the --user flag (you can use a username also, but remember that it's just a friendlier way of providing a uid from the host's username system), and then limiting access to the uid on the host that you've decided the container will run as.
Because of how uids and usernames (and gids and group names) map from a container to the host, specifying the user that a containerized process runs as can make the process appear to be owned by different users inside vs outside the container.
Regarding that last point, you now have user namespace (userns) remapping (since docker 1.10, but I would advise 17.06, because of issue 33844).
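If you go the userns-remap route, it is a daemon-wide setting; a minimal sketch of /etc/docker/daemon.json (setting it to "default" makes Docker create and use a dockremap user; restart the daemon afterwards):
{
  "userns-remap": "default"
}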
I am also stuck on how to set up a docker build server.
Here's where I see ground truth right now...
Docker commands require root privileges
This is because if you can run arbitrary docker commands, you have the same powers as root on the host. (You can build a container running as root internally, with a filesystem mount to anywhere on the host, thus allowing any root action.)
The "docker" group is a big lie IMHO. It's effectively the same as making the members root.
The only way I can see to wrap docker with any kind of security for non-root use is to build custom bash scripts to launch very specific docker commands, then to carefully audit the security implications of those commands, then add those scripts to the sudoers file (granting passwordless sudo to non-root users).
In a world where we integrate docker into development pipelines (e.g. putting docker commands in Maven builds or allowing developers to make arbitrary changes to the build definitions for a docker build server), I have no idea how you maintain any security.
After a lot of searching and research debugging this issue over the last week, I found that the way to run a Maven docker container as non-root is to pass the user flag, e.g. -u 1000.
But for this to work correctly, the user needs to exist in the /etc/passwd file of the image.
To work around this you can mount the host's (Jenkins) /etc/passwd into the container and use a non-root user.
In the arguments of your docker run command, add the following to mount the correct volumes into the mvn image so that the host's non-root user gets mapped inside the Maven container:
-v /share:/share -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro -v "$HOME/.m2":/var/maven/.m2:z -w /usr/src/mymaven -e MAVEN_CONFIG=/var/maven/.m2 -e MAVEN_OPTS="-Duser.home=/var/maven"
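Put together, and only as a sketch (the --rm flag, the uid, the source mount, the image tag, and the Maven goals below are placeholders rather than part of the setup above), the invocation looks something like:
docker run --rm -u 1000 \
  -v "$PWD":/usr/src/mymaven \
  -v /share:/share -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro \
  -v "$HOME/.m2":/var/maven/.m2:z -w /usr/src/mymaven \
  -e MAVEN_CONFIG=/var/maven/.m2 -e MAVEN_OPTS="-Duser.home=/var/maven" \
  maven:3-jdk-8 mvn clean verify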
I know this might not be the most informative answer, but it should work for running an mvn container as non-root, specifically to run otj-embedded-pg for integration tests that pass fine locally but fail on a Jenkins server.
See this link OTJ_EMBEDDED_RUN_IN_CI_SERVER
While most of the posters on that thread suggest creating a new image, there is no need to do that: you can run the latest Maven docker image with the flags listed above and it works as it should.
Hope this helps somebody who might get stuck on this issue and saves them a few hours' work.

Accessing Meteor Settings in a Self-Owned Production Environment

According to Meteor's documentation, we can include a settings file through the command line to provide deployment-specific settings.
However, the --settings option seems to only be available through the run and deploy commands. If I am running my Meteor application on my own infrastructure - as outlined in the Running on Your Own Infrastructure section of the documentation - there doesn't seem to be a way to specify a deployment-specific settings file anywhere in the process.
Is there a way to access Meteor settings in a production environment, running on my own infrastructure?
Yes, include the settings contents in an environment variable named METEOR_SETTINGS. For example,
export METEOR_SETTINGS='{"privateKey":"MY_KEY", "public":{"publicKey":"MY_PUBLIC_KEY", "anotherPublicKey":"MORE_KEY"}}'
And then run the meteor app as normal.
This will populate the Meteor.settings object as normal. For the settings above,
Meteor.settings.privateKey == "MY_KEY" #Only on server
Meteor.settings.public.publicKey == "MY_PUBLIC_KEY" #Server and client
Meteor.settings.public.anotherPublicKey == "MORE_KEY" #Server and client
For our project, we use an upstart script and include it there (although upstart has a slightly different syntax). However, if you are starting it with a normal shell script, you just need to include that export statement before your node command. You could, for example, have a script like:
export METEOR_SETTINGS='{"stuff":"real"}'
node /path/to/bundle/main.js
or
METEOR_SETTINGS='{"stuff":"real"}' node /path/to/bundle/main.js
You can find more information about bash variables here.
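For the upstart case mentioned above, a minimal sketch of a job file (the job name, paths, and the settings JSON are placeholders):
# /etc/init/myapp.conf
description "meteor app bundle"
start on runlevel [2345]
stop on runlevel [016]
script
  export METEOR_SETTINGS='{"stuff":"real"}'
  exec node /path/to/bundle/main.js
end script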

Resources