Parse Server S3 Adapter Deprecated - Heroku

The Parse S3 Adapter's requirement of S3_ACCESS_KEY and S3_SECRET_KEY is now deprecated. It says to use the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY instead. We have set up an AWS user with an Access Key ID and we have our secret key as well. We have updated to the latest version of the adapter and removed our old S3_X_Key variables. Unfortunately, as soon as we do this we are unable to access, upload or change files in our S3 bucket. The user does have access to our bucket's properties, and if we change it back to use the explicit S3_ACCESS_KEY and secret, everything works.
We are hosting on Heroku and haven't had any issues until now.
What else needs to be done to set this up?
This deprecation notice is very vague on how to fix this.
(link to notice: https://github.com/parse-server-modules/parse-server-s3-adapter#deprecation-notice----aws-credentials)

I did the following steps and it's working now:
Installed Amazon's CLI
http://docs.aws.amazon.com/cli/latest/userguide/installing.html
Configured the CLI by creating a user and then creating a key ID and secret
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
Set the S3_BUCKET env variable
export S3_BUCKET=
Installed the files adapter using the command
npm install --save @parse/s3-files-adapter
In my parse-server's index.js, added the files adapter:
var S3Adapter = require('@parse/s3-files-adapter');
var s3Adapter = new S3Adapter();

var api = new ParseServer({
  appId: 'my_app',
  masterKey: 'master_key',
  filesAdapter: s3Adapter
});

Arjav Dave's answer below is best if you are using AWS or a hosting solution where you can log in to the server and run the aws configure command there, or if you are running everything locally.
However, I was asking about Heroku, and this goes for any server environment where you can set environment variables.
Really it comes down to just a few steps. If you have a previous version set up, you are going to switch your file adapter to just read:
filesAdapter: 'parse-server-s3-adapter',
(or whatever your npm-installed package is called; some are using the @parse/... one)
Take out the require statement and don't create any instance variables of S3Adapter or anything like that in your index.js.
Then on Heroku.com create the config vars, or with the CLI: heroku config:set AWS_ACCESS_KEY_ID=abc and heroku config:set AWS_SECRET_ACCESS_KEY=abc
Now run and test your uploading. All should be good.
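To make that concrete, here is a minimal sketch of what the relevant part of index.js could look like after the change (the database URI, app ID and master key are placeholders, and the adapter string should match whichever package you installed):
// index.js: no require of the S3 adapter and no new S3Adapter() instance needed
var ParseServer = require('parse-server').ParseServer;

var api = new ParseServer({
  databaseURI: process.env.DATABASE_URI, // placeholder
  appId: process.env.APP_ID,             // placeholder
  masterKey: process.env.MASTER_KEY,     // placeholder
  // the string form lets the adapter read AWS_ACCESS_KEY_ID,
  // AWS_SECRET_ACCESS_KEY and S3_BUCKET from the Heroku config vars
  filesAdapter: 'parse-server-s3-adapter'
});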
The new adapter uses the environment variables for access; you just have to tell it which file adapter is installed in the index.js file, and it will handle the rest. If this isn't working, it's worth testing the IAM profile setup and making sure that's all working before coming back to this part. See below:
Still not working? Try running this example (edit sample.js to be your bucket when testing):
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/getting-started-nodejs.html
Completely lost and no idea where to start?
1. Get Your AWS Credentials:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/getting-your-credentials.html
2. Set Up Your Bucket:
https://transloadit.com/docs/faq/how-to-set-up-an-amazon-s3-bucket/
(follow the part on IAM users as well)
3. Follow IAM Best Practices:
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
Then go back to the top of this posting.
Hope that helps anyone else that was confused by this.

Related

Goland IDE build and run with aws-vault

Trying to google something for Goland vs Golang is proving to be quite hard. Everything I search for seems to come back about code or switching profiles; that is all already handled.
I had a project that was taking in JSON and processing the data. I was able to use the Run and Debug buttons to build and debug my Go code with the default configuration.
That changed when I started pulling data files from S3, which requires authenticating to AWS, which we use aws-vault for.
The issue I am running into is that in this configuration there are no additional settings. There is a checkbox to Run after build, but no way for me to say Run with aws-vault.
Now I have to uncheck Run after build, add the flags
-gcflags="-N -l" -o app
and then attach to that process with Shift + Option + fn + F5.
What I am looking for is being able to run aws-vault exec user -- go ... within the IDE, so I do not have a build step, a run step, and then a manual attach to the process.
I figured out what I feel is at least a better solution, one that allows you to run any code (including the CLI) that uses an AWS SDK.
I am on a Mac, so osascript works for me, but the prompt can be whatever your OS supports. Or, if you have a YubiKey, you can use prompt=ykman.
In ~/.aws there are two files, config and credentials; these tell the SDK how to authenticate.
To start, in ~/.aws/config there is a profile for each role that is needed. default is the role that you assume directly; all the others are roles that the code escalates to.
[default]
output=json
region=<your region>
mfa_serial=arn:aws:iam::<you>
[profile dev-base]
source_profile=default
role_arn=arn:aws:iam::<account to escalate to>
[profile staging-base]
source_profile = default
role_arn = arn:aws:iam::<account to escalate to>
[dev]
region = <your region>
[staging]
region = <your region>
Note: one oddity is that I had to put the role profiles in this file with just the region so that the roles exist.
The next part may not be needed if you are not using Java; you could put the full role in the previous file instead (but I also use Java, so this is my setup). In ~/.aws/credentials:
[dev]
ca_bundle = /Users/<username>/.aws/cert.pem
credential_process=aws-vault exec dev-base -j --prompt=osascript
[staging]
ca_bundle = /Users/<username>/.aws/cert.pem
credential_process=aws-vault exec master-base -j --prompt=osascript
Note: an oddity here is that ca_bundle is specified. Something in Go was not happy with using the AWS_CA_BUNDLE environment variable, and this appears to work.
Now when the code is run, a pop-up appears asking for an MFA token.
Also, when running any AWS CLI command you can pass the profile you want with --profile, e.g. aws s3 ls --profile dev, and the pop-up will appear.
Editing these files manually when using aws-vault might not be the best way to do it, but at the moment this is how we manage them and it seems to give the best workflow.
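As a quick sanity check (a suggestion on top of the original setup, not part of it), you can confirm the credential_process wiring before involving any Go code; assuming the SDK picks the profile up from AWS_PROFILE, both of these should trigger the same MFA pop-up:
# verify the dev profile resolves credentials through aws-vault
aws sts get-caller-identity --profile dev
# run the project with that profile (main.go is a placeholder entry point)
AWS_PROFILE=dev go run ./main.go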

How can I run AWS Lambda locally and access DynamoDB?

I am trying to run and test an AWS Lambda service written in Go locally using the SAM CLI. I have two problems:
The Lambda does not work locally if I use .zip files. When I deploy the code to AWS, it works without an issue, but if I try to run locally with .zip files, I get the following error:
A required privilege is not held by the client: 'handler' -> 'C:\Users\user\AppData\Local\Temp\tmpbvrpc0a9\bootstrap'
If I don't use .zip, then it works locally, but I still want to deploy as .zip, and it is not feasible to change the template.yml every time I want to test locally.
If I try to access AWS resources, I need to set the following environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN
However, if I set these variables in template.yml and then use sam local start-api --env-vars to fill them with the credentials, the local environment works and can access AWS resources, but when I deploy the code to the real AWS it gives an error, since these variables are reserved. I also tried using different names for these variables, but then the local environment does not work. I also tried omitting them from template.yml and just using the local env-vars file, but environment variables must be present in template.yml and cannot be created with env-vars; it can only fill existing variables with values.
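(For reference, the env-vars file used here is a JSON file keyed by the function's logical ID from template.yml; HelloWorldFunction and TABLE_NAME below are placeholders, not names from this project.)
env.json:
{
  "HelloWorldFunction": {
    "TABLE_NAME": "my-local-table"
  }
}
sam local start-api --env-vars env.json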
How can I make local env work but still be able to deploy to AWS?
For accessing AWS resources you need to look at IAM permissions rather than using programmatic access keys; check this document out for CloudFormation.
To be clear, virtually nothing deployed on AWS needs those keys. It is all about applying permissions to the resource itself (Lambda, EC2, etc.); those keys are only really needed for the AWS CLI and some local environments like Serverless and SAM.
The Serverless Framework now supports Go. If you're new, I'd say give that a go while you get up to speed with IAM/CloudFormation.
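For context (this is not from the answer itself), a minimal serverless.yml for a Go function looks roughly like the sketch below; the service name and handler path are placeholders:
service: my-go-service

provider:
  name: aws
  runtime: go1.x

functions:
  hello:
    # handler points at the compiled Go binary
    handler: bin/hello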

Cassandra Astra securely deploying to heroku

I am developing an app using Python and Cassandra (Astra provider) and trying to deploy it on Heroku.
The problem is that connecting to the database requires the credential zip file to be present locally: https://docs.datastax.com/en/astra/aws/doc/dscloud/astra/dscloudConnectPythonDriver.html
'/path/to/secure-connect-database_name.zip'
and Heroku does not have support for uploading credential files.
I can configure the username and password as environment variables, but the credential zip file can't be configured as an environment variable.
heroku config:set CASSANDRA_USERNAME=cassandra
heroku config:set CASSANDRA_PASSWORD=cassandra
heroku config:set CASSANDRA_KEYSPACE=mykeyspace
Is there any way I can provide the zip file as an environment variable? I thought of extracting all the files and configuring each file as an environment variable in Heroku,
but I am not sure what to specify instead of Cluster(cloud=cloud_config, auth_provider=auth_provider) if I start using the extracted files from environment variables.
I know I can check the credential zip into my private Git repo and that way it works, but checking in credentials does not seem secure.
Another idea that came to my mind was to store it in S3, fetch the file during deployment, and extract it into the temp directory for use.
Any pointers or help are really appreciated.
If you can check the secure bundle into the repo, then it should be easy: you just need to point to it from the cloud config map and take the username/password from the configured secrets via environment variables:
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
import os

cloud_config = {
    'secure_connect_bundle': '/path/to/secure-connect-dbname.zip'
}
auth_provider = PlainTextAuthProvider(
    username=os.environ['CASSANDRA_USERNAME'],
    password=os.environ['CASSANDRA_PASSWORD'])
cluster = Cluster(cloud=cloud_config, auth_provider=auth_provider)
session = cluster.connect()
The idea of storing the file on S3 and downloading it isn't bad either. You can implement it in the script itself to fetch the file, and you can pass the S3 credentials via environment variables as well, so the file won't be accessible in the repository; it would also be easier to exchange the secure bundles if necessary.
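If you do go the S3 route, a rough sketch of that approach could look like the following (the bucket/key environment variable names are made up for illustration; boto3 will pick up the AWS credentials from Heroku config vars in the same way):
import os
import boto3
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# download the secure bundle into the dyno's temp directory at startup
bundle_path = '/tmp/secure-connect-dbname.zip'
s3 = boto3.client('s3')
s3.download_file(os.environ['SECURE_BUNDLE_BUCKET'],
                 os.environ['SECURE_BUNDLE_KEY'],
                 bundle_path)

cloud_config = {'secure_connect_bundle': bundle_path}
auth_provider = PlainTextAuthProvider(
    username=os.environ['CASSANDRA_USERNAME'],
    password=os.environ['CASSANDRA_PASSWORD'])
cluster = Cluster(cloud=cloud_config, auth_provider=auth_provider)
session = cluster.connect()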

How to push container to Google Container Registry (unable to create repository)

EDIT: I'm just going to blame this on platform inconsistencies. I have given up on pushing to the Google Cloud Container Registry for now, and have created an Ubuntu VM where I'm doing it instead. I have voted to close this question as well, for the reasons stated previously, and also as this should probably have been asked on Server Fault in the first place. Thanks for everyone's help!
Running $ gcloud docker push gcr.io/kubernetes-test-1367/myapp results in:
The push refers to a repository [gcr.io/kubernetes-test-1367/myapp]
595e622f9b8f: Preparing
219bf89d98c1: Preparing
53cad0e0f952: Preparing
765e7b2efe23: Preparing
5f2f91b41de9: Preparing
ec0200a19d76: Preparing
338cb8e0e9ed: Preparing
d1c800db26c7: Preparing
42755cf4ee95: Preparing
ec0200a19d76: Waiting
338cb8e0e9ed: Waiting
d1c800db26c7: Waiting
42755cf4ee95: Waiting
denied: Unable to create the repository, please check that you have access to do so.
$ gcloud init results in:
Welcome! This command will take you through the configuration of gcloud.
Settings from your current configuration [default] are:
[core]
account = <my_email>@gmail.com
disable_usage_reporting = True
project = kubernetes-test-1367
Your active configuration is: [default]
Note: this is a duplicate of Kubernetes: Unable to create repository, but I tried that solution and it did not help me. I've tried appending :v1, /v1, and using us.gcr.io.
Edit: Additional Info
$ gcloud --version
Google Cloud SDK 116.0.0
bq 2.0.24
bq-win 2.0.18
core 2016.06.24
core-win 2016.02.05
gcloud
gsutil 4.19
gsutil-win 4.16
kubectl
kubectl-windows-x86_64 1.2.4
windows-ssh-tools 2016.05.13
$ gcloud components update
All components are up to date.
$ docker -v
Docker version 1.12.0-rc3, build 91e29e8, experimental
The first image push requires admin rights for the project. I had the same problem trying to push a new container to GCR for a team project, which I could resolve by updating my permissions.
You might also want to have a look at docker-credential-gcr. Hope that helps.
What version of gcloud and Docker are you using?
Looking at your requests, it seems as though the Docker client is not attaching credentials, which would explain the access denial.
I would recommend running gcloud components update and seeing if the issue reproduces. If it still does, feel free to reach out to us on gcr-contact at google.com so we can help you debug the issue and get your issue resolved.
I am still not able to push a docker image from my local machine, but authorizing a compute instance with my account and pushing an image from there works. If you run into this issue, I recommend creating a Compute Engine instance (for yourself), authorizing an account with gcloud auth that can push containers, and pushing from there. I have my source code in a Git repository that I can just pull from to get the code.
Thanks for adding your Docker version info. Does downgrading Docker to a more stable release (e.g. 1.11.2) help at all? Have you run 'docker-machine upgrade'?
It seems like you're trying to run gcloud docker push from a Google Compute Engine instance without the proper security scope of read/write access to Google Cloud Storage (which is where Google Container Registry stores your container images behind the scenes).
Try to create another instance, but this time with proper access scopes, i.e.:
gcloud compute --project "kubernetes-test-1367" instances create "test" --zone "us-east1-b" --machine-type "n1-standard-1" --network "default" --scopes default="https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management","https://www.googleapis.com/auth/devstorage.full_control" --image "/debian-cloud/debian-8-jessie-v20160629" --boot-disk-size "10" --boot-disk-type "pd-standard" --boot-disk-device-name "test-1"
Once you create new instance, ssh into it and then try to re-run the gcloud docker push gcr.io/kubernetes-test-1367/myapp command
I checked
gcloud auth list
and saw that my application was the active account rather than my personal Google account. After setting
gcloud config set account example@gmail.com
I was able to push
gcloud docker -- push eu.gcr.io/$PROJECT_ID/my-docker:v1
so I could continue with http://kubernetes.io/docs/hellonode/.
I had a similar issue and it turned out that I had to enable billing for the project. When you have a new Google Cloud account you can enable only so many projects with billing. Once I did that it worked.
This could also be the cause of the problem (it was in my case):
Important: make sure the Compute Engine API is enabled for your project on the
Source: https://pinrojas.com/2016/09/12/your-personal-kubernetes-image-repo-in-a-few-steps-gcr-io/
If anyone is still having this problem while trying to push a Docker image to GCR, even though they've authenticated with an account that should have permission to do so, try running gcloud auth configure-docker and pushing again.
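For reference, the full sequence with the newer credential-helper flow (using the project and image name from the question) would be roughly:
gcloud auth login
gcloud auth configure-docker
docker push gcr.io/kubernetes-test-1367/myapp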

AWS Command Line Interface Unable to Locate Credentials - Special Permissions

Okay, so I've encountered an insanely frustrating problem while trying to reach an AWS S3 bucket through the AWS CLI via the command prompt in Windows 7. The AWS CLI is "unable to locate credentials", i.e. the config.txt file at C:\Users\USERNAME\.aws\config.txt.
I've tried pathing to it by creating the AWS_CONFIG_FILE environment variable in Control Panel > System > Advanced System Settings > Environment Variables, but no dice. I've also tried all of the above on another Win7 machine. Again, no dice.
What could I be missing here? Are there any special permissions that need to be set for the AWS CLI to access config.txt? Help, before I poke my own eyes out!
The contents of config.txt, in case you're interested, are:
[default]
aws_access_key_id = key id here
aws_secret_access_key = key here
region = us-east-1
There is another way to configure AWS credentials while using the command line tool.
You can pass the credentials via the command line instead of through a file.
Execute the below command from the Windows command prompt:
aws configure
It prompts you to enter the following:
AWS Access Key ID:
AWS Secret Access Key:
Default region name:
Default output format:
See this video tutorial: https://youtu.be/hhXj8lM_jBs
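For what it's worth (this part is not in the original answer), aws configure simply writes those values into two plain files under %UserProfile%\.aws, which is what the CLI then reads; they end up looking roughly like this, with placeholder values:
C:\Users\USERNAME\.aws\credentials
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

C:\Users\USERNAME\.aws\config
[default]
region = us-east-1
output = json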
Okay, so the config file cannot be a text file (.txt). You should create the file in CMD, and it should be a generic file without any extension.
A couple of points on this, as I had similar problems whilst trying to perform an S3 sync.
My findings were as follows.
Remove the spaces between the = and the key/value pair (see example below).
The OP has specified a [default] section in their example; I got the same error after I removed this section because I did not think it was needed, so it's worth noting that it is needed.
I then re-formed my file as follows and it worked...
[default]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2
[deployment-profile]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2
I had to include a blank line at the bottom of my credentials file.
Just posting this really as I struggled for a few hours with vague messages from AWS and these were the solutions that worked for me. Hope that it helps someone.
If, like me, you have a custom IAM user (a named profile) in your credentials file rather than 'default', try setting the AWS_DEFAULT_PROFILE env variable to the name of that profile, and then running your commands.
[user1]
aws_access_key_id=
aws_secret_access_key=

set AWS_DEFAULT_PROFILE=user1
aws <command>
Alternatively, you can specify the --profile option each time you use the CLI:
aws <command> --profile user1
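(A small addition, not part of the original answer: if the named profile does not exist yet, it can be created the same way the earlier answer describes, just with a profile name.)
aws configure --profile user1
aws s3 ls --profile user1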
