Serverless offline TypeError: Cannot read property 'accessKeyId' of null - aws-lambda

So I can deploy my Lambda to AWS with no problem, but trying to run it locally with
serverless invoke local --function hello
fails with:
TypeError: Cannot read property 'accessKeyId' of null
The config and credentials files look OK:
~/.aws/config
[default]
region = eu-west-1
output = json
~/.aws/credentials
[default]
aws_access_key_id = A***************
aws_secret_access_key = /p*********************

What aws-sdk version are you using in your function? A quick Google search showed that there was a problem in the aws-sdk that produced this same error message, so make sure you have the latest version.
Also, remember that when you run a function locally, the aws-sdk will look for credentials on your local system.
Run $ ls ~/.aws on a Mac or C:\> dir "%UserProfile%\.aws" on Windows to see if you have your credentials files stored locally. See this guide for more details.
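For a quick sanity check that credentials actually resolve on your machine (a sketch, assuming the AWS CLI is installed and your function depends on aws-sdk):
$ npm ls aws-sdk                # shows which aws-sdk version is installed
$ aws sts get-caller-identity   # succeeds only if the default credentials load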

Related

Terraform throwing error: error configuring Terraform AWS Provider: The system cannot find the path specified

I was facing a certificate issue when running aws commands via the CLI. As per some blogs, I tried to fix the issue using the setx AWS_CA_BUNDLE "C:\data\ca-certs\ca-bundle.pem" command.
Now, even after I removed the AWS_CA_BUNDLE variable from my AWS config file, Terraform keeps throwing the below error on terraform apply:
Error: error configuring Terraform AWS Provider: loading configuration: open C:\data\ca-certs\ca-bundle.pem: The system cannot find the path specified.
Can someone please tell me where Terraform/the AWS CLI is taking this value from and how to remove it? I have tried deleting the entire AWS config and credentials files, but this error is still thrown; I have also tried uninstalling and reinstalling the AWS CLI.
If it's set in some system/environment variable, can you please tell me how to reset it to its default value?
The syntax you used to add the ca_bundle variable to the config file is wrong.
Your config file should look like this:
[default]
region = us-east-1
ca_bundle = dev/apps/ca-certs/cabundle-2019mar05.pem
But as I understand it, you want to use the environment variable (AWS_CA_BUNDLE).
AWS_CA_BUNDLE:
Specifies the path to a certificate bundle to use for HTTPS certificate validation.
If defined, this environment variable overrides the value for the profile setting ca_bundle. You can override this environment variable by using the --ca-bundle command line parameter.
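For example, a one-off override from the command line (the bundle path here is just a placeholder):
aws s3 ls --ca-bundle "C:\certs\ca-bundle.pem"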
I would suggest removing the environment variable (AWS_CA_BUNDLE) and adding ca_bundle to the config file. Then delete the .terraform folder and run terraform init.
Go to your environment variables and delete the AWS_CA_BUNDLE variable that setx created. Shut down the terminal and start it again. Run the commands now and they will work properly.
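Since setx writes user variables to the registry rather than to the AWS config file, you can also check for and remove the leftover value from a command prompt (a sketch, assuming the variable was set for the current user):
REM see whether AWS_CA_BUNDLE is still set in the user environment
reg query HKCU\Environment /v AWS_CA_BUNDLE
REM delete it, then open a new terminal so the change takes effect
reg delete HKCU\Environment /v AWS_CA_BUNDLE /f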

Testing AWS Lambda functions locally - invalid function name?

So, I'm finally at the point where I can test my first Lambda function locally. As background, I've installed the AWS CLI under MacOS 10.14.2 (Mojave) and am able to access my AWS account. I've successfully zipped up my Lambda function and used 'aws lambda create-function' to deploy it.
I've installed aws-lambda-local (https://www.npmjs.com/package/aws-lambda-local) using 'npm install -g aws-lambda-local'.
But when I invoke the following from the Lambda function root:
lambda-local -l index.js -e event.json
I get the following error:
Invalid function name. It should be accessible from invocation place
Would someone please tell me why this is happening? I mean, the function name is most definitely valid.
Totally confused here!
According to the aws-lambda-local docs, there is no -l option. Use -f or --function to specify the file with the lambda function.
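With that flag, the invocation from the question would become:
lambda-local -f index.js -e event.json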

Specify an AWS CLI profile in a script when two exist

I'm attempting to use a script which automatically creates snapshots of all EBS volumes on an AWS instance. This script is running on several other instances with no issue.
The current instance already has an AWS profile configured which is used for other purposes. My understanding is I should be able to specify the profile my script uses, but I can't get this to work.
I've added a new set of credentials to the /home/ubuntu/.aws file by adding the following under the default credentials which are already there:
[snapshot_creator]
aws_access_key_id=s;aldkas;dlkas;ldk
aws_secret_access_key=sdoij34895u98jret
In the script I have tried adding AWS_PROFILE=snapshot_creator, but when I run it I get the error Unable to locate credentials. You can configure credentials by running "aws configure".
So I delete my changes to /home/ubuntu/.aws and instead run aws configure --profile snapshot_creator. However, after entering all the information I get the error [Errno 17] File exists: '/home/ubuntu/.aws'.
So I add my changes to the .aws file again, and this time, for every command in the script starting with aws ec2, I add the parameter --profile snapshot_creator. But this time when I run the script I get The config profile (snapshot_creator) could not be found.
How can I tell the script to use this profile? I don't want to change the environment variables for the instance because of the aforementioned other use of AWS CLI for other purposes.
Credentials should be stored in the file /home/ubuntu/.aws/credentials.
File exists: '/home/ubuntu/.aws'
I guess this error means aws configure couldn't create the .aws directory because a plain file with that name already exists. Can you delete the .aws file and re-run the configure command? It should then create the credentials file under /home/ubuntu/.aws/.
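A minimal sketch of that fix (back the old file up first in case other tooling still reads it):
$ mv /home/ubuntu/.aws /home/ubuntu/aws-backup           # move the stray file out of the way
$ aws configure --profile snapshot_creator               # now creates ~/.aws/credentials properly
$ aws ec2 describe-volumes --profile snapshot_creator    # verify the profile resolves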

Parse Server S3 Adapter Deprecated

The Parse S3 Adapter's requirement of S3_ACCESS_KEY and S3_SECRET_KEY is now deprecated. It says to use the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. We have set up an AWS user with an access key ID, and we have our secret key as well. We have updated to the latest version of the adapter and removed our old S3_X_KEY variables. Unfortunately, as soon as we do this we are unable to access, upload, or change files on our S3 bucket. The user does have access to our bucket's properties, and if we change it back to use the explicit S3_ACCESS_KEY and secret, everything works.
We are hosting on Heroku and haven't had any issues until now.
What else needs to be done to set this up?
This deprecation notice is very vague on how to fix this.
(link to notice: https://github.com/parse-server-modules/parse-server-s3-adapter#deprecation-notice----aws-credentials)
I did the following steps and it's working now:
Installed Amazon's CLI
http://docs.aws.amazon.com/cli/latest/userguide/installing.html
Configured the CLI by creating a user and then creating a key ID and secret
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
Set the S3_BUCKET env variable
export S3_BUCKET=
Installed the files adapter using the command
npm install --save @parse/s3-files-adapter
In my parse-server's index.js, added the files adapter:
var S3Adapter = require('@parse/s3-files-adapter');
var s3Adapter = new S3Adapter();
var api = new ParseServer({
  appId: 'my_app',
  masterKey: 'master_key',
  filesAdapter: s3Adapter
});
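Before starting parse-server, it's worth sanity-checking that the configured credentials can actually reach the bucket from the same machine (the bucket name here is a placeholder):
aws s3 ls s3://my-bucket-name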
Arjav Dave's answer below is best if you are using AWS or a hosting solution where you can log in to the server and run the aws configure command there, or if you are running everything locally.
However, I was asking about Heroku, and this goes for any server environment where you can set env variables.
Really it comes down to just a few steps. If you have a previous version set up, you are going to switch your file adapter to just read:
filesAdapter: 'parse-server-s3-adapter',
(or whatever your npm-installed package is called; some are using the @parse/... one)
Take out the require statement and don't create any instance variables of S3Adapter or anything like that in your index.js.
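For example, a minimal sketch of what index.js would then contain (appId and masterKey are placeholders):
var api = new ParseServer({
  appId: 'my_app',
  masterKey: 'master_key',
  // no require() and no new S3Adapter(); the string form lets the adapter
  // read AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the environment
  filesAdapter: 'parse-server-s3-adapter'
});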
Then on Heroku.com create config vars, or do it with the CLI: heroku config:set AWS_ACCESS_KEY_ID=abc and heroku config:set AWS_SECRET_ACCESS_KEY=abc
Now run and test your uploading. All should be good.
The new adapter uses the environment variables for access, and you just have to tell it which file adapter is installed in the index.js file. It will handle the rest. If this isn't working, it's worth testing the IAM profile setup and making sure that works before coming back to this part. See below:
Still not working? Try running this example (edit sample.js to be your bucket when testing):
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/getting-started-nodejs.html
Completely lost and no idea where to start?
1. Get your AWS credentials:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/getting-your-credentials.html
2. Set up your bucket:
https://transloadit.com/docs/faq/how-to-set-up-an-amazon-s3-bucket/
(follow the part on IAM users as well)
3. Follow IAM best practices:
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
Then go back to the top of this posting.
Hope that helps anyone else that was confused by this.

AWS Command Line Interface Unable to Locate Credentials - Special Permissions

Okay, so I've encountered an insanely frustrating problem while trying to reach an AWS S3 bucket through the AWS CLI via the command prompt in Windows 7. The AWS CLI is "unable to locate credentials", a.k.a. the config.txt file at C:\Users\USERNAME\.aws\config.txt.
I've tried pathing to it by creating the AWS_CONFIG_FILE environment variable in Control Panel > System > Advanced System Settings > Environment Variables, but no dice. I've also tried all of the above on another Win7 machine. Again, no dice.
What could I be missing here? Are there any special permissions that need to be set for the AWS CLI to access config.txt? Help, before I poke my own eyes out!
The contents of config.txt, in case you're interested, are:
[default]
aws_access_key_id = key id here
aws_secret_access_key = key here
region = us-east-1
There is another way to configure AWS credentials when using the command line tool.
You can pass credentials through a Windows command instead of through a file.
Execute the below command from the Windows command prompt:
aws configure
It prompts you to enter the below things:
AWS Access Key ID:
AWS Secret Access Key:
Default region name:
Default output format:
See this video tutorial: https://youtu.be/hhXj8lM_jBs
Okay, so the config file cannot be a text file (.txt). You should create the file in CMD, and it should be a generic file without any extension.
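One way to create it without an extension (a sketch; notepad keeps the exact file name you pass it, and dir lets you confirm that no .txt was appended):
C:\> mkdir %UserProfile%\.aws
C:\> notepad %UserProfile%\.aws\config
C:\> dir %UserProfile%\.aws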
A couple of points on this, as I had similar problems whilst trying to perform an S3 sync.
My findings were as follows.
Remove the spaces between the = and the key/value pair (see the example below).
The OP specified a [default] section in their example; I got the same error when I removed that section because I did not think it was needed, so it's worth noting that it is needed.
I then re-formed my file as follows and it worked...
[default]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2
[deployment-profile]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2
I had to include a blank line at the bottom of my credentials file.
Just posting this really as I struggled for a few hours with vague messages from AWS and these were the solutions that worked for me. Hope that it helps someone.
If like me you have a custom IAM user in your credentials file rather than 'default', try setting the AWS_DEFAULT_PROFILE env variable to the name of your IAM user, and then running commands.
[user1]
aws_access_key_id=
aws_secret_access_key=
set AWS_DEFAULT_PROFILE=user1
aws <command>
Alternatively, you can specify the --profile option each time you use the CLI:
aws <command> --profile user1
