Serverless deploy error provisioning stack - aws-lambda

When I try to deploy my application on AWS, I get the following error:
An error occurred while provisioning your stack: HelloLambdaFunction -
Lambda was unable to configure your environment variables because the
environment variables you have provided contains reserved keys that
are currently not supported for modification. Reserved keys used in
this request: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY.
My AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are stored in env.yml, and I access them using process.env.AWS_ACCESS_KEY_ID.
How can I fix this error?

Those two names are reserved: the Lambda runtime injects AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY automatically from the function's execution role, so you cannot set them yourself. Remove them from your environment configuration; process.env.AWS_ACCESS_KEY_ID will still be populated at runtime with the execution role's credentials.
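If you do need to pass your own credentials into the function, one workaround is to rename the variables so they no longer collide with the reserved names. A minimal sketch, assuming an env.yml file whose keys match the question (the MY_-prefixed names are placeholders I've chosen):

```yaml
# serverless.yml (sketch) -- MY_* names are placeholders, not reserved by Lambda
provider:
  name: aws
  runtime: nodejs18.x
  environment:
    MY_ACCESS_KEY_ID: ${file(env.yml):AWS_ACCESS_KEY_ID}
    MY_SECRET_ACCESS_KEY: ${file(env.yml):AWS_SECRET_ACCESS_KEY}
```

In the handler, read process.env.MY_ACCESS_KEY_ID instead of process.env.AWS_ACCESS_KEY_ID.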

Related

AccessDenied when deploying serverless via aws-vault

I am trying to deploy serverless with the credentials stored in my aws-vault keychain.
However, when I run aws-vault exec myprofile -- sls deploy, I receive the following error:
An error occurred: MyLambdaFunction - AccessDenied. User doesn't have permission to call iam:GetRole.
The user has AdministratorAccess, and I also attached IAMFullAccess (which should not be needed).
When I deploy with the same credentials stored in ~/.aws/credentials using sls deploy,
it works.
According to the docs the session token generated by aws-vault has some restrictions:
You cannot call any IAM API operations unless MFA authentication information is included in the request.
You cannot call any AWS STS API except AssumeRole or GetCallerIdentity.
A workaround is to pass the --no-session flag,
so aws-vault exec myprofile --no-session -- sls deploy works without any error.
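For context, aws-vault issues an STS session token for the profile, and the restrictions above apply to that token; a profile with MFA configured typically looks something like this (a sketch; the profile name, region, and MFA ARN are placeholders):

```ini
# ~/.aws/config (sketch) -- values are placeholders
[profile myprofile]
region = us-east-1
mfa_serial = arn:aws:iam::123456789012:mfa/my-user
```

With --no-session, aws-vault hands the stored long-lived credentials to the command instead of a session token, which is why the IAM calls then succeed.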

ec2 import-image throws error 400: InvalidParameterValue: The specified KMS key is not accessible

Two days ago everything was working, but now it has started giving this error. I am able to reproduce the same error in the dev environment. For testing, I created an S3 bucket without encryption and a new KMS key, but I get the same error there.
aws ec2 import-image --description "123" --encrypted --kms-key-id arn:aws:kms:us-east-1:123456789:key/abc-efg-hij-klm-nop-xyz --disk-containers Format=ova,UserBucket="{S3Bucket=,S3Key=}"
An error occurred (InvalidParameterValue) when calling the ImportImage operation: The specified KMS key is not accessible. If this is a default EBS CMK, please retry your request without specifying the key explicitly
Any help?

Creation of authentication connection is failing

I am following the virtual assistant get started sample:
Virtual Assistant
I am stuck on the step "Skill Authentication".
I tried the following command with all the arguments, using the generated botsecret for the --secret argument.
msbot connect generic --name "Authentication" --keys "{\"YOUR_AUTH_CONNECTION_NAME\":\"Azure Active Directory v2\"}" --bot YOURBOTFILE.bot --secret "YOUR_BOT_SECRET" --url "portal.azure.net"
I still get the following error:
Error: You are attempting to perform an operation which needs access to the secret and --secret is missing
Can someone tell me what I am missing?

How to run portworx backup to minio server

Trying to configure Portworx volume backups (pxctl cloudsnap) to a localhost Minio server (emulating S3).
The first step is to create cloud credentials using pxctl credentials create,
e.g.
./pxctl credentials create --provider s3 --s3-access-key mybadaccesskey --s3-secret-key mybadsecretkey --s3-region local --s3-endpoint 10.0.0.1:9000
This results in:
Error configuring cloud provider. Make sure the credentials are correct: RequestError: send request failed caused by: Get https://10.0.0.1:9000/: EOF
Disabling SSL (which is not configured, as this is just a localhost test) gives me:
./pxctl credentials create --provider s3 --s3-access-key mybadaccesskey --s3-secret-key mybadsecretkey --s3-region local --s3-endpoint 10.0.0.1:9000 --s3-disable-ssl
Which returns:
Not authenticated with the secrets endpoint
I've tried this with both Minio gateway (NAS) and Minio server, with the same result.
The Portworx container is running within Rancher.
Any thoughts appreciated.
Resolved via the instructions at https://docs.portworx.com/secrets/portworx-with-kvdb.html,
i.e. set the secret type to kvdb in /etc/pwx/config.json:
"secret": {
    "cluster_secret_key": "",
    "secret_type": "kvdb"
},
Then log in using ./pxctl secrets kvdb login.
After this, credentials create succeeded, as did the subsequent cloudsnap backup. The test used the --s3-disable-ssl switch.
Note: kvdb stores secrets in plain text, so it is obviously not suitable for production.
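For reference, the secret block sits at the top level of /etc/pwx/config.json alongside the generated cluster settings. A sketch (the clusterid value is a placeholder, and other generated fields are omitted):

```json
{
  "clusterid": "my-cluster-id",
  "secret": {
    "cluster_secret_key": "",
    "secret_type": "kvdb"
  }
}
```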

An error occurred (InvalidParameterValue) when calling the RunInstances operation: Value () for parameter groupId is invalid. The value cannot be empty

I'm getting this error when creating an EC2 instance from my AMI:
aws ec2 run-instances --image-id ami-3e21ed44 --count 1 --instance-type t2.medium --key-name sssoft --security-groups launch-wizard-4
Isn't this example the same as the one here?
It is giving this error:
An error occurred (InvalidParameterValue) when calling the RunInstances operation: Value () for parameter groupId is invalid. The value cannot be empty
What is wrong?
The error means the security group launch-wizard-4 does not exist in your account (in the region the CLI is targeting).
If that security group does exist in your account, check the AWS CLI profile you're using. The CLI uses the default profile unless told otherwise; to use a different one, add --profile my-profile-name to the command.
For more information on profiles: AWS CLI Named Profiles
Silly mistake: the default region in my CLI configuration was different from the region of the AMI.
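Both AMIs and security groups are region-scoped, so the CLI must target the same region the AMI (and security group) live in. A sketch of pinning the region in ~/.aws/config (the region value here is only an example):

```ini
# ~/.aws/config (sketch) -- set the region to match the AMI's region
[default]
region = us-east-1
```

Alternatively, append --region with the AMI's region to the run-instances command for a one-off override.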
