Import plugin throws error 400 saying InvalidParameterValue: The specified KMS key is not accessible - amazon-ec2

Two days back everything was working, but now it has started giving this error. I am able to reproduce the same error in the dev environment. For testing I created an S3 bucket without encryption and a new KMS key, but I am getting the same error there.
aws ec2 import-image --description "123" --encrypted --kms-key-id arn:aws:kms:us-east-1:123456789:key/abc-efg-hij-klm-nop-xyz --disk-containers Format=ova,UserBucket="{S3Bucket=,S3Key=}"
An error occurred (InvalidParameterValue) when calling the ImportImage operation: The specified KMS key is not accessible. If this is a default EBS CMK, please retry your request without specifying the key explicitly
Any help?
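A common cause of this particular message (an assumption here, not something the thread confirms) is that the vmimport service role used by import-image has no permissions on the customer-managed CMK. A minimal sketch of an inline policy attached to that role, reusing the key ARN from the command above:
# role and policy names below are the conventional/hypothetical ones; adjust to your setup
aws iam put-role-policy --role-name vmimport --policy-name vmimport-kms-access \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "kms:CreateGrant",
        "kms:Decrypt",
        "kms:DescribeKey",
        "kms:Encrypt",
        "kms:GenerateDataKey*",
        "kms:ReEncrypt*"
      ],
      "Resource": "arn:aws:kms:us-east-1:123456789:key/abc-efg-hij-klm-nop-xyz"
    }]
  }'
The key's own key policy (or a grant) also has to allow that role; if the key lives in another account, both sides need to permit it.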

Related

Creation of authentication connection is failing

I am following the Virtual Assistant getting-started sample:
Virtual assistant
I am stuck on the step "Skill Authentication".
I tried the following command with all the arguments, using the generated bot secret for the --secret argument.
msbot connect generic --name "Authentication" --keys "{\"YOUR_AUTH_CONNECTION_NAME\":\"Azure Active Directory v2\"}" --bot YOURBOTFILE.bot --secret "YOUR_BOT_SECRET" --url "portal.azure.net"
I still get the following error:
Error: You are attempting to perform an operation which needs access to the secret and --secret is missing
Can someone tell me what I am missing?
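One thing worth ruling out first (an assumption on my part, not something confirmed here) is the shell mangling the quoted arguments so that --secret reaches msbot empty. In bash, putting the secret in a variable and using single quotes around the JSON makes that easier to spot; YOUR_BOT_SECRET stays a placeholder:
# same flags as in the question, only the quoting changes
BOT_SECRET='YOUR_BOT_SECRET'
msbot connect generic --name "Authentication" \
  --keys '{"YOUR_AUTH_CONNECTION_NAME":"Azure Active Directory v2"}' \
  --bot YOURBOTFILE.bot \
  --secret "$BOT_SECRET" \
  --url "portal.azure.net"
If the same error still appears, the secret itself (rather than how it is passed) is the more likely problem.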

How to run portworx backup to minio server

Trying to configure Portworx volume backups (pxctl cloudsnap) to a localhost Minio server (emulating S3).
The first step is to create cloud credentials using pxctl credentials create,
e.g.
./pxctl credentials create --provider s3 --s3-access-key mybadaccesskey --s3-secret-key mybadsecretkey --s3-region local --s3-endpoint 10.0.0.1:9000
This results in:
Error configuring cloud provider.Make sure the credentials are correct: RequestError: send request failed caused by: Get https://10.0.0.1:9000/: EOF
Disabling SSL (which is not configured, as this is just a localhost test) gives me:
./pxctl credentials create --provider s3 --s3-access-key mybadaccesskey --s3-secret-key mybadsecretkey --s3-region local --s3-endpoint 10.0.0.1:9000 --s3-disable-ssl
Which returns:
Not authenticated with the secrets endpoint
I've tried this with both minio gateway (nas) and minio server - same result.
The Portworx container is running within Rancher.
Any thoughts appreciated.
Resolved via instructions at https://docs.portworx.com/secrets/portworx-with-kvdb.html
i.e. set secret type to kvdb in /etc/pwx/config.json
"secret": {
"cluster_secret_key": "",
"secret_type": "kvdb"
},
Then log in using ./pxctl secrets kvdb login
After this, credentials create was successful, as was the subsequent cloudsnap backup. The test was using the --s3-disable-ssl switch.
Note: kvdb is plain text, so it is obviously not suitable for production.
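Putting the resolution together, the working sequence looks roughly like this (the volume name in the last command is a placeholder, and the cloudsnap syntax may vary between Portworx versions):
# 1. in /etc/pwx/config.json, set "secret_type": "kvdb" as shown above
# 2. authenticate against the kvdb secrets endpoint
./pxctl secrets kvdb login
# 3. create the S3 credentials against the local Minio endpoint (SSL disabled for the local test)
./pxctl credentials create --provider s3 --s3-access-key mybadaccesskey --s3-secret-key mybadsecretkey --s3-region local --s3-endpoint 10.0.0.1:9000 --s3-disable-ssl
# 4. run the backup against a volume (name is a placeholder)
./pxctl cloudsnap backup myvolume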

An error occurred (InvalidParameterValue) when calling the RunInstances operation: Value () for parameter groupId is invalid. The value cannot be empty

I'm getting an error when creating an EC2 instance from my AMI:
aws ec2 run-instances --image-id ami-3e21ed44 --count 1 --instance-type t2.medium --key-name sssoft --security-groups launch-wizard-4
Isn't this example the same as the one here?
It is giving this error:
An error occurred (InvalidParameterValue) when calling the RunInstances operation: Value () for parameter groupId is invalid. The value cannot be empty
What is wrong?
The error means the security group launch-wizard-4 does not exist in your account (in the region your CLI is pointed at).
If that security group does exist in your account, check which AWS CLI profile you're using. The CLI uses the default profile by default, but if you need a different one, just add --profile my-profile-name to the command.
For more information on profiles: AWS CLI Named Profiles
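A quick way to check both points at once (my-profile-name is a placeholder; drop --profile if you use the default) is to look the group up with the same profile you would pass to run-instances:
aws ec2 describe-security-groups --group-names launch-wizard-4 --profile my-profile-name
# an InvalidGroup.NotFound error here means the group does not exist for that account/region/profile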
Silly mistake. The default region configured for the command line was different from that of the AMI.
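For anyone hitting the same thing: the region the CLI resolves can be checked and overridden per command (us-east-1 below is only an example):
aws configure get region   # region the active profile defaults to
aws ec2 run-instances --image-id ami-3e21ed44 --count 1 --instance-type t2.medium \
  --key-name sssoft --security-groups launch-wizard-4 --region us-east-1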

Serverless deploy error provisioning stack

When I try to deploy my application on AWS, I get the following error:
An error occurred while provisioning your stack: HelloLambdaFunction -
Lambda was unable to configure your environment variables because the
environment variables you have provided contains reserved keys that
are currently not supported for modification. Reserved keys used in
this request: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY.
My AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are stored in env.yml, and I access them using process.env.AWS_ACCESS_KEY_ID.
How can I fix this error?
I would try removing those keys from the config. AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are reserved names: Lambda sets them itself at runtime from the function's execution role, so you cannot define them as environment variables and generally don't need to.
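If the function genuinely needs a second set of credentials (for example for another account), one workaround is to expose them under non-reserved names. A sketch of the relevant serverless.yml fragment, assuming env.yml holds the original keys; the function/handler names and the MY_* variables are hypothetical:
# serverless.yml (fragment)
functions:
  hello:
    handler: handler.hello
    environment:
      MY_ACCESS_KEY_ID: ${file(./env.yml):AWS_ACCESS_KEY_ID}
      MY_SECRET_ACCESS_KEY: ${file(./env.yml):AWS_SECRET_ACCESS_KEY}
The code then reads process.env.MY_ACCESS_KEY_ID instead. If everything lives in the same account, dropping the variables entirely and granting the needed permissions to the function's execution role is the cleaner option.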

Issue creating/accessing hive external table with s3 location from spark thrift service

I have configured the S3 keys (access key and secret key) in a jceks file using the Hadoop credential API. The commands used for the same are below:
hadoop credential create fs.s3a.access.key -provider jceks://hdfs#nn_hostname/tmp/s3creds_test.jceks
hadoop credential create fs.s3a.secret.key -provider jceks://hdfs#nn_hostname/tmp/s3creds_test.jceks
Then, I am opening a connection to Spark Thrift Server using beeline and passing the jceks file path in the connection string as below:
beeline -u "jdbc:hive2://hostname:10001/;principal=hive/_HOST#?hadoop.security.credential.provider.path=jceks://hdfs#nn_hostname/tmp/s3creds_test.jceks;
Now, when I try to create an external table with the location in s3, it fails with the below exception:
CREATE EXTERNAL TABLE IF NOT EXISTS test_table_on_s3 (col1 String, col2 String) row format delimited fields terminated by ',' LOCATION 's3a://bucket_name/kalmesh/';
Exception: Error: org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: java.nio.file.AccessDeniedException s3a://bucket_name/kalmesh: getFileStatus on s3a://bucket_name/kalmesh: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: request_id), S3 Extended Request ID: extended_request_id=) (state=,code=0)
I don't think jceks support for the fs.s3a.* secrets went in until Hadoop 2.8, though it's hard to tell from the source. If that is the case, and you are using Hadoop 2.7, then the secret isn't going to be picked up. I'm afraid you will have to put it in the config.
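For reference, "in the config" would look something like this core-site.xml (or spark/hive conf) fragment - a sketch only, with the obvious downside that the secrets sit in plain text, which is exactly what the jceks provider was meant to avoid:
<property>
  <name>fs.s3a.access.key</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>YOUR_SECRET_KEY</value>
</property>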
I had a similar situation, just with Drill instead of Hive. But as in your case:
using Hadoop 2.9 jars (1st version to support AWS KMS)
writing to s3a://
encrypting with SSE-KMS
... and got AmazonS3Exception: Access Denied.
In my case (perhaps in yours as well) the exception description was a bit ambiguous. The reported AmazonS3Exception: Access Denied did not originate from S3 but from KMS! Access was denied to the key I used for encryption. The user making the API calls was not on the key's users list; once I added that user to the key's list, writing started to work and I could create encrypted tables on s3a://...
For me, the following S3 permissions were required (a combined policy sketch follows after the KMS list below):
s3:ListBucket
s3:GetObject
s3:PutObject
I was receiving the same error and was missing s3:ListBucket.
As for KMS permissions (if applicable):
kms:Decrypt
kms:Encrypt
kms:GenerateDataKey
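Putting the two lists together, a minimal IAM policy sketch (the bucket name is taken from the question; the key ARN is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::bucket_name"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::bucket_name/*"
    },
    {
      "Effect": "Allow",
      "Action": ["kms:Decrypt", "kms:Encrypt", "kms:GenerateDataKey"],
      "Resource": "arn:aws:kms:REGION:ACCOUNT_ID:key/KEY_ID"
    }
  ]
}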
