boto3 attach_volume throwing volume not available - amazon-ec2

I am trying to attach a volume to an instance using boto3, but the attach fails with the error below:
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (IncorrectState) when calling the AttachVolume operation: vol-xxxxxxxxxxxxxxx is not 'available'.
I can see the volume exists in the AWS console, but somehow boto3 is not able to attach it.
import os
import boto3

os.environ['AWS_DEFAULT_REGION'] = "us-west-1"
client = boto3.client('ec2', aws_access_key_id=access_key,
                      aws_secret_access_key=secret_key,
                      region_name='us-west-1')
response1 = client.attach_volume(
    VolumeId=volume_id,
    InstanceId=instance_id,
    Device='/dev/sdg',
)
I tried attaching the same volume with the AWS CLI and it works fine after exporting AWS_DEFAULT_REGION="us-west-1".
I also tried setting the same variable inside the Python script with os.environ['AWS_DEFAULT_REGION'] = "us-west-1", but the script keeps failing with the error above.

I figured it out. I was not giving the EBS volume enough time to become available after creating it. I am able to attach it now after adding a sleep.
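Rather than relying on a fixed sleep, a minimal sketch using boto3's built-in volume_available waiter (reusing the client and volume_id from the snippet above) blocks only until the volume actually leaves the 'creating' state:

# Wait for the newly created volume to report 'available' before attaching.
waiter = client.get_waiter('volume_available')
waiter.wait(VolumeIds=[volume_id])

response1 = client.attach_volume(
    VolumeId=volume_id,
    InstanceId=instance_id,
    Device='/dev/sdg',
)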

Related

Running databrew.describe_job_run() from inside a Lambda does not work

I have a Lambda that polls the status of a DataBrew job using a boto3 client. The code - as written here - works fine in my local environment. When I put it into a Lambda function, I get the error:
[ERROR] AttributeError: 'GlueDataBrew' object has no attribute 'describe_job_run'
This is the syntax found in the Boto3 documentation:
client.describe_job_run(
    Name='string',
    RunId='string'
)
This is my code:
import boto3

def get_brewjob_status(jobName, jobRunId):
    brew = boto3.client('databrew')
    try:
        jobResponse = brew.describe_job_run(Name=jobName, RunId=jobRunId)
        status = jobResponse['State']
    except Exception as e:
        status = 'FAILED'
        print('Unable to get job status')
        raise(e)
    return {
        'jobStatus': status
    }

def lambda_handler(event, context):
    jobName = event['jobName']
    jobRunId = event['jobRunId']
    response = get_brewjob_status(jobName, jobRunId)
    return response
I am using the Lambda runtime version of boto3. The jobName and jobRunId variables are strings passed from a Step Function, but I've also tried hard-coding them into the Lambda to check, and I get the same result. I have tried running it on both the Python 3.7 and Python 3.8 runtimes. I'm also confident (and have double checked) that the IAM permissions allow the Lambda access to DataBrew. Thanks for any ideas!
Fixed my own problem. There must be some kind of conflict between the Lambda runtime's bundled boto3 and DataBrew - maybe it has not been updated to include DataBrew yet? I created a .zip deployment package and it worked fine. Should have done that two days ago...
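As a quick diagnostic, a small sketch like the following (an illustration, not part of the original function) can confirm whether the runtime's bundled boto3/botocore actually exposes describe_job_run before you bother packaging:

import boto3
import botocore

def lambda_handler(event, context):
    # If the bundled botocore predates DataBrew's DescribeJobRun operation,
    # the generated client simply won't have the method.
    brew = boto3.client('databrew')
    return {
        'boto3': boto3.__version__,
        'botocore': botocore.__version__,
        'has_describe_job_run': hasattr(brew, 'describe_job_run'),
    }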

Google Cloud Storage can't find project name

I am using the Python client library for the Google Cloud Storage API, and I have a file, pythonScript.py, with the following contents:
# Imports the Google Cloud client library
from google.cloud import storage
# Instantiates a client
storage_client = storage.Client()
# The name for the new bucket
bucket_name = 'my-new-bucket'
# Creates the new bucket
bucket = storage_client.create_bucket(bucket_name)
print('Bucket {} created.'.format(bucket.name))
When I try to run it I get this in the terminal:
Traceback (most recent call last):
  File "pythonScript.py", line 11, in <module>
    bucket = storage_client.create_bucket(bucket_name)
  File "/home/joel/MATI/env/lib/python3.5/site-packages/google/cloud/storage/client.py", line 218, in create_bucket
    bucket.create(client=self)
  File "/home/joel/MATI/env/lib/python3.5/site-packages/google/cloud/storage/bucket.py", line 199, in create
    data=properties, _target_object=self)
  File "/home/joel/MATI/env/lib/python3.5/site-packages/google/cloud/_http.py", line 293, in api_request
    raise exceptions.from_http_response(response)
google.cloud.exceptions.Conflict: 409 POST https://www.googleapis.com/storage/v1/b?project=avid-folder-180918: Sorry, that name is not available. Please try a different one.
I am not sure why, since I do have the GCS API enabled for my project, and the default configuration seems to be correct. The output of gcloud config list is:
[compute]
region = us-east1
zone = us-east1-d
[core]
account = joel@southbendcodeschool.com
disable_usage_reporting = True
project = avid-folder-180918
Your active configuration is: [default]
Bucket names are globally unique. Someone else must already own the bucket named "my-new-bucket."
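A minimal sketch of one workaround, assuming the same storage_client as above, is to append a random suffix so the name is unlikely to collide with anyone else's bucket:

import uuid

from google.cloud import storage

storage_client = storage.Client()

# Bucket names live in a single global namespace, so make the name unique.
bucket_name = 'my-new-bucket-{}'.format(uuid.uuid4().hex[:8])
bucket = storage_client.create_bucket(bucket_name)
print('Bucket {} created.'.format(bucket.name))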

graphlab create: unable to start cluster in aws

At the moment I'm trying to create a cluster in AWS EC2 with GraphLab Create. The code is as follows:
import graphlab as gl

ec2config = gl.deploy.Ec2Config(region='us-west-2', instance_type='m3.large',
                                aws_access_key_id='secret-acces-key-id',
                                aws_secret_access_key='secret-access-key')
ec2 = gl.deploy.ec2_cluster.create(name='Test Cluster',
                                   s3_path='s3://test-big-data-2016',
                                   ec2_config=ec2config,
                                   idle_shutdown_timeout=3600,
                                   num_hosts=1)
When the above code is executed I get the following error:
Traceback (most recent call last):
  File "test.py", line 59, in <module>
    ec2 = gl.deploy.ec2_cluster.create(name='Test Cluster', s3_path='s3://test-big-data-2016', ec2_config=ec2config, idle_shutdown_timeout=36000, num_hosts=1)
  File "/Users/remco/anaconda/envs/gl-env/lib/python2.7/site-packages/graphlab/deploy/ec2_cluster.py", line 83, in create
    cluster.start()
  File "/Users/remco/anaconda/envs/gl-env/lib/python2.7/site-packages/graphlab/deploy/ec2_cluster.py", line 233, in start
    self.idle_shutdown_timeout
  File "/Users/remco/anaconda/envs/gl-env/lib/python2.7/site-packages/graphlab/deploy/_executionenvironment.py", line 372, in _start_commander_host
    raise RuntimeError('Unable to start host(s). Please terminate '
RuntimeError: Unable to start host(s). Please terminate manually from the AWS console.
When I look in the EC2 Management Console, a new instance is launched and running, but I still get the error in the terminal.
I really don't know what I'm doing wrong here. I followed the exact instructions from: https://turi.com/learn/userguide/deployment/pipeline-example.html

Forbidden Error on get_thing_shadow with boto3, aws iot and alexa

I am running a custom Alexa skill with flask-ask that connects to AWS IoT.
The same credentials work when I run the script on my local machine and use ngrok as the Alexa skill endpoint. But when I use Zappa to deploy it as a Lambda, I get the following:
File "/var/task/main.py", line 48, in get_shadow
res=client.get_thing_shadow(thingName="test_light")
File "/var/runtime/botocore/client.py", line 253, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/var/runtime/botocore/client.py", line 543, in _make_api_call
raise error_class(parsed_response, operation_name)
ClientError: An error occurred (ForbiddenException) when calling the GetThingShadow operation: Forbidden
When using ngrok, the skill works completely fine. What am I missing here? Help!
The problem was VPC access. I had to attach the VPC access policy to the Lambda's execution role and it worked.
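For reference, a hedged sketch of attaching AWS's managed VPC access policy to a Lambda execution role with boto3 (the role name below is a placeholder):

import boto3

iam = boto3.client('iam')

# Grants the Lambda role the ENI permissions it needs to run inside a VPC.
# 'my-lambda-role' is a placeholder for the actual execution role name.
iam.attach_role_policy(
    RoleName='my-lambda-role',
    PolicyArn='arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole',
)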

Flask- Cannot read or write to file

I am using AWS EC2 to host a Flask application, and I am trying to read and write a text file with the open() function when the user submits a form. When the form is submitted I get the error:
Internal Server Error
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
I am not sure why this error is happening.
The code that does this is:
#app.route("/submit", methods=["POST"])
def submit():
file = open("settingsfile.txt", "w")
file.close()
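The traceback isn't shown above, but as a hedged sketch, a handler that writes to an absolute path and returns a response (two common causes of a bare 500 with a view like this; the path handling and return value are assumptions) might look like:

import os

from flask import Flask

app = Flask(__name__)

# Resolve the file next to the application module instead of relying on
# whatever working directory the WSGI server happens to use.
SETTINGS_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "settingsfile.txt")

@app.route("/submit", methods=["POST"])
def submit():
    with open(SETTINGS_PATH, "w") as settings_file:
        settings_file.write("submitted")  # placeholder content
    # A Flask view must return a response; returning None raises a 500.
    return "Settings saved"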
