euca2ools for Windows/Cygwin?

Is there any way to use euca2ools natively on Windows or through Cygwin, or is there any other tool compatible with Eucalyptus that I could run under Windows?

You can use the AWS CLI with Eucalyptus, e.g. for EC2:
aws --endpoint-url http://myeuca:8773/services/Eucalyptus ec2 describe-instances
This example assumes you have already configured credentials.
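If credentials are not set up yet, here is a minimal sketch using a named profile (the profile name euca is just an example; the access key and secret key come from your Eucalyptus deployment):
# prompts for access key, secret key, default region, and output format
aws configure --profile euca
aws --profile euca --endpoint-url http://myeuca:8773/services/Eucalyptus ec2 describe-instances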

Related

How to invoke an EXE on EC2 Windows using Lambda/.Net Core

When a file is uploaded to an S3 bucket, I need to invoke an executable on an EC2 instance. The executable runs a long job and invokes some command-line executions.
So, I want to run an EXE on an EC2 Windows instance from AWS Lambda using .NET Core.
After some research, I figured out the prerequisites to do this:
SSM Agent installed on the EC2 instance
An IAM role for the EC2 instance with:
AmazonSSMManagedInstanceCore
An IAM role for Lambda with:
AWSLambdaExecute
AmazonEC2ReadOnlyAccess
AmazonSSMFullAccess
AmazonS3FullAccess
Please advise me if there is a better approach to implementing this.
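The core of this approach is SSM Run Command; from .NET Core the Lambda would call the same SendCommand API through the AWS SDK. A minimal CLI-level sketch of the equivalent call (the instance ID and executable path are placeholders, not values from the question):
# run the executable on the target Windows instance via SSM Run Command
aws ssm send-command --document-name "AWS-RunPowerShellScript" --instance-ids "i-0123456789abcdef0" --parameters commands="C:\jobs\process.exe"
# check the invocation status afterwards (pass the CommandId returned above)
aws ssm list-command-invocations --command-id <command-id> --details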

Windows 10: aws working in powershell but not cmd?

Hi, I'm on Windows 10 and trying to run a basic AWS command:
aws s3 ls target --profile user1
I have already configured this profile and I can see it in the .aws directory. This works when I'm in PowerShell but not in cmd. In cmd I get this:
The config profile (user1;) could not be found
Is there anything I can do? Thanks.
You have to configure a profile first.
For example:
aws configure --profile user1
Here is a link that can help: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-profiles
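If the profile was created in PowerShell but cmd cannot find it, it can help to confirm what the CLI actually resolves when invoked from cmd (note too that the error quotes the profile as user1; with a trailing semicolon, which suggests extra characters are reaching the CLI from the cmd command line):
aws configure list --profile user1
If that also fails from cmd, inspect %USERPROFILE%\.aws\config and %USERPROFILE%\.aws\credentials and confirm the [profile user1] and [user1] sections exist.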

How does one disable SourceDestCheck when creating instances with AWS CLI

It should be possible to disable SourceDestCheck since it is documented
"SourceDestCheck -> (boolean)"
but using run-instances with
aws ec2 run-instances ... --SourceDestCheck false
or
--sourceDestCheck=false
fails with
Unknown options: --SourceDestCheck, false
It seems I can run it later with a modify command
aws ec2 modify-instance-attribute --resource=$INSTANCE_ID --no-source-dest-check
but it should be possible to set that at instantiation. I just can't figure out the actual syntax.
I know this is old, but I ran into the same issue today and solved it this way. In the resource block of your Terraform file, add:
provisioner "local-exec" {
  command = "aws ec2 modify-instance-attribute --no-source-dest-check --instance-id ${self.id}"
}
assuming you have the AWS CLI tools installed.
As far as I can tell, you can't set that at initial launch with the AWS CLI; it's not a supported option. You have to call aws ec2 modify-instance-attribute --no-source-dest-check, documented here.
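A minimal sketch of doing both steps from a shell script (the AMI ID and instance type are placeholders):
# launch the instance and capture its ID
INSTANCE_ID=$(aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t2.micro --query 'Instances[0].InstanceId' --output text)
# then disable source/destination checking on that instance
aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" --no-source-dest-check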
As Mark pointed out, this isn't an option in the RunInstances API. I just want to add that the SourceDestCheck in the AWS CLI doc you referenced is an output; if you look closely, it's an attribute of the ENI.

ec2-describe-instances does not return a response

I am working with an application that uses EC2 CLI.
I have a running instance in us-east-1 region.
When I run
ec2-describe-instances --region us-east-1
it does not return anything.
However,
aws ec2 describe-instances --region us-east-1
returns the expected json response.
ec2-describe-volumes does not work either.
I have set up EC2_HOME and EC2_URL as described in the documentation.
export EC2_HOME="/usr/local/ec2/ec2-api-tools-1.7.3.0"
export EC2_URL=https://ec2.us-east-1.amazonaws.com
ec2-describe-regions works as expected.
Am I missing something here?
I found out that the CLI tools had already been installed via a package manager before I installed them from source. Removing the ec2-api-tools package fixed the problem.
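A quick way to check which copy of the tools is being picked up, and to remove the packaged one (the package commands assume a Debian/Ubuntu system; adjust for your package manager):
# show which binary is first on PATH
which ec2-describe-instances
# look for and remove the distro package
dpkg -l | grep ec2-api-tools
sudo apt-get remove ec2-api-tools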

Cloudera CDH on EC2

I am an AWS newbie, and I'm trying to run Hadoop on EC2 via Cloudera's AMI. I installed the AMI, downloaded the cloudera-hadoop-for-ec2-tools, and now I'm trying to configure
hadoop-ec2-env.sh
It is asking for the following:
AWS_ACCOUNT_ID
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
EC2_KEYDIR
PRIVATE_KEY_PATH
when running:
./hadoop-ec2 launch-cluster my-cluster 10
I'm getting
AWS was not able to validate the provided access credentials
Firstly, I have the first 3 attributes for my own account. This is a corporate account, and I received an email with the access key ID and secret access key for my email address. Is it possible that my account doesn't have the proper permissions to do what is needed here? Exactly why does this script need my credentials? What does it need to do?
Secondly, where is the EC2 key dir? I've uploaded the key.pem file that Amazon created for me, hard-coded its path into PRIVATE_KEY_PATH, and run chmod 400 on the .pem file. Is that the correct key that this script needs?
Any help is appreciated.
Sam
The Cloudera EC2 tools rely heavily on the Amazon EC2 API tools, so you must do the following:
1) Download the Amazon EC2 API tools from http://aws.amazon.com/developertools/351
2) Download the Cloudera EC2 tools from http://cloudera-packages.s3.amazonaws.com/cloudera-for-hadoop-on-ec2-0.3.0.tar.gz
3) Set the following environment variables (Unix-based examples only):
export EC2_HOME=<path-to-tools-from-step-1>
export PATH=$PATH:$EC2_HOME/bin
export PATH=$PATH:<path-to-cloudera-ec2-tools>/bin
export EC2_PRIVATE_KEY=<path-to-private-key.pem>
export EC2_CERT=<path-to-cert.pem>
4) In cloudera-ec2-tools/bin, set the following variables:
AWS_ACCOUNT_ID=<amazon-acct-id>
AWS_ACCESS_KEY_ID=<amazon-access-key>
AWS_SECRET_ACCESS_KEY=<amazon-secret-key>
EC2_KEYDIR=<dir-where-the-ec2-private-key-and-ec2-cert-are>
KEY_NAME=<name-of-ec2-private-key>
And then run
$ hadoop-ec2 launch-cluster my-hadoop-cluster 10
which will create a Hadoop cluster called "my-hadoop-cluster" with 10 nodes spread across multiple EC2 machines.
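Before launching the cluster, it can help to confirm that the EC2 API tools and credentials work on their own; a minimal check, assuming the exports from step 3 are in place:
# should list the available EC2 regions if EC2_PRIVATE_KEY and EC2_CERT are valid
ec2-describe-regions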
