Windows instances on spot - amazon-ec2

Here is my JSON file:
# cat s6.json
{
  "ImageId": "ami-0b33d91d",
  "InstanceType": "i2.xlarge",
  "KeyName": "xxx"
}
And I can use this command...
# aws ec2 request-spot-instances --spot-price "1.03" --instance-count 1 --type "one-time" --launch-specification file://s6.json
The above command works as expected. But if I change the image ID to the Windows AMI ami-ab33d3bd, I get this error...
An error occurred (InvalidInput) when calling the RequestSpotInstances
operation: Unsupported product.
I can, however, request a regular On-Demand instance without any problem. So this command works...
# aws ec2 run-instances --image-id ami-ab33d3bd --count 1 --instance-type i2.xlarge --key-name xxx
Does this mean that Windows instances are not available on Spot?

From the EC2 Spot FAQs:
Q. Which operating systems are available as Spot instances?
Linux/UNIX and Windows Server are available. Windows Server with SQL
Server is not currently available.
The AMI ami-ab33d3bd is Windows Server 2008 with SQL Server Enterprise, which is not supported for Spot.
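If you want to check what an AMI actually contains before requesting a Spot Instance, one way (shown here with the AMI ID from the question) is to inspect its Platform and Description:
aws ec2 describe-images --image-ids ami-ab33d3bd --query 'Images[0].[Platform,Description]' --output text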

Related

Start AWS EC2 instance, run commands, stream logs to console and terminate

Trying to run a few steps of CI/CD in an EC2 instance. Please don't ask for reasons.
Need to:
1) Start an instance using the AWS CLI. Set a few environment variables.
2) Run a few bash commands.
3) Stream the output from the above commands to the console of the calling script.
4) If any of the commands fail, fail the calling script as well.
5) Terminate the instance.
There is an SO thread which indicates that streaming the output is not that easy. [1]
What I would do, if I had to implement this task:
Start the instance using the CLI command aws ec2 run-instances with an AMI which has the AWS SSM agent preinstalled. [2]
Run your commands using AWS SSM. [3] This has the benefit that you can run any number of commands you want, whenever you want (i.e. the commands do not have to be specified at instance launch, but can be chosen afterwards). You also get the status code of each command. [4]
Use the CloudWatch integration in SSM to stream your command output to CloudWatch logs. [5]
Stream the logs from CloudWatch to your own instance. [6]
Note: Instead of streaming the command output via CloudWatch, you could also periodically poll the SSM API by using aws ssm get-command-invocation. [7]
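A rough sketch of that flow with the AWS CLI (the AMI ID, instance profile, and log group name are placeholders, and the instance profile must grant SSM and CloudWatch Logs permissions):
# 1. Start an instance from an AMI with the SSM agent preinstalled
iid=$(aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 \
    --instance-type t3.micro --iam-instance-profile Name=my-ssm-profile \
    --query 'Instances[0].InstanceId' --output text)
aws ec2 wait instance-status-ok --instance-ids "$iid"
# 2./3. Run commands via SSM Run Command and send the output to CloudWatch Logs
cmdid=$(aws ssm send-command --instance-ids "$iid" \
    --document-name "AWS-RunShellScript" \
    --parameters 'commands=["echo hello","exit 0"]' \
    --cloud-watch-output-config 'CloudWatchOutputEnabled=true,CloudWatchLogGroupName=my-ci-logs' \
    --query 'Command.CommandId' --output text)
# 4. Check the invocation status (in practice, poll until it is no longer Pending/InProgress)
status=$(aws ssm get-command-invocation --command-id "$cmdid" --instance-id "$iid" \
    --query 'Status' --output text)
[ "$status" = "Success" ] || exit 1
# 5. Terminate the instance
aws ec2 terminate-instances --instance-ids "$iid"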
Reference
[1] How to check whether my user data passing to EC2 instance working or not?
[2] Working with SSM Agent - AWS Systems Manager
[3] Walkthrough: Use the AWS CLI with Run Command - AWS Systems Manager
[4] Understanding Command Statuses - AWS Systems Manager
[5] Streaming AWS Systems Manager Run Command output to Amazon CloudWatch Logs | AWS Management Tools Blog
[6] how to view aws log real time (like tail -f)
[7] get-command-invocation — AWS CLI 1.16.200 Command Reference
Approach 1.
Start an instance using AWS CLI.
aws ec2 start-instances --instance-ids i-1234567890abcdef0
Set a few environment variables.
Use EC2 user data to set the environment variables and run your commands (see the sketch after this approach).
Run your other logic / scripts.
To terminate the instance, run the command below on the instance itself.
instanceid=`curl http://169.254.169.254/latest/meta-data/instance-id`
aws ec2 terminate-instances --instance-ids $instanceid
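A minimal sketch of that pattern, using run-instances so the user data runs at first boot (the AMI ID, instance profile, script path, and region are placeholders; the instance profile must allow ec2:TerminateInstances):
cat > userdata.sh <<'EOF'
#!/bin/bash
export MY_VAR=value        # set a few environment variables
/opt/ci/run-steps.sh       # run your other logic / scripts
# self-terminate when done
instanceid=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 terminate-instances --instance-ids "$instanceid" --region us-east-1
EOF
aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type t2.micro \
    --iam-instance-profile Name=my-ci-role --user-data file://userdata.sh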
Approach 2.
Use Python (boto3) or Chef Test Kitchen (kitchen-ci).

How to run Windows Containers on Local Kubernetes?

I have .NET Framework and .NET Core containers and I would like to run them in Kubernetes. I have Docker Desktop for Windows installed, with Kubernetes enabled. How can I run these Windows containers in Kubernetes?
This documentation specifies how to create a Windows node on Kubernetes, but it is very confusing: I am on a Windows machine, yet I see Linux-based commands in there (and no mention of which OS you need to run them on). I am on a Windows 10 Pro machine. Is there a way to run these containers on Kubernetes?
When I try to create a Pod with Windows containers, it fails with the following error message: "Failed to pull image 'imagename'; rpc error: code = Unknown desc = image operating system 'windows' cannot be used on this platform"
Welcome to Stack Overflow, Srinath.
To my knowledge you can't run Windows containers on a local version of Kubernetes at the moment. When you enable the Kubernetes option in your Docker Desktop for Windows installation, the Kubernetes cluster simply runs inside a Linux VM (with its own Docker runtime for Linux containers only) on the Hyper-V hypervisor.
The other solution for you is to use a managed version of Kubernetes with Windows nodes from any of the popular cloud providers. I think Azure is relatively easy to start with (if you don't have an Azure subscription, create a free trial account, valid for 12 months).
I would suggest using the older way to run Kubernetes on Azure, a service called Azure Container Service (ACS), for one reason: I have verified that it works well with Windows containers, especially for testing purposes (I could not achieve the same with its successor, AKS).
Run the following commands in Azure Cloud Shell, and your cluster will be ready to use in a few minutes.
az group create --name azEvalRG --location eastus
az acs create -g azEvalRG -n k8s-based-win -d k8s-based-win --windows --agent-count 1 -u azureuser --admin-password 'YourSecretPwd1234$' -t kubernetes --location eastus
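Once the cluster is up, you would typically fetch the kubeconfig and check that the Windows agent node has registered. A rough sketch using the resource names from the commands above (the exact az subcommands may differ between CLI versions):
az acs kubernetes get-credentials --resource-group azEvalRG --name k8s-based-win
kubectl get nodes -o wide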

Is there a way to extract whether an OS is "Linux" from aws ec2 cli?

Looking at the output from aws ec2 describe-instances I see an "ImageId = amixxxxx" but I don't see anything that tells me the operating system, whether it is Linux or something else. Is there a way to get this information from describe-instances?
describe-images provides a "Description" that tells me it's Linux (among other things), but interestingly it does not tell me the "Platform", although you can see that if you navigate to the "Images > AMIs" section of the Console.
This command will display the Platform parameter, which is either set to windows or is blank:
$ aws ec2 describe-instances --query 'Reservations[*].Instances[*].[InstanceId,Platform]' --output text
i-9c9c9b94 None
i-cdb618a2 None
i-3d263640 None
i-bc57c751 windows
i-3eaa31d2 windows
i-294b95c7 None
So, a blank/None entry would indicate Linux.
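If you only want the Linux instances, one option (a quick sketch) is to filter the windows entries out of that same output, for example with awk:
aws ec2 describe-instances --query 'Reservations[*].Instances[*].[InstanceId,Platform]' --output text | awk '$2 != "windows"'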

Spark cluster launch error in AWS EC2

I am trying to launch a Spark cluster from an EC2 instance that I created in a development AWS account. I was able to successfully connect to the EC2 instance using the AWS CLI as ec2-user. I used the existing VPC and AMI to create this EC2 instance, unzipped the Spark files on it, and, using the private key, tried starting the cluster as below:
export AWS_SECRET_ACCESS_KEY=xxx
export AWS_ACCESS_KEY_ID=xxx
/home/ec2-user/spark-1.2.0/ec2$ ./spark-ec2 -k test -i /home/ec2-user/identity_files/test.pem launch test-spark-cluster
Got the Error:
boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
InvalidKeyPair.NotFound: The key pair 'test' does not exist (request ID: xxx)
I thought this might have been due to a region issue, so I used the region and zone parameters while launching Spark:
/home/ec2-user/spark-1.2.0/ec2$ ./spark-ec2 -k test -i /home/ec2-user/identity_files/test.pem -r us-west-2 -z us-west-2a launch test-spark-cluster
However, when I run this, I encounter a different error:
boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
VPCIdNotSpecified: No default VPC for this user (request ID: xxx)
How can I resolve this issue?
I am no expert in this area, but I would recommend setting more parameters on your script call, something like:
./spark-ec2 -k test \
  -i /home/ec2-user/identity_files/test.pem \
  -s 5 \
  --instance-type=m3.medium \
  --region=eu-west-1 \
  --spark-version=1.2.0 \
  launch myCluster
The -s option refers to the number of instances to be created. Furthermore, you might want to check the following; pay special attention to the last one:
The key pair test exists in your account
The key pair test.pem is present in the EC2 console
The region for both the key pair and the instances is the same (you can verify this with the CLI, as shown below)
Searching the web, I have found that most of the errors related to key pairs not being found are caused by region mismatches.
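For the region check, you can ask the CLI whether the key pair is visible in the region you are launching into (us-west-2 here, matching the question):
aws ec2 describe-key-pairs --key-names test --region us-west-2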

unable to create EC2 instance through knife

I am creating an EC2 instance through knife. I gave the following command to create it:
knife ec2 server create -r "role[webserver]" -I ami-b84e04ea --flavor t1.micro --region ap-southeast-1 -G default -x ubuntu -N server01 -S ec2keypair
but I am getting the error Fog::Compute::AWS::Error: InvalidBlockDeviceMapping => iops must be specified with the volumeType of device '/dev/sda1'. I am unable to solve this issue; any help will be appreciated.
It's possible that the AMI you are trying to launch requires an EBS volume. With an EBS volume you can set the IOPS value, which appears not to be set and is what is giving you the issue.
Having a look at the documentation, it seems you might need to add --ebs-size SIZE as an option, for example:
--ebs-size 10
I got that from the knife documentation
http://docs.opscode.com/plugin_knife_ec2.html
Also, taking a look at the source code for the knife ec2 plugin, it looks like you can add
--ebs-optimized
to enable optimized EBS I/O.
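Putting that together with the original command, the call might look like this (the size value is only illustrative):
knife ec2 server create -r "role[webserver]" -I ami-b84e04ea --flavor t1.micro --region ap-southeast-1 -G default -x ubuntu -N server01 -S ec2keypair --ebs-size 10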
