We bought a reserved instance one year ago. It's up and running now.
Now that it's about to expire, I would like to pay for another year and continue using it as a reserved instance.
But when I try to 'Purchase Reserved Instances', it does not show any option for my current reserved instance.
If I right-click on my existing instance, the only option is 'Purchase more like these', with nothing about extending the current reserved instance.
How can I continue using my existing reserved instance?
The concept of an EC2 Reserved Instance is not like traditional server hosting.
There's no "continue" concept here. You just purchase another EC2 Reserved Instance right after your current one expires, and AWS will automatically apply your reserved instance rate to the hourly price. But do make sure you purchase the new reserved instance with the correct parameters (instance size, region, zone, ...).
Hope this information is helpful for you.
I'm writing a Terraform provider for a piece of software that has a large set of instance-specific global configurations (approximately 300 of them). When you use the provider, you define your endpoint and credentials and then operate within this instance. What I'm struggling to decide is how exactly to manage this config. It's not a resource that is created or destroyed, so I'm not sure creating a global_config resource would be the best approach: all the values will already have been initialised during the setup of the system and can only be overridden; the config cannot be destroyed; and you can't have more than one config resource. Since you should be able to override all entries, it can't be a data source either.
I haven't managed to find any relevant documentation (or even similar examples) so far, so I would be very grateful if someone could point me to anything relevant or suggest how best to achieve this. Thanks.
Terraform's provider model is designed primarily for objects that Terraform itself can create or destroy. There is no built-in support for automatically "adopting" an existing object to be under Terraform's management, because Terraform generally assumes that each object is managed by exactly one declared resource instance and Terraform aims to preserve that assumption by being the one to have created the object.
However, there are some existing examples in other systems of this sort of "singleton" object that is implicitly created but can have its settings changed. Key examples for study are the resource types for default VPCs and their default public subnets in AWS.
There are currently two broad ways to represent this situation in Terraform, neither of which is perfect and so each of which has some advantages and disadvantages to consider:
Mandatory terraform import: you can potentially build your resource type so that its "create" action always fails immediately, telling the user to import the existing object, and then implement the "import" action to allow users to explicitly bind their existing object to their Terraform resource instance using the terraform import command.
This is the more explicit of the two options in that it requires the user to intentionally declare that the existing object should be managed by this Terraform configuration, in the same way that users normally take that action in Terraform. This means that the user remains in control and can (as they must always do when importing) take care to import that object into only one resource instance in one Terraform configuration, thereby preserving Terraform's uniqueness assumption.
However, it also adds a mandatory extra setup step to any Terraform configuration which uses this resource type. That extra step does not fit well into typical automation around Terraform, and so that step will often need to be taken in an exceptional way outside of a team's normal workflow.
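For illustration, under this model the one-time setup step would be an explicit import, along these lines (the resource type name and object ID here are hypothetical, made up for this example):

```
terraform import mysoftware_global_config.main global
```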
Treat "create" as if it were "adopt": since the actions a provider is expected to implement for a resource type are just a matter convention, there's no technical reason that your "create" action cannot just verify that the configured object exists and return success without creating anything. I call that "adopting" here to represent the idea that Terraform will then assume that this existing object is now under the exclusive management of whatever resource instance claimed to have created it, but "adopting" is not actually a formal part of Terraform's workflow.
This has the advantage of fitting well into an existing Terraform workflow, requiring no unusual additional steps on the part of the operator.
However, it also means that it's easier to accidentally adopt the same object into two different resource instances, either in the same configuration or in separate configurations. The consequences of doing that will vary depending on what the object represents, but at minimum it will likely result in the different resource instances "fighting" one another, constantly undoing each other's work on each new Terraform run and thus never converging on a stable desired state.
The second of these is the more convenient of the two and so is the one that existing providers have typically chosen as long as the consequences of incorrect multiple-adoption are just the risk of a non-converging system: that situation is confusing and kinda annoying, but also often not super harmful.
The first is the safer of the two because it guards against the accidental multiple-adoption problem. It could be appropriate if two configurations fighting to control a single object may have more significant consequences, such as one configuration breaking the other one by changing its settings in a way that is invalid for the other use-case.
https://docs.near.org/docs/concepts/account#account-id-rules says that account IDs must be 2 to 64 characters, and currently MIN_ALLOWED_TOP_LEVEL_ACCOUNT_LENGTH = 32, so top-level accounts of 32 to 64 characters are available without needing to be purchased at auction.
It also says:
Currently all mainnet accounts use a near top-level account name (ex example.near)
My guess is that "currently" means not just that there don't happen to be any other top-level accounts now but that there cannot be until a future change in NEAR protocol.
Where can I see the source code related to this statement?
And is there a public roadmap / timeline somewhere stating when (auctioned and non-auctioned) top-level accounts will become available?
Given how accounts and account names are implemented (an account is actually a smart contract), the use of "near" as a root-level account is needed.
".near" is actually an account with a smart contract deployed on it:
https://explorer.near.org/accounts/near
Anything related to account IDs (names) can be found here:
https://github.com/near/nearcore/blob/master/core/account-id/src/lib.rs
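For a rough feel of the rules in that crate, here is a sketch in Python using the regular expression from the documentation page linked above; the canonical validation logic remains the Rust code in nearcore:

```python
import re

# Lowercase alphanumeric parts separated by '.', with '-' or '_' allowed
# inside a part; this is the pattern from the NEAR account ID documentation.
ACCOUNT_ID_PATTERN = re.compile(r"^(([a-z\d]+[\-_])*[a-z\d]+\.)*([a-z\d]+[\-_])*[a-z\d]+$")

def is_valid_account_id(account_id: str) -> bool:
    # Account IDs must be 2 to 64 characters long.
    return 2 <= len(account_id) <= 64 and ACCOUNT_ID_PATTERN.match(account_id) is not None

print(is_valid_account_id("example.near"))  # True
print(is_valid_account_id("Example.NEAR"))  # False: uppercase is not allowed
```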
Context: I am deploying multiple Amazon Echos at a single location in different rooms. These Echos are managed in Alexa for Business. They will share many parts of the same skill, but in our AWS Lambda function, several environment variables should be segregated between them. In particular, we tell the Echo users the WiFi code for their room, which varies between rooms (and which other rooms should not be able to access).
What I'd like to do is have those environment variables filter by the Echos' DSNs. Is this possible?
Or is this better to do at the skill level? We considered that, but we're not sure how to get around the inability to use similar invocation names and we do want to use consistent invocation names due to simplicity in user training.
More info: Using Node.js 6.10 and a template based on this: https://github.com/Donohue/alexa
I am not a smart man. Please feel free to hand-hold me in your answers. Thanks for your time.
The context object holds a field named deviceId. This UID is constant on a per-device, per-skill basis: it's different for different skills used by the same device, but stays the same for one skill invoked from the same device.
You can use that data to partition your devices at the skill level.
What I'd do in your specific situation is auto-register a device in your DB on the first skill invocation from that device. Once it's registered, you can manually edit the dataset and add the group that the device is supposed to belong to.
Your skill backend (Lambda) then checks what group the device is in and delivers the corresponding WiFi password.
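A minimal sketch of that flow in Python (the question uses a Node.js template, but the shape is the same in any Lambda runtime); the DynamoDB table name and its attributes are hypothetical, invented for this example:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
devices = dynamodb.Table("EchoDeviceGroups")  # hypothetical table

def handler(event, context):
    # Alexa requests carry the device ID at context.System.device.deviceId.
    device_id = event["context"]["System"]["device"]["deviceId"]

    item = devices.get_item(Key={"deviceId": device_id}).get("Item")
    if item is None:
        # First invocation from this device: auto-register it, then assign
        # it to a room/group by editing the record manually later.
        devices.put_item(Item={"deviceId": device_id, "group": "unassigned"})
        speech = "This device is not assigned to a room yet."
    else:
        # Assumes the record was edited to include the room's WiFi password.
        speech = f"The WiFi password for your room is {item['wifiPassword']}."

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```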
I am using the API to get EC2 spot price history, but I cannot get anything beyond the last 90 or so days, and I cannot specify the frequency of observations. Is there a way to get a complete history of spot prices, preferably at minute or hourly frequency?
While not explicitly documented for the DescribeSpotPriceHistory API action, this restriction is at least mentioned for the AWS Management Console (which uses that API in turn), see Viewing Spot Price History:
You can view the Spot Price history over a period from one to 90 days based on the instance type, the operating system you want the instance to run on, the time period, and the Availability Zone in which it will be launched.
Since anybody could have retrieved and logged the entire spot price history ever since this API became available (and without a doubt quite a few users and researchers have done just that; the AWS blog even listed some dedicated Third-Party AWS Tracking Sites, though these all appear defunct at first sight), this restriction admittedly seems a bit arbitrary. It is certainly pragmatic from a strictly operational point of view, though: you have all the information you need to base future bids upon (especially given that AWS has only ever reduced prices so far, and in fact does so regularly, much to the delight of its customers).
Likewise, there's no option to change the frequency, so you'd need to resort to client-side code for the hourly aggregation.
This website has re-sampled EC2 spot price histories for some regions; you can access them via a simple API directly from your Python script:
http://ec2-spot-prices.ai-mmo-games.de/
I hope this helps.
AWS only provides 90 days of history, and the data is raw, i.e. not normalized by hour or even minute, so there are sometimes holes in the data.
One approach would be to pull the data into an IPython notebook and use pandas' excellent time-series tools to resample by minute, 5 minutes, etc. Here's a short tutorial:
https://medium.com/cloud-uprising/the-data-science-of-aws-spot-pricing-8bed655caed2
Here are more details on using pandas for time-series resampling:
http://pandas.pydata.org/pandas-docs/stable/timeseries.html
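A sketch of that approach, assuming boto3 and pandas, with placeholder region, instance type, and Availability Zone (you still only get the raw ~90-day window from the API):

```python
import boto3
import pandas as pd

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

rows = []
for page in ec2.get_paginator("describe_spot_price_history").paginate(
    InstanceTypes=["m5.large"],           # placeholder instance type
    ProductDescriptions=["Linux/UNIX"],
    AvailabilityZone="us-east-1a",        # placeholder AZ
):
    rows.extend(page["SpotPriceHistory"])

df = pd.DataFrame(rows)
prices = (
    df.assign(SpotPrice=df["SpotPrice"].astype(float))
      .set_index(pd.to_datetime(df["Timestamp"]))["SpotPrice"]
      .sort_index()
)

# The raw data is irregular (a price is only recorded when it changes), so
# forward-fill after resampling to get a regular hourly series.
hourly = prices.resample("1H").mean().ffill()
print(hourly.head())
```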
Hope that helps...
The ec2-run-instances command needs an AMI ID and the ID is different across all regions. Is there any way to specify that I need an AMI that will be suitable for region x / zone y and instance_type z?
In other words I need a way to use some "default" AMI so that I can write a script that will work across all EC2 regions.
There is nothing like a default AMI for Amazon EC2, and no concept of selecting a default (or rather the region-specific) AMI amongst the otherwise identical AMIs with different IDs per region either (a region-independent AMI ID would be a nifty improvement, though).
This is usually solved by adding a respective mapping to your script; the details depend on the scripting environment in use (a simple map should always be available somehow). For example, AWS CloudFormation uses the very same approach itself; see the sample EC2ChooseAMI.template, which is an example of using Mappings to select an AMI based on region and instance type.
The AWSRegionArch2AMI map achieves what you desire, plus it offers a choice of architecture as well (which hints at why a default AMI ID might not be as easy to implement as it might look at first sight).
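A minimal sketch of that mapping approach in Python with boto3 (rather than the old ec2-run-instances CLI); the AMI IDs below are placeholders, not real images, so substitute the IDs of your image in each region:

```python
import boto3

# Placeholder IDs: the "same" image registered separately in each region.
AMI_BY_REGION = {
    "us-east-1": "ami-xxxxxxxx",
    "us-west-2": "ami-yyyyyyyy",
    "eu-west-1": "ami-zzzzzzzz",
}

def run_instance(region: str, instance_type: str = "t2.micro"):
    ec2 = boto3.client("ec2", region_name=region)
    return ec2.run_instances(
        ImageId=AMI_BY_REGION[region],  # region-specific ID for the same image
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
    )
```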