AWS EC2 - starting T2 unlimited instances - bug in EC2 API? - amazon-ec2

T2 instances can now be started with an additional option that allows more CPU bursting for an additional cost.
SDK: http://docs.aws.amazon.com/aws-sdk-php/v3/api/api-ec2-2016-11-15.html#runinstances
I tried it: I can switch my existing instances to unlimited, so it should be possible.
However, when I add the new configuration option to the array, nothing changes; the instance is still set to "standard" as before.
Here is a JSON dump of the runInstances option array:
{
  "UserData": "....",
  "SecurityGroupIds": [
    "sg-04df967f"
  ],
  "InstanceType": "t2.micro",
  "ImageId": "ami-4e3a4051",
  "MaxCount": 1,
  "MinCount": 1,
  "SubnetId": "subnet-22ec130c",
  "Tags": [
    {
      "Key": "task",
      "Value": "test"
    },
    {
      "Key": "Name",
      "Value": "unlimitedtest"
    }
  ],
  "InstanceInitiatedShutdownBehavior": "terminate",
  "CreditSpecification": {
    "CpuCredits": "unlimited"
  }
}
It starts the EC2 instance successfully just as before, but the CreditSpecification setting is ignored.
Amazon doesn't let normal users contact support, so I hope someone here has a clue about it.

Hmmm... Using qualitatively the same run-instances JSON
{
  "ImageId": "ami-bf4193c7",
  "InstanceType": "t2.micro",
  "CreditSpecification": {
    "CpuCredits": "unlimited"
  }
}
worked for me - the instance shows this:
T2 Unlimited Enabled
in the "Description" tab after selecting the instance in the EC2 console.

Related

CloudWatch: cannot see my host in CWAgent (custom namespace) metric

I am trying to push the memory usage of my EC2 instance to CloudWatch. I have the CloudWatch agent running on the EC2 instance, and the config file is stored in AWS SSM. This is how it looks:
{
  "agent": {
    "metrics_collection_interval": 60,
    "run_as_user": "root"
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/my-service/*.log",
            "log_group_name": "my-service",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  },
  "metrics": {
    "append_dimensions": {
      "AutoScalingGroupName": "${aws:AutoScalingGroupName}",
      "ImageId": "${aws:ImageId}",
      "InstanceId": "${aws:InstanceId}",
      "InstanceType": "${aws:InstanceType}"
    },
    "metrics_collected": {
      "disk": {
        "measurement": [
          "used_percent"
        ],
        "metrics_collection_interval": 60,
        "resources": [
          "*"
        ]
      },
      "mem": {
        "measurement": [
          "mem_used_percent"
        ],
        "metrics_collection_interval": 60
      },
      "statsd": {
        "metrics_aggregation_interval": 10,
        "metrics_collection_interval": 10,
        "service_address": ":8125"
      }
    }
  }
}
I am starting the CloudWatch agent with this command:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c ssm:AmazonCloudWatch-linux -s
The logs are being pushed from the locations mentioned in the config, but I do not see the memory metric being pushed. If I understand correctly, I should see the IP address of my instance under CWAgent -> host under Metrics. Unfortunately, I do not see this.
I checked the CloudWatch agent log under /opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log, but I do not see any line saying it pushed, or tried to push, any metric.
Any help is much appreciated.
I believe you will see the instance name there, not the IP address.
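One quick sanity check is to list what, if anything, the agent has actually published under the CWAgent namespace. A boto3 sketch (credentials and region assumed to be configured):
import boto3

cloudwatch = boto3.client("cloudwatch")

# List the dimensions the agent has published for mem_used_percent;
# an empty result means the metric never left the instance.
paginator = cloudwatch.get_paginator("list_metrics")
for page in paginator.paginate(Namespace="CWAgent", MetricName="mem_used_percent"):
    for metric in page["Metrics"]:
        print(metric["Dimensions"])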

Failed to run the pipeline (Pipeline) - Azure Data Factory

For testing purposes I'm trying to execute this simple pipeline (nothing sophisticated).
However, I'm getting this error:
{"code":"BadRequest","message":null,"target":"pipeline//runid/cb841f14-6fdd-43aa-a9c1-4619dab28cdd","details":null,"error":null}
The goal is to see if two variables are getting the right values (we have been facing some issues in our production environment).
This is the JSON definition of the pipeline:
{
  "name": "GeneralTest",
  "properties": {
    "activities": [
      {
        "name": "Set variable1",
        "type": "SetVariable",
        "dependsOn": [],
        "userProperties": [],
        "typeProperties": {
          "variableName": "start_time",
          "value": {
            "value": "#utcnow()",
            "type": "Expression"
          }
        }
      },
      {
        "name": "Wait1",
        "type": "Wait",
        "dependsOn": [
          {
            "activity": "Set variable1",
            "dependencyConditions": [
              "Succeeded"
            ]
          }
        ],
        "userProperties": [],
        "typeProperties": {
          "waitTimeInSeconds": 5
        }
      },
      {
        "name": "Set variable2",
        "description": "",
        "type": "SetVariable",
        "dependsOn": [
          {
            "activity": "Wait1",
            "dependencyConditions": [
              "Succeeded"
            ]
          }
        ],
        "userProperties": [],
        "typeProperties": {
          "variableName": "end_time ",
          "value": {
            "value": "#utcnow()",
            "type": "Expression"
          }
        }
      }
    ],
    "variables": {
      "start_time": {
        "type": "String"
      },
      "end_time ": {
        "type": "String"
      }
    },
    "folder": {
      "name": "Old Pipelines"
    },
    "annotations": []
  }
}
What am I missing, or what could be the problem with this process?
You have a blank space after the variable name end_time, i.e. "end_time ".
You can see the difference in my repro:
MyCode VS YourCode
Removing that space makes the execution run just fine.
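If you have many pipelines to check, a tiny script can catch this class of mistake. A minimal Python sketch over an exported pipeline definition (the file name GeneralTest.json is just an assumption for illustration):
import json

def find_padded_names(pipeline):
    """Return variable names with leading or trailing whitespace."""
    variables = pipeline.get("properties", {}).get("variables", {})
    return [name for name in variables if name != name.strip()]

# Hypothetical export of the pipeline above.
with open("GeneralTest.json") as f:
    pipeline = json.load(f)

print(find_padded_names(pipeline))  # expected: ['end_time ']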
I faced a similar issue when doing a Debug run of one of my pipelines. The error messages for these types of errors are not helpful when running in Debug mode.
What I have found is that if you publish the pipeline and then Trigger a Pipeline Run (instead of a Debug run), you can then go to Monitor Pipeline Runs and it will show you a more useful error message.
Apart from possible blank spaces in variable or parameter names, Data Factory also doesn't like hyphens, but only in parameter names; variable names are fine.
Validation passes, but then at debug time you get the same cryptic error.
I ran into this same error message in Data Factory today on the Copy activity. Everything passed validation, but this error would pop up on each debug run.
I have parameters configured on my dataset connections so that I can use dynamic queries against the data sources. In this case I was using explicit queries, so the parameters appeared irrelevant. I tried both blank values and null values; both failed the same way.
I then tried dummy but real text values and it worked! The pipeline isn't using those values for any work, so their content doesn't matter, but some part of the engine needs a non-null value in the parameters in order to execute.

Objects in array is not well supported error observed for ELK docker image

I'm using the latest ELK image for a Kibana dashboard. My JSON documents contain arrays of objects, and I'm not able to show those as fields in Kibana; it shows an "objects in arrays are not well supported" error message.
Following the Kibana documentation, I went through the link below, but I didn't find anything useful for the ELK Docker image:
https://github.com/istresearch/kibana-object-format
I tried to run the command
Run bin/kibana-plugin install <package.zip>
but it returned "run is unknown command". I removed "Run" and ran the remaining command, but it says the plugin is invalid.
I'm on a Linux box with Kibana 7.3.
Is it possible to overcome this issue? How do I deploy that plugin for the ELK image, or is there any other way to expose those array objects as fields in Kibana?
I'm not sure how to proceed. Please help me.
Sample Data:
{
  "expand": "schema,names",
  "startAt": 0,
  "maxResults": 50,
  "total": 4,
  "issues": [
    {
      "expand": "operations,versionedRepresentations,editmeta,changelog,renderedFields",
      "id": "1999875",
      "self": "https://amazon.kindle.com/jira/rest/api/2/issue/1999875",
      "key": "KINDLEAMZ-67578",
      "fields": {
        "summary": "contingency is displaying for confirmed card.",
        "priority": {
          "name": "P1",
          "id": "1"
        },
        "created": "2019-09-23T11:25:21.000+0000"
      }
    },
    {
      "expand": "operations,versionedRepresentations,editmeta,changelog,renderedFields",
      "id": "2019428",
      "self": "https://amazon.kindle.com/jira/rest/api/2/issue/2019428",
      "key": "KINDLEAMZ-68661",
      "fields": {
        "summary": "card",
        "priority": {
          "name": "P1",
          "id": "1"
        },
        "created": "2019-09-23T11:25:21.000+0000"
      }
    },
    {
      "expand": "operations,versionedRepresentations,editmeta,changelog,renderedFields",
      "id": "2010958",
      "self": "https://amazon.kindle.com/jira/rest/api/2/issue/2010958",
      "key": "KINDLEAMZ-68167",
      "fields": {
        "summary": "Test Card",
        "priority": {
          "name": "P1",
          "id": "1"
        },
        "created": "2019-09-23T11:25:21.000+0000"
      }
    }
  ]
}
I just want to fetch key, summary, and priority from each element of that array, but it's not working as expected: when I try to make a field, it shows as an array in Kibana. If this doesn't work with 7.3.0, should I downgrade to a lower version? The steps for Docker users are missing from that document. Is there any way to get those details?
Checking here: https://github.com/istresearch/kibana-object-format/releases, it looks like the plugin's latest release was for Elasticsearch 6.3. I guess that is the reason for your error.
I'm not sure there's a fix for this in Kibana. There are many issues on this subject that have been open for a long time, like: https://github.com/elastic/kibana/issues/3333.
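As a workaround that avoids the plugin entirely, you can reshape the data before indexing so that each array element becomes its own document. A minimal Python sketch, assuming Elasticsearch is reachable at localhost:9200; the index name jira-issues and file name jira_export.json are arbitrary stand-ins:
import json
import urllib.request

with open("jira_export.json") as f:
    data = json.load(f)

# Flatten the "issues" array: one Elasticsearch document per issue,
# keeping only the fields the dashboard needs.
for issue in data["issues"]:
    doc = {
        "key": issue["key"],
        "summary": issue["fields"]["summary"],
        "priority": issue["fields"]["priority"]["name"],
        "created": issue["fields"]["created"],
    }
    req = urllib.request.Request(
        "http://localhost:9200/jira-issues/_doc",
        data=json.dumps(doc).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)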

scrapoxy - How to limit the number of instances that get created

I am using this library to set up a proxy on DigitalOcean, and I am looking for a way to limit the number of droplets that get created. While looking at the documentation I came across this link, which suggests that to limit the droplets I need to add max. I did that, but it does not seem to work: right now a new droplet is being created every few seconds, and I want to reduce that to 2 if possible, just to test things out.
This is what I have in my configuration:
{
  "commander": {
    "password": "password"
  },
  "instance": {
    "port": 3128,
    "scaling": {
      "min": 1,
      "max": 2
    },
    "log": {
      "path": "/log"
    }
  },
  "providers": [
    {
      "type": "digitalocean",
      "token": "the-token-goes-here",
      "region": "SGP1",
      "size": "s-1vcpu-2gb",
      "sshKeyName": "ubuntu",
      "imageName": "name-of-image",
      "tags": "proxy,instance",
      "max": 2
    }
  ]
}
My understanding is that by adding max I should have been able to limit the droplets, but I see no difference. Is this the right place to add it? What am I missing here?
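While testing this, it helps to watch the actual droplet count from the DigitalOcean API rather than refreshing the console. A small Python sketch; the "proxy" tag matches the tags in the config above, and DIGITALOCEAN_TOKEN is assumed to be set in the environment:
import json
import os
import time
import urllib.request

def droplet_count(token, tag="proxy"):
    """Count droplets carrying the given tag via the DigitalOcean v2 API."""
    req = urllib.request.Request(
        "https://api.digitalocean.com/v2/droplets?tag_name=" + tag,
        headers={"Authorization": "Bearer " + token},
    )
    with urllib.request.urlopen(req) as resp:
        return len(json.load(resp)["droplets"])

token = os.environ["DIGITALOCEAN_TOKEN"]
for _ in range(10):  # sample every 30 seconds for ~5 minutes
    print(droplet_count(token))
    time.sleep(30)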

AWS EC2 Systems Manager Parameter Types

I'm trying to use Amazon EC2 Systems Manager (http://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) to create an "Automation" document that (amongst other things) tags an AMI it has just created.
You can create tags in a predetermined manner like this within "mainSteps":
...
{
  "name": "CreateTags",
  "action": "aws:createTags",
  "maxAttempts": 3,
  "onFailure": "Abort",
  "inputs": {
    "ResourceType": "EC2",
    "ResourceIds": ["{{ CreateImage.ImageId }}"],
    "Tags": [
      {
        "Key": "Original_AMI_ID",
        "Value": "Created from {{ SourceAmiId }}"
      }
    ]
  }
},
...
but to tag with a variable number of tags, I'm assuming the following change is necessary:
...
{
  "name": "CreateTags",
  "action": "aws:createTags",
  "maxAttempts": 3,
  "onFailure": "Abort",
  "inputs": {
    "ResourceType": "EC2",
    "ResourceIds": ["{{ CreateImage.ImageId }}"],
    "Tags": {{ Tags }}
  }
},
...
with the addition of a new parameter called 'Tags' of type 'MapList':
"parameters": {
"Tags": {
"type": "MapList"
}
}
since running the process complained about my using a 'String' type and said I should use a 'MapList'.
'MapList' is listed as a parameter type of the Amazon EC2 Systems Manager (http://docs.aws.amazon.com/systems-manager/latest/APIReference/top-level.html), but I have not yet found any documentation on how to define this type.
I have guessed at several formats, based both on their 'hardcoded' sample above and on tagging methods in their other APIs, to no avail:
[ { "Key": "Name", "Value": "newAmi" } ]
[ { "Key": "Name", "Values": [ "newAmi" ] } ]
1: { "Key": "Name", "Values": [ "newAmi" ] }
Does anyone know how to define the new parameter types introduced with the Amazon EC2 Systems Manager (specifically, 'MapList')?
Update:
Since the docs are lacking, Amazon Support is asking the automation team how best to tag AMIs using this method. I have found how to add a single tag as a parameter value in the console, though:
{ "Key": "TagName", "Value": "TagValue" }
My attempts to add multiple tags allowed the automation to start:
{ "Key": "TagName1", "Value": "TagValue1" }, { "Key": "TagName2", "Value": "TagValue2" }
but ultimately returns this generic error at runtime:
Internal Server Error. Please refer to Automation Service Troubleshooting Guide for more diagnosis details
It might seem like the [] is missing from around the array, but you seem to get those for free, because when I add them I get this error:
Parameter type error. [[ { "Key": "Description", "Value": "Desc" }, { "Key": "Name", "Value": "Nm" } ]] is defined as MapList.
Thanks for using the EC2 Systems Manager Automation feature. Here's the document I tested; it works:
{
  "schemaVersion": "0.3",
  "description": "Test tags.",
  "assumeRole": "arn:aws:iam::123456789012:role/TestRole",
  "parameters": {
    "Tags": {
      "default": [
        {
          "Key": "TagName1",
          "Value": "TagValue1"
        },
        {
          "Key": "TagName2",
          "Value": "TagValue2"
        }
      ],
      "type": "MapList"
    }
  },
  "mainSteps": [
    {
      "name": "CreateTags",
      "action": "aws:createTags",
      "maxAttempts": 3,
      "onFailure": "Abort",
      "inputs": {
        "ResourceType": "EC2",
        "ResourceIds": [
          "i-12345678"
        ],
        "Tags": "{{ Tags }}"
      }
    }
  ]
}
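For completeness, here is how you might pass that MapList parameter when starting the automation from code. A boto3 sketch, with MyTagDocument as a hypothetical document name, and assuming (worth verifying against the current SSM docs) that MapList values are passed as JSON-encoded strings:
import json

import boto3

ssm = boto3.client("ssm")

tags = [
    {"Key": "TagName1", "Value": "TagValue1"},
    {"Key": "TagName2", "Value": "TagValue2"},
]

# Parameters maps each name to a list of strings, so the MapList is
# serialized to a JSON string here (assumption, based on the document above).
response = ssm.start_automation_execution(
    DocumentName="MyTagDocument",  # hypothetical document name
    Parameters={"Tags": [json.dumps(tags)]},
)
print(response["AutomationExecutionId"])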
