Is there any way to reset the reserved resources of all slaves in Mesos, without calling the /unreserve HTTP endpoint for each slave one by one?
From the Mesos documentation:
/unreserve (since 0.25.0)
Suppose we want to unreserve the resources that we dynamically reserved above. We can send an HTTP POST request to the master’s /unreserve endpoint like so:
$ curl -i \
  -u <operator_principal>:<password> \
  -d slaveId=<slave_id> \
  -d resources='[
    {
      "name": "cpus",
      "type": "SCALAR",
      "scalar": { "value": 8 },
      "role": "ads",
      "reservation": {
        "principal": <reserver_principal>
      }
    },
    {
      "name": "mem",
      "type": "SCALAR",
      "scalar": { "value": 4096 },
      "role": "ads",
      "reservation": {
        "principal": <reserver_principal>
      }
    }
  ]' \
  -X POST http://<ip>:<port>/master/unreserve
Mesos doesn't directly provide any support for unreserving resources at more than one slave in a single operation. However, you can write a script that uses the /unreserve endpoint to unreserve the resources at all the slaves in the cluster, e.g., by fetching the list of slaves and their reserved resources from the /slaves endpoint on the master (see the reserved_resources_full key).
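For example, a rough sketch of such a script with curl and jq might look like the following. The master address and operator credentials are placeholders, and it assumes that everything listed under reserved_resources_full is a dynamic reservation your operator principal is allowed to remove:

# Sketch: unreserve all reported reservations, one agent at a time.
MASTER="<ip>:<port>"
CREDS="<operator_principal>:<password>"

curl -s "http://$MASTER/slaves" |
  jq -c '.slaves[] | {id: .id, resources: [(.reserved_resources_full // {}) | .[] | .[]]}' |
  while read -r slave; do
    slave_id=$(echo "$slave" | jq -r '.id')
    resources=$(echo "$slave" | jq -c '.resources')
    [ "$resources" = "[]" ] && continue   # nothing reserved on this agent
    curl -i -u "$CREDS" \
      -d slaveId="$slave_id" \
      -d resources="$resources" \
      -X POST "http://$MASTER/master/unreserve"
  done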
I am looking to send a pull request to multiple reviewers in Bitbucket. Currently I have the JSON and curl request below:
{
  "title": "PR-Test",
  "source": {
    "branch": {
      "name": "master"
    }
  },
  "destination": {
    "branch": {
      "name": "prd"
    }
  },
  "reviewers": [
    {
      "uuid": "{d543251-6455-4113-b4e4-2fbb1tb260}"
    }
  ],
  "close_source_branch": true
}
curl -u "user:pass" -H "Content-Type: application/json" https://api.bitbucket.org/2.0/repositories/companyname/my-repo/pullrequests -X POST --data #my-pr.json
The above curl command works. I need the JSON syntax to pass either multiple usernames or multiple UUIDs in the reviewers list.
https://developer.atlassian.com/bitbucket/api/2/reference/resource/repositories/%7Busername%7D/%7Brepo_slug%7D/pullrequests
The documentation you linked to seems to indicate that something like this should work:
"reviewers": [
{
"uuid": "{504c3b62-8120-4f0c-a7bc-87800b9d6f70}"
},
{
"uuid": "{bafabef7-b740-4ee0-9767-658b3253ecc0}"
}
]
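Merged into the payload from the question, my-pr.json would then look something like this (the two UUIDs are placeholders for your reviewers' actual UUIDs), posted with the same curl command as before:

{
  "title": "PR-Test",
  "source": {
    "branch": {
      "name": "master"
    }
  },
  "destination": {
    "branch": {
      "name": "prd"
    }
  },
  "reviewers": [
    {
      "uuid": "{504c3b62-8120-4f0c-a7bc-87800b9d6f70}"
    },
    {
      "uuid": "{bafabef7-b740-4ee0-9767-658b3253ecc0}"
    }
  ],
  "close_source_branch": true
}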
T2 instances can now be started with an additional option that allows more CPU bursting at additional cost.
SDK: http://docs.aws.amazon.com/aws-sdk-php/v3/api/api-ec2-2016-11-15.html#runinstances
I tried it: I can switch my instances to unlimited, so it should be possible.
However, when I added the new configuration option to the array, nothing changed; it's still set to "standard" as before.
Here is a JSON dump of the runInstances options array:
{
  "UserData": "....",
  "SecurityGroupIds": [
    "sg-04df967f"
  ],
  "InstanceType": "t2.micro",
  "ImageId": "ami-4e3a4051",
  "MaxCount": 1,
  "MinCount": 1,
  "SubnetId": "subnet-22ec130c",
  "Tags": [
    {
      "Key": "task",
      "Value": "test"
    },
    {
      "Key": "Name",
      "Value": "unlimitedtest"
    }
  ],
  "InstanceInitiatedShutdownBehavior": "terminate",
  "CreditSpecification": {
    "CpuCredits": "unlimited"
  }
}
It starts the EC2 instance successfully just as before, but the CreditSpecification setting is ignored.
Amazon doesn't let normal users contact support, so I hope someone here has a clue about it.
Hmmm... Using qualitatively the same run-instances JSON
{
  "ImageId": "ami-bf4193c7",
  "InstanceType": "t2.micro",
  "CreditSpecification": {
    "CpuCredits": "unlimited"
  }
}
worked for me - the instance shows this:
T2 Unlimited Enabled
in the "description" tab after selecting this instance in the ec2 console.
Is there a way to run an EmrActivity in AWS Data Pipeline on an existing cluster? We currently use Data Pipeline to run jobs in AWS EMR using EmrCluster and EmrActivity, but we'd like all pipelines to run on the same cluster. I've tried reading the documentation and building a pipeline in Architect, but I can't seem to find a way to do anything but create a cluster and run jobs on it; there doesn't seem to be a way to define a new pipeline that uses an existing cluster. If there is, how would I do it? We're currently using CloudFormation to create our pipelines, so if possible an example using CloudFormation would be preferable, but I'll take what I can get.
Yes, it is possible:
Launch your EMR cluster.
Start TaskRunner on the master instance with the option --workerGroup=name-of-the-worker-group (a sketch of the command follows this list).
In the activities of your pipeline, don't specify the runsOn parameter; pass your worker group instead.
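For step 2, the command on the master node looks roughly like this (a sketch based on the Task Runner documentation linked at the end of this answer; the jar location, credentials file, region and log bucket are placeholders):

# Run on the EMR master node after downloading the Task Runner jar
java -jar TaskRunner-1.0.jar \
  --config ~/credentials.json \
  --workerGroup=name-of-the-worker-group \
  --region=us-east-1 \
  --logUri=s3://mybucket/taskrunner-logs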
Here is an example of an activity with such a parameter, defined using CloudFormation:
...
{
  "Id": "S3ToRedshiftCopyActivity",
  "Name": "S3ToRedshiftCopyActivity",
  "Fields": [
    {
      "Key": "type",
      "StringValue": "RedshiftCopyActivity"
    },
    {
      "Key": "workerGroup",
      "StringValue": "name-of-the-worker-group"
    },
    {
      "Key": "insertMode",
      "StringValue": "#{myInsertMode}"
    },
    {
      "Key": "commandOptions",
      "StringValue": "FORMAT CSV"
    },
    {
      "Key": "dependsOn",
      "RefValue": "RedshiftTableCreateActivity"
    },
    {
      "Key": "input",
      "RefValue": "S3StagingDataNode"
    },
    {
      "Key": "output",
      "RefValue": "DestRedshiftTable"
    }
  ]
}
...
You can find detailed documentation on how to do that here:
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-how-task-runner-user-managed.html
I was trying to deploy an nginx Docker container with Mesos Marathon. I would like to set some environment variables in the container, so I added a parameters section to the JSON file, but after I added it the deployment failed. My JSON file is as follows:
{
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 0, "servicePort": 80, "protocol": "tcp" }
      ],
      "parameters": [
        { "key": "myhostname", "value": "a.corp.org" }
      ]
    }
  },
  "id": "nginx7",
  "instances": 1,
  "cpus": 0.25,
  "mem": 256,
  "uris": []
}
My launch script was: curl -X POST -H "Content-Type: application/json" 10.3.11.11:8080/v2/apps -d@"$@"
The command I ran was: ./launch.sh nginx.json
You used the wrong parameter key, myhostname. If you want to set a hostname for your container, it should be:
"parameters": [
{ "key": "hostname", "value": "a.corp.org" }
]
If you want to pass an environment variable, it should be:
"parameters": [
{ "key": "env", "value": "myhostname=a.corp.org" }
]
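Putting the env parameter back into the app definition from the question, nginx.json would then look something like this (only the parameters entry changes), and it can be launched with the same ./launch.sh nginx.json as before:

{
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 0, "servicePort": 80, "protocol": "tcp" }
      ],
      "parameters": [
        { "key": "env", "value": "myhostname=a.corp.org" }
      ]
    }
  },
  "id": "nginx7",
  "instances": 1,
  "cpus": 0.25,
  "mem": 256,
  "uris": []
}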
I would like to automate my Hive script to run every day, and one option for doing that is Data Pipeline. The problem is that I am exporting data from DynamoDB to S3 and then manipulating the data with a Hive script. I give the input and output inside the Hive script itself, and that's where the problem starts, because a HiveActivity has to have an input and an output, but I have to define them in the script file.
I am trying to find a way to automate this Hive script and am waiting for some ideas.
Cheers,
You can disable staging on HiveActivity to run an arbitrary Hive script:
stage = false
Do something like:
{
  "name": "DefaultActivity1",
  "id": "ActivityId_1",
  "type": "HiveActivity",
  "stage": "false",
  "scriptUri": "s3://bucket/query.hql",
  "scriptVariable": [
    "param1=value1",
    "param2=value2"
  ],
  "schedule": {
    "ref": "ScheduleId_1"
  },
  "runsOn": {
    "ref": "EmrClusterId_1"
  }
},
Another alternative to the HiveActivity is to use an EmrActivity, as in the following example:
{
  "schedule": {
    "ref": "DefaultSchedule"
  },
  "name": "EMR Activity name",
  "step": "command-runner.jar,hive-script,--run-hive-script,--args,-f,s3://bucket/path/query.hql",
  "runsOn": {
    "ref": "EmrClusterId"
  },
  "id": "EmrActivityId",
  "type": "EmrActivity"
}
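If you also need the parameters that scriptVariable provides in the HiveActivity variant, one option (an assumption on my part, relying on Hive's standard -d/--define flag rather than anything Data Pipeline specific) is to append them to the --args list and reference them in the script as ${hivevar:INPUT} and ${hivevar:OUTPUT}; the variable names and S3 paths here are just examples:

"step": "command-runner.jar,hive-script,--run-hive-script,--args,-d,INPUT=s3://bucket/input,-d,OUTPUT=s3://bucket/output,-f,s3://bucket/path/query.hql"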