I am using Mesos 1.0.1. I have added an agent with a new role, docker_gpu_worker. I register a framework with this role, but the framework does not receive any offers. Other frameworks (the same Java code) using other roles work fine. I have not restarted the three Mesos masters. Does anyone have an idea what might be going wrong?
At master/frameworks, I see my framework:
"{
"id": "fd01b1b0-eb73-4d40-8774-009171ae1db1-0701",
"name": "/data4/Users/mikeb/jobs/999",
"pid": "scheduler-77345362-b85c-4044-8db5-0106b9015119#x.x.x.x:57617",
"used_resources": {
"disk": 0,
"mem": 0,
"gpus": 0,
"cpus": 0
},
"offered_resources": {
"disk": 0,
"mem": 0,
"gpus": 0,
"cpus": 0
},
"capabilities": [],
"hostname": "x-x-x-x.ec2.internal",
"webui_url": "",
"active": true,
"user": "mikeb",
"failover_timeout": 10080,
"checkpoint": true,
"role": "docker_gpu_worker",
"registered_time": 1507028279.18887,
"unregistered_time": 0,
"principal": "test-framework-java",
"resources": {
"disk": 0,
"mem": 0,
"gpus": 0,
"cpus": 0
},
"tasks": [],
"completed_tasks": [],
"offers": [],
"executors": []
}"
At master/roles I see my role:
"{
"frameworks": [
"fd01b1b0-eb73-4d40-8774-009171ae1db1-0701",
"fd01b1b0-eb73-4d40-8774-009171ae1db1-0673",
"fd01b1b0-eb73-4d40-8774-009171ae1db1-0335"
],
"name": "docker_gpu_worker",
"resources": {
"cpus": 0,
"disk": 0,
"gpus": 0,
"mem": 0
},
"weight": 1
}"
At master/slaves I see my agent:
"{
"id": "fd01b1b0-eb73-4d40-8774-009171ae1db1-S5454",
"pid": "slave(1)#x.x.x.x:5051",
"hostname": "x-x-x-x.ec2.internal",
"registered_time": 1506692213.24938,
"resources": {
"disk": 35056,
"mem": 59363,
"gpus": 4,
"cpus": 32,
"ports": "[31000-32000]"
},
"used_resources": {
"disk": 0,
"mem": 0,
"gpus": 0,
"cpus": 0
},
"offered_resources": {
"disk": 0,
"mem": 0,
"gpus": 0,
"cpus": 0
},
"reserved_resources": {
"docker_gpu_worker": {
"disk": 35056,
"mem": 59363,
"gpus": 4,
"cpus": 32,
"ports": "[31000-32000]"
}
},
"unreserved_resources": {
"disk": 0,
"mem": 0,
"gpus": 0,
"cpus": 0
},
"attributes": {},
"active": true,
"version": "1.0.1",
"reserved_resources_full": {
"docker_gpu_worker": [
{
"name": "gpus",
"type": "SCALAR",
"scalar": {
"value": 4
},
"role": "docker_gpu_worker"
},
{
"name": "cpus",
"type": "SCALAR",
"scalar": {
"value": 32
},
"role": "docker_gpu_worker"
},
{
"name": "mem",
"type": "SCALAR",
"scalar": {
"value": 59363
},
"role": "docker_gpu_worker"
},
{
"name": "disk",
"type": "SCALAR",
"scalar": {
"value": 35056
},
"role": "docker_gpu_worker"
},
{
"name": "ports",
"type": "RANGES",
"ranges": {
"range": [
{
"begin": 31000,
"end": 32000
}
]
},
"role": "docker_gpu_worker"
}
]
},
"used_resources_full": [],
"offered_resources_full": []
}"
We have tracked the problem to this Mesos agent config:
--isolation="filesystem/linux,cgroups/devices,gpu/nvidia"
With that flag removed, the agent works properly, but without access to GPU resources. According to the docs for Nvidia GPU support, this configuration is required, and those docs seem to indicate that version 1.0.1 supports it. We are continuing to investigate.
The GPU_RESOURCES capability must be enabled for frameworks.
As illustrated in http://mesos.readthedocs.io/en/latest/gpu-support/,
this can be achieved, for example, by specifying --framework_capabilities="GPU_RESOURCES" in the mesos-execute command, or with code like this in C++:
FrameworkInfo framework;
framework.add_capabilities()->set_type(
FrameworkInfo::Capability::GPU_RESOURCES);
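If your framework is written against the Python bindings rather than C++, a rough equivalent is sketched below (the mesos.interface protobuf names are assumed here; the Java bindings expose the same FrameworkInfo.Capability message):
```python
# Sketch, assuming the mesos.interface protobuf bindings:
# advertise the GPU_RESOURCES capability alongside the role.
from mesos.interface import mesos_pb2

framework = mesos_pb2.FrameworkInfo()
framework.user = ""                      # let Mesos pick the current user
framework.name = "gpu-framework"         # hypothetical name
framework.role = "docker_gpu_worker"

capability = framework.capabilities.add()
capability.type = mesos_pb2.FrameworkInfo.Capability.GPU_RESOURCES
```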
For Marathon frameworks, instead, the Marathon service must be started with the option --enable_features "gpu_resources", as indicated in "Enable GPU resources (CUDA) on DC/OS".
You can register roles with the masters statically (e.g. via the master's --roles flag).
If you add an agent role at runtime, it will not be known to the master,
and a master restart is required for the master to see this role.
Try restarting the Mesos masters.
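To check whether the masters already know about the role, you can query the leading master's /roles endpoint; a quick sketch (the master host below is a placeholder):
```python
# Sketch: list the roles the leading Mesos master currently knows about.
import requests

resp = requests.get("http://mesos-master.example.com:5050/roles")
resp.raise_for_status()
for role in resp.json().get("roles", []):
    print(role["name"], role.get("frameworks", []))
```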
The goal is to get this working in MkDocs, driven by a local AJAX server.
This one is hard to give an example for, but I will. Before I do that: the problem is that I want to use various AJAX endpoints to drive Vega visuals in MkDocs, but I run into CORS permission errors.
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://machine1:8080/dataflare. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing). Status code: 200.
I have struggled to find documentation on how to solve this, so does anyone know how to enable this in MkDocs?
The sample is long, but it works until you use a local data source other than the hosted Vega data. When you change the AJAX endpoint from "url": "https://vega.github.io/vega/data/flare.json" to http://myhost:8080/shortflare (or the fullflare endpoint), you get the CORS permission error above.
So should the MkDocs client mark the endpoint as a safe cross-site source, or should the Bottle AJAX server be sending a CORS header? I don't understand why the original AJAX source works when the Bottle endpoint does not.
Now the hard part: showing an example.
This simple Bottle server mimics the AJAX endpoint for the original flare data:
a sample of the data is served at http://myhost:8080/shortflare
the full dataset at http://myhost:8080/fullflare is served via proxy from https://vega.github.io/vega/data/flare.json
from bottle import route, run, template
import requests
DATAFLARE = '''
[
{
"id": 1,
"name": "flare"
},
{
"id": 2,
"name": "analytics",
"parent": 1
},
{
"id": 3,
"name": "cluster",
"parent": 2
},
{
"id": 4,
"name": "AgglomerativeCluster",
"parent": 3,
"size": 3938
}
]
'''
@route('/shortflare')
def getflare():
    return DATAFLARE

@route('/fullflare')
def proxyflare():
    return requests.get('https://vega.github.io/vega/data/flare.json').text

if __name__ == "__main__":
    run(host='0.0.0.0', port=8080, debug=True)
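To see the problem from the browser's point of view, you can fetch the endpoint directly and inspect the response headers; with the server above there is no Access-Control-Allow-Origin header, which is exactly what the error message complains about (the hostname is a placeholder):
```python
# Quick check of the Bottle endpoint's response headers.
import requests

resp = requests.get("http://myhost:8080/shortflare")
print(resp.status_code)
print(resp.headers.get("Access-Control-Allow-Origin"))  # None with the server above
print(resp.json()[:2])                                   # first two flare records
```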
MkDocs setup (i.e. mkdocs.yml):
site_name: VEGA
dev_addr: '0.0.0.0:2001'
theme:
  name: material
  nav_style: dark
  palette:
    accent: pink
    primary: lime
plugins:
  - search
  - charts
markdown_extensions:
  - pymdownx.superfences:
      custom_fences:
        - name: vegalite
          class: vegalite
          format: !!python/name:mkdocs_charts_plugin.fences.fence_vegalite
extra_javascript:
  - https://cdn.jsdelivr.net/npm/vega@5
  - https://cdn.jsdelivr.net/npm/vega-lite@5
  - https://cdn.jsdelivr.net/npm/vega-embed@6
The virtualenv (ve) requires:
mkdocs==1.2.3
mkdocs-charts-plugin==0.0.6
mkdocs-material==7.3.6
And the Markdown to produce the Vega-Lite graphic (add this to index.md or any page):
Relational maps.
```vegalite
{
"$schema": "https://vega.github.io/schema/vega/v5.json",
"description": "An example of Cartesian layouts for a node-link diagram of hierarchical data.",
"width": 600,
"height": 1600,
"padding": 5,
"signals": [
{
"name": "labels", "value": true,
"bind": {"input": "checkbox"}
},
{
"name": "layout", "value": "tidy",
"bind": {"input": "radio", "options": ["tidy", "cluster"]}
},
{
"name": "links", "value": "diagonal",
"bind": {
"input": "select",
"options": ["line", "curve", "diagonal", "orthogonal"]
}
},
{
"name": "separation", "value": false,
"bind": {"input": "checkbox"}
}
],
"data": [
{
"name": "tree",
"url": "https://vega.github.io/vega/data/flare.json",
"transform": [
{
"type": "stratify",
"key": "id",
"parentKey": "parent"
},
{
"type": "tree",
"method": {"signal": "layout"},
"size": [{"signal": "height"}, {"signal": "width - 100"}],
"separation": {"signal": "separation"},
"as": ["y", "x", "depth", "children"]
}
]
},
{
"name": "links",
"source": "tree",
"transform": [
{ "type": "treelinks" },
{
"type": "linkpath",
"orient": "horizontal",
"shape": {"signal": "links"}
}
]
}
],
"scales": [
{
"name": "color",
"type": "linear",
"range": {"scheme": "magma"},
"domain": {"data": "tree", "field": "depth"},
"zero": true
}
],
"marks": [
{
"type": "path",
"from": {"data": "links"},
"encode": {
"update": {
"path": {"field": "path"},
"stroke": {"value": "#ccc"}
}
}
},
{
"type": "symbol",
"from": {"data": "tree"},
"encode": {
"enter": {
"size": {"value": 100},
"stroke": {"value": "#fff"}
},
"update": {
"x": {"field": "x"},
"y": {"field": "y"},
"fill": {"scale": "color", "field": "depth"}
}
}
},
{
"type": "text",
"from": {"data": "tree"},
"encode": {
"enter": {
"text": {"field": "name"},
"fontSize": {"value": 9},
"baseline": {"value": "middle"}
},
"update": {
"x": {"field": "x"},
"y": {"field": "y"},
"dx": {"signal": "datum.children ? -7 : 7"},
"align": {"signal": "datum.children ? 'right' : 'left'"},
"opacity": {"signal": "labels ? 1 : 0"}
}
}
}
]
}
```
I actually found out why: the Bottle server needs to send the proper CORS header. This is done by adding these few lines to the Bottle server.
from bottle_cors_plugin import cors_plugin
from bottle import app

app = app()
app.install(cors_plugin('*'))  # adds the Access-Control-Allow-Origin header to every response

if __name__ == "__main__":
    run(app=app, host='0.0.0.0', port=8080, debug=True)
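If you prefer not to add a dependency, the same effect can be achieved with a plain Bottle after_request hook that sets the header yourself; this is only a sketch, and you may want to restrict the origin to your MkDocs dev address instead of '*':
```python
# Alternative without bottle_cors_plugin: set the CORS header manually.
from bottle import hook, response, route, run

@hook('after_request')
def enable_cors():
    # Allow the MkDocs page to read responses from this server.
    response.headers['Access-Control-Allow-Origin'] = '*'

@route('/shortflare')
def getflare():
    return DATAFLARE  # as defined in the server above

if __name__ == "__main__":
    run(host='0.0.0.0', port=8080, debug=True)
```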
Well, I am quite a "newb" regarding ES, and regarding aggregations there are no words in the dictionary to describe my level :p
Today I am facing an issue where I am trying to create a query that does something similar to a SQL DISTINCT, but combined with filters. I have this document (of course, an abstraction of the real situation):
{
"id": "1",
"createdAt": 1626783747,
"updatedAt": 1626783747,
"isAvailable": true,
"kind": "document",
"classification": {
"id": 1,
"name": "a_name_for_id_1"
},
"structure": {
"material": "cartoon",
"thickness": 5
},
"shared": true,
"objective": "stackoverflow"
}
All the data in the above document can vary, but some values can be redundant, such as classification.id, kind and structure.material.
So, in order to fulfil my requirements, I would like to "group by" these 3 fields in order to get each unique combination. Going deeper, with the following data I should get the following possibilities:
[{
"id": "1",
"createdAt": 1626783747,
"updatedAt": 1626783747,
"isAvailable": true,
"kind": "document",
"classification": {
"id": 1,
"name": "a_name_for_id_1"
},
"structure": {
"material": "cartoon",
"thickness": 5
},
"shared": true,
"objective": "stackoverflow"
},
{
"id": "2",
"createdAt": 1626783747,
"updatedAt": 1626783747,
"isAvailable": true,
"kind": "document",
"classification": {
"id": 2,
"name": "a_name_for_id_2"
},
"structure": {
"material": "iron",
"thickness": 3
},
"shared": true,
"objective": "linkedin"
},
{
"id": "3",
"createdAt": 1626783747,
"updatedAt": 1626783747,
"isAvailable": false,
"kind": "document",
"classification": {
"id": 2,
"name": "a_name_for_id_2"
},
"structure": {
"material": "paper",
"thickness": 1
},
"shared": false,
"objective": "tiktok"
},
{
"id": "4",
"createdAt": 1626783747,
"updatedAt": 1626783747,
"isAvailable": true,
"kind": "document",
"classification": {
"id": 3,
"name": "a_name_for_id_3"
},
"structure": {
"material": "cartoon",
"thickness": 5
},
"shared": false,
"objective": "snapchat"
},
{
"id": "5",
"createdAt": 1626783747,
"updatedAt": 1626783747,
"isAvailable": true,
"kind": "document",
"classification": {
"id": 3,
"name": "a_name_for_id_3"
},
"structure": {
"material": "paper",
"thickness": 1
},
"shared": true,
"objective": "twitter"
},
{
"id": "6",
"createdAt": 1626783747,
"updatedAt": 1626783747,
"isAvailable": false,
"kind": "document",
"classification": {
"id": 3,
"name": "a_name_for_id_3"
},
"structure": {
"material": "iron",
"thickness": 3
},
"shared": true,
"objective": "facebook"
}
]
based on the above, I should get the following results in the "buckets":
document 1 cartoon
document 2 iron
document 2 paper
document 3 cartoon
document 3 paper
document 3 iron
Of course, for the sake of this example (and to make it easier), I don't have any duplicates yet.
However, on top of that, I need some "pre-filters", as I only want:
Documents that are available: isAvailable=true
Documents whose structure thickness is between 2 and 4 inclusive: 2 <= structure.thickness <= 4
Documents that are shared: shared=true
So, compared to the first set of results, I should then only get the following combinations:
document 1 cartoon -> not a valid result, thickness > 4
document 2 iron
document 2 paper -> not a valid result, isAvailable != true
document 3 cartoon -> not a valid result, thickness > 4
document 3 paper -> not a valid result, thickness < 2
document 3 iron -> not a valid result, isAvailable != true
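To make the expected result concrete, here is the same filter-and-group logic in plain Python over the sample documents above (just an illustration of what I am after, not the Elasticsearch query):
```python
# Illustration only: apply the pre-filters, then collect the distinct
# (kind, classification.id, structure.material) combinations.
docs = [...]  # the six sample documents above

combinations = {
    (d["kind"], d["classification"]["id"], d["structure"]["material"])
    for d in docs
    if d["isAvailable"] and d["shared"] and 2 <= d["structure"]["thickness"] <= 4
}
print(combinations)  # {('document', 2, 'iron')}
```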
If you're still reading, well.. thanks! xD
So, as you can see, I need all the possible combinations following the static pattern kind <> classification_id <> structure_material that match the filters on isAvailable, thickness and shared.
Regarding the output, the hits don't matter to me, as I don't need the documents, only the combinations kind <> classification_id <> structure_material :)
Thanks for any help :)
Max
You can go with the cardinality aggregation combined with your existing filters. Please check this URL and let me know if you have any queries:
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-cardinality-aggregation.html
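For reference, a minimal cardinality aggregation with the Python client could look like the sketch below (endpoint and index name are assumptions); note that it returns the approximate number of distinct values of a field:
```python
# Sketch: count distinct materials among the filtered documents.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")    # assumed endpoint
resp = es.search(index="index-latest", body={  # index name borrowed from the query below
    "size": 0,
    "query": {"bool": {"filter": [
        {"term": {"isAvailable": True}},
        {"range": {"structure.thickness": {"gte": 2, "lte": 4}}},
        {"term": {"shared": True}},
    ]}},
    "aggs": {"distinct_materials": {
        "cardinality": {"field": "structure.material.keyword"}
    }},
})
print(resp["aggregations"]["distinct_materials"]["value"])
```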
Thanks to a colleague, I finally got it working as expected!
Query:
GET index-latest/_search
{
"size": 0,
"query": {
"bool": {
"filter": [
{
"term": {
"isAvailable": true
}
},
{
"range": {
"structure.thickness": {
"gte": 2,
"lte": 4
}
}
},
{
"term": {
"shared": true
}
}
]
}
},
"aggs": {
"my_agg_example": {
"composite": {
"size": 10,
"sources": [
{
"kind": {
"terms": {
"field": "kind.keyword",
"order": "asc"
}
}
},
{
"classification_id": {
"terms": {
"field": "classification.id",
"order": "asc"
}
}
},
{
"structure_material": {
"terms": {
"field": "structure.material.keyword",
"order": "asc"
}
}
}
]
}
}
}
}
The given result is then:
{
"took": 11,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 1,
"relation": "eq"
},
"max_score": null,
"hits": []
},
"aggregations": {
"my_agg_example": {
"after_key": {
"kind": "document",
"classification_id": 2,
"structure_material": "iron"
},
"buckets": [
{
"key": {
"kind": "document",
"classification_id": 2,
"structure_material": "iron"
},
"doc_count": 1
}
]
}
}
}
So, as we can see, we get the following bucket:
{
"key": {
"kind": "document",
"classification_id": 2,
"structure_material": "iron"
},
"doc_count": 1
}
Note: Be careful regarding the type of your field: putting .keyword on classification.id resulted in no results in the buckets. .keyword should be used only on string-like fields, as far as I understood (correct me if I am wrong).
As expected, we have the following result (compared to the initial question):
document 2 iron
Note: Be careful, the order of the elements within aggs.<name>.composite.sources plays a role in how the returned buckets are ordered.
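One more practical note: the composite aggregation is paginated, so if there can be more combinations than size, you have to feed the returned after_key back in as after and loop. A sketch with the Python client (endpoint and index name assumed):
```python
# Sketch: page through the composite aggregation using after_key.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")   # assumed endpoint

sources = [
    {"kind": {"terms": {"field": "kind.keyword", "order": "asc"}}},
    {"classification_id": {"terms": {"field": "classification.id", "order": "asc"}}},
    {"structure_material": {"terms": {"field": "structure.material.keyword", "order": "asc"}}},
]
filters = [
    {"term": {"isAvailable": True}},
    {"range": {"structure.thickness": {"gte": 2, "lte": 4}}},
    {"term": {"shared": True}},
]

after_key = None
while True:
    composite = {"size": 10, "sources": sources}
    if after_key:
        composite["after"] = after_key
    resp = es.search(index="index-latest", body={
        "size": 0,
        "query": {"bool": {"filter": filters}},
        "aggs": {"my_agg_example": {"composite": composite}},
    })
    agg = resp["aggregations"]["my_agg_example"]
    if not agg["buckets"]:
        break
    for bucket in agg["buckets"]:
        print(bucket["key"], bucket["doc_count"])
    after_key = agg.get("after_key")
```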
Thanks!
There is a step in my Azure pipeline YAML that requires the step number of the first failed step. Is there a way to retrieve this information (preferably in a bash task)?
The idea is to retrieve the logs of the failed step: .../_apis/build/builds/777777/logs/3
I'm not good at bash scripting, but you need to:
Call the Azure DevOps REST API on the timeline endpoint first:
https://dev.azure.com/{{organization}}/{{project}}/_apis/build/builds/3477/timeline?api-version=6.0
where 3477 is your build ID.
Then go through the response and find the first record with result=failed:
{
"previousAttempts": [],
"id": "5caf77c8-9b10-50ef-b5c7-ca89c63e1c86",
"parentId": "12f1170f-54f2-53f3-20dd-22fc7dff55f9",
"type": "Task",
"name": "Run a multi-line script",
"startTime": "2020-09-07T12:00:04.5033333Z",
"finishTime": "2020-09-07T12:00:04.7466667Z",
"currentOperation": null,
"percentComplete": null,
"state": "completed",
"result": "failed",
"resultCode": null,
"changeId": 10,
"lastModified": "0001-01-01T00:00:00",
"workerName": "Hosted Agent",
"order": 4,
"details": null,
"errorCount": 1,
"warningCount": 0,
"url": null,
"log": {
"id": 7,
"type": "Container",
"url": "https://dev.azure.com/thecodemanual/4fa6b279-3db9-4cb0-aab8-e06c2ad550b2/_apis/build/builds/3477/logs/7"
},
"task": {
"id": "d9bafed4-0b18-4f58-968d-86655b4d2ce9",
"name": "CmdLine",
"version": "2.164.2"
},
"attempt": 1,
"identifier": null,
"issues": [
{
"type": "error",
"category": "General",
"message": "Bash exited with code '1'.",
"data": {
"type": "error",
"logFileLineNumber": "15"
}
}
]
},
In the log property of that record you will find the URL of your log:
"log": {
"id": 7,
"type": "Container",
"url": "https://dev.azure.com/thecodemanual/4fa6b279-3db9-4cb0-aab8-e06c2ad550b2/_apis/build/builds/3477/logs/7"
},
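Putting it together, here is a small sketch in Python with requests (the same logic can be expressed with curl and jq in a bash task; organization, project, build id and the personal access token are placeholders):
```python
# Sketch: find the first failed task in the build timeline and print its log URL.
import requests

organization, project, build_id = "myorg", "myproject", 3477   # placeholders
pat = "<personal-access-token>"                                # needs Build (read) scope

url = (f"https://dev.azure.com/{organization}/{project}"
       f"/_apis/build/builds/{build_id}/timeline?api-version=6.0")
resp = requests.get(url, auth=("", pat))
resp.raise_for_status()

failed = [r for r in resp.json()["records"]
          if r.get("type") == "Task" and r.get("result") == "failed"]
failed.sort(key=lambda r: r.get("order") or 0)

if failed:
    first = failed[0]
    print(first["name"])
    if first.get("log"):
        print(first["log"]["url"])   # e.g. .../_apis/build/builds/3477/logs/7
```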
Below is my JSON, which I need to process using jq in a bash script. I need to get the "id" field value. Since there are 3 records in this JSON, the record with the maximum id should be used, so after processing the JSON below I should get 170.
I am a newbie and have very limited exposure to bash.
{
"count": 3,
"value": [
{
"properties": {},
"tags": [], "validationResults": [],
"plans": [
{
"planId": "49699e0f-b893-4633-bc05-754b8a562d07"
}
], "triggerInfo": {},
"id": 170,
"buildNumber": "20181011.8", "status": "completed", "result": "succeeded", "queueTime": "2018-10-11T15:56:24.9611153Z", "startTime": "2018-10-11T15:56:28.3668144Z", "finishTime": "2018-10-11T15:57:20.5163422Z",
"url": "https://indiatelecom.visualstudio.com/d354caa2-2e88-414a-829b-25df3aceaaaf/_apis/build/Builds/170",
"buildNumberRevision": 8, "uri": "vstfs:///Build/Build/170",
"sourceBranch": "refs/heads/master", "sourceVersion": "4303c19f8fda79e35fcb598219d5dca6bb274c2d",
"priority": "normal", "reason": "manual", "lastChangedDate": "2018-10-11T15:57:20.797Z", "parameters": "{\"system.debug\":\"false\"}",
"orchestrationPlan": {
"planId": "49699e0f-b893-4633-bc05-754b8a562d07"
}, "keepForever": false, "retainedByRelease": false, "triggeredByBuild": null
},
{ "properties": {}, "tags": [], "validationResults": [],
"plans": [ { "planId": "15026a2f-c725-4e52-974b-61e01a940661"
} ],
"triggerInfo": {},
"id": 160,
"buildNumber": "20181009.20", "status": "completed", "result": "succeeded", "queueTime": "2018-10-09T16:47:42.2954075Z", "startTime": "2018-10-09T16:47:43.8034575Z",
"finishTime": "2018-10-09T16:48:35.8340469Z", "url": "https://indiatelecom.visualstudio.com/d354caa2-2e88-414a-829b-25df3aceaaaf/_apis/build/Builds/160",
"buildNumberRevision": 20, "uri": "vstfs:///Build/Build/160",
"sourceBranch": "refs/heads/master", "sourceVersion": "19a55c7482083785265b86015150521b40230c11",
"priority": "normal", "reason": "manual",
"lastChangedDate": "2018-10-09T16:48:36.057Z", "parameters": "{\"system.debug\":\"false\"}",
"orchestrationPlan": {
"planId": "15026a2f-c725-4e52-974b-61e01a940661"
},
"keepForever": false, "retainedByRelease": false,
"triggeredByBuild": null },
{
"properties": {}, "tags": [],
"validationResults": [], "plans": [
{
"planId": "e45d9da8-4d95-42b7-aa23-478e1c1c49f5"
}
],
"triggerInfo": {},
"id": 147,
"buildNumber": "20181009.7", "status": "completed",
"result": "succeeded", "queueTime": "2018-10-09T15:15:47.0248009Z",
"startTime": "2018-10-09T15:15:50.8899892Z", "finishTime": "2018-10-09T15:16:47.7866356Z",
"url": "https://indiatelecom.visualstudio.com/d354caa2-2e88-414a-829b-25df3aceaaaf/_apis/build/Builds/147",
"buildNumberRevision": 7, "uri": "vstfs:///Build/Build/147",
"sourceBranch": "refs/heads/master", "sourceVersion": "70fccb138a2f2a9dfe18290c468959102f504067",
"priority": "normal", "reason": "manual",
"lastChangedDate": "2018-10-09T15:16:48.16Z",
"parameters": "{\"system.debug\":\"false\"}", "orchestrationPlan": {
"planId": "e45d9da8-4d95-42b7-aa23-478e1c1c49f5"
}, "keepForever": false, "retainedByRelease": false,
"triggeredByBuild": null }
] }
The ids are stored in an array under the key value. .value[].id lists the ids; if you put them into an array, you can call max on it:
jq '[.value[].id] | max' < file.json
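If jq is not available, the same thing in a few lines of Python (reading the same file) would be:
```python
# Equivalent of: jq '[.value[].id] | max' < file.json
import json

with open("file.json") as fh:
    data = json.load(fh)

print(max(item["id"] for item in data["value"]))
```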
I have deployed Elasticsearch and Kibana with the application definitions below.
elasticsearch.json:
{
"id": "elasticsearch",
"container": {
"type": "DOCKER",
"docker": {
"image": "docker.elastic.co/elasticsearch/elasticsearch:6.3.2",
"network": "BRIDGE",
"portMappings": [
{ "hostPort": 9200, "containerPort": 9200, "servicePort": 0 },
{ "hostPort": 9300, "containerPort": 9300, "servicePort": 0 }
],
"forcePullImage":true
}
},
"instances": 1,
"cpus": 1,
"mem": 3048,
"labels":{
"HAPROXY_GROUP":"external",
"HAPROXY_0_VHOST":"publichost",
"HAPROXY_0_MODE":"http",
"DCOS_PACKAGE_NAME": "elasticsearch"
},
"env": {
"ES_JAVA_OPTS": "-Xmx2048m -Xms2048m"
}
}
This deploys Elasticsearch on the "/" context path.
kibana.json:
{
"id": "kibana",
"container": {
"type": "DOCKER",
"docker": {
"image": "docker.elastic.co/kibana/kibana:6.3.2",
"network": "BRIDGE",
"portMappings": [
{ "hostPort": 5601, "containerPort": 5601, "servicePort":0}
],
"forcePullImage":true
},
"volumes": [
{
"containerPath": "/usr/share/kibana/config",
"hostPath": "/home/azureuser/kibana/config",
"mode": "RW"
}
]
},
"instances": 1,
"cpus": 0.5,
"mem": 2000,
"labels":{
"HAPROXY_0_VHOST":"publichost",
"HAPROXY_0_MODE":"http",
"DCOS_SERVICE_NAME": "kibana",
"DCOS_SERVICE_SCHEME": "http",
"DCOS_SERVICE_PORT_INDEX": "0"
}
}
This also deploys Kibana on the "/" context path.
So how do I access Kibana?
When I try to access http://publichost/app/kibana, it doesn't work, because Elasticsearch is on "/".
I did it by removing "HAPROXY_GROUP": "external" from the Elasticsearch definition. Now marathon-lb will not expose Elasticsearch, so it won't be accessible via the browser, which leaves the vhost to Kibana.