Local [Polkadot + Statemine] testnet with polkadot-launch - substrate

I want to deploy a local Polkadot Testnet via polkadot-launch.
I built the executables from:
polkadot: v0.9.9-1
cumulus: statemine_v3
This is the config.json:
{
  "relaychain": {
    "bin": "./bin/polkadot",
    "chain": "rococo-local",
    "nodes": [
      {
        "name": "alice",
        "wsPort": 9944,
        "port": 30444
      },
      {
        "name": "bob",
        "wsPort": 9955,
        "port": 30555
      },
      {
        "name": "charlie",
        "wsPort": 9966,
        "port": 30666
      },
      {
        "name": "dave",
        "wsPort": 9977,
        "port": 30777
      }
    ],
    "genesis": {
      "runtime": {
        "runtime_genesis_config": {
          "configuration": {
            "config": {
              "validation_upgrade_frequency": 1,
              "validation_upgrade_delay": 1
            }
          }
        }
      }
    }
  },
  "parachains": [
    {
      "bin": "./bin/polkadot-collator",
      "id": "200",
      "balance": "1000000000000000000000",
      "nodes": [
        {
          "wsPort": 9988,
          "port": 31200,
          "name": "alice",
          "flags": ["--force-authoring", "--", "--execution=wasm"]
        }
      ]
    },
    {
      "bin": "./bin/polkadot-collator",
      "id": "300",
      "balance": "1000000000000000000000",
      "nodes": [
        {
          "wsPort": 9999,
          "port": 31300,
          "name": "alice",
          "flags": ["--force-authoring", "--", "--execution=wasm"]
        }
      ]
    }
  ],
  "simpleParachains": [
    {
      "bin": "./bin/adder-collator",
      "id": "400",
      "port": "31400",
      "name": "alice",
      "balance": "1000000000000000000000"
    }
  ],
  "hrmpChannels": [
    {
      "sender": 200,
      "recipient": 300,
      "maxCapacity": 8,
      "maxMessageSize": 512
    }
  ],
  "types": {},
  "finalization": false
}
When I call polkadot-launch, alice, bob, charlie and dave all have OK logs:
$ tail -f alice.log
2021-09-25 19:34:30 🙌 Starting consensus session on top of parent 0x7df9c10b7ff6ded2b2712273633582c445345541a3a5d20fab85e67c041bab5c
2021-09-25 19:34:30 🎁 Prepared block for proposing at 8 [hash: 0x9b44a965ee76e35f4721888c53f31ebe8920224bcea359e82ff8dedb1734502b; parent_hash: 0x7df9…ab5c; extrinsics (2): [0xc67d…93c7, 0xecd4…0d35]]
2021-09-25 19:34:30 🔖 Pre-sealed block for proposal at 8. Hash now 0xd167ba86acc9f96c1282793387bc37f09b9cd4132113e8c59027958673bd22ae, previously 0x9b44a965ee76e35f4721888c53f31ebe8920224bcea359e82ff8dedb1734502b.
2021-09-25 19:34:30 ✨ Imported #8 (0xd167…22ae)
2021-09-25 19:34:32 💤 Idle (3 peers), best: #8 (0xd167…22ae), finalized #5 (0x4ab2…f178), ⬇ 1.4kiB/s ⬆ 2.2kiB/s
2021-09-25 19:34:36 🙌 Starting consensus session on top of parent 0xd167ba86acc9f96c1282793387bc37f09b9cd4132113e8c59027958673bd22ae
2021-09-25 19:34:36 🎁 Prepared block for proposing at 9 [hash: 0x5978b3aded771de9bdbcbe1cbb8d65f36dd0f85db791cf4faa7a43c2ad9a720e; parent_hash: 0xd167…22ae; extrinsics (2): [0xf381…4795, 0xfe0b…8a55]]
2021-09-25 19:34:36 🔖 Pre-sealed block for proposal at 9. Hash now 0x0f3261953f7ee2bf7973d5b3b988eceaf001ab8f8c0ee770d2c47e360e597caa, previously 0x5978b3aded771de9bdbcbe1cbb8d65f36dd0f85db791cf4faa7a43c2ad9a720e.
2021-09-25 19:34:36 ✨ Imported #9 (0x0f32…7caa)
2021-09-25 19:34:37 💤 Idle (3 peers), best: #9 (0x0f32…7caa), finalized #6 (0x6046…7629), ⬇ 1.5kiB/s ⬆ 2.7kiB/s
2021-09-25 19:34:42 ✨ Imported #10 (0x7a55…a45a)
2021-09-25 19:34:42 💤 Idle (3 peers), best: #10 (0x7a55…a45a), finalized #7 (0x7df9…ab5c), ⬇ 1.6kiB/s ⬆ 1.6kiB/s
2021-09-25 19:34:47 💤 Idle (3 peers), best: #10 (0x7a55…a45a), finalized #8 (0xd167…22ae), ⬇ 3.0kiB/s ⬆ 3.4kiB/s
2021-09-25 19:34:48 👶 New epoch 1 launching at block 0x7602…ec7f (block slot 272101548 >= start slot 272101548).
2021-09-25 19:34:48 👶 Next epoch starts at slot 272101558
2021-09-25 19:34:48 ✨ Imported #11 (0x7602…ec7f)
2021-09-25 19:34:50 🥩 Round #9 concluded, committed: SignedCommitment { commitment: Commitment { payload: 0xf0a3fb9ad9246d2071beed4daebaf1145e4ccc2939d2a739a832cbbf51fc28ed, block_number: 9, validator_set_id: 0 }, signatures: [Some(Signature(d586df10c3502ca9f0babf7b032e409a79c2506f384ea48216a6d802435d7cd45caf1eca39d7972d01e0a019de10061c40b8f13d5e07f4c7a1865e36d0a72c3100)), None, Some(Signature(ac6edcd9d5df9ed08ddb3870ab058b2709c2e2c31aef9e758a76cc0032971f8d6063048a5fa051f5b6d0be98ef12e9d964b9c9ab371aa1a422eaef4da394773200)), Some(Signature(32635d143cb5f1f4ab1475a0cbc3565f96ae68bf55f7090c53180075fc6f11a86e547e45d6c98b1b8af70cb07e14d3f188c1186320ea9b862fcc83553d059f2101))] }.
But 9988.log seems weird:
$ tail -f 9988.log
error: The argument '--force-authoring' was provided more than once, but cannot be used multiple times
USAGE:
polkadot-collator --alice --collator --offchain-worker <ENABLED> --force-authoring --in-peers <COUNT> --max-parallel-downloads <COUNT> --node-key-type <TYPE> --out-peers <COUNT> --parachain-id <parachain-id> --pool-kbytes <COUNT> --pool-limit <COUNT> --port <PORT> --rpc-methods <METHOD SET> --state-cache-size <Bytes> --sync <SYNC_MODE> --tmp --tracing-receiver <RECEIVER> --wasm-execution <METHOD> --ws-port <PORT>
For more information try --help
So I guess I'm only running a relay chain?
Are the collators for the Cumulus/Statemine parachains 200 and 300 dead?
What's wrong with my setup?

polkadot-launch now passes that argument by default (because it was almost always needed), so just delete "--force-authoring" from your flags and it should all work.
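For example, the first parachain node entry from the config above would then look something like this (only the flags line changes; everything else stays the same):

{
  "wsPort": 9988,
  "port": 31200,
  "name": "alice",
  "flags": ["--", "--execution=wasm"]
}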

Related

metrics-server shows unknown on one node

It just happened on an AlmaLinux 9.0 server which I added to the cluster today.
The k8s version is 1.19.16.
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-dev-node1 110m 11% 2601Mi 104%
k8s-dev-node8 160m 5% 3600Mi 56%
k8s-dev-node9 105m 3% 2201Mi 34%
k8s-dev-node16 <unknown> <unknown> <unknown> <unknown>
hostNetwork has been set on the metrics-server deployment.
kubectl top pod is fine, and getting node stats works fine too.
kubectl get --raw /api/v1/nodes/k8s-dev-node16/proxy/stats/summary
{
"node": {
"nodeName": "k8s-dev-node16",
"systemContainers": [
{
"name": "kubelet",
"startTime": "2022-11-10T05:52:14Z",
"cpu": {
"time": "2022-11-10T09:42:50Z",
"usageNanoCores": 17536218,
"usageCoreNanoSeconds": 221415603000
},
"memory": {
"time": "2022-11-10T09:42:50Z",
"availableBytes": 8011243520,
"usageBytes": 58892288,
"workingSetBytes": 46686208,
"rssBytes": 0,
"pageFaults": 2555049,
"majorPageFaults": 1
}
},
...

consul deregister_critical_service_after is not working

Hello everyone, I have a health check on my Consul service. My goal is that whenever the service is unhealthy, Consul should remove it from the service catalog.
Below is my config:
{
"service": {
"name": "api",
"tags": [ "api-tag" ],
"port": 80
},
"check": {
"id": "api_up",
"name": "Fetch health check from local nginx",
"http": "http://localhost/HealthCheck",
"interval": "5s",
"timeout": "1s",
"deregister_critical_service_after": "15s"
},
"data_dir": "/consul/data",
"retry_join": [
"192.168.0.1",
"192.168.0.2",
]
}
Thanks for all the help.
The reason the service is not being de-registered is that the check is being specified outside of the service {} block in your JSON. This makes the check a node-level check, not a service-level check.
Here's a pretty-printed version of the config you provided.
{
  "service": {
    "name": "api",
    "tags": [
      "api-tag"
    ],
    "port": 80
  },
  "check": {
    "id": "api_up",
    "name": "Fetch health check from local nginx",
    "http": "http://localhost/HealthCheck",
    "interval": "5s",
    "timeout": "1s",
    "deregister_critical_service_after": "15s"
  },
  "data_dir": "/consul/data",
  "retry_join": [
    "192.168.0.1",
    "192.168.0.2",
  ]
}
Below is the configuration you should be using in order to correctly associate the check with the configured service, and de-register the service after the check has been marked as critical for more than 15 seconds.
{
  "service": {
    "name": "api",
    "tags": [
      "api-tag"
    ],
    "port": 80,
    "check": {
      "id": "api_up",
      "name": "Fetch health check from local nginx",
      "http": "http://localhost/HealthCheck",
      "interval": "5s",
      "timeout": "1s",
      "deregister_critical_service_after": "15s"
    }
  },
  "data_dir": "/consul/data",
  "retry_join": [
    "192.168.0.1",
    "192.168.0.2"
  ]
}
Note this statement from the docs for DeregisterCriticalServiceAfter.
If a check is in the critical state for more than this configured value, then its associated service (and all of its associated checks) will automatically be deregistered. The minimum timeout is 1 minute, and the process that reaps critical services runs every 30 seconds, so it may take slightly longer than the configured timeout to trigger the deregistration. This should generally be configured with a timeout that's much, much longer than any expected recoverable outage for the given service.
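Also note the one-minute minimum from that quote: with "15s" configured, deregistration will still take at least a minute (plus up to 30 seconds for the reaper). If you want the configured value to be honored as written, a sketch like the following keeps it at or above the documented minimum (the "90s" value is just an illustration, not something from the original question):

"check": {
  "id": "api_up",
  "name": "Fetch health check from local nginx",
  "http": "http://localhost/HealthCheck",
  "interval": "5s",
  "timeout": "1s",
  "deregister_critical_service_after": "90s"
}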

How to register multiple service instances in consul on one machine

I have Consul running locally on a dev machine. I also have one Golang service running on two different ports on the same machine. Is there a way to register them as one service but two instances in Consul using the Golang API (for example, is it possible to specify the node name when registering)?
Here's a very basic example which registers two instances of a service named my-service. Each instance is configured to listen on a different port, 8080 and 8081 respectively.
The key thing to note is that the service instances are also registered with a unique service ID in order to disambiguate between instance A and instance B of my-service which are running on the same agent.
package main

import (
	"fmt"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Get a new client
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		panic(err)
	}

	service_name := "my-service"
	service_ports := [2]int{8080, 8081}

	for idx, port := range service_ports {
		// Register each instance under the shared service name,
		// but with a unique service ID (my-service-0, my-service-1).
		svc_reg := &api.AgentServiceRegistration{
			ID:   fmt.Sprintf("%s-%d", service_name, idx),
			Name: service_name,
			Port: port,
		}
		if err := client.Agent().ServiceRegister(svc_reg); err != nil {
			panic(err)
		}
	}
}
After running go mod init consul-register (or any module name), and executing the code with go run main.go, you can see the service has been registered in the catalog.
$ consul catalog services
consul
my-service
Both service instances are correctly being returned for service discovery queries over DNS or HTTP.
$ dig @127.0.0.1 -p 8600 -t SRV my-service.service.consul +short
1 1 8080 b1000.local.node.dc1.consul.
1 1 8081 b1000.local.node.dc1.consul.
$ curl localhost:8500/v1/health/service/my-service
[
{
"Node": {
"ID": "11113853-a8e0-5787-7482-538078db855a",
"Node": "b1000.local",
"Address": "127.0.0.1",
"Datacenter": "dc1",
"TaggedAddresses": {
"lan": "127.0.0.1",
"lan_ipv4": "127.0.0.1",
"wan": "127.0.0.1",
"wan_ipv4": "127.0.0.1"
},
"Meta": {
"consul-network-segment": ""
},
"CreateIndex": 11,
"ModifyIndex": 13
},
"Service": {
"ID": "my-service-0",
"Service": "my-service",
"Tags": [],
"Address": "",
"Meta": null,
"Port": 8080,
"Weights": {
"Passing": 1,
"Warning": 1
},
"EnableTagOverride": false,
"Proxy": {
"Mode": "",
"MeshGateway": {},
"Expose": {},
"TransparentProxy": {}
},
"Connect": {},
"CreateIndex": 14,
"ModifyIndex": 14
},
"Checks": [
{
"Node": "b1000.local",
"CheckID": "serfHealth",
"Name": "Serf Health Status",
"Status": "passing",
"Notes": "",
"Output": "Agent alive and reachable",
"ServiceID": "",
"ServiceName": "",
"ServiceTags": [],
"Type": "",
"Definition": {},
"CreateIndex": 11,
"ModifyIndex": 11
}
]
},
{
"Node": {
"ID": "11113853-a8e0-5787-7482-538078db855a",
"Node": "b1000.local",
"Address": "127.0.0.1",
"Datacenter": "dc1",
"TaggedAddresses": {
"lan": "127.0.0.1",
"lan_ipv4": "127.0.0.1",
"wan": "127.0.0.1",
"wan_ipv4": "127.0.0.1"
},
"Meta": {
"consul-network-segment": ""
},
"CreateIndex": 11,
"ModifyIndex": 13
},
"Service": {
"ID": "my-service-1",
"Service": "my-service",
"Tags": [],
"Address": "",
"Meta": null,
"Port": 8081,
"Weights": {
"Passing": 1,
"Warning": 1
},
"EnableTagOverride": false,
"Proxy": {
"Mode": "",
"MeshGateway": {},
"Expose": {},
"TransparentProxy": {}
},
"Connect": {},
"CreateIndex": 15,
"ModifyIndex": 15
},
"Checks": [
{
"Node": "b1000.local",
"CheckID": "serfHealth",
"Name": "Serf Health Status",
"Status": "passing",
"Notes": "",
"Output": "Agent alive and reachable",
"ServiceID": "",
"ServiceName": "",
"ServiceTags": [],
"Type": "",
"Definition": {},
"CreateIndex": 11,
"ModifyIndex": 11
}
]
}
]

Registering Multiple Same-Host Services

I am using the Consul API to register a local web-service running on various ports on my local machine. My end-goal is to be able to run multiple backends and load balance against them on different ports.
I am running a local Consul server of one node for development in a Vagrant VM. I have registered the first instance of my service:
{
"Node": {
"ID": "49d3be4b-5ee5-5f0f-e145-dcb1782e5b4b",
"Node": "localhost",
"Address": "127.0.0.1",
"Datacenter": "dc1",
"TaggedAddresses": {
"lan": "127.0.0.1",
"wan": "127.0.0.1"
},
"Meta": {
"consul-network-segment": ""
},
"CreateIndex": 5,
"ModifyIndex": 6
},
"Services": {
"consul": {
"ID": "consul",
"Service": "consul",
"Tags": [],
"Address": "",
"Port": 8300,
"EnableTagOverride": false,
"CreateIndex": 5,
"ModifyIndex": 5
},
"rusty": {
"ID": "rusty",
"Service": "rusty",
"Tags": [
"rusty",
"rust"
],
"Address": "127.0.0.1",
"Port": 8001,
"EnableTagOverride": false,
"CreateIndex": 247,
"ModifyIndex": 491
}
}
}
You can see my service, rusty, registered on port 8001. The strange thing is that when I register the same service on a different port, Consul supersedes port 8001 with the new service port.
Is there not a way to run multiple backends for a service on different ports on the same host?
Check that you are registering the services with different IDs. For complete info, see the parameters for the /agent/service/register endpoint.
Here is an example with two rusty service instances with different IDs rusty1 and rusty2
{
"Node": {
"ID": "eff2fae3-6ee5-5de7-bf1a-c041992a1d6a",
"Node": "FB20160707",
"Address": "192.168.1.66",
"Datacenter": "dc1",
"TaggedAddresses": {
"lan": "192.168.1.66",
"wan": "192.168.1.66"
},
"Meta": {},
"CreateIndex": 5,
"ModifyIndex": 6
},
"Services": {
"consul": {
"ID": "consul",
"Service": "consul",
"Tags": [],
"Address": "",
"Port": 8300,
"EnableTagOverride": false,
"CreateIndex": 5,
"ModifyIndex": 5
},
"rusty1": {
"ID": "rusty1",
"Service": "rusty",
"Tags": [],
"Address": "10.10.10.10",
"Port": 8001,
"EnableTagOverride": false,
"CreateIndex": 16,
"ModifyIndex": 28
},
"rusty2": {
"ID": "rusty2",
"Service": "rusty",
"Tags": [],
"Address": "10.10.10.10",
"Port": 8002,
"EnableTagOverride": false,
"CreateIndex": 19,
"ModifyIndex": 29
}
}
}
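For reference, the same two instances could be registered against the local agent by PUTting payloads along these lines to the /agent/service/register endpoint mentioned above (a sketch; the addresses and ports are taken from the example output):

{
  "ID": "rusty1",
  "Name": "rusty",
  "Address": "10.10.10.10",
  "Port": 8001
}

and, for the second instance:

{
  "ID": "rusty2",
  "Name": "rusty",
  "Address": "10.10.10.10",
  "Port": 8002
}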
As per my comment to @ruslan-sennov, if the services section looked like this (the ID for each instance of the rusty service is made unique by adding the port, but the name is kept as rusty):
"Services": {
"consul": {
"ID": "consul",
"Service": "consul",
"Tags": [],
"Address": "",
"Port": 8300,
"EnableTagOverride": false,
"CreateIndex": 5,
"ModifyIndex": 5
},
"rusty": {
"ID": "rusty:8001",
"Service": "rusty",
"Tags": [
"rusty",
"rust"
],
"Address": "127.0.0.1",
"Port": 8001,
"EnableTagOverride": false,
"CreateIndex": 247,
"ModifyIndex": 491
},
"rusty": {
"ID": "rusty:8002",
"Service": "rusty",
"Tags": [
"rusty",
"rust"
],
"Address": "127.0.0.1",
"Port": 8002,
"EnableTagOverride": false,
"CreateIndex": 247,
"ModifyIndex": 491
}
}
This then means you can query the rusty service with an SRV query and get details on which ports are available:
dig @127.0.0.1 rusty.service.consul SRV
; <<>> DiG 9.11.3 <<>> rusty.service.consul SRV
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 56091
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 52, AUTHORITY: 0, ADDITIONAL: 5
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;rusty.service.consul. IN SRV
;; ANSWER SECTION:
rusty.service.consul. 0 IN SRV 1 1 8001 FB20160707.node.dc1.consul.
rusty.service.consul. 0 IN SRV 1 1 8002 FB20160707.node.dc1.consul.
If you also change the names to be unique (rusty1 and rusty2 as suggested by Ruslan) you lose this querying ability.
I know this is a late answer, but I hope it helps someone.
As per the Spring Cloud Consul docs, add this to bootstrap.yml:
spring:
  cloud:
    consul:
      discovery:
        instanceId: ${spring.application.name}:${vcap.application.instance_id:${spring.application.instance_id:${random.value}}}

Mesos Marathon apps with persistent volumes stuck at suspended

I'm having trouble running an app in Marathon using persistent local volumes. I followed the instructions, started Marathon with a role and principal, and created a simple app with a persistent volume, but the deployment just hangs at suspended. It seems that the slave has responded with a valid offer but can't actually start the app. The slave doesn't log anything regarding the task, even when I compile with the debug option and turn logging right up with GLOG_v=2.
It also seems that Marathon is constantly rolling the task ID as it fails to start, but I can't see why anywhere.
Oddly, when I run without a persistent volume but with a disk reservation, the app starts running.
The debug logging on Marathon doesn't appear to show anything useful; however, I could be missing something. Could anyone give me any pointers as to what the problem may be, or where to look for additional debug output? Many thanks in advance 😄.
Here's some info about my environment and debug info:
Slave: Ubuntu 14.04 running 0.28 prebuilt and tested in 0.29 built from source
Master: Mesos 0.28 running inside a Docker Ubuntu 14.04 image on CoreOS
Marathon: 1.1.1 running inside a Docker Ubuntu 14.04 image on CoreOS
App with persistent storage
App info from v2/apps/test/tasks on Marathon
{
"app": {
"id": "/test",
"cmd": "while true; do sleep 10; done",
"args": null,
"user": null,
"env": {},
"instances": 1,
"cpus": 1,
"mem": 128,
"disk": 0,
"executor": "",
"constraints": [
[
"role",
"CLUSTER",
"persistent"
]
],
"uris": [],
"fetch": [],
"storeUrls": [],
"ports": [
10002
],
"portDefinitions": [
{
"port": 10002,
"protocol": "tcp",
"labels": {}
}
],
"requirePorts": false,
"backoffSeconds": 1,
"backoffFactor": 1.15,
"maxLaunchDelaySeconds": 3600,
"container": {
"type": "MESOS",
"volumes": [
{
"containerPath": "test",
"mode": "RW",
"persistent": {
"size": 100
}
}
]
},
"healthChecks": [],
"readinessChecks": [],
"dependencies": [],
"upgradeStrategy": {
"minimumHealthCapacity": 0.5,
"maximumOverCapacity": 0
},
"labels": {},
"acceptedResourceRoles": null,
"ipAddress": null,
"version": "2016-05-19T11:31:54.861Z",
"residency": {
"relaunchEscalationTimeoutSeconds": 3600,
"taskLostBehavior": "WAIT_FOREVER"
},
"versionInfo": {
"lastScalingAt": "2016-05-19T11:31:54.861Z",
"lastConfigChangeAt": "2016-05-18T16:46:59.684Z"
},
"tasksStaged": 0,
"tasksRunning": 0,
"tasksHealthy": 0,
"tasksUnhealthy": 0,
"deployments": [
{
"id": "4f3779e5-a805-4b95-9065-f3cf9c90c8fe"
}
],
"tasks": [
{
"id": "test.4b7d4303-1dc2-11e6-a179-a2bd870b1e9c",
"slaveId": "9f7c6ed5-4bf5-475d-9311-05d21628604e-S17",
"host": "ip-10-0-90-61.eu-west-1.compute.internal",
"localVolumes": [
{
"containerPath": "test",
"persistenceId": "test#test#4b7d4302-1dc2-11e6-a179-a2bd870b1e9c"
}
],
"appId": "/test"
}
]
}
}
App info in Marathon: (it seems the deployment is spinning)
App without persistent storage
App info from v2/apps/test2/tasks on Marathon
{
"app": {
"id": "/test2",
"cmd": "while true; do sleep 10; done",
"args": null,
"user": null,
"env": {},
"instances": 1,
"cpus": 1,
"mem": 128,
"disk": 100,
"executor": "",
"constraints": [
[
"role",
"CLUSTER",
"persistent"
]
],
"uris": [],
"fetch": [],
"storeUrls": [],
"ports": [
10002
],
"portDefinitions": [
{
"port": 10002,
"protocol": "tcp",
"labels": {}
}
],
"requirePorts": false,
"backoffSeconds": 1,
"backoffFactor": 1.15,
"maxLaunchDelaySeconds": 3600,
"container": null,
"healthChecks": [],
"readinessChecks": [],
"dependencies": [],
"upgradeStrategy": {
"minimumHealthCapacity": 0.5,
"maximumOverCapacity": 0
},
"labels": {},
"acceptedResourceRoles": null,
"ipAddress": null,
"version": "2016-05-19T13:44:01.831Z",
"residency": null,
"versionInfo": {
"lastScalingAt": "2016-05-19T13:44:01.831Z",
"lastConfigChangeAt": "2016-05-19T13:09:20.106Z"
},
"tasksStaged": 0,
"tasksRunning": 1,
"tasksHealthy": 0,
"tasksUnhealthy": 0,
"deployments": [],
"tasks": [
{
"id": "test2.bee624f1-1dc7-11e6-b98e-568f3f9dead8",
"slaveId": "9f7c6ed5-4bf5-475d-9311-05d21628604e-S18",
"host": "ip-10-0-90-61.eu-west-1.compute.internal",
"startedAt": "2016-05-19T13:44:02.190Z",
"stagedAt": "2016-05-19T13:44:02.023Z",
"ports": [
31926
],
"version": "2016-05-19T13:44:01.831Z",
"ipAddresses": [
{
"ipAddress": "10.0.90.61",
"protocol": "IPv4"
}
],
"appId": "/test2"
}
],
"lastTaskFailure": {
"appId": "/test2",
"host": "ip-10-0-90-61.eu-west-1.compute.internal",
"message": "Slave ip-10-0-90-61.eu-west-1.compute.internal removed: health check timed out",
"state": "TASK_LOST",
"taskId": "test2.e74fb439-1dc2-11e6-a179-a2bd870b1e9c",
"timestamp": "2016-05-19T13:15:24.155Z",
"version": "2016-05-19T13:09:20.106Z",
"slaveId": "9f7c6ed5-4bf5-475d-9311-05d21628604e-S17"
}
}
}
Slave log when running the app without a persistent volume:
I0519 13:09:22.471876 12459 status_update_manager.cpp:320] Received status update TASK_RUNNING (UUID: 36c1f0cb-2fcd-44b9-ab79-cef81c2094be) for task test2.e74fb439-1dc2-11e6-a179-a2bd870b1e9c of framework 1a6352a6-d690-41a2-967e-07342bba56d2-0000
I0519 13:09:22.471906 12459 status_update_manager.cpp:497] Creating StatusUpdate stream for task test2.e74fb439-1dc2-11e6-a179-a2bd870b1e9c of framework 1a6352a6-d690-41a2-967e-07342bba56d2-0000
I0519 13:09:22.472262 12459 status_update_manager.cpp:824] Checkpointing UPDATE for status update TASK_RUNNING (UUID: 36c1f0cb-2fcd-44b9-ab79-cef81c2094be) for task test2.e74fb439-1dc2-11e6-a179-a2bd870b1e9c of framework 1a6352a6-d690-41a2-967e-07342bba56d2-0000
I0519 13:09:22.477686 12459 status_update_manager.cpp:374] Forwarding update TASK_RUNNING (UUID: 36c1f0cb-2fcd-44b9-ab79-cef81c2094be) for task test2.e74fb439-1dc2-11e6-a179-a2bd870b1e9c of framework 1a6352a6-d690-41a2-967e-07342bba56d2-0000 to the agent
I0519 13:09:22.477830 12453 process.cpp:2605] Resuming slave(1)@10.0.90.61:5051 at 2016-05-19 13:09:22.477814016+00:00
I0519 13:09:22.477967 12453 slave.cpp:3638] Forwarding the update TASK_RUNNING (UUID: 36c1f0cb-2fcd-44b9-ab79-cef81c2094be) for task test2.e74fb439-1dc2-11e6-a179-a2bd870b1e9c of framework 1a6352a6-d690-41a2-967e-07342bba56d2-0000 to master@10.0.82.230:5050
I0519 13:09:22.478185 12453 slave.cpp:3532] Status update manager successfully handled status update TASK_RUNNING (UUID: 36c1f0cb-2fcd-44b9-ab79-cef81c2094be) for task test2.e74fb439-1dc2-11e6-a179-a2bd870b1e9c of framework 1a6352a6-d690-41a2-967e-07342bba56d2-0000
I0519 13:09:22.478229 12453 slave.cpp:3548] Sending acknowledgement for status update TASK_RUNNING (UUID: 36c1f0cb-2fcd-44b9-ab79-cef81c2094be) for task test2.e74fb439-1dc2-11e6-a179-a2bd870b1e9c of framework 1a6352a6-d690-41a2-967e-07342bba56d2-0000 to executor(1)@10.0.90.61:34262
I0519 13:09:22.488315 12460 pid.cpp:95] Attempting to parse 'master@10.0.82.230:5050' into a PID
I0519 13:09:22.488370 12460 process.cpp:646] Parsed message name 'mesos.internal.StatusUpdateAcknowledgementMessage' for slave(1)@10.0.90.61:5051 from master@10.0.82.230:5050
I0519 13:09:22.488452 12452 process.cpp:2605] Resuming slave(1)@10.0.90.61:5051 at 2016-05-19 13:09:22.488441856+00:00
I0519 13:09:22.488600 12458 process.cpp:2605] Resuming (14)@10.0.90.61:5051 at 2016-05-19 13:09:22.488590080+00:00
I0519 13:09:22.488632 12458 status_update_manager.cpp:392] Received status update acknowledgement (UUID: 36c1f0cb-2fcd-44b9-ab79-cef81c2094be) for task test2.e74fb439-1dc2-11e6-a179-a2bd870b1e9c of framework 1a6352a6-d690-41a2-967e-07342bba56d2-0000
I0519 13:09:22.488726 12458 status_update_manager.cpp:824] Checkpointing ACK for status update TASK_RUNNING (UUID: 36c1f0cb-2fcd-44b9-ab79-cef81c2094be) for task test2.e74fb439-1dc2-11e6-a179-a2bd870b1e9c of framework 1a6352a6-d690-41a2-967e-07342bba56d2-0000
I0519 13:09:22.492985 12452 process.cpp:2605] Resuming slave(1)@10.0.90.61:5051 at 2016-05-19 13:09:22.492974080+00:00
I0519 13:09:22.493021 12452 slave.cpp:2629] Status update manager successfully handled status update acknowledgement (UUID: 36c1f0cb-2fcd-44b9-ab79-cef81c2094be) for task test2.e74fb439-1dc2-11e6-a179-a2bd870b1e9c of framework 1a6352a6-d690-41a2-967e-07342bba56d2-0000
It may be due to low disk space or RAM.
