I have an OpenDaylight Carbon application that is running just fine. Now I want to add QoS and queues to it (Open vSwitch 2.5.2). If I create a QoS record using ovs-vsctl in a terminal, OpenDaylight starts logging stack traces.
The ovs-vsctl command:
sudo ovs-vsctl -- set port s1-eth1 qos=@newqos -- --id=@newqos create qos type=linux-htb other-config:max-rate=800000
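For reference, a variant of the same command that also creates queues and attaches them to the QoS record would look roughly like this (the queue numbers and rates are illustrative, not taken from my actual setup):

sudo ovs-vsctl -- set port s1-eth1 qos=@newqos \
  -- --id=@newqos create qos type=linux-htb other-config:max-rate=800000 \
       queues:0=@q0 queues:1=@q1 queues:2=@q2 \
  -- --id=@q0 create queue other-config:max-rate=800000 \
  -- --id=@q1 create queue other-config:max-rate=400000 \
  -- --id=@q2 create queue other-config:max-rate=200000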
The stack trace (only the first few of the repeated traces are included):
2017-12-12 17:54:21,802 | WARN | rd-dispatcher-23 | ShardDataTree | 204 - org.opendaylight.controller.sal-distributed-datastore - 1.5.2.Carbon | member-1-shard-inventory-operational: Store Tx member-1-datastore-operational-fe-0-chn-408-txn-3-0: Data validation failed for path /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/node-connector/node-connector[{(urn:opendaylight:inventory?revision=2013-08-19)id=1}].
org.opendaylight.yangtools.yang.data.api.schema.tree.ModifiedNodeDoesNotExistException: Node /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/node-connector/node-connector[{(urn:opendaylight:inventory?revision=2013-08-19)id=1}] does not exist. Cannot apply modification to its children.
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkTouchApplicable(AbstractNodeContainerModificationStrategy.java:281)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.SchemaAwareApplyOperation.checkApplicable(SchemaAwareApplyOperation.java:125)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkChildPreconditions(AbstractNodeContainerModificationStrategy.java:305)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkTouchApplicable(AbstractNodeContainerModificationStrategy.java:288)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.SchemaAwareApplyOperation.checkApplicable(SchemaAwareApplyOperation.java:125)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkChildPreconditions(AbstractNodeContainerModificationStrategy.java:305)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkTouchApplicable(AbstractNodeContainerModificationStrategy.java:288)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.SchemaAwareApplyOperation.checkApplicable(SchemaAwareApplyOperation.java:125)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkChildPreconditions(AbstractNodeContainerModificationStrategy.java:305)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkTouchApplicable(AbstractNodeContainerModificationStrategy.java:288)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.SchemaAwareApplyOperation.checkApplicable(SchemaAwareApplyOperation.java:125)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkChildPreconditions(AbstractNodeContainerModificationStrategy.java:305)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkTouchApplicable(AbstractNodeContainerModificationStrategy.java:288)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.SchemaAwareApplyOperation.checkApplicable(SchemaAwareApplyOperation.java:125)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.StructuralContainerModificationStrategy.checkApplicable(StructuralContainerModificationStrategy.java:99)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkChildPreconditions(AbstractNodeContainerModificationStrategy.java:305)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkTouchApplicable(AbstractNodeContainerModificationStrategy.java:288)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.SchemaAwareApplyOperation.checkApplicable(SchemaAwareApplyOperation.java:125)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.RootModificationApplyOperation.checkApplicable(RootModificationApplyOperation.java:72)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractDataTreeTip.validate(AbstractDataTreeTip.java:35)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.controller.cluster.datastore.ShardDataTree.lambda$processNextPendingTransaction$0(ShardDataTree.java:743)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
at org.opendaylight.controller.cluster.datastore.ShardDataTree.processNextPending(ShardDataTree.java:789)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
at org.opendaylight.controller.cluster.datastore.ShardDataTree.processNextPendingTransaction(ShardDataTree.java:736)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
at org.opendaylight.controller.cluster.datastore.ShardDataTree.startCanCommit(ShardDataTree.java:819)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
at org.opendaylight.controller.cluster.datastore.SimpleShardDataTreeCohort.canCommit(SimpleShardDataTreeCohort.java:90)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
at org.opendaylight.controller.cluster.datastore.CohortEntry.canCommit(CohortEntry.java:97)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
at org.opendaylight.controller.cluster.datastore.ShardCommitCoordinator.handleCanCommit(ShardCommitCoordinator.java:236)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
at org.opendaylight.controller.cluster.datastore.ShardCommitCoordinator.handleReadyLocalTransaction(ShardCommitCoordinator.java:200)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
at org.opendaylight.controller.cluster.datastore.Shard.handleReadyLocalTransaction(Shard.java:675)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:316)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:270)[198:org.opendaylight.controller.sal-akka-raft:1.5.2.Carbon]
at org.opendaylight.controller.cluster.common.actor.AbstractUntypedPersistentActor.onReceiveCommand(AbstractUntypedPersistentActor.java:44)[197:org.opendaylight.controller.sal-clustering-commons:1.5.2.Carbon]
at akka.persistence.UntypedPersistentActor.onReceive(PersistentActor.scala:170)[185:com.typesafe.akka.persistence:2.4.18]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:104)[197:org.opendaylight.controller.sal-clustering-commons:1.5.2.Carbon]
at akka.actor.ActorCell$$anonfun$become$1.applyOrElse(ActorCell.scala:544)[178:com.typesafe.akka.actor:2.4.18]
at akka.actor.Actor$class.aroundReceive(Actor.scala:502)[178:com.typesafe.akka.actor:2.4.18]
at akka.persistence.UntypedPersistentActor.akka$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:168)[185:com.typesafe.akka.persistence:2.4.18]
at akka.persistence.Eventsourced$$anon$1.stateReceive(Eventsourced.scala:727)[185:com.typesafe.akka.persistence:2.4.18]
at akka.persistence.Eventsourced$class.aroundReceive(Eventsourced.scala:183)[185:com.typesafe.akka.persistence:2.4.18]
at akka.persistence.UntypedPersistentActor.aroundReceive(PersistentActor.scala:168)[185:com.typesafe.akka.persistence:2.4.18]
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)[178:com.typesafe.akka.actor:2.4.18]
at akka.actor.ActorCell.invoke(ActorCell.scala:495)[178:com.typesafe.akka.actor:2.4.18]
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)[178:com.typesafe.akka.actor:2.4.18]
at akka.dispatch.Mailbox.run(Mailbox.scala:224)[178:com.typesafe.akka.actor:2.4.18]
at akka.dispatch.Mailbox.exec(Mailbox.scala:234)[178:com.typesafe.akka.actor:2.4.18]
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)[174:org.scala-lang.scala-library:2.11.11.v20170413-090219-8a413ba7cc]
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)[174:org.scala-lang.scala-library:2.11.11.v20170413-090219-8a413ba7cc]
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)[174:org.scala-lang.scala-library:2.11.11.v20170413-090219-8a413ba7cc]
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)[174:org.scala-lang.scala-library:2.11.11.v20170413-090219-8a413ba7cc]
2017-12-12 17:54:21,803 | ERROR | lt-dispatcher-21 | LocalThreePhaseCommitCohort | 204 - org.opendaylight.controller.sal-distributed-datastore - 1.5.2.Carbon | Failed to prepare transaction member-1-datastore-operational-fe-0-chn-408-txn-3-0 on backend
TransactionCommitFailedException{message=Data did not pass validation., errorList=[RpcError [message=Data did not pass validation., severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.yangtools.yang.data.api.schema.tree.ModifiedNodeDoesNotExistException: Node /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/node-connector/node-connector[{(urn:opendaylight:inventory?revision=2013-08-19)id=1}] does not exist. Cannot apply modification to its children.]]}
at org.opendaylight.controller.cluster.datastore.ShardDataTree.lambda$processNextPendingTransaction$0(ShardDataTree.java:760)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
at org.opendaylight.controller.cluster.datastore.ShardDataTree.processNextPending(ShardDataTree.java:789)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
at org.opendaylight.controller.cluster.datastore.ShardDataTree.processNextPendingTransaction(ShardDataTree.java:736)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
at org.opendaylight.controller.cluster.datastore.ShardDataTree.startCanCommit(ShardDataTree.java:819)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
at org.opendaylight.controller.cluster.datastore.SimpleShardDataTreeCohort.canCommit(SimpleShardDataTreeCohort.java:90)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
at org.opendaylight.controller.cluster.datastore.CohortEntry.canCommit(CohortEntry.java:97)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
at org.opendaylight.controller.cluster.datastore.ShardCommitCoordinator.handleCanCommit(ShardCommitCoordinator.java:236)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
at org.opendaylight.controller.cluster.datastore.ShardCommitCoordinator.handleReadyLocalTransaction(ShardCommitCoordinator.java:200)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
at org.opendaylight.controller.cluster.datastore.Shard.handleReadyLocalTransaction(Shard.java:675)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
at org.opendaylight.controller.cluster.datastore.Shard.handleNonRaftCommand(Shard.java:316)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
at org.opendaylight.controller.cluster.raft.RaftActor.handleCommand(RaftActor.java:270)[198:org.opendaylight.controller.sal-akka-raft:1.5.2.Carbon]
at org.opendaylight.controller.cluster.common.actor.AbstractUntypedPersistentActor.onReceiveCommand(AbstractUntypedPersistentActor.java:44)[197:org.opendaylight.controller.sal-clustering-commons:1.5.2.Carbon]
at akka.persistence.UntypedPersistentActor.onReceive(PersistentActor.scala:170)[185:com.typesafe.akka.persistence:2.4.18]
at org.opendaylight.controller.cluster.common.actor.MeteringBehavior.apply(MeteringBehavior.java:104)[197:org.opendaylight.controller.sal-clustering-commons:1.5.2.Carbon]
at akka.actor.ActorCell$$anonfun$become$1.applyOrElse(ActorCell.scala:544)[178:com.typesafe.akka.actor:2.4.18]
at akka.actor.Actor$class.aroundReceive(Actor.scala:502)[178:com.typesafe.akka.actor:2.4.18]
at akka.persistence.UntypedPersistentActor.akka$persistence$Eventsourced$$super$aroundReceive(PersistentActor.scala:168)[185:com.typesafe.akka.persistence:2.4.18]
at akka.persistence.Eventsourced$$anon$1.stateReceive(Eventsourced.scala:727)[185:com.typesafe.akka.persistence:2.4.18]
at akka.persistence.Eventsourced$class.aroundReceive(Eventsourced.scala:183)[185:com.typesafe.akka.persistence:2.4.18]
at akka.persistence.UntypedPersistentActor.aroundReceive(PersistentActor.scala:168)[185:com.typesafe.akka.persistence:2.4.18]
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)[178:com.typesafe.akka.actor:2.4.18]
at akka.actor.ActorCell.invoke(ActorCell.scala:495)[178:com.typesafe.akka.actor:2.4.18]
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)[178:com.typesafe.akka.actor:2.4.18]
at akka.dispatch.Mailbox.run(Mailbox.scala:224)[178:com.typesafe.akka.actor:2.4.18]
at akka.dispatch.Mailbox.exec(Mailbox.scala:234)[178:com.typesafe.akka.actor:2.4.18]
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)[174:org.scala-lang.scala-library:2.11.11.v20170413-090219-8a413ba7cc]
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)[174:org.scala-lang.scala-library:2.11.11.v20170413-090219-8a413ba7cc]
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)[174:org.scala-lang.scala-library:2.11.11.v20170413-090219-8a413ba7cc]
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)[174:org.scala-lang.scala-library:2.11.11.v20170413-090219-8a413ba7cc]
Caused by: org.opendaylight.yangtools.yang.data.api.schema.tree.ModifiedNodeDoesNotExistException: Node /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/node-connector/node-connector[{(urn:opendaylight:inventory?revision=2013-08-19)id=1}] does not exist. Cannot apply modification to its children.
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkTouchApplicable(AbstractNodeContainerModificationStrategy.java:281)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.SchemaAwareApplyOperation.checkApplicable(SchemaAwareApplyOperation.java:125)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkChildPreconditions(AbstractNodeContainerModificationStrategy.java:305)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkTouchApplicable(AbstractNodeContainerModificationStrategy.java:288)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.SchemaAwareApplyOperation.checkApplicable(SchemaAwareApplyOperation.java:125)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkChildPreconditions(AbstractNodeContainerModificationStrategy.java:305)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkTouchApplicable(AbstractNodeContainerModificationStrategy.java:288)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.SchemaAwareApplyOperation.checkApplicable(SchemaAwareApplyOperation.java:125)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkChildPreconditions(AbstractNodeContainerModificationStrategy.java:305)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkTouchApplicable(AbstractNodeContainerModificationStrategy.java:288)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.SchemaAwareApplyOperation.checkApplicable(SchemaAwareApplyOperation.java:125)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkChildPreconditions(AbstractNodeContainerModificationStrategy.java:305)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkTouchApplicable(AbstractNodeContainerModificationStrategy.java:288)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.SchemaAwareApplyOperation.checkApplicable(SchemaAwareApplyOperation.java:125)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.StructuralContainerModificationStrategy.checkApplicable(StructuralContainerModificationStrategy.java:99)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkChildPreconditions(AbstractNodeContainerModificationStrategy.java:305)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractNodeContainerModificationStrategy.checkTouchApplicable(AbstractNodeContainerModificationStrategy.java:288)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.SchemaAwareApplyOperation.checkApplicable(SchemaAwareApplyOperation.java:125)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.RootModificationApplyOperation.checkApplicable(RootModificationApplyOperation.java:72)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.yangtools.yang.data.impl.schema.tree.AbstractDataTreeTip.validate(AbstractDataTreeTip.java:35)[81:org.opendaylight.yangtools.yang-data-impl:1.1.2.Carbon]
at org.opendaylight.controller.cluster.datastore.ShardDataTree.lambda$processNextPendingTransaction$0(ShardDataTree.java:743)[204:org.opendaylight.controller.sal-distributed-datastore:1.5.2.Carbon]
... 28 more
Since it shows a ModifiedNodeDoesNotExistException, do I have to install some OVS-related feature? If so, which one?
I assume that ODL gets notified of the QoS and queue creation and tries to update the operational datastore. The exception complains that the node
Node /(urn:opendaylight:inventory?revision=2013-08-19)nodes/node/node[{(urn:opendaylight:inventory?revision=2013-08-19)id=openflow:1}]/node-connector/node-connector[{(urn:opendaylight:inventory?revision=2013-08-19)id=1}]
does not exist. Why isn't this node-connector id openflow:1:1 instead of id=1?
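One way to see what the operational inventory actually contains is a RESTCONF query (host, port and the admin/admin credentials below are the Carbon defaults and may differ in your setup); the node-connector entries returned for openflow:1 normally have IDs of the form openflow:1:1, openflow:1:2, ..., and there is no entry with the bare ID 1:

curl -u admin:admin http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/node/openflow:1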
A Wireshark OpenFlow capture shows that these exceptions are most likely triggered by the OFPMP_QUEUE reply. If there are no queues, this message is empty. If queues exist (as in the example below, with queues 0, 1 and 2), it reports stats for the queues on port 1:
OpenFlow 1.3
Version: 1.3 (0x04)
Type: OFPT_MULTIPART_REPLY (19)
Length: 136
Transaction ID: 5709
Type: OFPMP_QUEUE (5)
Flags: 0x0000
Pad: 00000000
Queue stats
Port number: 1
Queue ID: 0
Tx bytes: 60
Tx packets: 1
Tx errors: 0
Duration sec: 2954916689
Duration nsec: 174000000
Queue stats
Queue stats
I assume that ODL fails to map port 1 to node-connector openflow:1:1. This probably causes the exception because node-connector 1 cannot be found.
The problem is indeed that the controller parses the OFPMP_QUEUE reply and wants to write statistics to node-connector '1' instead of 'openflow:1:1'. To test this hypothesis, I wrote my own node-connector '1' into the operational datastore, containing queues 0, 1 and 2 (a sketch of such a write follows the list below). I adapted my application to use queues, and as soon as I created the queues with the ovs-vsctl command from the first post, everything works:
The node-connector '1' is updated with queue stats in the operational DS
Flows that use the queue are indeed limited by the max-rate of the queue
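A minimal sketch of such a write against the operational datastore (class and method names are illustrative, the DataBroker is assumed to be injected, and the queue/statistics augmentations are omitted; it only shows creating a node-connector whose key is the bare "1"):

import org.opendaylight.controller.md.sal.binding.api.DataBroker;
import org.opendaylight.controller.md.sal.binding.api.WriteTransaction;
import org.opendaylight.controller.md.sal.common.api.data.LogicalDatastoreType;
import org.opendaylight.yang.gen.v1.urn.opendaylight.inventory.rev130819.NodeConnectorId;
import org.opendaylight.yang.gen.v1.urn.opendaylight.inventory.rev130819.NodeId;
import org.opendaylight.yang.gen.v1.urn.opendaylight.inventory.rev130819.Nodes;
import org.opendaylight.yang.gen.v1.urn.opendaylight.inventory.rev130819.node.NodeConnector;
import org.opendaylight.yang.gen.v1.urn.opendaylight.inventory.rev130819.node.NodeConnectorBuilder;
import org.opendaylight.yang.gen.v1.urn.opendaylight.inventory.rev130819.node.NodeConnectorKey;
import org.opendaylight.yang.gen.v1.urn.opendaylight.inventory.rev130819.nodes.Node;
import org.opendaylight.yang.gen.v1.urn.opendaylight.inventory.rev130819.nodes.NodeKey;
import org.opendaylight.yangtools.yang.binding.InstanceIdentifier;

public class NodeConnectorWriter {

    private final DataBroker dataBroker; // injected, e.g. via blueprint

    public NodeConnectorWriter(DataBroker dataBroker) {
        this.dataBroker = dataBroker;
    }

    // Pre-create node-connector "1" under node openflow:1 so that the queue
    // statistics written for the OFPMP_QUEUE reply have an existing parent.
    public void writeBareNodeConnector() {
        NodeConnectorKey connectorKey = new NodeConnectorKey(new NodeConnectorId("1"));
        InstanceIdentifier<NodeConnector> path = InstanceIdentifier.create(Nodes.class)
                .child(Node.class, new NodeKey(new NodeId("openflow:1")))
                .child(NodeConnector.class, connectorKey);

        NodeConnector connector = new NodeConnectorBuilder()
                .setKey(connectorKey)
                .setId(connectorKey.getId())
                .build();

        WriteTransaction tx = dataBroker.newWriteOnlyTransaction();
        tx.put(LogicalDatastoreType.OPERATIONAL, path, connector, true); // create missing parents
        tx.submit();
    }
}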
The question is now:
Is this a bug?
Or do I actually have to do it this way?
Or should I only use queues that are created through the OVSDB plugin of ODL?
I am developing from a Maven ODL Carbon archetype and have not been able to get OVSDB support; OVSDB does not (yet?) seem to exist as a plugin feature for the Carbon release.
Any help is appreciated.
I have an ESP32-S2-MINI-1 configured as an access point and 5 IP cameras connected to this access point.
My problem is that several IP cameras receive the same IP address from the ESP32-S2-MINI-1. For example, in the answer to a broadcast request below, each camera responds twice, and cameras ACTO018066 and ACTO017101 both have the IP address 192.168.43.3.
Is this a bug in the ESP32-S2-MINI-1?
How can I solve this problem?
The firmware for ESP32-S2-MINI-1 is version:2.1.0.0(0b76313 - ESP32S2 - Aug 20 2020 05:57:43)
SDK version:v4.2-dev-2044-gdd3c032
compile time(b5e1674):Aug 21 2020 05:00:52
Bin version:2.1.0(MINI)
Is a newer release available?
Thanks, Antonio
+IPD,0,524:DH192.168.43.3 255.255.255.0192.168.43.1192.168.43.1ðÈÄÙ~':ACTO018066 ...
+IPD,0,524:DH192.168.43.3 255.255.255.0192.168.43.1192.168.43.1ðÈÄÙ~':ACTO018066 ...
+IPD,0,524:DH192.168.43.2 255.255.255.0192.168.43.1192.168.43.1ðÈÄÎœm3ACTO011614 ...
+IPD,0,524:DH192.168.43.2 255.255.255.0192.168.43.1192.168.43.1ðÈÄÎœm3ACTO011614 ...
+IPD,0,524:DH192.168.43.3 255.255.255.0192.168.43.1192.168.43.1ðÈIJ¤†ñACTO017101 ...
+IPD,0,524:DH192.168.43.3 255.255.255.0192.168.43.1192.168.43.1ðÈIJ¤†ñACTO017101HBJJR ...
+IPD,0,524:DH192.168.43.2 255.255.255.0192.168.43.18.8.8.8192.168.43.1»&P)EnACTO005825 ...
+IPD,0,524:DH192.168.43.2 255.255.255.0192.168.43.18.8.8.8192.168.43.1»&P)EnACTO005825 ...
+IPD,0,524:DH192.168.43.4 255.255.255.0192.168.43.18.8.8.8192.168.43.1»&á2Æ ACTO005665 ...
+IPD,0,524:DH192.168.43.4 255.255.255.0192.168.43.18.8.8.8192.168.43.1»&á2Æ ACTO005665 ...
0,CLOSED
OK
I am getting the exception below when running this code:
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.validation.FhirValidator;
import ca.uhn.fhir.validation.ValidationResult;
import org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator;

FhirContext ctx = FhirContext.forR4();

// Create a FhirInstanceValidator and register it to a validator
FhirValidator validator = ctx.newValidator();
FhirInstanceValidator instanceValidator = new FhirInstanceValidator();
validator.registerValidatorModule(instanceValidator);

/*
 * If you want, you can configure settings on the validator to adjust
 * its behaviour during validation
 */
instanceValidator.setAnyExtensionsAllowed(true);

// 'input' holds the Patient resource JSON as a String
// (https://www.hl7.org/fhir/patient-example.json.html)
ValidationResult result = validator.validateWithResult(input);
I am using the HAPI library to validate a resource (if I am not wrong, this is a Patient resource: https://www.hl7.org/fhir/patient-example.json.html). I have stored this Patient JSON in a string
and am trying to validate its:
1: Structure -> I think this can be achieved using parser validation, and I did the same.
2: Cardinality -> I created two "active": true JSON key-value pairs, thinking that this would throw a cardinality error, but neither the SchemxxxValidator, the parser validation, nor the InstanceValidator report it.
...
How do I validate a resource against the properties listed at https://www.hl7.org/fhir/validation.html (structure, cardinality, value domains, ...)? Do I have to use all three mechanisms,
that is the parser, FhirInstanceValidator and SchemaBaseValidator/SchematronBaseValidator?
Please help, as I am new to FHIR, and excuse the lame question.
15:58| INFO | VersionUtil.java 72 | HAPI FHIR version 4.1.0 - Rev 03163c2cf5
15:58| INFO | FhirContext.java 174 | Creating new FHIR context for FHIR version [R4]
15:58| INFO | DefaultProfileValidationSupport.java 227 | Loading structure definitions from classpath: /org/hl7/fhir/r4/model/profile/profiles-resources.xml
15:58| INFO | DependencyLogImpl.java 75 | FHIR XML procesing will use StAX implementation 'Woodstox' version '5.1.0'
15:58| INFO | DefaultProfileValidationSupport.java 227 | Loading structure definitions from classpath: /org/hl7/fhir/r4/model/profile/profiles-types.xml
15:58| INFO | DefaultProfileValidationSupport.java 227 | Loading structure definitions from classpath: /org/hl7/fhir/r4/model/profile/profiles-others.xml
15:58| INFO | DefaultProfileValidationSupport.java 227 | Loading structure definitions from classpath: /org/hl7/fhir/r4/model/extension/extension-definitions.xml
15:58| ERROR | FhirInstanceValidator.java 222 | Failure during validation
java.lang.UnsupportedOperationException
at org.hl7.fhir.r4.hapi.ctx.HapiWorkerContext.generateSnapshot(HapiWorkerContext.java:242)
at org.hl7.fhir.r4.elementmodel.ParserBase.getDefinition(ParserBase.java:122)
at org.hl7.fhir.r4.elementmodel.JsonParser.parse(JsonParser.java:123)
at org.hl7.fhir.r4.validation.InstanceValidator.validate(InstanceValidator.java:539)
at org.hl7.fhir.r4.validation.InstanceValidator.validate(InstanceValidator.java:531)
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validate(FhirInstanceValidator.java:220)
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validate(FhirInstanceValidator.java:242)
at org.hl7.fhir.r4.hapi.validation.BaseValidatorBridge.doValidate(BaseValidatorBridge.java:20)
at org.hl7.fhir.r4.hapi.validation.BaseValidatorBridge.validateResource(BaseValidatorBridge.java:43)
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validateResource(FhirInstanceValidator.java:33)
at ca.uhn.fhir.validation.FhirValidator.validateWithResult(FhirValidator.java:243)
at ca.uhn.fhir.validation.FhirValidator.validateWithResult(FhirValidator.java:198)
at com.json.schema.validator.InstanceValidatorEx.instanceValidator(InstanceValidatorEx.java:223)
at com.json.schema.validator.InstanceValidatorEx.main(InstanceValidatorEx.java:191)
Exception in thread "main" ca.uhn.fhir.rest.server.exceptions.InternalErrorException: Unexpected failure while validating resource
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validate(FhirInstanceValidator.java:223)
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validate(FhirInstanceValidator.java:242)
at org.hl7.fhir.r4.hapi.validation.BaseValidatorBridge.doValidate(BaseValidatorBridge.java:20)
at org.hl7.fhir.r4.hapi.validation.BaseValidatorBridge.validateResource(BaseValidatorBridge.java:43)
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validateResource(FhirInstanceValidator.java:33)
at ca.uhn.fhir.validation.FhirValidator.validateWithResult(FhirValidator.java:243)
at ca.uhn.fhir.validation.FhirValidator.validateWithResult(FhirValidator.java:198)
at com.json.schema.validator.InstanceValidatorEx.instanceValidator(InstanceValidatorEx.java:223)
at com.json.schema.validator.InstanceValidatorEx.main(InstanceValidatorEx.java:191)
Caused by: java.lang.UnsupportedOperationException
at org.hl7.fhir.r4.hapi.ctx.HapiWorkerContext.generateSnapshot(HapiWorkerContext.java:242)
at org.hl7.fhir.r4.elementmodel.ParserBase.getDefinition(ParserBase.java:122)
at org.hl7.fhir.r4.elementmodel.JsonParser.parse(JsonParser.java:123)
at org.hl7.fhir.r4.validation.InstanceValidator.validate(InstanceValidator.java:539)
at org.hl7.fhir.r4.validation.InstanceValidator.validate(InstanceValidator.java:531)
at org.hl7.fhir.r4.hapi.validation.FhirInstanceValidator.validate(FhirInstanceValidator.java:220)
Cardinality -> I created two "active:true" Json key-value pair thinking that it will throw cardinality error but neither of SchemxxxValidator / ParseValidator / InstanceValidator working. ...
That's an issue in HAPI - it validates the objects it loads from the JSON, and the JSON parser silently drops the duplicate property key. If you use the validator directly, this won't happen. I believe this is going to be addressed at some stage.
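A quick way to see this point in isolation (illustrative snippet; per the above, the duplicate key is dropped silently during parsing, so a validator module never sees it):

import ca.uhn.fhir.context.FhirContext;
import org.hl7.fhir.r4.model.Patient;

public class DuplicateKeyDemo {
    public static void main(String[] args) {
        FhirContext ctx = FhirContext.forR4();
        // JSON with the "active" key given twice, as in the question.
        String json = "{\"resourceType\":\"Patient\",\"active\":true,\"active\":true}";
        Patient parsed = ctx.newJsonParser().parseResource(Patient.class, json);
        // Only a single 'active' value survives parsing, so a validator module
        // operating on the parsed object cannot report a cardinality problem.
        System.out.println(ctx.newJsonParser().encodeResourceToString(parsed));
    }
}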
generateSnapshot failed
That's a real issue - I'm not sure why that's not set up, but the validator can't work if snapshots are not being generated.
I have a spout which reads from a source at 40K QPS.
I have two bolts. The first one reads from the source and opens a database connection to build a cache, which is refreshed every hour. The database has 2 connections open for a user, so the executor count I have for this bolt is 2.
The other bolt is assigned 200 executors and 200 tasks to process the requests.
I can't increase the number of connections to the DB. And I see that all the requests are going to the same few workers; the other workers keep waiting and print "0 send message".
kafkaSpoutConfigList:
  - executorsCount: 30
    taskCount: 30
    spoutName: 'kafka_consumer_spout'
    topicName: 'request'

processingBoltConfigList:
  - executorsCount: 2
    taskCount: 2
    boltName: 'db_bolt'
    boltClassName: 'com.Bolt1Class'
    boltSourceList:
      - 'kafka_consumer_spout'
  - executorsCount: 200
    taskCount: 200
    boltName: 'bolt2'
    boltClassName: 'com.Bolt2Class'
    boltSourceList:
      - 'db_bolt::streamx'

kafkaBoltConfigList:
  - executorsCount: 15
    taskCount: 15
    boltName: 'kafka_producer_bolt'
    topicName: 'consumer_topic'
    boltSourceList:
      - 'bolt2::Stream1'
  - executorsCount: 15
    taskCount: 15
    boltName: 'kafka_producer_bolt'
    topicName: 'data_test'
    boltSourceList:
      - 'bolt2::Stream2'
I am using localOrShuffleGrouping.
When you use LocalOrShuffleGrouping, the following happens:
If the target bolt has one or more tasks in the same worker process, tuples will be shuffled to just those in-process tasks. Otherwise, this acts like a normal shuffle grouping
So let's say your workers look like this:
worker1: {"bolt1 task 1", "bolt2 task 0-50"}
worker2: { "bolt1 task 2", "bolt2 task 50-100"}
worker3: { "bolt2 task 100-150"}
worker4: { "bolt2 task 150-200"}
In this case, because you're telling Storm to use a local grouping when sending from bolt1 to bolt2, all the tuples will go to workers 1 and 2. Workers 3 and 4 will be idle.
If you want to send tuples to workers 3 and 4 as well, you need to switch to shuffle grouping, as sketched below.
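A sketch of that change when the topology is wired up in code (Storm 1.x package names are assumed; component names, stream name and parallelism are taken from the configuration above):

import org.apache.storm.generated.StormTopology;
import org.apache.storm.topology.IRichSpout;
import org.apache.storm.topology.TopologyBuilder;

public class TopologyWiring {

    // kafkaSpout, Bolt1Class and Bolt2Class come from your own project; only
    // the grouping between db_bolt and bolt2 changes.
    public static StormTopology build(IRichSpout kafkaSpout) {
        TopologyBuilder builder = new TopologyBuilder();

        builder.setSpout("kafka_consumer_spout", kafkaSpout, 30).setNumTasks(30);

        builder.setBolt("db_bolt", new com.Bolt1Class(), 2).setNumTasks(2)
               .shuffleGrouping("kafka_consumer_spout");

        // shuffleGrouping instead of localOrShuffleGrouping: tuples emitted on
        // stream "streamx" by the 2 db_bolt tasks are spread over all 200
        // bolt2 tasks rather than only the co-located ones.
        builder.setBolt("bolt2", new com.Bolt2Class(), 200).setNumTasks(200)
               .shuffleGrouping("db_bolt", "streamx");

        return builder.createTopology();
    }
}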
I am using the flexcan driver on embedded Linux and I have a C program controlling CAN messages. In my C program I need to check the state of the CAN bus, e.g. bus-off or error-active. From the shell I can use a command like
ip -details -statistics link show can0
which gives the following result:
2: can0: <NOARP,UP,LOWER_UP,ECHO> mtu 16 qdisc pfifo_fast state UP mode DEFAULT group default qlen 10
link/can promiscuity 0
can state ERROR-ACTIVE (berr-counter tx 0 rx 0) restart-ms 100
bitrate 250000 sample-point 0.866
tq 266 prop-seg 6 phase-seg1 6 phase-seg2 2 sjw 1
flexcan: tseg1 4..16 tseg2 2..8 sjw 1..4 brp 1..256 brp-inc 1
clock 30000000
re-started bus-errors arbit-lost error-warn error-pass bus-off
31594 0 0 7686 25577 33258
RX: bytes packets errors dropped overrun mcast
5784560 723230 0 1 0 0
TX: bytes packets errors dropped carrier collsns
157896 19742 0 33269 0 0
How can I get that CAN state (ERROR-ACTIVE) in my C program? I can also see that the flexcan driver has registers that expose the state, but I don't know how to get at those values from my program either; registers like FLEXCAN_ESR_BOFF_INT contain the values that I need.
You can set up your socket to return CAN errors as messages.
As described in Network Problem Notifications the CAN interface driver
can generate so called Error Message Frames that can optionally be
passed to the user application in the same way as other CAN frames.
The possible errors are divided into different error classes that may
be filtered using the appropriate error mask. To register for every
possible error condition CAN_ERR_MASK can be used as value for the
error mask. The values for the error mask are defined in
linux/can/error.h
can_err_mask_t err_mask = ( CAN_ERR_TX_TIMEOUT | CAN_ERR_BUSOFF );
setsockopt(s, SOL_CAN_RAW, CAN_RAW_ERR_FILTER,
&err_mask, sizeof(err_mask));
See kernel documentation for more information.
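Putting the quoted pieces together, a minimal sketch that subscribes to every error class on can0 and prints bus-off and controller-state error frames could look like this (error handling trimmed for brevity):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/error.h>
#include <linux/can/raw.h>

int main(void)
{
    struct sockaddr_can addr = { 0 };
    struct ifreq ifr;
    struct can_frame frame;
    can_err_mask_t err_mask = CAN_ERR_MASK;   /* subscribe to every error class */
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);

    strcpy(ifr.ifr_name, "can0");
    ioctl(s, SIOCGIFINDEX, &ifr);
    addr.can_family = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    setsockopt(s, SOL_CAN_RAW, CAN_RAW_ERR_FILTER, &err_mask, sizeof(err_mask));

    while (read(s, &frame, sizeof(frame)) > 0) {
        if (!(frame.can_id & CAN_ERR_FLAG))
            continue;                         /* a normal CAN frame */
        if (frame.can_id & CAN_ERR_BUSOFF)
            printf("bus-off\n");
        if (frame.can_id & CAN_ERR_CRTL)
            printf("controller state change, data[1]=0x%02x\n", frame.data[1]);
    }

    close(s);
    return 0;
}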
Update
Take a look at libsocketcan and the routine can_get_state.
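And if you only need the controller state itself (ERROR-ACTIVE, ERROR-PASSIVE, BUS-OFF, ...) rather than individual error frames, a small sketch with libsocketcan's can_get_state (link with -lsocketcan; the interface name can0 is an assumption):

#include <stdio.h>
#include <libsocketcan.h>         /* can_get_state() */
#include <linux/can/netlink.h>    /* enum can_state values */

int main(void)
{
    int state;

    if (can_get_state("can0", &state) != 0) {
        fprintf(stderr, "can_get_state failed\n");
        return 1;
    }

    switch (state) {
    case CAN_STATE_ERROR_ACTIVE:  printf("ERROR-ACTIVE\n");  break;
    case CAN_STATE_ERROR_WARNING: printf("ERROR-WARNING\n"); break;
    case CAN_STATE_ERROR_PASSIVE: printf("ERROR-PASSIVE\n"); break;
    case CAN_STATE_BUS_OFF:       printf("BUS-OFF\n");       break;
    default:                      printf("state %d\n", state); break;
    }
    return 0;
}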
I am trying the Spring YARN example on GitHub [1], which is built with Gradle, and I successfully ran the custom-amservice example on YARN.
But I don't know how to allocate specific resources to the containers. I tried to override the onContainerAllocated and onContainerLaunched methods of StaticEventingAppmaster in my CustomAppmaster and set the resources there, like below.
@Override
protected void onContainerAllocated(Container container) {
    // == allocate resource
    Resource resource = new ResourcePBImpl();
    resource.setMemory(1300);
    resource.setVirtualCores(7);
    container.setResource(resource);
    // ====
    if (getMonitor() instanceof ContainerAware) {
        ((ContainerAware) getMonitor()).onContainer(Arrays.asList(container));
    }
    getLauncher().launchContainer(container, getCommands());
}

@Override
protected void onContainerLaunched(Container container) {
    // == allocate resource
    Resource resource = new ResourcePBImpl();
    resource.setMemory(1300);
    resource.setVirtualCores(7);
    container.setResource(resource);
    // ====
    if (getMonitor() instanceof ContainerAware) {
        ((ContainerAware) getMonitor()).onContainer(Arrays.asList(container));
    }
}
and in the log it seems to work:
2014-12-30 20:06:35,524 DEBUG [AbstractPollingAllocator] - response has 1 new containers
2014-12-30 20:06:35,525 DEBUG [AbstractPollingAllocator] - new container: container_1419934738198_0004_01_000003
//// this line shows the memory is 1300 and cpu core is 7
2014-12-30 20:06:35,525 DEBUG [DefaultContainerMonitor] - Reporting container=Container: [ContainerId: container_1419934738198_0004_01_000003, NodeId: yarn-master1:57799, NodeHttpAddress: yarn-master1:8042, Resource: <memory:1300, vCores:7>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.0.170:57799 }, ]
2014-12-30 20:06:35,526 DEBUG [DefaultContainerMonitor] - State after reportContainer: DefaultContainerMonitor [allocated=[container_1419934738198_0004_01_000003,], running=[container_1419934738198_0004_01_000002,], completed=[], failed=[]]
//// this line shows the memory is 1300 and cpu core is 7
2014-12-30 20:06:35,526 DEBUG [DefaultContainerLauncher] - Launching container: Container: [ContainerId: container_1419934738198_0004_01_000003, NodeId: yarn-master1:57799, NodeHttpAddress: yarn-master1:8042, Resource: <memory:1300, vCores:7>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.0.170:57799 }, ] with commands $JAVA_HOME/bin/java,org.springframework.yarn.container.CommandLineContainerRunner,container-context.xml,yarnContainer,1><LOG_DIR>/Container.stdout,2><LOG_DIR>/Container.stderr
However, when I run an application whose resource usage goes beyond the limit, the log shows that the memory limit is still 1 GB instead of 1300 MB, see below:
2014-12-30 20:07:05,929 DEBUG [AbstractPollingAllocator] - response has 1 completed containers
// The same container was stopped because it went beyond the limits.
2014-12-30 20:07:05,932 DEBUG [AbstractPollingAllocator] - completed container: container_1419934738198_0004_01_000003 with status=ContainerStatus: [ContainerId: container_1419934738198_0004_01_000003, State: COMPLETE, Diagnostics: Container [pid=10587,containerID=container_1419934738198_0004_01_000003] is running beyond virtual memory limits. Current usage: 86.6 MB of 1 GB physical memory used; 31.8 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1419934738198_0004_01_000003 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 10587 32315 10587 10587 (bash) 2 3 12652544 353 /bin/bash -c /home/novelbio/software/jdk//bin/java org.springframework.yarn.container.CommandLineContainerRunner container-context.xml yarnContainer 1>/home/novelbio/software/hadoop/logs/userlogs/application_1419934738198_0004/container_1419934738198_0004_01_000003/Container.stdout 2>/home/novelbio/software/hadoop/logs/userlogs/application_1419934738198_0004/container_1419934738198_0004_01_000003/Container.stderr
|- 10761 10587 10587 10587 (java) 108 10 34135896064 21811 /home/novelbio/software/jdk//bin/java org.springframework.yarn.container.CommandLineContainerRunner container-context.xml yarnContainer
, ExitStatus: 0, ]
The key point is in the log: "Current usage: 86.6 MB of 1 GB physical memory used" instead of 1.3 GB.
So I think my method didn't take effect. Could anybody tell me how to allocate resources correctly?
This is one of the problematic areas in YARN, which I believe will eventually get better as more and more non-MR apps are used on YARN. I believe your settings are applied correctly, but somewhat strange behaviour in YARN is causing these problems. Currently there is very little we can do from an application point of view, because most of the memory settings are enforced in YARN itself and requests from apps are just "requests".
Spring XD on YARN relies on this same machinery, and it's worth checking what we wrote in its docs: https://github.com/spring-projects/spring-xd/wiki/Running-on-YARN (see the section Configuring YARN memory reservations).
I'll try to make sure that this same info also goes to our Spring Hadoop and Spring YARN ref docs.
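For reference, the numbers in the log above line up with two of the settings that section talks about: container sizes are granted in units of yarn.scheduler.minimum-allocation-mb (1024 MB by default, hence the "1 GB"), and the kill is done by the NodeManager's virtual-memory check, whose limit is the physical limit times yarn.nodemanager.vmem-pmem-ratio (2.1 by default, hence the "2.1 GB"). A hedged yarn-site.xml sketch, with illustrative values:

<property>
  <!-- Hand out containers in finer-grained steps than the default 1024 MB. -->
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>256</value>
</property>
<property>
  <!-- Disable the vmem check that killed the container above; alternatively,
       raise yarn.nodemanager.vmem-pmem-ratio instead of disabling it. -->
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>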