Can't see custom topology on DLUX - opendaylight

I created a custom topology in Mininet and added flow rules to the switches. I can ping the hosts but cannot see the topology on DLUX. I tried other topologies such as single and linear, and these work fine. I do not understand what the problem with the custom topology is. Could someone shed some light?
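For context, a custom Mininet topology of this kind is typically a small Python script along the following lines; the controller address, switch/host names and OpenFlow version here are assumptions, not taken from the question.

# custom_topo.py -- minimal sketch of a custom topology attached to a remote ODL controller.
# Run with root privileges (e.g. sudo python custom_topo.py).
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.topo import Topo
from mininet.cli import CLI

class MyTopo(Topo):
    def build(self):
        s1 = self.addSwitch('s1')
        s2 = self.addSwitch('s2')
        h1 = self.addHost('h1')
        h2 = self.addHost('h2')
        self.addLink(h1, s1)
        self.addLink(h2, s2)
        self.addLink(s1, s2)

if __name__ == '__main__':
    net = Mininet(topo=MyTopo(),
                  controller=lambda name: RemoteController(name, ip='127.0.0.1', port=6633),
                  switch=lambda name, **kw: OVSSwitch(name, protocols='OpenFlow13', **kw))
    net.start()
    CLI(net)
    net.stop()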

Try restarting ODL, like this person is doing. I would suspect that you are hitting some bug in the l2switch project. But you can debug further by inspecting the flows on each switch in your custom topology. Each switch should have a flow with dl_type=0x88cc that punts to the CONTROLLER. Those are the LLDP packets, which is how ODL learns the links, which in turn is how DLUX paints them in your GUI. If the flows aren't there, then you would want to figure out why: maybe the switches are ignoring the flow programming (check the switch logs), or maybe the flows are not even being sent (check the ODL logs, or even do a tcpdump to see whether OpenFlow rules are being sent to the switch). If the flows are being programmed, and the LLDP packets are being punted to ODL, then the problem could be internal to ODL and DLUX.
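As a quick way to run that check, something along these lines could dump the flows of each bridge and look for the LLDP punt entry; the switch names and the need for root privileges are assumptions.

# check_lldp_flows.py -- sketch: look for the LLDP punt flow (dl_type=0x88cc -> CONTROLLER)
# on each OVS bridge. Switch names are assumed to follow Mininet's s1, s2, ... convention.
# Run with sufficient privileges (e.g. sudo).
import subprocess

def has_lldp_punt(bridge):
    out = subprocess.run(
        ['ovs-ofctl', '-O', 'OpenFlow13', 'dump-flows', bridge],
        capture_output=True, text=True, check=True).stdout
    return any('0x88cc' in line and 'CONTROLLER' in line for line in out.splitlines())

for bridge in ['s1', 's2']:          # adjust to the bridges in your topology
    print(bridge, 'LLDP punt flow present:', has_lldp_punt(bridge))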
To be fair, DLUX is a stale project that is slated for removal. There
may be bugs you are hitting.

It's strange that I can suddenly ping now without making any changes. I have faced this problem earlier too, where the controller doesn't work for a week or so and then suddenly starts working again.

The problem is not with ODL but with the OVS switch. You need to run this command for your switch:
sudo ovs-vsctl set bridge s1 protocols=OpenFlow13
http://kspviswa.github.io/Installing-ODL-BE.html
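Note that the command above only changes s1; for a custom topology you would repeat it for every bridge, for example with a small sketch like this (the bridge names are assumptions):

# set_of13.py -- sketch: set the OpenFlow 1.3 protocol version on every bridge of the topology.
# Run with sufficient privileges.
import subprocess

for bridge in ['s1', 's2', 's3']:
    subprocess.run(['ovs-vsctl', 'set', 'bridge', bridge, 'protocols=OpenFlow13'], check=True)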

Related

Automatic reconnect in case of network failures

I am testing the .NET version of ZeroMQ to understand how to handle network failures. I put the server (PUB socket) on one external machine and am debugging the client (SUB socket). If I stop my local Wi-Fi connection for a few seconds, ZeroMQ automatically recovers and I even get the remaining values. However, if I disable Wi-Fi for a longer time, like a minute, then it just gets stuck waiting on a frame. How can I configure the period within which ZeroMQ is still able to recover? And how can I reconnect manually after, say, several minutes? How can I tell that the socket is stuck and I need to close and reopen it?
Q: "How can I configure this ... ?"
A: Use the .NET version of the zmq_setsockopt() detailed parameter settings - the family of link-management parameters like ZMQ_RECONNECT_IVL, ZMQ_RCVTIMEO and the like.
All other questions depend on your code.
If you use the blocking forms of the .recv() methods, you can easily throw yourself into unsalvageable deadlocks; best never block your own code (why would one ever deliberately give up one's own code's domain of control?).
If you need to understand the low-level internal link-management details, do not hesitate to use the zmq_socket_monitor() instrumentation (if it is not available in the .NET binding, you can still use another language to see the details a monitor instance reports about link state and related events).
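To illustrate those knobs, here is a rough sketch using the pyzmq binding (the question uses the .NET binding, which exposes equivalent options under its own names); the endpoint and timing values are assumptions.

# sub_reconnect.py -- sketch of the link-management options mentioned above, shown with pyzmq.
import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.setsockopt(zmq.SUBSCRIBE, b"")            # receive everything
sub.setsockopt(zmq.RECONNECT_IVL, 1000)       # first reconnect attempt after 1 s
sub.setsockopt(zmq.RECONNECT_IVL_MAX, 30000)  # back off up to 30 s between attempts
sub.setsockopt(zmq.RCVTIMEO, 5000)            # a bare .recv() raises zmq.Again after 5 s instead of blocking forever
sub.connect("tcp://192.0.2.10:5556")          # assumed publisher endpoint

poller = zmq.Poller()
poller.register(sub, zmq.POLLIN)
while True:
    if dict(poller.poll(timeout=1000)).get(sub) == zmq.POLLIN:
        print(sub.recv_string())
    else:
        pass  # no message within 1 s -- the code keeps control and can decide what to do next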
I was able to find an answer on their GitHub https://github.com/zeromq/netmq/issues/845. Seems that the behavior is by design as I got the same with native zmq lib via .NET binding.

Sending ZeroMQ messages from openvswitch to SDN controller

I have been working with two C programs for the past few months, one is a ZeroMQ publisher and the other a ZeroMQ subscriber.
They exchange simple string messages between Virtual Machines and everything works fine.
Now, in one of the VMs I've been working on (VM A) I configured an openvswitch and in another VM a Ryu controller. The diagram is the following:
I "bound" the bridge interface of OVS to the eth3 interface of VM A. Everything works well and flow entries are added by the Ryu controller or manually added by me.
Now, I want to add the ZeroMQ publisher-subscriber programs I had already used countless times. Here, the controller is the subscriber and OVS the publisher.
However, the messages never arrive at the controller... If I run the ZeroMQ publisher from another machine on net A that does NOT have OVS installed and configured, the messages arrive at the Ryu controller successfully.
When I run the publisher and subscriber, this is the output of netstat -at on both machines. VM A is "OpenWrt" and the Ryu controller is "control" (consider only the last line in #control VM):
Is there something I'm missing? Is it really impossible to send TCP messages from the OVS to the controller? Should I create some kind of tunnel from VM A to the controller for the messages to flow through? Or is it just an issue with ZeroMQ not working in OpenFlow-enabled architectures?
If any of you have ever worked with a message-queueing technology in OpenFlow environments, please let me know.
I appreciate any kind of help, I've been stuck for weeks.
Note: I can ping VM A from controller and vice-versa.
Q: Is there something I'm missing?
Given that no MCVE code was provided, a pair of LoS-visibility test results is missing, from:
<ZMQ_SUB_HOST>:~$ traceroute --sport=<ZMQ_TRANSPORT_PORT#> <ZMQ_PUB_HOST>
and
<ZMQ_PUB_HOST>:~$ traceroute --sport=<ZMQ_TRANSPORT_PORT#> <ZMQ_SUB_HOST>
Note: I can ping VM A from controller and vice-versa.
This remark sounds promising, yet both the LoS-visibility test results and the MCVE code details are important, and they are missing.
Q: Should I create some kind of tunnel from VM A to the controller where messages would flow through?
Sure, you definitely can. This will isolate the L3-level issues, and your tunneling path will provide a way to ignore all of them, at the cost of a slight latency trade-off.
Q: Or is it just an issue with ZeroMQ that does not work with OpenFlow-enabled architectures?
There is no specific reason to blame the ZeroMQ infrastructure for stopping work, other than some so-far-unspecified L3/SDN-related issue. Given that a fair tcp://-transport-class path exists and is used in a properly configured ZeroMQ transport infrastructure, consisting of an ad-hoc, dynamic setup of an M-.bind( <class>://<addr>:<port> )-s : N-.connect( <class>://<addr>:<port> )-s relation, reporting no error state(s) on the respective operation(s), ZeroMQ shall and will work as it always does.
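As a way to test exactly that bind/connect relation independently of Ryu, a minimal pub/sub probe could look like the sketch below; the port, roles and the controller-address placeholder are assumptions.

# zmq_path_probe.py -- sketch: verify the tcp:// path between the OVS VM and the controller VM.
import sys
import time
import zmq

ctx = zmq.Context()
mode = sys.argv[1] if len(sys.argv) > 1 else 'sub'

if mode == 'sub':                                   # run this on the controller VM
    sock = ctx.socket(zmq.SUB)
    sock.setsockopt(zmq.SUBSCRIBE, b"")
    sock.bind("tcp://0.0.0.0:5556")
    print(sock.recv_string())                       # blocks until the probe message arrives
else:                                               # run "pub" on the OVS VM
    sock = ctx.socket(zmq.PUB)
    sock.connect("tcp://<controller-ip>:5556")      # fill in the controller's address
    time.sleep(1)                                   # give the TCP connection time to come up
    sock.send_string("hello from the OVS VM")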

Is ACK mandatory in CAN bus communication

I am making a CAN simulator for GPS trackers; they only record CAN data and don't send an ACK. Is it possible to send CAN data with a Raspberry Pi, using an MCP2515/TJA1050, without any device on the bus that would generate the ACK?
This will usually result in continuous retransmission.
Some devices have a "one-shot" transmit mode which just sends the CAN frame and does not attempt a retransmission. If your transmitter has this mode you can do what you describe; otherwise you will get a lot of retransmissions.
No it isn't possible, you need at least 2 nodes that are actively participating in the communication. This can however be fixed by just providing another CAN controller on the bus, which doesn't have to do anything intelligent except the ACK part.
For development/debug/test purposes you can however put your own node in "loopback mode", meaning it will talk to itself. Can be handy if you don't have the proper hardware available yet.
You can try to set the controlmode presume-ack to on.
Assuming you are using the ip command for configuring your CAN interfaces, that would be something like
ip link set <DEVICE> type can presume-ack on
This will ignore missing ACKs. However I am not sure whether this works with all controllers.
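For completeness, a small sketch of the sending side over SocketCAN with python-can, assuming the interface was configured beforehand with ip link as discussed above; the bitrate, IDs and data bytes are assumptions.

# send_frame.py -- sketch: transmit one CAN frame from the Raspberry Pi via SocketCAN.
# Assumes can0 was configured beforehand, e.g. with
#   ip link set can0 type can bitrate 500000 one-shot on   (or: presume-ack on)
#   ip link set can0 up
import can

bus = can.interface.Bus(channel='can0', bustype='socketcan')
msg = can.Message(arbitration_id=0x123, data=[0x11, 0x22, 0x33, 0x44], is_extended_id=False)
try:
    bus.send(msg)
    print("frame queued for transmission")
except can.CanError as exc:
    print("transmission failed:", exc)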

MCP25625 doesn't send CAN messages

I'm using an MCP25625, which is an MCP2515 with an integrated MCP2551, and I am trying to send messages in a loop.
For some reason I don't see any signal at all on the CANH, CANL lines.
SPI communication works correctly
I use the software reset procedure
There is a clear 20 MHz sine wave from the crystal
There is a TXCAN signal
At the moment there is nothing at all connected to CANL, CANH, just the probe.
I also tried to run in LOOPBACK mode and it works, but in NORMAL mode there is nothing coming out.
It seems like the transceiver is broken? I have changed 2 chips already, so that shouldn't be the problem.
Any suggestions?
Schematics
Have you considered the modes of operation of the CAN transceiver?
In your schematic, the pin's value is not clear.
If you have connected it to the MCU, please pull it LOW to select the normal operation mode for the transceiver (it is a different configuration from the CAN controller settings, hence it might cause some confusion!).
Controlling it from the MCU is a good choice, as it gives more control to prevent network communication from being blocked by a CAN controller that is out of control.
Otherwise, connect it to ground to ensure normal operation mode specifically for the built-in transceiver.
I have referred to the datasheets of the MCP25625, MCP2515 and TJA1050 to reach this conclusion.
The TJA1050 has pin S for selecting high-speed mode and silent mode. These modes are similar to the normal mode and standby mode, respectively, of the MCP25625's transceiver.
Also, the pin-S configuration in the TJA1050 is similar to the pin-STBY configuration in the MCP25625.
0 (LOW) for high-speed/normal mode of the TJA1050/MCP25625 transceiver
1 (HIGH) for silent/standby mode of the TJA1050/MCP25625 transceiver
Hope this helps.
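If the STBY pin happens to be wired to a GPIO of a Linux-capable host (for example a Raspberry Pi rather than a bare-metal MCU), holding it LOW could be as simple as the following sketch; the host choice and the pin number are purely assumptions.

# stby_low.py -- sketch: hold the MCP25625 transceiver STBY pin LOW (normal mode) from a
# Linux host GPIO such as a Raspberry Pi. The BCM pin number 25 is an assumption.
import RPi.GPIO as GPIO

STBY_PIN = 25
GPIO.setmode(GPIO.BCM)
GPIO.setup(STBY_PIN, GPIO.OUT, initial=GPIO.LOW)  # LOW = high-speed/normal mode, HIGH = standby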
At the moment there is nothing at all connected to CANL,CANH, just the probe.
I hope you have connected the termination resistor? It is on the schematic, but ...

Many-to-many messaging on local machine without broker

I'm looking for a mechanism to use to create a simple many-to-many messaging system to allow Windows applications to communicate on a single machine but across sessions and desktops.
I have the following hard requirements:
Must work across all Windows sessions on a single machine.
Must work on Windows XP and later.
No global configuration required.
No central coordinator/broker/server.
Must not require elevated privileges from the applications.
I do not require guaranteed delivery of messages.
I have looked at many, many options. This is my last-ditch request for ideas.
The following have been rejected for violating one or more of the above requirements:
ZeroMQ: In order to do many-to-many messaging a central broker is required.
Named pipes: Requires a central server to receive messages and forward them on.
Multicast sockets: Requires a properly configured network card with a valid IP address, i.e. a global configuration.
Shared Memory Queue: To create shared memory in the global namespace requires elevated privileges.
Multicast sockets so nearly works. What else can anyone suggest? I'd consider anything from pre-packaged libraries to bare-metal Windows API functionality.
(Edit 27 September) A bit more context:
By 'central coordinator/broker/server', I mean a separate process that must be running at the time that an application tries to send a message. The problem I see with this is that it is impossible to guarantee that this process really will be running when it is needed. Typically a Windows service would be used, but there is no way to guarantee that a particular service will always be started before any user has logged in, or to guarantee that it has not been stopped for some reason. Run on demand introduces a delay when the first message is sent while the service starts, and raises issues with privileges.
Multicast sockets nearly worked because it manages to avoid completely the need for a central coordinator process and does not require elevated privileges from the applications sending or receiving multicast packets. But you have to have a configured IP address - you can't do multicast on the loopback interface (even though multicast with TTL=0 on a configured NIC behaves as one would expect of loopback multicast) - and that is the deal-breaker.
Maybe I am completely misunderstanding the problem, especially the "no central broker", but have you considered something based on tuple spaces?
--
After the comments exchange, please consider the following as my "definitive" answer, then:
Use a file-based solution, and host the directory tree on a RAM disk to ensure good performance.
I'd also suggest having a look at the following StackOverflow discussion (even if it's Java-based) for possible pointers on how to manage locking and transactions on the filesystem.
This one (.NET based) may be of help, too.
How about UDP broadcasting?
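A rough sketch of that idea (the port number is an assumption): every peer binds the same UDP port with SO_REUSEADDR and sends to the broadcast address, so each running peer sees every message. Note it still depends on Windows delivering broadcasts locally, which runs into the same interface-configuration concern as multicast.

# peer.py -- sketch of the UDP-broadcast idea: every peer is both sender and receiver.
import socket
import threading

PORT = 49152

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)   # let several peers share the port
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.bind(('', PORT))

def listen():
    while True:
        data, addr = sock.recvfrom(4096)
        print('received', data, 'from', addr)

threading.Thread(target=listen, daemon=True).start()

for line in iter(input, ''):                                  # an empty line quits
    sock.sendto(line.encode(), ('255.255.255.255', PORT))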
Couldn't you use a localhost socket?
/Tony
In the end I decided that one of the hard requirements had to go, as the problem could not be solved in any reasonable way as originally stated.
My final solution is a Windows service running a named pipe server. Any application or service can connect to an instance of the pipe and send messages. Any message received by the server is echoed to all pipe instances.
I really liked p.marino's answer, but in the end it looked like a lot of complexity for what is really a very basic piece of functionality.
The other possibility that appealed to me, though again it fell on the complexity hurdle, was to write a kernel driver to manage the multicasting. There would have been several mechanisms possible in this case, but the overhead of writing a bug-free kernel driver was just too high.
