Apply censorship to LIN Slave transmitted frames before they reach the Master via CAPL

In my current setup, I have a LIN Master and a LIN Slave. The schedule table is unconditional and never re-negotiated:
Master Frame
Slave Frame
Slave Frame
I'm using a physical bus with a simulated Master and a physical Slave. My goal is to apply censorship to certain LIN frames, in real time.
For instance, I know that a request from the Master (possibly single- or multi-frame) will trigger a specific Slave response. I would like to catch this response, say in a CAPL script, perform checks on its content, and apply selective censorship to it, so that the frame received by the Master doesn't say what the Slave transmitted in the first place. When no Master request is sent, both Master and Slave keep sending empty frames to fulfill the schedule table.
I can easily "catch" the frame with a non-transparent CAPL node, but I'm unsure how I would re-transmit it.
According to the output() keyword documentation:
To reconfigure response data of LIN frame. In that case RTR selector has to be set to 0. The LIN hardware responds to the next request of the specified frame with the newly configured data.
I don't want to add a delay of one message in transmission. And given the following constraints, I have no idea how to do this, or whether it is even possible with the CAPL API in CANoe:
I cannot foresee when a Master request is broadcast;
I know that the Slave will reply immediately with an acknowledgement;
After a time that I cannot foresee, the Slave will send additional data that I want to censor;
The Slave reply has to be modified in place and re-transmitted;
The original Slave reply must never reach the Master.
Rejected pseudo-code:
on linFrame 0x01   // <-- slave frame
{
  if ( /* payload I'm looking for */ )
  {
    // edit payload content
  }
  output(this);
}
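For reference, written out with the documented reconfigure-response usage, the idea would look roughly like the sketch below (frame 0x01 and the 0xAA check are placeholders). Since output() with the RTR selector set to 0 only reconfigures what is answered to the next header for this frame, it bakes in exactly the one-frame delay I want to avoid:

on linFrame 0x01              // slave response frame (placeholder ID)
{
  linFrame 0x01 fr;           // local copy of the received frame
  int i;

  fr.dlc = this.dlc;
  for (i = 0; i < this.dlc; i++)
    fr.byte(i) = this.byte(i);

  if (fr.byte(0) == 0xAA)     // hypothetical "payload I'm looking for" check
    fr.byte(1) = 0x00;        // edit payload content

  fr.RTR = 0;                 // 0 = reconfigure response data, per the docs
  output(fr);                 // takes effect only on the NEXT header for 0x01
}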

Especially since
The original Slave reply must never reach the Master.
you have to physically disconnect the Master and the Slave and put CANoe in between.
You would need a network interface with (at least) two LIN channels - one connected to the Master and one connected to the Slave - and CANoe would need to be set up as a gateway between these two channels, i.e. acting as a Master towards the Slave and as a Slave towards the Master.
In your gateway implementation you can then do whatever you want with the messages exchanged between master and slave.
Doable but not that much fun.
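A minimal sketch of that gateway idea, assuming a CAPL node assigned to both LIN channels and simulating the Slave towards the real Master; frame ID 0x01, the 0xAA content check, and the exact linUpdateResponse() signature are assumptions to verify against your CANoe version's help:

variables
{
  linFrame 0x01 gCensored;         // response our simulated slave publishes to the Master
}

on linFrame 0x01                   // real Slave's reply, seen on the slave-side channel
{
  int i;

  // Copy the payload before deciding what the Master is allowed to see
  gCensored.dlc = this.dlc;
  for (i = 0; i < this.dlc; i++)
    gCensored.byte(i) = this.byte(i);

  if (gCensored.byte(0) == 0xAA)   // hypothetical content check
    gCensored.byte(1) = 0x00;      // censor

  // Reconfigure what OUR simulated slave answers on the master-side channel.
  // The real Slave's reply itself never appears on the Master's bus.
  linUpdateResponse(gCensored);
}

Note that the Master still only ever sees the response most recently configured, so there can be up to one schedule slot of latency between the real Slave's reply and the censored version reaching the Master.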

Related

How to solve the problem of a high-data-rate flow hitting switches that do not have a flow rule for that flow, in OpenFlow/Mininet?

I am using iperf traffic generation and a hard timeout as an extension to the simple_switch_13.py code in Mininet with RYU SDN. I am using a linear topology with 8 switches. I set the hard timeout to 5 seconds.
I am working with only one flow. I started the iperf traffic between two hosts (let's say h1 to h7; the terms used are the same as in the Mininet linear topology) for 10 seconds. When the flow started, ARP packets were generated in the network. After that, an ARP reply from h7 was sent to h1, which created seven packet-in messages (from s7, s6, ..., s1); the respective flow rules were installed in the switches, and the reply finally reached h1. Then h1 sent the TCP flow to h7, which also created seven packet-in messages (from s1, s2, ..., s7); the respective flow rules were installed, and the flow reached h7. So far everything worked fine.
But once the timeout (5 seconds) expires, the flow rules in the switches are deleted. Because the flow is still in the network, what should actually happen is this: the controller sends one packet-in message, and the switch buffers the rest of the packets, so that when the respective flow rule is installed in the switch's flow table, the buffered packets use the installed rule to pass through the switch. But that is not happening. The controller is getting a lot of packet-in messages before the flow rule gets installed in the switch (every packet that arrives at the switch is sent to the controller). What might be the reason for the many packet-in messages? Are the switch's buffers not working correctly (though I am getting packet-in messages with a buffer_id)? How can I solve this issue?
This also happens with an idle timeout, and with a UDP flow at the start (i.e. when h1 starts communicating with h7): the switches along the path generate a lot of packet-in messages.
After doing a lot of research, I understand that it is not a problem with the hard or idle timeout. It happens when a flow with a high data rate hits a switch that does not have a flow rule for it: the switch then sends a lot of packet-in messages for the same flow. It is not queuing (or maybe not storing) the rest of the flow's packets after sending one packet-in for that flow. How can I solve this issue in Mininet?

linUpdateResponse vs Output - how are they different for LIN simulated nodes?

In a LIN simulated slave node, what is the difference between output and linUpdateResponse?
From output docs:
To reconfigure response data of LIN frame. In that case RTR selector has to be set to 0. The LIN hardware responds to the next request of the specified frame with the newly configured data.
So, I can reconfigure the response, and the next time the (real?) hardware talks, I will have successfully overridden it, right?
From linUpdateResponse docs:
Updates the response data of a specific LIN frame.
letting me set data length (dlc) and data content for a specific frame ID.
How are they different, and are there examples available? I can't quite understand how to use the latter from the example provided.
For LIN slave nodes, there is not really a difference between output and linUpdateResponse.
Both modify the internal state of the (simulated) slave and change the frame that will be sent the next time the master asks for the frame.
As you have posted, when using output you have to set the RTR selector.
But apart from that, there is no difference.
I, personally, think that linUpdateResponse is more convenient to use.
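As a minimal sketch, assuming a simulated slave node and a hypothetical response frame 0x33 (either call alone is enough; both are shown here only for comparison):

on key 'u'
{
  linFrame 0x33 fr;      // hypothetical slave response frame

  fr.dlc = 2;
  fr.byte(0) = 0x12;
  fr.byte(1) = 0x34;

  // Variant 1: output() -- the RTR selector must be 0 so the call
  // reconfigures the response data instead of transmitting anything.
  fr.RTR = 0;
  output(fr);

  // Variant 2: linUpdateResponse() -- same effect, no RTR selector needed.
  linUpdateResponse(fr);
}

Either way, the slave answers the next header for 0x33 with the new data.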

What happens in linux wifi driver after a packet is sent (life of packet)?

I am working on a low-latency application, sending UDP packets from a master to a slave. The master acts as an access point, sending the data directly to the slave. Mostly it works well, but sometimes data arrives late at the slave. To narrow down the possible sources of the delay, I want to timestamp the packets when they are sent out on the master device.
To achieve that I need a hook where I can take a timestamp right after a packet is sent out.
According to http://www.xml.com/ldd/chapter/book/ch14.html#t7 there should be an interrupt after a packet is sent out, but I can't really find where the TX interrupt is serviced.
This is the driver:
drivers/net/wireless/bcmdhd/dhd_linux.c
I call dhd_start_xmit(..) from another driver to send out my packet. dhd_start_xmit(..) calls dhd_sendpkt(..), and then dhd_bus_txdata(..) (in bcmdhd/dhdpcie.c) is called, where the data is queued. That's basically where I lose track of what happens after the queue is scheduled in dhd_bus_schedule_queue(..).
Question
Does someone know what happens right after a packet is physically sent out in this particular driver, and can maybe point me to the relevant piece of code?
Of course any other advice how to tackle the problem is also welcome.
Thanks
In the case of any network hardware and network driver, these steps happen:
1. The driver has a transmit descriptor, in a format understandable by the hardware.
2. The driver fills the descriptor with the packet currently being transmitted and sends it to the hardware queue for transmission.
3. After successful transmission, an interrupt is generated by the hardware.
4. This interrupt calls the transmission-completion function in the driver, which frees the memory of the previous packet and resets many things, including the descriptor.
Here, at line no. 1829, you can see the packet being freed:
PKTFREE(dhd->osh, pkt, TRUE);
The packet is freed in the function
static void BCMFASTPATH dhd_prot_txstatus_process(dhd_pub_t *dhd, void *buf, uint16 msglen)
in the file dhd_msgbuf.c, with
PKTFREE(dhd->osh, pkt, TRUE);

Hadoop 2.0 data write operation acknowledgement

I have a small query regarding Hadoop data writes.
From Apache documentation
For the common case, when the replication factor is three, HDFS’s placement policy is to put one replica on one node in the local rack, another on a node in a different (remote) rack, and the last on a different node in the same remote rack. This policy cuts the inter-rack write traffic which generally improves write performance. The chance of rack failure is far less than that of node failure;
In the image below, when is the write acknowledgement treated as successful?
1) Writing data to the first data node?
2) Writing data to the first data node + 2 other data nodes?
I am asking this question because I have heard two conflicting statements in YouTube videos. One video said that the write is successful once the data is written to one data node; the other said that the acknowledgement is sent only after the data is written to all three nodes.
Step 1: The client creates the file by calling create() method on DistributedFileSystem.
Step 2: DistributedFileSystem makes an RPC call to the namenode to create a new file in the filesystem’s namespace, with no blocks associated with it.
The namenode performs various checks to make sure the file doesn’t already exist and that the client has the right permissions to create the file. If these checks pass, the namenode makes a record of the new file; otherwise, file creation fails and the client is thrown an IOException. The DistributedFileSystem returns an FSDataOutputStream for the client to start writing data to.
Step 3: As the client writes data, DFSOutputStream splits it into packets, which it writes to an internal queue, called the data queue. The data queue is consumed by the DataStreamer, which is responsible for asking the namenode to allocate new blocks by picking a list of suitable datanodes to store the replicas. The list of datanodes forms a pipeline, and here we’ll assume the replication level is three, so there are three nodes in the pipeline. The DataStreamer streams the packets to the first datanode in the pipeline, which stores the packet and forwards it to the second datanode in the pipeline.
Step 4: Similarly, the second datanode stores the packet and forwards it to the third (and last) datanode in the pipeline.
Step 5: DFSOutputStream also maintains an internal queue of packets that are waiting to be acknowledged by datanodes, called the ack queue. A packet is removed from the ack queue only when it has been acknowledged by all the datanodes in the pipeline.
Step 6: When the client has finished writing data, it calls close() on the stream.
Step 7: This action flushes all the remaining packets to the datanode pipeline and waits for acknowledgments before contacting the namenode to signal that the file is complete. The namenode already knows which blocks the file is made up of, so it only has to wait for blocks to be minimally replicated before returning successfully.
A data write operation is considered successful once one replica has been written successfully. This is governed by the property dfs.namenode.replication.min in the hdfs-default.xml file.
If a datanode fails while a replica is being written, the write is not considered unsuccessful; the block is under-replicated, and the missing replicas are created later when the cluster is balanced.
The ack packet is independent of the status of the data written to the datanodes. Even if the data packet is not written, the acknowledgement packet is delivered.

LIN bus slave transmitting without master

How does a LIN bus slave device behave if no master is connected?
In my research I noticed that in LIN version 2.0 every message is initiated by a header which is sent by the master device.
For a test, I powered a LIN slave device and did not connect any master to the bus. Then I measured the voltage on the LIN bus line with an oscilloscope, and it seems that the slave device is transmitting data.
How can this be explained?
Have you tried to decode the frames? Do they have the break period, sync byte, and ID? Possibly this is a master, not a slave.
That is not possible; I mean, a slave cannot start the communication.
The master first has to send a header:
- Sync Break Field (14 Tbit)
- Sync Field (0x55)
- ID field
...and when a slave on the bus detects that this request is addressed to it, it replies; otherwise, it ignores the received header.
A slave cannot transmit data if it does not get an ID published by the master. The master must first say "ID x, please reply"; only then will the slave reply. Otherwise, your slave may be designed to keep on answering without a request, causing collisions.
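As a rough CAPL illustration of that sequence (a sketch, assuming a simulated Master in CANoe, a hypothetical frame ID 0x10, and that an RTR selector of 1 makes output() transmit the header only - the counterpart of the RTR = 0 case quoted above):

on key 'h'
{
  linFrame 0x10 header;    // hypothetical slave-response frame ID

  // Master side: transmit only the header (break, sync, ID);
  // the addressed slave is expected to fill in the data bytes.
  header.RTR = 1;
  output(header);
}

on linFrame 0x10
{
  // Fires once the slave has answered the header with its response.
  write("Slave responded, byte 0 = 0x%02X", this.byte(0));
}

Without that header on the bus, a conformant slave stays silent.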
