How can I calculate the flow rate of a TCP session?

In my project I need to calculate the flow rate of a TCP session: should I use total_bytes_of_session/total_time_elapsed, or use the TCP window and TCP RTT to calculate it?
Thanks!

In my project I need to calculate the flow rate of a TCP session
I assume you mean the flow rate of a session that has just ended and for which you have captured data?
should I use total_bytes_of_session/total_time_elapsed
Yes, if you have that data.
or use the TCP window and TCP RTT to calculate it?
You don't have that data and can't get it, so you can't use any calculation that relies on it.
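For completeness, a minimal C sketch of that division; the function and variable names are mine, purely illustrative:

    #include <stdio.h>

    /* Average flow rate of a finished session, assuming you have the
     * total byte count and the elapsed wall-clock time. */
    double session_flow_rate(unsigned long long total_bytes, double elapsed_seconds)
    {
        if (elapsed_seconds <= 0.0)
            return 0.0;
        return (double)total_bytes / elapsed_seconds; /* bytes per second */
    }

    int main(void)
    {
        /* Example: 10,000,000 bytes transferred in 8.5 s */
        printf("%.1f bytes/s\n", session_flow_rate(10000000ULL, 8.5));
        return 0;
    }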

Related

How to test N_As, N_Ar timeout parameters in the CanTp protocol using a CAPL script or any other possible way?

As part of CanTp protocol-related tests, I have been trying to test N_As and N_Ar timeout errors, where N_AsMax = 1000ms and N_ArMax = 1000ms.
Is it possible to create the N_As and N_Ar timeouts with CANalyzer and/or using CAPL?
It would be a great help if you could share a possible way to test these timing parameters using CANalyzer or CANoe.
CanTp is a protocol to extend the maximum data length (in bytes) of a CAN data frame beyond the traditional 8 bytes; please refer to ISO 15765-2. Here you can have Single Frames, or Multi-Frames, which are trains of related frames, each carrying a portion of the overall PDU. A flow control frame is sent, usually by the receiver, to instruct the transmitter on the protocol to be used for frame splitting.
According to the docs,
N_Ar [is the] Time for transmission of the CAN frame (any N-PDU) on the receiver side (see ISO 15765-2)
N_As [is the] Time for transmission of the CAN frame (any N-PDU) on the sender side (see ISO 15765-2).
In addition, the following requirements are relevant:
[SWS_CanTp_00075] ⌈If the transmit confirmation is not received after a maximum time (equal to N_As), the CanTp module shall act as if it had received an unsuccessful transmission confirmation and any late confirmation shall be ignored. The CanTp module shall cancel (internally) the failed transmission. ⌋ ( )
[SWS_CanTp_00311] ⌈In case of N_Ar timeout occurrence (no confirmation from CAN driver for any of the FC frame sent) the CanTp module shall abort reception and notify the upper layer of this failure by calling the indication function PduR_CanTpRxIndication() with the result E_NOT_OK. ⌋ ( )
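To make the quoted requirements concrete, here is a hedged C sketch of what sender-side N_As supervision amounts to; every name in it is invented for illustration, not taken from any real CanTp implementation (the receiver-side N_Ar case is symmetrical):

    #include <stdbool.h>
    #include <stdint.h>

    #define N_AS_MAX_MS 1000u   /* N_AsMax from the question */

    /* State for one in-progress transmission (hypothetical). */
    typedef struct {
        bool     waiting_for_confirmation;
        uint32_t tx_start_ms;
    } cantp_tx_state;

    /* Called when a frame is handed to the CAN driver: start N_As. */
    void cantp_on_frame_sent(cantp_tx_state *s, uint32_t now_ms)
    {
        s->waiting_for_confirmation = true;
        s->tx_start_ms = now_ms;
    }

    /* Called on TX confirmation from the driver: stop N_As. */
    void cantp_on_tx_confirmation(cantp_tx_state *s)
    {
        s->waiting_for_confirmation = false;
    }

    /* Called cyclically: detect N_As expiry per SWS_CanTp_00075. */
    void cantp_main_function(cantp_tx_state *s, uint32_t now_ms)
    {
        if (s->waiting_for_confirmation &&
            (uint32_t)(now_ms - s->tx_start_ms) >= N_AS_MAX_MS) {
            s->waiting_for_confirmation = false;  /* ignore late confirmations */
            /* ...cancel the transmission internally and report the error... */
        }
    }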
Coming back to your question:
Is it possible to create the N_As and N_Ar timeouts with CANalyzer and/or using CAPL?
Yes, by means of the osek_tp.dll file that you should have in your local CANoe installation (I'm using CANoe v10.0). Examples of how to use it are well documented in the help document AN-IND-1-012_CAPL_Callback_Interface.pdf; again, it should be distributed in your CANoe install folder.
According to that document,
Basically, the OSEK_TP.DLL implements fault injection functionality that has to be enabled explicitly in order to prevent unintentional usage. Once activated, it is possible to setup a specific fault on a connection that is executed during the next data transfer.
I'd urge you to give it a read, and to refer to the linked documentation as well. I hope this points you in the right direction.
Additional info:
Transmitting data over ISO-TP in CANoe using CAPL

How to get data without polling?

This is more of a theoretical question.
Well, imagine that I have two programs that run simultaneously: the main one only does something when it receives a flag set to true by a secondary program. So the main program has a function that keeps asking the secondary program for the value of the flag, and when it gets true, it does something.
What I learned at college is that polling is the simplest way of doing that. But when I started working as a developer, coworkers told me that this method generates overhead, or wastes computation, by asking for a value every so often.
I tried to come up with ideas for doing this in a different way and searched the internet for something like it, but didn't find a useful approach.
I read about interrupts and passive approaches that let the main program get the data only when it is informed by the secondary program. But how does that happen? The main program will still need a function to check for the interrupt, right? So won't it end up the same as before?
What could I do differently?
There is no magic...
No program will guess when it has new information to be read. What you can do is choose between two approaches:
A -> asks -> B
A <- is informed <- B
When to use each? It depends on many other factors, such as:
1- How fast do you need the data to be delivered from the moment it is generated? As fast as possible, or can it wait a while and accumulate?
2- How fast is the data generated?
3- How many simultaneous clients are requesting data from the same server?
4- What type of data are you dealing with? Persistent? Fast-changing?
If you are building something like a stock analyzer, where you need to ask for the price of stocks every second (and it will also change every second), the approach you mentioned may be the best.
If you are writing a chat app like WhatsApp, where you need to check whether there is a new message for the client and most of the time there won't be, publish/subscribe may be the best.
But all of this is a very superficial look into a high-impact architecture decision; it is not possible to pick the best approach by looking at just one factor.
What I want to show is that
coworkers told me that this method generates overhead, or wastes computation
is not quite a right statement. It may be in some particular scenario, but overhead will always exist in distributed systems.
The typical way to prevent polling is by using the Publish/Subscribe pattern.
Your client program will subscribe to the server program and when an event occurs, the server program will publish to all its subscribers for them to handle however they need to.
If you flip the order of the requests you end up with something more similar to a standard web API. Your main program (left in your example) would be a server listening for requests. The secondary program would be a client hitting an endpoint on the server to trigger an event.
There are many ways to accomplish this in every language, and it doesn't have to be tied to TCP/IP requests.
Well, in most languages you won't implement this at such a low level. But theoretically speaking, there are different waiting strategies; you are describing active waiting (busy-waiting), which can easily eat all your CPU.
Most languages provide libraries that let you run a process as a service that waits passively and is triggered when a request comes in.
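Several answers mention passive waiting; here is a minimal C sketch of that idea using a pthread condition variable (the names and timing are purely illustrative). The main thread sleeps inside pthread_cond_wait and burns no CPU until the secondary thread sets the flag and signals it:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static bool flag = false;

    /* The "secondary program": sets the flag and wakes the waiter. */
    static void *secondary(void *arg)
    {
        (void)arg;
        sleep(1);                        /* simulate doing some work */
        pthread_mutex_lock(&lock);
        flag = true;
        pthread_cond_signal(&cond);      /* wake the main program */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    /* The "main program": sleeps until signalled -- no polling loop. */
    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, secondary, NULL);

        pthread_mutex_lock(&lock);
        while (!flag)                    /* guards against spurious wakeups */
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);

        puts("flag is true, doing the work now");
        pthread_join(t, NULL);
        return 0;
    }

Compile with -pthread. The same shape exists between processes (signals, pipes, sockets) and between machines (publish/subscribe brokers): the waiter blocks, and the producer wakes it.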

How to write a LoadRunner script to measure queue depth for JMS?

I need to write a LoadRunner script to measure the queue depth per second for JMS. Can anyone give tips on achieving this using LR v12.5?
Many Thanks!
Anuradha
First, ask yourself: how would you manually examine the queue depth for your queue provider? For instance, if your queue is a queue table on Oracle, then you would simply query the number of rows in the queue table. If your queue is on RabbitMQ, then perhaps you would use the management plug-in to issue an HTTP request with the appropriate parameters for queue depth. For MQ you might have a command-line option from the system prompt.
Once you understand the manual method, you can look at how to automate it, either at the GUI layer or at the protocol layer for the queue provider. The manual process plus the communications architecture of the back-end queue provider is the key here.
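As a hedged illustration: if the provider were RabbitMQ, a LoadRunner script (LR scripts are plain C) could sample the management plug-in's HTTP API once per second. The host, vhost (%2F), queue name, and JSON boundaries below are assumptions for the sketch, not a definitive recipe:

    Action()
    {
        /* Sketch only: sample the RabbitMQ management API for the
           queue's "messages" count, then log it once per iteration. */
        web_reg_save_param("QueueDepth",
                           "LB=\"messages\":",   /* field in the JSON reply */
                           "RB=,",
                           LAST);

        web_custom_request("get_queue_depth",
                           "URL=http://mq-host:15672/api/queues/%2F/my.queue",
                           "Method=GET",
                           "Resource=0",
                           LAST);

        lr_output_message("Queue depth: %s", lr_eval_string("{QueueDepth}"));

        lr_think_time(1);   /* repeat the Action roughly once per second */
        return 0;
    }

For other providers the shape is the same: one request per sample, one captured value, one log line that your analysis can graph over time.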

Where in kernel/socket memory to store long-term information between network sessions

I'm trying to implement the QUIC protocol in the Linux kernel. QUIC works on top of UDP to provide connection-oriented, reliable data transfer.
QUIC was designed to reduce the number of handshakes required between sessions as compared to TCP.
Now, I need to store some data from my current QUIC session so that I can use it when the session ends and, later on, use it to initiate a new session. I'm at a loss as to where this data should be stored so that it isn't deleted between sessions.
EDIT 1: The data needs to be stored for as long as the socket lives in memory. Once the socket has been destroyed, I don't need the data anymore.
As an aside, how can I store data even between different sockets? I just need a general answer to this, as I don't need it for now.
Thank you.
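As a hedged illustration of one common approach: Linux protocol implementations typically keep per-socket state by embedding the generic socket structure as the first member of a protocol-specific struct, so the extra fields live exactly as long as the socket does. The quic_sock layout and field names below are assumptions for the sketch, not existing kernel code:

    #include <linux/types.h>
    #include <net/inet_sock.h>

    /* Hypothetical per-socket state for a QUIC implementation.
     * Embedding inet_sock (which itself embeds struct sock) as the
     * FIRST member lets the kernel allocate this struct as the socket
     * itself, so the fields persist until the socket is destroyed. */
    struct quic_sock {
        struct inet_sock inet;      /* must be first */
        u8  resumption_token[64];   /* example: data reused for a new session */
        u32 token_len;
    };

    static inline struct quic_sock *quic_sk(const struct sock *sk)
    {
        return (struct quic_sock *)sk;
    }

This is the pattern UDP and TCP use, with the struct's size registered via the protocol's struct proto .obj_size field. For data that must outlive a socket, you would need storage owned by the module itself, for example a hash table keyed by peer address.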

ZeroMQ, ROUTER - DEALER, send a message to all

One server - ZMQ_ROUTER; many clients - ZMQ_DEALER.
How can the server (ZMQ_ROUTER) send a message to all clients (ZMQ_DEALER)?
UPD:
I know there is the PUB-SUB pattern, and that it is really what I need. But I want to use only the current ROUTER-DEALER sockets. Is it possible?
Yes, but it won't be the answer you would like to hear. I don't think there is a flag or socket option for this. What you can do:
Track the connected dealers manually, then create a loop and send the same payload to every connected dealer. If you send large messages, you can zero-copy the payload so you don't have to allocate the memory every time.
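A hedged sketch of that loop with the libzmq C API; the identity bookkeeping (ids, id_len, n) is assumed to be maintained by your application as each DEALER sends its first message. The key detail is that a ROUTER send must start with the destination's routing-id frame:

    #include <string.h>
    #include <zmq.h>

    /* Broadcast one message from a ROUTER socket to every known dealer. */
    static void broadcast(void *router, unsigned char ids[][256],
                          size_t id_len[], size_t n, const char *msg)
    {
        for (size_t i = 0; i < n; i++) {
            /* Frame 1: the destination dealer's routing id */
            zmq_send(router, ids[i], id_len[i], ZMQ_SNDMORE);
            /* Frame 2: the payload itself */
            zmq_send(router, msg, strlen(msg), 0);
        }
    }

For the zero-copy variant, you could build the payload once in a zmq_msg_t via zmq_msg_init_data() and send per-dealer copies with zmq_msg_copy(), which may share the underlying buffer instead of reallocating it each time.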
