Can Connectivity Parameters Update Using SCP03 be done using one OTA message? - sms

I'm trying to wrap my head around SGP.02 v4.0 (Remote Provisioning Architecture for Embedded UICC Technical Specification), specifically the ES3.UpdateConnectivityParameters function.
The puzzling thing is that figure 29 in section 3.14 shows the SM-SR sending only one MT-SMS to the eUICC after ES3.UpdateConnectivityParameters is received. I understand that the actual connectivity parameters, in the form of the ES8.UpdateConnectivityParametersSCP03 command, are sent in an SCP03 tunnel.
Commands sent using SCP03 require that two commands are sent to the eUICC first (INITIALIZE UPDATE and EXTERNAL AUTHENTICATE, as described in GlobalPlatform Card Specification 2.2.1, section D.1.2). My understanding is that these two commands cannot be sent in one MT-SMS, because the response to the first one is needed in order to build the second.
So the actual execution of the ES3.UpdateConnectivityParameters function would require at least three messages over ES5.
This section from Secure Channel Protocol '03' – Public Release v1.1.1 adds to this confusion:
The Secure Channel is used to personalize cards at Issuance and during Post-Issuance. The mode of the Secure Channel Protocol which uses pseudo-random card challenges allows the offline preparation of personalization scripts while the card is not present and the processing of these scripts on the card without an online connection to the entity that prepared the scripts
I initially interpreted it as meaning that all three commands (INITIALIZE UPDATE, EXTERNAL AUTHENTICATE and ES8.UpdateConnectivityParametersSCP03) can be sent in one OTA message. But now, while asking this question, I see that it may mean a message over ES3 (between SM-DP and SM-SR) rather than over ES5 as I initially thought.
Is my understanding correct that the figure in SGP.02 is a simplified explanation and does not show all OTA messages sent to and from the eUICC (specifically those required to establish SCP03)?

I have not checked whether your assumption that SCP03 is used here is correct, but assuming it is:
The pseudo-random card challenge allows both commands to be executed in one message. As the name suggests, it is not truly random but is calculated from a known value: the sequence counter. The counter can be read out from the card and must be known, kept in sync, and incremented by both parties.
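To make the offline-preparation idea concrete, here is a minimal Python sketch (assuming SCP03 is indeed used) of how the off-card entity can predict the pseudo-random card challenge from the sequence counter it already knows. The derivation constant, label layout and context used here are my reading of the SCP03 annex and need to be checked against the actual spec and card configuration; the function names are illustrative.

```python
# Hedged sketch: off-card derivation of the SCP03 pseudo-random card challenge.
# SCP03 uses a NIST SP 800-108 counter-mode KDF with AES-CMAC as the PRF.
# The derivation constant (0x02) and the context (sequence counter || AID)
# below are assumptions taken from my reading of the SCP03 annex; verify them
# against your card's configuration.

from cryptography.hazmat.primitives.cmac import CMAC
from cryptography.hazmat.primitives.ciphers import algorithms


def scp03_kdf(key: bytes, derivation_constant: int, context: bytes, out_bits: int) -> bytes:
    """Single-block SP 800-108 KDF in counter mode, with AES-CMAC as the PRF."""
    label = bytes(11) + bytes([derivation_constant])        # 12-byte label
    fixed = label + b"\x00" + out_bits.to_bytes(2, "big")   # separator + output length L
    mac = CMAC(algorithms.AES(key))
    mac.update(fixed + b"\x01" + context)                   # block counter i = 1
    return mac.finalize()[: out_bits // 8]


def pseudo_random_card_challenge(static_enc_key: bytes, sequence_counter: bytes, aid: bytes) -> bytes:
    """Predict the card challenge from the (known, in-sync) sequence counter."""
    return scp03_kdf(static_enc_key, 0x02, sequence_counter + aid, 64)
```

Because the challenge is predictable, the session keys and card cryptogram can be computed in advance as well, so INITIALIZE UPDATE, EXTERNAL AUTHENTICATE and ES8.UpdateConnectivityParametersSCP03 can be prepared as one script and delivered without waiting for the INITIALIZE UPDATE response. That is what makes a single OTA message plausible.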

Related

Question about implementing Raft's Client interaction

I'm currently studying MIT 6.824
(https://www.youtube.com/channel/UC_7WrbZTCODu1o_kfUMq88g)
and trying to implement its labs. There's a paragraph in the Raft paper describing client semantics:
Our goal for Raft is to implement linearizable semantics (each operation appears to execute instantaneously, exactly once, at some point between its invocation and its response). However, as described so far Raft can execute a command multiple times: for example, if the leader crashes after committing the log entry but before responding to the client, the client will retry the command with a new leader, causing it to be executed a second time. The solution is for clients to assign unique serial numbers to every command. Then, the state machine tracks the latest serial number processed for each client, along with the associated response. If it receives a command whose serial number has already been executed, it responds immediately without re-executing the request.
Now I have passed MIT lab 3A, but I have a responses map[string]string in the kvserver, which is a map from a client's request id to the response. The problem is that this map will keep growing if clients keep sending requests, which is problematic in a real project. How does Raft handle this in a real project? Also, MIT lab 3 says one client will execute one command at a time, so I can probably optimize by deleting the response to the client's previous request. But how does Raft handle this in a real project where client behavior is less constrained?
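Purely as an illustration of the bookkeeping the quoted paragraph describes (the names below are mine, not from the 6.824 lab code, and the sketch is in Python rather than Go), keeping only the latest serial number and response per client bounds the table by the number of clients instead of the number of requests:

```python
# Sketch of per-client duplicate detection as in the Raft paper excerpt:
# the state machine keeps only the *latest* serial and response per client,
# so memory is bounded by the number of clients, not the number of requests.
# Assumes each client issues one command at a time with increasing serials.

from dataclasses import dataclass
from typing import Any, Dict, Optional


@dataclass
class LastOp:
    serial: int     # highest serial number applied for this client
    response: Any   # response to that request, replayed on duplicates


class DedupTable:
    def __init__(self) -> None:
        self._last: Dict[str, LastOp] = {}

    def cached_response(self, client_id: str, serial: int) -> Optional[Any]:
        """Return the stored response if this exact request was already applied."""
        last = self._last.get(client_id)
        if last is not None and serial == last.serial:
            return last.response    # duplicate: reply without re-executing
        return None                 # new (or stale) request

    def record(self, client_id: str, serial: int, response: Any) -> None:
        """Overwrite the previous entry; only the latest request is kept."""
        self._last[client_id] = LastOp(serial, response)
```

For clients with freer behavior (several requests in flight, clients that disappear), real systems usually add something on top of this, for example per-client sessions with leases or TTL-based eviction of idle entries.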

How to test N_As, N_Ar timeout parameters in the CanTp protocol using a CAPL script or any other possible way?

As part of CanTp protocol-related tests, I have been trying to test N_As and N_Ar timeout errors, where N_AsMax = 1000 ms and N_ArMax = 1000 ms.
Is it possible to create the N_As and N_Ar timeouts with CANalyzer and/or using CAPL?
It would be a great help if you could share a possible way to test these timing parameters using CANalyzer or CANoe.
CanTp is a protocol to extend the maximum data length (in bytes) of a CAN data frame beyond the traditional 8 bytes; please refer to ISO 15765-2. Here you can have Single Frames or Multi-Frames, the latter being trains of related frames, each one carrying a portion of the overall PDU. A flow control frame is sent, usually by the receiver, to instruct the transmitter on how the frames are to be split.
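As a rough illustration of that segmentation, here is a Python sketch of the classic 8-byte frame layout (transmit side only; padding and flow-control handling are omitted, so this is not production code):

```python
# Rough sketch of ISO 15765-2 segmentation for classic 8-byte CAN frames,
# to make the Single Frame / First Frame / Consecutive Frame roles concrete.
# PCI high nibble: 0 = Single Frame, 1 = First Frame, 2 = Consecutive Frame,
# 3 = Flow Control.


def segment(payload: bytes) -> list:
    """Split a payload into CAN-TP frames (transmit side only)."""
    if len(payload) <= 7:
        return [bytes([len(payload)]) + payload]                        # SF
    frames = [bytes([0x10 | (len(payload) >> 8), len(payload) & 0xFF])
              + payload[:6]]                                            # FF
    rest, sn = payload[6:], 1
    while rest:
        frames.append(bytes([0x20 | (sn & 0x0F)]) + rest[:7])           # CF
        rest, sn = rest[7:], sn + 1
    return frames


# The receiver answers the First Frame with a Flow Control frame, e.g.
# bytes([0x30, block_size, st_min]) meaning "Continue To Send".
print([f.hex() for f in segment(bytes(range(20)))])
```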
According to the docs,
N_Ar [is the] Time for transmission of the CAN frame (any N-PDU) on the receiver side (see ISO 15765-2).
N_As [is the] Time for transmission of the CAN frame (any N-PDU) on the sender side (see ISO 15765-2).
In addition, the following requirements are relevant:
[SWS_CanTp_00075] ⌈If the transmit confirmation is not received after a maximum time (equal to N_As), the CanTp module shall act as if it had received an unsuccessful transmission confirmation and any late confirmation shall be ignored. The CanTp module shall cancel (internally) the failed transmission.⌋
[SWS_CanTp_00311] ⌈In case of N_Ar timeout occurrence (no confirmation from CAN driver for any of the FC frames sent) the CanTp module shall abort reception and notify the upper layer of this failure by calling the indication function PduR_CanTpRxIndication() with the result E_NOT_OK.⌋
Coming back to your question:
Is it possible to create the N_As and N_Ar timeouts with CANalyzer and/or using CAPL?
Yes, by means of the osek_tp.dll file that you should have in your local CANoe install (I'm using CANoe v10.0). Examples of how to use it are well documented in the help document AN-IND-1-012_CAPL_Callback_Interface.pdf, which again should be distributed in your CANoe install folder.
According to that document,
Basically, the OSEK_TP.DLL implements fault injection functionality
that has to be enabled explicitly in order to prevent unintentional
usage. Once activated, it is possible to setup a specific fault on a
connection that is executed during the next data transfer.
I'd urge you to give it a read, and refer to the linked documentation as well. I hope this points you in the right direction.
Additional info:
Transmitting data over ISO-TP in CANoe using CAPL

How to communicate with an external system

I'm trying to write logic (a JS script) to communicate with an external system. As far as I understand, the logic will be executed on all endorsing peers.
In this case, how can I avoid duplicate operations against the external system? For example, how do I increment a value in an external database? If I write logic in JS to increment the value, I think the value will be incremented by every endorsing peer.
I'd appreciate any comments.
Firstly, the only way you can currently interact with external systems is the experimental post API. This allows your Transaction Processor function to HTTP POST data to an external system and then process the response.
Documentation here:
https://hyperledger.github.io/composer/integrating/call-out.html
You are correct in stating that if you have 4 peers, then the chain code container for each peer will run your logic, so you'd expect to see 4 calls to your HTTP service. This is required because each peer node is independent and Fabric must achieve consensus across the peers.
The external functions should therefore (ideally) be side-effect free "pure" functions (idempotent), meaning that for a given set of input parameters you always get the same set of output results.
Clearly a function that returns an incrementing integer doesn't fit this description! You probably need to rethink how you are structuring your problem to make it compatible with a decentralised blockchain-based approach.
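To illustrate what idempotent means here, the following is a generic Python sketch of the external service side (not Hyperledger Composer code): the peers all send the same deterministic operation id, for example the transaction id, and the external service applies each id at most once.

```python
# Generic illustration (not Hyperledger Composer / chaincode): making the
# external, state-changing call safe when every endorsing peer repeats it.
# Each peer sends the same deterministic operation id (e.g. the transaction
# id), and the external service applies each id at most once.

applied = {}    # op_id -> result; a real service would persist this
counter = 0


def increment(op_id):
    """Increment the counter exactly once per op_id, however many peers call."""
    global counter
    if op_id in applied:
        return applied[op_id]            # duplicate call from another peer
    counter += 1
    applied[op_id] = {"op_id": op_id, "value": counter}
    return applied[op_id]


# Four peers POSTing the same transaction id only increment the value once:
for _ in range(4):
    print(increment("tx-42"))            # value stays 1 after the first call
```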

Mass Text to Speech using Plivo

I am looking at integrating Plivo with our platform to make outgoing text-to-speech calls. Every call we make will be a customized message of about 20 words, i.e. less than a 30-second call.
Daily, we'll batch about 10,000 calls at the same time. It appears I would have to make 10,000 REST API calls rather than being able to send one batch at a time, each call with its own answer_url. Does anyone have experience with this? It seems like a ton of overhead.
Another option may be to use parameters in the answer_url, so I can send a list of all phone numbers at once and then, based on a parameterized answer_url, tell Plivo what to do next.
With Plivo, you can make bulk outbound calls where you can specify multiple numbers and a single answer_url. See https://www.plivo.com/docs/getting-started/make-bulk-calls/ for a getting started doc.
For each call made, Plivo makes a request to that answer URL with the to/from numbers. Then, based on the to/from numbers, your answer_url can respond with the TTS message to be played for that particular number. You would just need a database where you can look up the number to get the message to play for each request to your answer_url.
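As a sketch of that answer_url side, here is a hypothetical Flask handler; the "To" parameter name and the XML shape should be verified against the Plivo docs for your account and API version, and the in-memory message store stands in for your database.

```python
# Hypothetical answer_url endpoint (Flask). Plivo requests this URL for each
# call in the bulk batch and passes the called number; we look it up in our
# own store and return Plivo XML with the per-number message.

from xml.sax.saxutils import escape

from flask import Flask, Response, request

app = Flask(__name__)

# Illustrative message store; in practice this is your database lookup.
MESSAGES = {
    "14155550101": "Hi Alice, your appointment is tomorrow at 9 AM.",
    "14155550102": "Hi Bob, your order has shipped.",
}


@app.route("/answer", methods=["GET", "POST"])
def answer():
    to_number = request.values.get("To", "")
    text = MESSAGES.get(to_number, "Sorry, we could not find your message.")
    xml = "<Response><Speak>%s</Speak></Response>" % escape(text)
    return Response(xml, mimetype="text/xml")


if __name__ == "__main__":
    app.run(port=5000)
```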

Ruby websocket check if user exists

I'm using EventMachine and Ruby. Currently I'm making a game where, at the end of the turn, it checks whether the other user is there. When sending data to the user using ws.send(), how can I check whether the user actually got the data, or is there an alternative solution?
As the library doesn't provide you with access to the underlying protocol elements, you need to add elements to your application protocol to do this. A typical approach is to add an identifier to each message and respond to messages with acknowledgement messages that contain those identifiers. A small sketch of that bookkeeping follows.
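For illustration only, here is what that bookkeeping can look like, sketched in Python rather than Ruby/EventMachine; the message format (JSON with "id", "type", "payload") is made up for the example.

```python
# Illustration of the application-level acknowledgement scheme described
# above: every outgoing message carries an id, and the peer replies with an
# "ack" naming that id. Unacknowledged messages past a timeout may be lost.

import json
import time
import uuid


class AckTracker:
    def __init__(self, timeout_s: float = 5.0) -> None:
        self.timeout_s = timeout_s
        self.pending = {}    # msg_id -> (payload, sent_at)

    def wrap(self, payload: dict) -> str:
        """Attach an id, remember the message, return the JSON to pass to send()."""
        msg_id = str(uuid.uuid4())
        self.pending[msg_id] = (payload, time.monotonic())
        return json.dumps({"id": msg_id, "type": "data", "payload": payload})

    def handle_incoming(self, raw: str) -> None:
        """Call this for every incoming frame; clears the pending entry on an ack."""
        msg = json.loads(raw)
        if msg.get("type") == "ack":
            self.pending.pop(msg.get("id"), None)

    def timed_out(self) -> list:
        """Messages the other side has not acknowledged in time (possibly lost)."""
        now = time.monotonic()
        return [mid for mid, (_, sent) in self.pending.items()
                if now - sent > self.timeout_s]
```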
Note that such an approach will only help you to have a better idea of what has been received by a client. There is no assurance of a particular state in the case of errors. An example would be losing the connection after the client has sent an ACK but before the service has received it.
As a result of the complexity I just mentioned, it is often easier to try to make most operations idempotent, that is, able to be replayed without detriment to the system, and to replay them readily during and after error conditions. You may additionally find a way to periodically synchronize the relevant state entirely, to avoid the long-term continuation of minor errors introduced by loss of data or a connection.

Resources