Which eUICC capabilities should the SM-SR use, and how, to segment an SCP03t script?

From section 5.4.4 of SGP.02-v4.2:
The SM-SR has the responsibility to build the final Command script, depending on eUICC capabilities and selected protocol
It is clear that in some cases the SM-SR should split the command script that it received in the data field of ES3.SendData. This is supported by examples from SGP.11-v4.0: compare step 14 on page 408, where the complete profile package #PROFILE_PACKAGE is passed in ES3.SendData, with steps 3, 5 and 7 on pages 211-212, where segments in the form of #PROFILE_PARTi are sent via ES5.
First of all, it is not clear what factors limit the ability to send the whole command script to the eUICC. This matters because the answer would most likely also answer the central question: how exactly does the SM-SR decide (that is, which parameters does it take into account) how to split the command script?
I know about EUICC-CAPABILITIES, but they only specify which transports/algorithms/features are supported; they give no hint of how large a single segment (that is, #PROFILE_PARTi) of the command script built by the SM-SR can be. Also, the specification seems to distinguish eUICC capabilities from the selected protocol, which is another reason I think the capabilities mentioned in 5.4.4 are not the same as EUICC-CAPABILITIES.
SGP.11-v4.0 shows that segmenting should be used even for CAT_TP or HTTPS, so the SM-SR somehow needs to decide how to split the command script it received.
So questions:
What eUICC capabilities should be taken into account by the SM-SR?
How exactly should the SM-SR use those capabilities to decide the size of the script segment (like #PROFILE_PARTi) it can use?
How can the SM-SR retrieve those capabilities for a given eUICC?
Can they be retrieved via means defined in some GP specification, or is this manufacturer-specific?
According to section 4.1.3.3 of SGP.02-v4.2, the eUICC SHALL support profile command data segments of at least 1024 bytes (including tag and length field). Does this mean that:
the SM-DP will never create profile command data TLVs with tag '86' longer than 1024 bytes?
the SM-SR is compliant if it sends one command TLV at a time? (This way the SM-SR does not exploit the capabilities of the eUICC, but the algorithm will work for any compliant eUICC regardless of its capabilities.)

SM-SR:
Take a look at "EUICC-CAPABILITIES", sect. 5.1.1.2.10, for the supported transport features. For your implementation the supported buffer size might also be interesting, e.g. to send as few concatenated SMS as possible or to pick a better TCP payload size. This must be known implicitly: read the EID and derive the features of the eUICC by interpreting the model and manufacturer. I am not sure whether some eUICCs offer more than the minimum here.
2.+3.+4. Store the EIS when the eUICC is registered (the SM-SR could keep it in its database). The EIS contains the EUICC-CAPABILITIES.
SM-DP:
Yes, including the tag and length, MAC and padding. The profile package is sliced into 1020-byte chunks, and each chunk is then prefixed with the '86' tag and the 3-byte length (see the sketch below).
Yes, most likely; but as mentioned above, a streaming protocol like CAT_TP or HTTPS does not have the problems of SMS.
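To make the slicing concrete, here is a minimal Java sketch of the '86' segmentation described above, assuming the protected profile package is already available as a plain byte array and using the 1024-byte limit from SGP.02 sect. 4.1.3.3 (the class and method names are made up for illustration):

import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: splits a (protected) profile package into '86'
// profile command data TLVs of at most 1024 bytes including tag and length.
public final class ProfileSegmenter {
    private static final int MAX_TLV = 1024;               // SGP.02 sect. 4.1.3.3 minimum
    private static final int HEADER = 1 + 3;               // '86' tag + 3-byte BER length ('82' xx xx)
    private static final int MAX_VALUE = MAX_TLV - HEADER; // 1020-byte chunks

    public static List<byte[]> segment(byte[] profilePackage) {
        List<byte[]> tlvs = new ArrayList<>();
        for (int off = 0; off < profilePackage.length; off += MAX_VALUE) {
            int len = Math.min(MAX_VALUE, profilePackage.length - off);
            ByteArrayOutputStream tlv = new ByteArrayOutputStream();
            tlv.write(0x86);                // profile command data tag
            tlv.write(0x82);                // BER-TLV long form: two length bytes follow
            tlv.write((len >> 8) & 0xFF);
            tlv.write(len & 0xFF);
            tlv.write(profilePackage, off, len);
            tlvs.add(tlv.toByteArray());
        }
        return tlvs;
    }
}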

Related

How to segment command script that is too big to fit into one transport APDU

SGP.02 - Remote Provisioning Architecture for Embedded UICC Technical Specification (page 255 of v4.0, specifically) says:
the data format provided by the function caller SHALL NOT depend of the selected OTA protocol capabilities (for example SM-DP can consider there is no limit on data length)
and later
The SM-SR has the responsibility to build the final Command script,
depending on eUICC capabilities and selected protocol:
by adding the Command scripting template for definite or indefinite length,
and, if necessary, by segmenting the provided command script into several pieces
and, if necessary, by adding the relevant Script Chaining TLVs.
I understand this as: the SM-DP can pass an arbitrarily long data parameter to ES3.SendData, and the SM-SR should send several APDUs in multiple SMSes if the data is too large to fit into one. That is what is meant by segmenting.
The problem is that I can't find the specification that defines how segmenting should be done. So that's the question: where is the segmenting process defined?
I may be wrong, but it seems that this is not the same as the Concatenated Short Messages described in section 6.3 of ETSI TS 123 048.
It seems that Script Chaining, briefly mentioned in ETSI TS 102 226, is somewhat related, so pointers to the specification that defines how it works are also very welcome (TS 102 226 talks about Script Chaining TLVs but not how to use them; I'm definitely missing some broader context of how this works, so any hints are appreciated).
UPDATE:
The ES8.EstablishISDPKeySet function requires 3 APDUs to be sent, and they are quite big as they contain keys. From SGP.02-v4.0 table 150 I understand that they are sent from the SM-DP to the SM-SR using the Expanded Remote Command Format. As far as I understand, a script in this format can be rather large (given that the SM-DP can assume there is no limitation on the data length), and it is not clear how the SM-SR should segment it or use chaining. I'm just missing the specs where this is described.
The eUICC has a limited internal buffer, i.e. it cannot store a complete profile package of 10 kB or more internally. The message must be chunked. If the eUICC supports only e.g. 1 kB, then you have to split the APDU commands after at most 3 APDU commands to stay below 1 kB. The SGP.02 specification mandates support for at least 1024 bytes. A fully featured SM-SR might store attributes derived from the eUICC vendor encoded in the EID, to add special handling and patches for certain eUICCs, e.g. to support a larger buffer size.
Encode each chunk of APDUs (1..n APDUs) into the Expanded Remote Command Format (ETSI TS 102 226, sect. 5.2.1). (The compact format can only carry a response for the last APDU, but if that works for you, you could save a few bytes.)
Encode each Expanded Remote Command message into an SMS-DELIVER (TS 123 048 and SIMalliance Interoperability Stepping Stones release 6). This includes the data encryption using the OTA keys (KIc, KID). gsm0348 is a good Java library for this. Pay attention: for each message the OTA counter must be incremented. Select one reference number and keep it for all messages; I guess this is the identifier by which the eUICC knows which messages belong together.
If using SMS (I would suggest CAT-TP or HTTPS instead -> faster and more reliable), encode it as an SMS-PP Download message (TS 131 111). You will use message concatenation here if the payload is more than 140 bytes.
You will receive a SendShortMessage (TS 131 111, 6.4.10) as the response. Extract the user data again per TS 123 048 and SIMalliance Interoperability Stepping Stones release 6. You get the SMS Response message; look into the response packet to get the user data.
Extract the user data as an Expanded Remote Response (ETSI TS 102 226).
The eUICC will handle the streamed messages. The concatenated short messages are only used to keep the pieces of one message chunk together during transit.
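As a rough Java illustration of how those steps fit together on the SM-SR side, here is a sketch of the send loop; chunkApdus, encodeExpandedFormat, encodeSecuredSms, toConcatenatedSmsPp and sendSmsPpDownload are hypothetical stand-ins for real encoders (e.g. gsm0348 for the TS 123 048 part), not actual library calls:

// Sketch of the send loop; all helper methods and key material (kic, kid,
// otaCounter) are hypothetical placeholders.
List<List<byte[]>> chunks = chunkApdus(apdus, euiccBufferSize); // stay below the eUICC buffer
byte referenceNumber = nextReferenceNumber();                   // one reference number for all messages
for (List<byte[]> chunk : chunks) {
    byte[] script  = encodeExpandedFormat(chunk);                      // TS 102 226, sect. 5.2.1
    byte[] secured = encodeSecuredSms(script, kic, kid, otaCounter++); // TS 123 048, OTA keys (KIc/KID)
    for (byte[] sms : toConcatenatedSmsPp(secured, referenceNumber)) { // TS 131 111, concatenation
        sendSmsPpDownload(sms);
    }
}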
The specification that best explains how segmenting and script chaining work in detail is SGP.11, Remote Provisioning Architecture for Embedded UICC Test Specification.
It does not contain requirements per se, but it has byte-level examples of incoming ES3.SendData calls and of the messages received by the eUICC, which makes it fairly easy to deduce the actual behaviour of the SM-SR.
Here is a more detailed explanation with references to that specification.
Command script
A command script is a list of commands sent in the data field of ES3.SendData. It can be (see the data field description in table 150 of SGP.02-v4.0) either:
a list of C-APDU commands (defined in the EXPANDED_COMMANDS method on page 595 of SGP.11-v4.0 and used in step 2 on page 407 of SGP.11-v4.0), or
TLV commands that form an SCP03t script (defined in the SCP03T_SCRIPT method on page 600 of SGP.11-v4.0 and used in step 14 on page 408 of SGP.11-v4.0).
Script Chaining
Script chaining is the feature used when several commands must be executed across multiple transport messages, but some context of the execution needs to be preserved after the first command is executed and used for the next one (for example, the first command is INSTALL [for personalization] and the second one is STORE DATA). This context is the command session defined in section 4 of TS 102 226.
It is used at least in these cases (which is also supported by Annex K of SGP.02):
when the follow-up command is based on the response from the eUICC
when the command script is too large and cannot be sent in one piece
Script chaining is performed by the SM-SR and is implemented by adding a Script Chaining TLV to the command script in Expanded Remote format. See section 5.4.1.2 of TS 102 226.
Examples of script chaining can be found in SGP.11-v4.0.
For the first case, see the scenario for the EstablishISDRKeyset procedure in section 4.2.10.2.1.1; for the second, see the scenario for the DownloadAndInstallation procedure in section 4.2.18.2.1.1. The byte-level representation of the script chaining is described in the SCP80_PACKET method on page 601 (see option CHAINING_OPT).
It seems that explicit chaining is applicable only to the SMS and CAT_TP transports. For HTTPS, the command session coincides with the administrative session as defined in section 3.6 of Remote Application Management over HTTP – Public Release v1.1.3, so there is no need to add explicit Script Chaining TLVs.
Segmenting
When the command script is too large to be sent in one piece, the SM-SR generates multiple command scripts. The commands CMD1, CMD2, …, CMDN that the SM-SR received from the SM-DP (see step 14 on page 408 of SGP.11-v4.0), and that together form one command script, are split into several command scripts: the first one contains some initial portion of the commands, CMD1, CMD2, …, CMDi; the second contains CMDi+1, CMDi+2, …, CMDj; the third CMDj+1, …, CMDk; and so on.
If SMS or CAT_TP is used, then a Script Chaining TLV with the appropriate value is added to the beginning of each command script (see section 4.2.18.2.1.1 in SGP.11-v4.0).
Each new command script is then sent in a separate message to the eUICC.
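Here is a minimal Java sketch of this segmenting step, assuming the commands are already TLV-encoded byte arrays and that the per-script limit has been determined elsewhere; the class name is made up, and prepending the actual Script Chaining TLVs per TS 102 226 sect. 5.4.1.2 is left out:

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: greedily pack CMD1..CMDN into command scripts that
// each fit the eUICC's script-size limit. For SMS/CAT_TP the SM-SR would
// additionally prepend a Script Chaining TLV (a "first script" value on the
// first, "subsequent script" values on the rest) per TS 102 226.
public final class ScriptSegmenter {
    public static List<List<byte[]>> segment(List<byte[]> commands, int maxScriptSize) {
        List<List<byte[]>> scripts = new ArrayList<>();
        List<byte[]> current = new ArrayList<>();
        int size = 0;
        for (byte[] cmd : commands) {
            // Commands are the smallest unit of segmentation; a single command
            // larger than the limit cannot be split at this level.
            if (!current.isEmpty() && size + cmd.length > maxScriptSize) {
                scripts.add(current);
                current = new ArrayList<>();
                size = 0;
            }
            current.add(cmd);
            size += cmd.length;
        }
        if (!current.isEmpty()) {
            scripts.add(current);
        }
        return scripts;
    }
}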

Is it possible to show how many pages a spool queue print job has with a raw PostScript print file in Windows?

Currently it shows N/A as the number of pages in the Windows spool queue. Is it possible to show the number of pages, and which page is printing, with bi-directional support on, for this file format? Surely the printer itself must know what page is printing / how many pages have been printed. Thanks.
I can only answer the PostScript part of this question, not the Windows spool queue part. I don't know how you are inserting the PostScript into the spool queue, but assuming you are printing from an application, my very hazy memory suggests that it's the application which declares the number of pages in the job.
The printer doesn't know how many pages a PostScript program contains. In fact, in the most general case, it's not possible to determine how many times a PostScript program will execute the showpage operator. That's because PostScript is a programming language and the program might execute differently on different interpreters, or under different conditions. It might print an extra page on Tuesdays for example.
Or to take an admittedly silly example:
%!
0 1 rand {
showpage
} for
That should print a random number of empty pages.
There are conventions which can indicate how many pages a PostScript program is expected to produce: the Document Structuring Convention (DSC) comments tell a PostScript processor (not a PostScript interpreter, which ignores comments; a processor treats the PostScript as just data and doesn't execute it) how many pages the program has, amongst other data such as the requested media size, fonts, etc.
Not all PostScript programs contain this information, and it can sometimes be awkward to extract (the 'atend' value is particularly annoying, forcing the processor to read to the end of the file to find the value). Additionally, people have been known to treat PostScript programs like Lego bricks and just concatenate them together, which makes the comments at the start untrustworthy. This is especially true if the first program has %%Pages: (atend).
Most printers do execute a job server loop, which means that it is possible for them to track which page they are executing, even if they don't know how many pages there will be.
It's also certainly true that a PostScript program running on a printer with a bi-directional interface could send information back to the print spooler, but there's no guarantee that a printer has a bi-directional communication interface, and there's no standard that I know of for the printer to follow when sending such information back to the computer. What should it send? Will it be the same for Windows as for Mac?

Force S3 multipart uploads

This is a follow-up to knt's question about PUT vs POST, with more details. The answer may be independently useful to future answer-seekers.
Can I use PUT instead of POST for uploading using Fine Uploader?
We have a mostly S3-compatible back-end that supports multipart upload, but not form POST, specifically policy signing. I see in the v5 migration notes that "even if chunking is enabled, a chunked upload request is only sent to traditional endpoints if the associated file must be broken into more than 1 chunk". How is the threshold determined for whether a file needs to be chunked? How can the threshold be adjusted? (or ideally, set to zero)
Thanks,
Fine Uploader will chunk a file only if its size is greater than the number of bytes specified in the chunking.partSize option (default value: 2000000 bytes). If your file is smaller than the size specified in that value, it will not be chunked.
To effectively set it to "zero", you could just increase partSize to an extremely large value. I also did some experimenting, and it seems that a partSize of -1 will make Fine Uploader NOT chunk files at all. AFAIK, that is not supported behavior, and I have not looked at why it is even possible.
Note that S3 requires chunks to be a minimum of 5 MB (except for the last part of an upload).
Also note that you may run into limitations on request size as imposed by certain browsers if you make partSize extremely large.

Google Protocol Buffers - Storing messages into file

I'm using Google Protocol Buffers to serialize equity market data (i.e. timestamp, bid, ask fields).
I can store one message in a file and deserialize it without issue.
How can I store multiple messages in a single file? I'm not sure how to separate the messages, and I need to be able to append new messages to the file on the fly.
I would recommend using the writeDelimitedTo(OutputStream) and parseDelimitedFrom(InputStream) methods on Message objects. writeDelimitedTo writes the length of the message before the message itself; parseDelimitedFrom then uses that length to read only one message and no farther. This allows multiple messages to be written to a single OutputStream and then parsed separately. For more information, see https://developers.google.com/protocol-buffers/docs/reference/java/com/google/protobuf/MessageLite#writeDelimitedTo(java.io.OutputStream)
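For example, appending to and reading back a log file might look like this; MarketData stands in for your generated message class (a hypothetical name):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

// MarketData is a placeholder for your generated protobuf message class.
public final class TickLog {
    // Append one message: writeDelimitedTo prefixes it with a varint length.
    public static void append(MarketData tick) throws IOException {
        try (FileOutputStream out = new FileOutputStream("ticks.dat", true)) {
            tick.writeDelimitedTo(out);
        }
    }

    // Read every message back: parseDelimitedFrom returns null at end of stream.
    public static void readAll() throws IOException {
        try (FileInputStream in = new FileInputStream("ticks.dat")) {
            MarketData tick;
            while ((tick = MarketData.parseDelimitedFrom(in)) != null) {
                System.out.println(tick);
            }
        }
    }
}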
From the docs:
http://code.google.com/apis/protocolbuffers/docs/techniques.html#streaming
Streaming Multiple Messages
If you want to write multiple messages to a single file or stream, it is up to you to keep track of where one message ends and the next begins. The Protocol Buffer wire format is not self-delimiting, so protocol buffer parsers cannot determine where a message ends on their own. The easiest way to solve this problem is to write the size of each message before you write the message itself. When you read the messages back in, you read the size, then read the bytes into a separate buffer, then parse from that buffer. (If you want to avoid copying bytes to a separate buffer, check out the CodedInputStream class (in both C++ and Java) which can be told to limit reads to a certain number of bytes.)
Protobuf does not include a terminator per outermost record, so you need to do that yourself. The simplest approach is to prefix the data with the length of the record that follows. Personally, I tend to write a string-header (for an arbitrary field number) followed by the length as a "varint"; this means the entire document is then itself a valid protobuf and could be consumed as an object with a "repeated" element. However, just a fixed-length (typically 32-bit little-endian) marker would do just as well. With any such storage, it is appendable as you require.
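In Java, that field-header variant can be sketched with CodedOutputStream.writeMessage, which emits the tag for the given field number, then the varint length, then the message body; MarketData is again a hypothetical generated class:

import com.google.protobuf.CodedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

// Each record is written as if it were "repeated MarketData ticks = 1;" in an
// enclosing message, so the whole file also parses as one valid protobuf.
public final class FieldFramedLog {
    public static void append(MarketData tick) throws IOException {
        try (FileOutputStream file = new FileOutputStream("ticks.dat", true)) {
            CodedOutputStream out = CodedOutputStream.newInstance(file);
            out.writeMessage(1, tick); // tag (field 1, wire type 2) + varint length + body
            out.flush();               // CodedOutputStream buffers internally
        }
    }
}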
If you're looking for a C++ solution, Kenton Varda submitted a patch to protobuf around August 2015 that adds support for writeDelimitedTo() and readDelimitedFrom() calls that will serialize/deserialize a sequence of proto messages to/from a file in a way that's compatible with the Java version of these calls. Unfortunately this patch hasn't been approved yet, so if you want the functionality you'll need to merge it yourself.
Another option: Google has open-sourced protobuf file reading/writing code through other projects. The or-tools library, for example, contains the classes RecordReader and RecordWriter that serialize/deserialize a proto stream to a file.
If you would like stand-alone versions of these classes that have almost no external dependencies, I have a fork of or-tools that contains only these classes. See: https://github.com/moof2k/recordio
Reading and writing with these classes is straightforward:
File* file = File::Open("proto.log", "w");
RecordWriter writer(file);
writer.WriteProtocolMessage(msg1);
writer.WriteProtocolMessage(msg2);
...
writer.Close();
An easier way is to base64-encode each message and store it as one record per line.

How to find out information on client's reads on a named pipe

Is it possible to figure out on the writer (server) end of a Windows named pipe how much data the client is reading from the other end in each request?
Background: Simple scenario. We have one process that writes to a named pipe that it creates via CreateNamedPipe. The data only flows outward (PIPE_ACCESS_OUTBOUND) and is PIPE_TYPE_BYTE. Another process reads from the pipe and displays some information about it. This repeats about once per second.
What I need to change: I have to add a little more data to each write and subsequent read. Updating both the client and the server is no problem, but the person who created this 14 years ago apparently didn't think the structure of the data in the pipe would ever change: no metadata is included whatsoever, and the client pays no attention to the amount of data available. For example, let's say the structure size has been 8 bytes for all these years. The server writes 8 bytes, the client reads 8 bytes. Now I want to write 12 bytes. If an old client is doing the reading, it will get weird results, since it just blindly tries to read 8 bytes each time.
What I currently have working: I have an ugly solution working now, but I am not overly pleased with it. I use GetNamedPipeClientProcessId to get the process ID of the reader, then the appropriate calls to get its file name and version information (OpenProcess, GetModuleFileNameEx, GetFileVersionInfo, ...) to determine the version number of the client, and then write the appropriate amount of data. It appears to work, but it feels cumbersome and fragile.
What I think I want: What I would like to do is have the server somehow detect that the client only read 8 bytes from the pipe and then adjust the behavior accordingly. Is it possible to figure this out?
You could have some form of handshake on connection from new clients to say "I support XYZ". If you don't get that, keep to the 8 bytes. (Note that the pipe would then need to be opened PIPE_ACCESS_DUPLEX rather than PIPE_ACCESS_OUTBOUND so the client can send anything back; a sketch of the idea follows.)
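Here is a transport-agnostic Java sketch of that handshake idea, with the pipe represented by plain streams; the greeting byte, version numbers and record layout are all invented for illustration, and real code would need to handle blocking reads and timeouts properly:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch only: a new client announces its version right after connecting;
// the server falls back to the legacy 8-byte record when nothing arrives.
public final class VersionedWriter {
    private static final byte HELLO = (byte) 0xA5; // invented greeting marker

    // Returns the client's declared version, or 1 (legacy) if it sent nothing.
    public static int negotiate(InputStream fromClient) throws IOException {
        DataInputStream in = new DataInputStream(fromClient);
        if (in.available() > 0 && in.readByte() == HELLO) {
            return in.readByte(); // e.g. 2 = "I understand 12-byte records"
        }
        return 1;
    }

    public static void writeRecord(OutputStream toClient, int version,
                                   long legacyPart, int newField) throws IOException {
        DataOutputStream out = new DataOutputStream(toClient);
        out.writeLong(legacyPart);  // the original 8 bytes
        if (version >= 2) {
            out.writeInt(newField); // extra 4 bytes only for new clients
        }
        out.flush();
    }
}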
