What is the expiration time for "legacy" apns? - apple-push-notifications

The original APNS format - what is now called "legacy" - did not offer a way to specify message expiration (or retention). This feature was introduced later, in the "enhanced" APNS format, as documented here.
Despite being old, this format is still supported and active. For practical reasons, I need to know what the retention behavior is. Is there any retention? Is it maximal? Or is there some value in between?
As an aside:
For those of you who use the java-apns lib, if you're using the SimpleApnsNotification class, then you're using the "legacy" format mentioned above.
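Since the question hinges on what the enhanced format added, here is a minimal Go sketch (the function name and layout comments are mine, not from any particular library) of the enhanced command-1 binary frame; its 4-byte expiry field is exactly what the legacy command-0 frame lacks:

```go
package apns

import (
	"bytes"
	"encoding/binary"
	"encoding/hex"
	"time"
)

// buildEnhancedNotification assembles an "enhanced" (command 1) APNS frame.
// Unlike the legacy (command 0) frame, it carries a 4-byte expiry field: a
// UNIX timestamp after which Apple stops storing/retrying the message.
// Error returns from binary.Write are ignored because writes to a
// bytes.Buffer cannot fail.
func buildEnhancedNotification(identifier uint32, expiry time.Time, hexDeviceToken string, payload []byte) ([]byte, error) {
	token, err := hex.DecodeString(hexDeviceToken)
	if err != nil {
		return nil, err
	}
	var buf bytes.Buffer
	buf.WriteByte(1)                                            // command: 1 = "enhanced"
	binary.Write(&buf, binary.BigEndian, identifier)            // notification identifier (echoed in error responses)
	binary.Write(&buf, binary.BigEndian, uint32(expiry.Unix())) // expiry, epoch seconds; the field the legacy frame lacks
	binary.Write(&buf, binary.BigEndian, uint16(len(token)))    // device token length
	buf.Write(token)
	binary.Write(&buf, binary.BigEndian, uint16(len(payload))) // payload length
	buf.Write(payload)
	return buf.Bytes(), nil
}
```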

Related

Using an alternative connection channel/transport for gRPC

I currently have a primitive RPC setup relying on JSON transferred over secured sockets, but I would like to switch to gRPC. Unfortunately, I also need access to AF_UNIX on Windows (which Microsoft recently started supporting, but which gRPC has not implemented).
Since I have an existing working connection (managed with a different library), my preference would be to just use that in conjunction with gRPC to send/receive commands in place of my JSON parsing, but I am struggling to identify the best way to do that.
I have seen Plugging custom transport into gRPC, but this question differs in the following ways (as well as in my hope for a more recent answer):
I want to avoid making changes to the core of gRPC. I'd prefer to extend it from within my library if possible, but the answer there implies adding a new transport into gRPC's core. If I did need to do this at the transport level, is there a mechanism to register it with gRPC after the core has been built?
I am unsure whether I need to define this as a full custom transport, since I already have an existing connection established and ready. I have seen some things that imply I could simply extend Channel, but I might be wrong.
I need to be able to support Windows, or at least modern versions of it (which means the from_fd options gRPC provides are not available, since they are currently only implemented for POSIX).
Has anyone solved similar problems with gRPC?
I may have figured out my own answer. I seem to have been overly focused on gRPC, when the service definition component of Protobuf is not dependent on that.
How can i write my own RPC Implementation for Protocol Buffers utilizing ZeroMQ is very similar to my use case, with https://developers.google.com/protocol-buffers/docs/proto#services seeming to resolve my issue (and this also explains why I seem to have been mixing up the different kinds of "Channels" involved).
I welcome any improvements/suggestions, and hope that this can be found in future searches by people who had the same confusion.
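One more data point for readers who land here using gRPC-Go rather than the gRPC C core: the Go runtime lets you supply the raw byte stream yourself via a custom dialer, which avoids writing a transport at all. A hedged sketch (the function name is illustrative, and this is gRPC-Go-specific, not an answer for the C-core stacks):

```go
package rpcconn

import (
	"context"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// dialOverExistingConn hands an already-established net.Conn to gRPC-Go.
// gRPC still runs its normal HTTP/2 transport over the byte stream, so this
// replaces how the bytes are obtained, not the transport itself. Caveat:
// gRPC may invoke the dialer again on reconnect, and a single pre-made conn
// can only be handed out once, so real code would re-establish it here.
func dialOverExistingConn(existing net.Conn) (*grpc.ClientConn, error) {
	return grpc.Dial(
		"passthrough:///ignored", // target is unused; the custom dialer below decides
		grpc.WithContextDialer(func(ctx context.Context, _ string) (net.Conn, error) {
			return existing, nil
		}),
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
}
```

On the server side the analogous trick is that grpc.Server.Serve accepts any net.Listener, so a listener backed by your existing channel works there too.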

gRPC and schema evolution guarantees?

I am evaluating using gRPC. On the subject of compatibility with 'schema evolution', I did find the information that protocol buffers, which are exchanged by gRPC for data serialization, have a format such that different evolutions of data in protobuf format can stay compatible, as long as the schema evolution allows it.
But that does not tell me whether two iterations of a gRPC client/server will be able to exchange a command that did not change in the schema, regardless of the changes in the rest of the schema.
Does gRPC guarantee that an older generated version of client or server code will always be able to issue/answer a command that has not changed in the schema file, with code generated from any more recent schema on the interlocutor's side, regardless of the rest of the schema? (Assuming no other breaking changes, like a non-backward-compatible gRPC version change.)
gRPC has compatibility for a method as long as:
the proto package, service name, and method name are unchanged
the proto request and response messages are still compatible
the cardinality (unary vs streaming) of the request and response message remains unchanged
New services and methods can be added at will and not impact compatibility of preexisting methods. This doesn't get discussed much because those restrictions are mostly what people would expect.
There is actually some wiggle room for changing cardinality, as unary is encoded the same as streaming on the wire, but it's generally better to assume you can't change cardinality and to add a new method instead.
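To make the message-compatibility rule concrete, here is a hedged sketch using the low-level protowire package (field names, numbers, and values invented for illustration) showing why adding a field doesn't break an older peer: unknown tags are simply skipped on decode.

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protowire"
)

// A "new" sender writes field 1 (known to everyone) plus field 2 (added in
// a later schema revision); an "old" reader consumes field 1 and skips
// field 2 without error. Error handling (negative lengths) is elided.
func main() {
	// New schema encodes: string id = 1; int64 created = 2;
	var msg []byte
	msg = protowire.AppendTag(msg, 1, protowire.BytesType)
	msg = protowire.AppendString(msg, "order-42")
	msg = protowire.AppendTag(msg, 2, protowire.VarintType)
	msg = protowire.AppendVarint(msg, 1700000000)

	// Old schema only knows: string id = 1;
	for len(msg) > 0 {
		num, typ, n := protowire.ConsumeTag(msg)
		msg = msg[n:]
		if num == 1 && typ == protowire.BytesType {
			id, m := protowire.ConsumeString(msg)
			msg = msg[m:]
			fmt.Println("id:", id)
		} else {
			// Unknown field: skip it, which is what generated code does too.
			msg = msg[protowire.ConsumeFieldValue(num, typ, msg):]
		}
	}
}
```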
This topic is discussed in the Modifying gRPC Services over Time talk (PDF slides and YouTube links are available). I'll note that the slides are not intended to stand alone.

gmail api batch request to get multiple messages at once golang

Is there a batch Get for messages in the golang client library?
I don't see one in
https://godoc.org/google.golang.org/api/gmail/v1
I can get a list of message ids, but then I have to fetch each message by its id, one at a time.
Answer
There is a Github issue on the Go client's repo regarding this topic, and apparently it is not likely to see support for this feature anytime soon. It may, however, be implemented in the next generation of the client.
Possible workaround
You can implement the batching yourself by making HTTP calls to the www.googleapis.com/batch or www.googleapis.com/batch/api/version endpoints. The former was due to be discontinued on August 12, 2020, but you can still use the latter past this date for homogeneous requests (in your case, doing GET requests based on messageId, you should have no problem doing so). You can read more about this in the following official Google Developers Blog post: https://developers.googleblog.com/2018/03/discontinuing-support-for-json-rpc-and.html
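As a hedged illustration of that workaround (endpoint per the blog post above; the function name and Content-ID scheme are mine), here is how such a homogeneous batch could be assembled in Go with only the standard library:

```go
package gmailbatch

import (
	"bytes"
	"fmt"
	"mime/multipart"
	"net/http"
	"net/textproto"
)

// newGmailBatchRequest builds one multipart/mixed POST against the
// API-specific batch endpoint, with one application/http part per message
// ID. Google caps batch sizes (the Gmail docs suggest staying at or below
// 50 calls per batch), so chunk large ID lists before calling this.
func newGmailBatchRequest(accessToken string, messageIDs []string) (*http.Request, error) {
	var body bytes.Buffer
	w := multipart.NewWriter(&body)
	for i, id := range messageIDs {
		h := make(textproto.MIMEHeader)
		h.Set("Content-Type", "application/http")
		h.Set("Content-ID", fmt.Sprintf("<item%d>", i)) // lets you match responses back to requests
		part, err := w.CreatePart(h)
		if err != nil {
			return nil, err
		}
		// Each part is a bare HTTP request line, path relative to the host.
		fmt.Fprintf(part, "GET /gmail/v1/users/me/messages/%s\r\n", id)
	}
	if err := w.Close(); err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", "https://www.googleapis.com/batch/gmail/v1", &body)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "multipart/mixed; boundary="+w.Boundary())
	req.Header.Set("Authorization", "Bearer "+accessToken)
	return req, nil
}
```

The response comes back as multipart/mixed as well, with one application/http part per sub-request, which you then parse in the same fashion.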

Does SNMPv3 require use of username/authentication AND community strings?

Forgive my ignorance if this is a trivial question. I am writing some code to support communication over SNMPv3; our application only supports SNMPv2c currently.
The response object when communicating using SNMPv3 is blank unless I match community strings. I was under the impression that community strings were an "SNMPv2/1 thing" and that "the new way" was to use a username/authentication protocol/privacy protocol.
Wikipedia states that:
Although SNMPv3 makes no changes to the protocol aside from the addition of cryptographic security, it looks much different due to new textual conventions, concepts, and terminology.[1]
This statement leads me to believe that I do, in fact, need to supply community strings, too.
I just wanted to confirm this because it is difficult for me to tell whether I am getting data back because I fulfilled the SNMPv2 requirement or because I successfully fulfilled all the SNMPv3 requirements.
I'm using Dart's SNMP library to communicate with the other device and I have specified that my request should use SNMP version three -- but perhaps it falls back to SNMPv2 behind the scenes when given valid SNMP communities?
No, you don't. The internal packet structure changed to accommodate a number of new concepts, as the quote above tries to indicate. The part of the protocol the quote describes as unchanged is the PDU operations, etc. That is, technically there are 3 versions of SNMP:
version 1: community string based authentication with SNMPv1 PDUs
version 2c: community string based authentication with SNMPv2 PDUs
(the SNMPv2 PDUs add GETBULK, INFORM, and REPORT PDUs)
version 3: modular security with SNMPv2 PDUs
That is, version 3 didn't change how the actual operations work (it still uses the PDU types from version 2); it merely adds other header material around them, like better and more modular security. In fact, there are now 3 different security types to pick from: USM, SSH, and (D)TLS.
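To see the difference in code rather than prose, a hedged Go sketch of an SNMPv3 GET using the gosnmp library (the question uses Dart, but the parameter shape is the point; the target, user name, and passphrases are placeholders). Note there is no community string anywhere, only USM user-based security:

```go
package main

import (
	"log"
	"time"

	"github.com/gosnmp/gosnmp"
)

// An SNMPv3 GET with USM: authentication and privacy come entirely from
// the user-based security parameters, not from a community string.
func main() {
	client := &gosnmp.GoSNMP{
		Target:        "192.0.2.10", // placeholder agent address
		Port:          161,
		Version:       gosnmp.Version3,
		Timeout:       2 * time.Second,
		SecurityModel: gosnmp.UserSecurityModel,
		MsgFlags:      gosnmp.AuthPriv, // authenticate and encrypt
		SecurityParameters: &gosnmp.UsmSecurityParameters{
			UserName:                 "monitor",
			AuthenticationProtocol:   gosnmp.SHA,
			AuthenticationPassphrase: "auth-secret",
			PrivacyProtocol:          gosnmp.AES,
			PrivacyPassphrase:        "priv-secret",
		},
	}
	if err := client.Connect(); err != nil {
		log.Fatal(err)
	}
	defer client.Conn.Close()

	// sysDescr.0, the usual smoke-test OID.
	res, err := client.Get([]string{"1.3.6.1.2.1.1.1.0"})
	if err != nil {
		log.Fatal(err)
	}
	for _, v := range res.Variables {
		log.Printf("%s = %v", v.Name, v.Value)
	}
}
```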

Using the Websocket Protocol

An opinion question: do you think it's already safe to use WebSockets, given the still-changing protocol? If not, when do you reckon the protocol will be finished?
Thanks!
The protocol isn't really changing much any more. Most of the discussion is around optional extensions and phrasing in the specification. There was no wire protocol change between HyBi-08, 09 and 10 (which is why the handshake version has stayed at '8') and very little change between 08 and the previous several versions. The protocol has also completed last call and been referred to the IESG/IETF so radical changes are not likely unless some serious issue is discovered that throws the protocol back into the HyBi work group for rework.
One of the bigger changes coming soon is in the HTML API, related to binary data support and close events. However, these changes are basically just additive and still backwards compatible with the current API.
