I have a hybrid MobileFirst application and am planning to integrate push notifications. The approach: the backend will call an adapter with an array object containing a user ID and a message for each push notification. The array will hold 100 entries, and the backend will call this adapter with a fresh batch of 100 entries every 5 minutes.
The adapter will parse the array, call getUserNotificationSubscription for each user ID in the array, and then call notifyAllDevices.
The array object looks like this:
{
  "notifications": [
    {
      "userId": "userid",
      "message": "Push Notification Message",
      "notificationType": "Type of Notification",
      "lineOfbusiness": "1",
      "issueDate": "",
      "uniqueIdentifier": "useruidd"
    }
  ]
}
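For illustration, the adapter-side loop could look like the Python sketch below. Note that MobileFirst adapters are actually written in JavaScript; the send_to_user stub here stands in for the getUserNotificationSubscription / notifyAllDevices pair, and all names beyond the payload fields are assumptions.

```python
import json

def dispatch_notifications(payload_json, send_to_user):
    """Parse the backend payload and fan out one push per entry.

    send_to_user is a stand-in for the MobileFirst calls
    (getUserNotificationSubscription + notifyAllDevices).
    Returns the number of notifications dispatched.
    """
    payload = json.loads(payload_json)
    sent = 0
    for entry in payload.get("notifications", []):
        # Skip malformed entries rather than failing the whole 100-entry batch.
        if not entry.get("userId") or not entry.get("message"):
            continue
        send_to_user(entry["userId"], {
            "alert": entry["message"],
            "payload": {
                "notificationType": entry.get("notificationType"),
                "uniqueIdentifier": entry.get("uniqueIdentifier"),
            },
        })
        sent += 1
    return sent
```

The loop only hands the notifications to the server; the actual delivery to APNS happens asynchronously, which is what the rest of this question is about.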
My understanding is that once you call notifyAllDevices, the API adds the notifications to the "Notification State Database", and the iOS dispatcher handles the connections between the APNS server and the Worklight Server.
I read a note in the Apple APNS documentation on best practices for managing connections with the APNS server:
Keep your connections with APNs open across multiple notifications; don’t repeatedly open and close connections. APNs treats rapid connection and disconnection as a denial-of-service attack. You should leave a connection open unless you know it will be idle for an extended period of time—for example, if you only send notifications to your users once a day it is ok to use a new connection each day. You can establish multiple connections to APNs servers to improve performance. When you send a large number of remote notifications, distribute them across connections to several server endpoints. This improves performance, compared to using a single connection, by letting you send remote notifications faster and by letting APNs deliver them faster
I want to understand how this MobileFirst iOS dispatcher works. Does it follow the best practices suggested by Apple? I am not able to find in-depth information in the IBM Information Center documentation.
IBM MobileFirst follows the best practices suggested by Apple: it creates persistent socket connections with APNS.
To improve performance, MobileFirst creates 3 persistent socket connections with APNS by default. This value can be tuned using the JNDI property:
push.apns.connections
The persistent connections are not closed by IBM MobileFirst. However, if you wish to close them gracefully (for example, after an extended idle period), you can do so using the JNDI property
push.apns.connectionIdleTimeout
Also, if external factors (such as firewalls) close the connections opened with APNS, MobileFirst recreates the connections (the number defined by the JNDI property, or 3 by default) and sends remote notifications over them. If your firewall is configured to close idle socket connections, you can use the idle timeout JNDI property to close the sockets gracefully before the firewall terminates them.
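For example, on WebSphere Liberty these properties can be declared as JNDI entries in server.xml. The values below are illustrative only (in particular, check the product documentation for the unit of the idle timeout):

```xml
<jndiEntry jndiName="push.apns.connections" value="5"/>
<jndiEntry jndiName="push.apns.connectionIdleTimeout" value="300"/>
```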
Related
I am designing a websocket service and trying to understand best practices.
The question is: should my server try to deliver events even when the connection is closed?
Pros of attempting to deliver events while the connection is closed:
I can receive the undelivered events as soon as my connection opens again
Cons:
The events will be backed up, causing the latest events to be delayed.
Can someone recommend services that let third-party developers use websockets and follow these best practices?
Here is my logic:
Open websocket connection
Continue to receive the events while the connection is open
Close the connection when you no longer want to receive the events
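A common middle ground between the pros and cons above is for the server to buffer events while a client is disconnected and flush them on reconnect, capping the backlog so stale events do not delay new ones. A minimal in-memory Python sketch (a real service would persist this per client; the class and method names are made up for illustration):

```python
from collections import deque

class EventBuffer:
    """Buffers events for a disconnected client, keeping only the most
    recent `max_backlog` so old events don't pile up indefinitely."""

    def __init__(self, max_backlog=100):
        self.backlog = deque(maxlen=max_backlog)
        self.connected = False
        self.delivered = []   # stand-in for the actual websocket send

    def publish(self, event):
        if self.connected:
            self.delivered.append(event)
        else:
            self.backlog.append(event)   # oldest event falls off when full

    def on_connect(self):
        self.connected = True
        while self.backlog:              # flush undelivered events in order
            self.delivered.append(self.backlog.popleft())

    def on_disconnect(self):
        self.connected = False
```

Capping the backlog trades completeness for freshness, which directly addresses the "latest events are delayed" con.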
I have a Spring Boot based messaging app sending/receiving JMS messages to/from IBM MQ queue manager.
Basically, it uses MQConnectionFactory to set up the connection to IBM MQ and a JmsPoolConnectionFactory from messaginghub:pooledjms to enable a JMS connection pool, since pooling was removed from MQConnectionFactory in IBM MQ 7.x.
The app uses two different approaches to work with JMS. A "correct" one runs a JMSListener to receive messages and then sends a response to each message using JmsTemplate.send(). And there is a second, "troubling" approach, where the app sends requests using JmsTemplate.send() and waits for the response using JmsTemplate.readByCorrelId() until it is received or times out.
I say troubling because this makes JMS sessions last longer when the response is delayed, which could easily exhaust the IBM MQ connection limit. Unfortunately, I cannot rewrite the app to the first approach at the moment to resolve the issue.
Now I want to restrict the number of connections in the pool. Of course, delayed requests will then fail, but the IBM MQ connection limit is more important at the moment, so that is acceptable. The problem is that even if I disable the JmsPoolConnectionFactory, MQConnectionFactory still seems to open multiple connections to the queue manager.
While profiling the app I see multiple threads RcvThread: com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection#12433875[...] created by the JMSCCMasterThreadPool, and the corresponding connections to the queue manager in MQ Explorer. I wonder why there are so many of them, given that connection pooling was removed from MQConnectionFactory? I would expect it to open and reuse a single connection, but that is not what I see in my test.
Disabling the "troubling" JmsTemplate.readByCorrelId() path and leaving only the "correct" way removes these extra connections (and the waiting threads, of course).
Replacing JmsPoolConnectionFactory with SingleConnectionFactory has no effect on the issue.
Is there any way to limit those connections? As a workaround, is it possible to control the maximum number of threads in the JMSCCMasterThreadPool?
Because it affects other applications, your MQ admins probably want you not to exhaust the queue manager's overall connection limit (the MaxChannels and MaxActiveChannels parameters in qm.ini). They can help you by defining an MQ channel used exclusively by your application. That way, they can limit the number of connections from your application with the MAXINST / MAXINSTC channel parameters. You will get an exception when this limit is exhausted, which, as you say, is appropriate. Other applications won't be affected anymore.
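For example, the admins could define a dedicated server-connection channel and cap its instances with MQSC (the channel name and limits here are illustrative):

```
DEFINE CHANNEL(APP1.SVRCONN) CHLTYPE(SVRCONN) MAXINST(20) MAXINSTC(10)
```

MAXINST caps the total simultaneous instances of the channel, while MAXINSTC caps the instances from a single client machine, so one misbehaving app server cannot consume the whole allowance.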
Roughly speaking, an HTTP SESSION is a kind of secret that the server sends to the client (e.g. a browser) after the user's credentials are checked. This secret is passed through all subsequent HTTP requests and identifies the user. This is important because HTTP is stateless - source.
Now I have a scenario where there is communication between a machine and an MQTT broker inside AWS IoT Core. The machine displays several screens; the first screen is for login and password.
The idea here is that after the first screen, IF the credentials are validated, the server should generate a "session" and this "session" should be carried across the screen pages. The machine should send this "SESSION" in all subsequent messages, and the server must validate this string before taking any action. This is a requirement from an electrical engineering team.
From the software development side, this seems to make no sense, since every machine connected to the AWS IoT Core broker (MQTT) must use a certificate - which is already a form of validation.
Besides that, the MQTT broker offers session persistence capabilities. I know that sessions (QoS 0/1) on the broker side relate to confidence in the delivery and reception of messages.
That being said, is it possible to use MQTT session persistence to behave like HTTP sessions, in order to identify users across screens on devices? If yes, how?
No, the HTTP session concept is not in any way similar to the MQTT session. The only thing held in an MQTT client's session is the list of subscribed topics; an HTTP session can hold arbitrary data.
Also, MQTT messages hold NO information about the user, or even about the client that published the message, when they are delivered to a subscriber; the ONLY information present is the message payload and the topic it was published to.
While MQTTv5 adds the option to include more metadata, trying to add the concept of user sessions is like trying to fit a square peg into a round hole.
If you want to implement something as part of the message payload, that is entirely up to you, but it has nothing to do with the transport protocol.
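If the team does go the payload route, it is plain application logic: the server issues a token after the login message and checks it on every later message. A Python sketch of that idea (the token scheme and field names are invented for illustration, and none of this involves the MQTT broker's own session state):

```python
import hashlib
import hmac
import json
import secrets

SERVER_KEY = secrets.token_bytes(32)  # per-deployment server-side secret

def issue_session(user_id):
    """Return a signed session token to embed in outgoing MQTT payloads."""
    sig = hmac.new(SERVER_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{sig}"

def check_message(raw_payload):
    """Validate the 'session' field of an incoming MQTT message payload.

    Returns the user id if the token is valid, otherwise None.
    """
    msg = json.loads(raw_payload)
    user_id, _, sig = msg.get("session", "").partition(":")
    expected = hmac.new(SERVER_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(sig, expected) else None
```

The broker just moves the bytes; all issuing and checking happens in the application on both ends, which is exactly the answer's point.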
My question is about the TCP connection flow between my application server and the Google Firebase Cloud Messaging (FCM) server.
I plan for my application to close the TCP connection after every HTTP request and response (this behavior is like HTTP/1.0).
However, I can't find any mention of this on FCM's web pages. (The FCM page on the legacy HTTP protocol has an illustration of the communication flow, but I want one for the HTTP protocol itself.)
This seems to be outside the scope of the FCM specification. For comparison, the Apple Push Notification service (APNs) specification requires that a TCP connection not be disconnected while it is healthy. (If I really want to disconnect, APNs asks that I do so at most once a day.)
Can I disconnect the connection after every HTTP exchange with FCM? I am worried that FCM will treat this behavior as a DDoS attack, although my application does not reconnect rapidly the way a DDoS attack would.
Please excuse my poor English.
Best regards,
The Firebase Cloud Messaging legacy HTTP API is a connectionless protocol. You can either establish a new connection for each request, or reuse an existing connection, as you see fit.
That said, I'd recommend reusing the connection where possible, especially if you expect a high number of requests. This both optimizes throughput and prevents current or future misclassification as malware.
How can you create a durable architecture using MQ client and server if the clients don't allow you to persist messages or provide assured delivery?
Just trying to figure out how you can build a scalable / durable architecture if the clients don't appear to contain any of the components required to persist data.
Thanks,
S
Middleware messaging was born of the need to persist data locally to mitigate the effects of failures of the remote node or of the network. The idea at the time was that the queue manager was installed locally on the box where the application lives and was treated as part of the transport stack. For instance you might install TCP and WMQ as a transport and some apps would use TCP while others used WMQ.
In the intervening 20 years, the original problems that led to the creation of MQSeries (Now WebSphere MQ) have largely been solved. The networks have improved by several nines of availability and high availability hardware and software clustering have provided options to keep the different components available 24x7.
So the practices in widespread use today to address your question follow two basic approaches. Either make the components highly available so that the client can always find a messaging server, or put a QMgr where the application lives in order to provide local queueing.
The default operation of MQ is that when a message is sent (MQPUT or in JMS terms producer.send), the application does not get a response back on the MQPUT call until the message has reached a queue on a queue manager. i.e. MQPUT is a synchronous call, and if you get a completion code of OK, that means that the queue manager to which the client application is connected has received the message successfully. It may not yet have reached its ultimate destination, but it has reached the protection of an MQ Server, and therefore you can rely on MQ to look after the message and forward it on to where it needs to get to.
Whether client connected, or locally bound to the queue manager, applications sending messages are responsible for their data until an MQPUT call returns successfully. Similarly, receiving applications are responsible for their data once they get it from a successful MQGET (or JMS consumer.receive) call.
There are multiple levels of message protection available.
If you are using non-persistent messages and asynchronous PUTs, then you are effectively saying it doesn't matter too much whether the messages reach their destination (although they generally will).
If you want MQ to really look after your messages, use synchronous PUTs as described above, persistent messages, and perform your PUTs and GETs within transactions (aka syncpoint) so you have full application control over the commit points.
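The commit/rollback semantics that syncpoint gives you can be illustrated with a toy model. This is not the MQ API, just the behavior it provides: puts made under syncpoint are invisible until the application commits, and a rollback discards them.

```python
class ToyQueue:
    """Toy model of MQ syncpoint semantics (not the MQ API):
    messages put under syncpoint become visible only on commit,
    and rollback discards them."""

    def __init__(self):
        self.committed = []    # messages visible to getters
        self.uncommitted = []  # messages put under syncpoint, pre-commit

    def put(self, msg, syncpoint=True):
        (self.uncommitted if syncpoint else self.committed).append(msg)

    def commit(self):
        # The application decides when the unit of work completes.
        self.committed.extend(self.uncommitted)
        self.uncommitted.clear()

    def rollback(self):
        # A failure mid-batch leaves no partial results behind.
        self.uncommitted.clear()
```

The point of "full application control over the commit points" is exactly this: either the whole unit of work lands, or none of it does.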
If you have very unreliable networks, such that you expect to regularly fail to get messages to a server and need frequent retries (i.e., you need client-side message protection), one option to investigate is MQ Telemetry (e.g. in WebSphere MQ V7.1), which is designed for low-bandwidth and/or unreliable network communication, as a route into the wider MQ network.