When I read the socket.io client API, I noticed it exposes a Manager and a Socket interface that look quite similar in terms of events and methods.
Also, I can conclude from the docs that we can get the Manager instance from a Socket instance via socket.io, and that a Manager instance contains a socket instance exposed as manager.engine. That is all I know.
Given that their APIs are almost the same, for example, I cannot tell any difference between manager.disconnect() and socket.disconnect(). I think the root reason is that I don't understand the relationship between them.
I'm a long-time Spring developer learning NestJS. The similarities are so striking, and I've loved how productive that's allowed me to be. Some documentation has me confused about one thing however.
I try to liken Nest "providers" to Spring beans with default scope. For example, I create @Injectable() service classes and think of them as analogous to Spring @Services. As such, I've assumed these service classes needed to be thread safe - no state, etc. However, the Nest documentation here is a little ambiguous to me and kind of implies this might not be necessary (emphasis mine):
For people coming from different programming language backgrounds, it might be unexpected to learn that in Nest, almost everything is shared across incoming requests. We have a connection pool to the database, singleton services with global state, etc. Remember that Node.js doesn't follow the request/response Multi-Threaded Stateless Model in which every request is processed by a separate thread. Hence, using singleton instances is fully safe for our applications.
If individual requests aren't handled in their own threads, is it OK for Nest providers to contain mutable state? It would be up to the app to ensure each incoming request started with a "clean slate" - e.g. by initializing that state with a NestInterceptor. But to me, that doc reads as though providers are created as singletons, and thus can be used as something akin to a wrapper container for data, like a ThreadLocal in Java.
Am I reading this wrong, or is this a difference in behavior between Nest and Spring?
You really should make request handling stateless.
I don't know anything about Spring, but NestJS (and async JavaScript in general) is single-threaded and doesn't block for I/O. That means the same thread of the same instance of a service can process multiple requests at once. It can only do one thing at a time, but it can start doing the next thing while the previous thing is waiting on a database query, or for the request to finish being transmitted, or for an external service to respond, or for the filesystem to deliver the contents of a file, etc.
So in one thread, with one instance of a service, this can happen:
Request A comes in.
Database query is dispatched for request A.
Request B comes in.
Database query is dispatched for request B.
Database query for request A returns, and the response is sent.
Database query for request B returns, and the response is sent.
What that means for state is that it will be shared between requests. If your service sets an instance property at one step of an async operation, then another async operation may start before the first was complete and set a new value for that instance property, which is probably not what you want.
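To make that concrete, here is a minimal sketch of the hazard. It uses Python's asyncio rather than JavaScript, but the single-threaded, non-blocking interleaving is the same; the service class, its method, and the sleep standing in for a database query are all invented for illustration:

import asyncio

# A shared service instance that keeps per-request data on itself - roughly a
# default-scoped (singleton) provider with an instance property. The class,
# method and the sleep standing in for a database query are all invented.
class ReportService:
    def __init__(self):
        self.current_user = None  # instance state shared by every request

    async def handle_request(self, user):
        self.current_user = user
        await asyncio.sleep(0.01)  # "database query": the loop serves other requests meanwhile
        # By the time we resume, another request may have overwritten current_user.
        return "requested by %s, built for %s" % (user, self.current_user)

async def main():
    service = ReportService()  # the singleton
    print(await asyncio.gather(service.handle_request("alice"),
                               service.handle_request("bob")))
    # Typically prints: ['requested by alice, built for bob',
    #                    'requested by bob, built for bob']

asyncio.run(main())

The fix is to pass per-request data as parameters (or on the request object) rather than storing it on the shared instance.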
I believe the "global state" the Nest docs mention is not per request, but general configuration state. Like the URL of an external service, or credentials to your database.
It's also worth mentioning that controllers receive a request object, which represents that specific request. It's common to add properties to that request object, like the current authenticated user for example. The request object can be passed around to give your controller and services context in a way that is friendly to this architecture.
I am thinking of using Spring State Machine for a TCP client. The protocol itself is given and based on proprietary TCP messages with message id and length field. The client sets up a TCP connection to the server, sends a message and always waits for the response before sending the next message. In each state, only certain responses are allowed. Multiple clients must run in parallel.
Now I have the following questions related to Spring State machine.
1) During the initial transition from disconnected to connected, the client sets up a connection via java.net.Socket. How can I make this socket (or the DataOutputStream and BufferedReader objects obtained from the socket) available to the actions of the other transitions?
In this sense, the socket would be some kind of global resource of the state machine. The only way I have seen so far would be to put it in the message headers. But this does not look very natural.
2) Which runtime environment do I need for Spring State Machine?
Is a JVM enough or do I need Tomcat?
Is it thread-safe?
Thanks, Wolfgang
There's nothing wrong with using event headers, but those are not really global resources, as a header exists only for the duration of event processing. I'd try adding the needed objects to the machine's extended state, which is then available to all actions.
You need just a JVM. By default, machine execution is synchronous, so there should not be any threading issues. The docs have notes on replacing the underlying executor with an asynchronous one (this is usually done if multiple concurrent regions are used).
I have an application from which I need to send live updates to web clients.
I'm currently happily using websockets for that, via the WAMP protocol, as it provides both publish-subscribe and RPC methods.
Now, I find that in lots of situations, when a user starts the application or a view, I need to send an initial state to the client, and then keep sending updates. I do the first with an RPC call, and the latter via publish-subscribe.
Now, this forces me to write server-side and client-side code for both methods, even though I'm basically conveying the same information in both cases.
On the server side, I'm moving the appropriate code to a common method, but I still need to take care of both sending the event and providing an entry point for the RPC call:
# RPC endpoint for getting mission info
def get_mission_info(self):
    return self._build_mission_info()

# Scheduled or manually called method to send mission info to all users
def publish_mission_info(self):
    self.wamp.publish("UPDATE_INFO", [self._build_mission_info()])

# Common helper shared by both entry points
def _build_mission_info(self):
    # Here we generate a JSON serializable dict with the info
    return info
And as you can imagine, the client side (JS or Python) shows similar duplication (two handler methods).
The question is: is there a cleverer way of handling this that avoids that boilerplate code? Some approach I could follow, perhaps automatically sending the last event of each type just to clients that ask for it, or that have just subscribed? Perhaps something at the Crossbar level?
In general terms, I feel I could have a better state synchronization strategy leveraging these two channels (pub-sub and RPC). How do people do it?
My WAMP server is Crossbar, and my client library is autobahn.js in Python and JS.
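One pattern that removes most of that duplication (a sketch, not a Crossbar built-in): keep a per-topic cache of the last published payload, publish from it, and serve the same cache from one generic RPC that new or reconnecting clients call once. The sketch below uses autobahn's asyncio ApplicationSession directly, unlike the wrapper object in the code above, and the "state.get" procedure name and class name are placeholders:

from autobahn.asyncio.wamp import ApplicationSession

class MissionSession(ApplicationSession):

    async def onJoin(self, details):
        self._last_state = {}  # topic -> last published payload
        await self.register(self.get_state, "state.get")

    def get_state(self, topic):
        # New or reconnecting clients call this once for their initial state.
        return self._last_state.get(topic)

    def publish_state(self, topic, payload):
        # Every update goes through here, so the RPC answer and the
        # published event can never diverge.
        self._last_state[topic] = payload
        self.publish(topic, payload)

    def publish_mission_info(self):
        # _build_mission_info() is the common helper from the code above
        self.publish_state("UPDATE_INFO", self._build_mission_info())

On the client side, the same handler function can then be applied to both the RPC result and subsequent subscription events, which removes the second copy of the logic there as well.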
I am connecting to Elasticsearch using the Elasticsearch TransportClient. There are two approaches that I've tried:
1) A singleton client shared across my entire application. Response time is between 1-2 s.
2) A new client instance for every call to Elasticsearch, which takes about 7 s to respond. To be specific, there are 5 classes that need to connect to the ES cluster, and this approach creates a new TransportClient for each class.
Is 1) a good approach for Elasticsearch, given that a singleton DB connection object is usually not recommended?
Is there any connection pooling mechanism available for Elastic Search, like we have DBCP for relational databases?
Your client should be a singleton.
source : http://elasticsearch-users.115913.n3.nabble.com/What-is-your-best-practice-to-access-a-cluster-by-a-Java-client-td4015311.html
It doesn't have to be a singleton client (by "singleton" I mean an instance that can be initialized only once).
You can keep the client instance and pass it as a parameter between your application modules; this way you won't limit your application to a single client resource.
I'll also attach a good reference about why singletons are bad.
I'm storing instances of tornado.websocket.WebSocketHandler in a dictionary so when a message comes for a specific user I can route the message to the appropriate listener.
The implication of this is that when the server bounces, we lose the listener details and the client has to create a new WebSocket instance.
I would like to store the listener details in a persistent store, maybe Redis, but am unsure of the best approach.
I could pickle the WebSocketHandler instance and write it to Redis, then read and unpickle it when a message for a specific user needs to be routed to their client, but this feels a bit hacky. Is there a less hacky solution?
You can't usefully pickle the WebSocketHandler because connected sockets cannot be transferred in this way. You might be able to do something with a multiprocessing.Queue instead of simply pickling, but this will be tricky and hacky at best. Clients must be able to create new WebSocket connections in any case to recover from network outages; it's normal to simply do the same when the server restarts.
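For what it's worth, a minimal sketch of that in-memory approach, with clients simply reconnecting (and thereby re-registering) after a server bounce; the URL pattern, user_id capture group, and send_to_user helper are made up for illustration:

import tornado.ioloop
import tornado.web
import tornado.websocket

# user_id -> currently connected handler; rebuilt naturally as clients reconnect
connections = {}

class UserSocket(tornado.websocket.WebSocketHandler):

    def open(self, user_id):
        # (Re)register on every connect, so a server restart only requires
        # the client to reconnect.
        self.user_id = user_id
        connections[user_id] = self

    def on_close(self):
        # Remove the entry only if it still points at this handler.
        if connections.get(self.user_id) is self:
            del connections[self.user_id]

def send_to_user(user_id, message):
    handler = connections.get(user_id)
    if handler is not None:
        handler.write_message(message)
    # else: the user is offline; queue or drop the message as appropriate

if __name__ == "__main__":
    app = tornado.web.Application([(r"/ws/(\w+)", UserSocket)])
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()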