I have a website where each user can have several orders, and each order has its own status. A background process keeps updating the status of each order as necessary. I want to inform users in real time about the status of their orders, so I have developed an API endpoint that returns all the orders of a given user.
On the client side, I've developed a React component that displays the orders. Every second, an AJAX request is made to the API to fetch all the orders and their statuses, and React re-renders if anything changed.
Is making one AJAX call per second to fetch all of a user's orders bad practice? What other strategies could I use?
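Here is roughly what that component does; a minimal sketch using modern React hooks, with the endpoint name and order shape simplified:

```js
import { useEffect, useState } from "react";

// Minimal sketch of the per-second polling described above.
// The /api/orders endpoint and the order fields are simplified.
function Orders({ userId }) {
  const [orders, setOrders] = useState([]);

  useEffect(() => {
    const timer = setInterval(async () => {
      const res = await fetch(`/api/orders?user=${userId}`);
      setOrders(await res.json()); // replace state with the fresh data
    }, 1000);
    return () => clearInterval(timer); // stop polling on unmount
  }, [userId]);

  return (
    <ul>
      {orders.map((o) => (
        <li key={o.id}>{o.id}: {o.status}</li>
      ))}
    </ul>
  );
}
```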
Yes, it is. You can use sockets to accomplish this. Take a look at Socket.IO.
Edit: My point is, why use AJAX to simulate something that can be done with a feature designed for it? Sockets were made for exactly this kind of thing.
Imagine your user loses their internet connection, for example. With Socket.IO you can handle that very gracefully, but I don't think it will be that easy with AJAX.
And thinking about scalability, Socket.IO is designed to be performant with whatever transport it settles on. The way it gracefully degrades based on what connection is possible is great and means your server will be overloaded as little as possible while still reaching as wide an audience as it can.
AJAX will do the trick, but it's not the best design.
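For illustration, here's a rough sketch of the push-based version with Socket.IO. The event names, the room naming, and the notifyStatusChange hook into your background process are all assumptions, not a drop-in implementation:

```js
// Server (Node): push status changes instead of being polled.
const { Server } = require("socket.io");
const io = new Server(3000);

io.on("connection", (socket) => {
  // Each user joins a room named after their id, so updates can be
  // targeted rather than broadcast to everyone.
  socket.on("watch-orders", (userId) => socket.join(`user:${userId}`));
});

// Your background process would call this whenever it changes an order.
function notifyStatusChange(order) {
  io.to(`user:${order.userId}`).emit("order:update", order);
}

// Client side: no polling loop, just react to pushed updates.
// const socket = io();
// socket.emit("watch-orders", currentUserId);
// socket.on("order:update", (order) => updateOrderInUI(order));
```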
There is no one-size-fits-all answer to this question.
First off, this is not a chat app; a delay of less than one second hardly changes the user experience, if at all.
So that leaves technical reasons, and it really depends on many factors:
How many users do you have (overall load)? How many concurrent users are waiting for their orders? What infrastructure are you using? Do you have other important things to build, or do you just want to spend more time coding things for fun?
If you have a handful of users, there is nothing wrong with querying once per second: it's easy, it carries less maintenance overhead, and you said you have it coded already.
If you have dozens or more concurrent users waiting for their status, it's probably best to use WebSockets.
In terms of infrastructure, keeping many WebSockets open is expensive (some cloud hosts limit the number of open sockets), so keep that in mind if you want to go that route.
Related
This is more of a theoretical question.
Well, imagine that I have two programs running simultaneously. The main one only does something when it receives a flag set to true from a secondary program. So the main program has a function that keeps asking the secondary one for the value of the flag, and when it gets true, it does something.
What I learned at college is that polling is the simplest way of doing this. But when I started working as a developer, coworkers told me that this method generates overhead, or is a waste of computation, because it asks for a value every so often.
I tried to come up with ideas for doing this a different way and searched the internet for something like it, but didn't find a useful approach.
I read about interrupts and passive approaches where the main program gets the data only when the secondary program informs it. But how does that happen? The main program still needs a function to check for the interrupt, right? So doesn't it end up the same way as before?
What could I do differently?
There is no magic: no program will guess when it has new information to be read. What you can do is choose between two approaches:
A -> asks -> B
A <- is informed <- B
When to use each? It depends on many other factors, like:
1- How fast do you need the data delivered from the moment it is generated? As fast as possible, or can it wait a while and accumulate?
2- How fast is the data generated?
3- How many simultaneous clients are requesting data from the same server?
4- What type of data are you dealing with? Persistent? Fast-changing?
If you are building something like a stock analyzer, where you need to ask for the price of stocks every second (and it will also change every second), the approach you mentioned may be the best.
If you are writing a chat-based app like WhatsApp, where you need to check whether there is a new message for the client and most of the time there won't be, publish/subscribe may be the best.
But all of this is a very superficial look at a high-impact architecture decision; it is not possible to pick the best option by looking at just one factor.
What I want to show is that

"coworkers told me that this method generates some overhead or it's a waste of computation"

is not an entirely accurate statement. It may be true in some particular scenarios, but some overhead will always exist in distributed systems.
The typical way to prevent polling is by using the Publish/Subscribe pattern.
Your client program will subscribe to the server program and when an event occurs, the server program will publish to all its subscribers for them to handle however they need to.
If you flip the order of the requests you end up with something more similar to a standard web API. Your main program (left in your example) would be a server listening for requests. The secondary program would be a client hitting an endpoint on the server to trigger an event.
There are many ways to accomplish this in every language, and it doesn't have to be tied to TCP/IP requests.
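As a concrete (if simplified) example in Node, using nothing but the built-in EventEmitter; every name here is invented for illustration:

```js
const EventEmitter = require("events");

// The "event bus" both programs share (in-process here; across a
// network you'd swap in a message broker or a websocket).
const bus = new EventEmitter();

// Main program: subscribes once and does nothing until notified.
bus.on("flag-raised", (value) => {
  console.log("main: flag received, doing the work now:", value);
});

// Secondary program: publishes the instant the flag becomes true,
// so nobody ever has to ask for it.
function secondaryProgram() {
  // ...some computation decides the flag is now true...
  bus.emit("flag-raised", true);
}

secondaryProgram();
```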
Well, in most languages you won't implement things at such a low level. But theoretically speaking, there are different waiting strategies; what you are describing is active (busy) waiting, which can easily eat all your CPU.
Most languages provide libraries that let you run a program as a service that waits passively and is only triggered when a request comes in.
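Here's a tiny Node sketch of passive waiting; the one-second timer is just a stand-in for the real secondary program:

```js
const EventEmitter = require("events");

const trigger = new EventEmitter();

// Passive waiting: the process sleeps until the event loop delivers
// the signal. (An active wait like `while (!flag) {}` would pin a CPU
// core, and in single-threaded JavaScript would never even observe
// the flag changing.)
trigger.once("ready", () => {
  console.log("woken up passively, doing the work");
});

// Stand-in for the secondary program signalling one second later.
setTimeout(() => trigger.emit("ready"), 1000);
```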
This is more of a hypothetical question, so I can't really show any code examples. Imagine if a site like Twitter wanted to live-update stats on a Tweet via web sockets/Socket.io. In terms of performance, which of these would be the best approach?
1. Each action (like, retweet, reply) sends a message to the server, which then gets emitted to all clients, and the client is responsible for updating the appropriate tweet.
2. Each tweet the client loads is connected to a different room so that it only emits and receives messages relevant to itself.
3. Other?
Or perhaps it's dependent on the scale of the application? Maybe 1 is better if you had a Twitter clone with only a few users, whereas I would think 2 is better in Twitter's case because it's a matter of hundreds of "rooms" vs millions of signals/second? And if that's the case, at what point is one approach preferred over the other?
At scale, you do not want to be sending messages to clients that they did not ask for and do not have any use for. Imagine a twitter client that was receiving every single tweet being sent in real time. That could overwhelm that client and it would mean the server would be delivering every single tweet to every single connected client. That obviously doesn't scale on either the server side or the client side.
So option 1 is out.
The appropriate solution has the server send a client only the messages it has a particular interest in seeing. This works just fine at any scale. I can't tell whether your option 2 is that or not, since rooms are just a tool for making groups of connections that you can send the same message to; they don't really decide who gets what message. That logic must be baked into your server code.
For a Twitter-like service, it seems you're going to need a system where your server can easily tell which users have an interest in a particular new message. That can presumably be for a number of reasons: they follow the author, they follow a hashtag present in the message, they are mentioned in the message, etc. That is server-side logic, not just simple rooms.
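To sketch what that server-side logic might look like with Socket.IO (the getInterestedUserIds helper is entirely hypothetical, standing in for your follower/hashtag/mention lookups):

```js
const { Server } = require("socket.io");
const io = new Server(3000);

io.on("connection", (socket) => {
  // One room per authenticated user; the room is only a delivery
  // mechanism, it decides nothing about interest.
  socket.join(`user:${socket.handshake.auth.userId}`);
});

// Hypothetical stand-in: in reality this would query your
// follows/mentions/hashtag-subscription data.
async function getInterestedUserIds(tweet) {
  return tweet.mentionedUserIds || [];
}

async function onNewTweet(tweet) {
  // The real decision happens here, not in the rooms.
  const userIds = await getInterestedUserIds(tweet);
  for (const id of userIds) {
    io.to(`user:${id}`).emit("tweet", tweet);
  }
}
```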
I'm creating a raffle site as a small side project. It will handle multiple raffles each with an end time. At the end of each raffle a single winner is chosen.
Are Laravel Jobs the best way to go with this? Do I just create a single forever-repeating job to check if any raffles have ended and need a winner?
If not, what would be the best way to go?
I don't think that forever-repeating scripts are generally a good idea.
"I just create a single forever-repeating job"

This is almost never a good idea. It has its applications in legacy code bases, but WebSockets and events are better suited to this job. You also have the benefit of using a really good framework like Laravel, so take advantage of it.
Websockets
If you want people to be notified in real time in the browser.
If you have all your users subscribe to a websocket channel when they load the page, you can easily send a message through the websocket server to all subscribed clients (i.e. browsers) to let them know who the winner is.
Then, in your client-side code (JavaScript), you can parse that message to determine who the winner is and render a pop-up that lets the user know.
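On the client, that could look something like this plain-browser sketch; the URL and the message shape are assumptions:

```js
// Browser-side sketch: listen for the raffle result and show it.
const socket = new WebSocket("wss://example.com/raffles");

socket.addEventListener("message", (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === "raffle-ended") {
    // Render a pop-up (a plain alert here) announcing the winner.
    alert(`Raffle ${msg.raffleId} winner: ${msg.winner}`);
  }
});
```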
Events
If you don't mind a bit of a delay, most definitely use events for this.
At the end of every action that might end a raffle (e.g. a name is chosen at random by a chooseName() function), fire an event that notifies all participants in the raffle.
https://laravel.com/docs/5.2/events
NB: I've listed the above two as separate options, but they could actually be used together. For example, in the event that a name is chosen at random, determine whether the raffle is over and notify clients via a websocket connection.
Why I wouldn't use delayed Jobs
The crux of the reason - maintainability
Imagine a scenario where something extends the time of your raffle by a week. This could've happened because a raffle was cheated on or whatever (can't really think of all the use cases in that area).
Now, your job has a set delay in place - is it really a good programming principle to have to change two things when only one scenario changed? Nope. Having something like an event in place - onRaffleEnd - explicitly looks for the occurrence of an event. Laravel doesn't care when that event happens.
Using delayed Jobs can work - it's just not a good programming use case in your scenario and limits what you're able to do in the longer run. It will force you to make more considerations when unforeseen circumstances come along as well as when you want to change things. This also decentralizes the logic related to your raffle. Whilst decoupling code is good practice, having logic sit in completely different places makes maintenance a nightmare.
I just want to ask about your experience. I'm designing a public website, using jQuery Ajax for most operations. I'm getting some timeouts, and I suspect the hosting provider is the cause. Do any of you have experience with this case, and can you advise me with some hints (especially on handling timeouts)?
Thanks in advance to all.
Esteve
If you have a half-decent host, chances are these aren't network timeouts but are rather due to insufficient hardware which causes your server-side scripts to take too long to answer. For example if you have an autocomplete field and the script goes through a database of 100,000 entries, this is a breeze for newer servers but older "budget" servers or overcrowded shared hosting servers might croak on it.
Depending on what your Ajax operations are, you may be able to break them down in shorter chunks. If you're doing database queries for example, use LIMIT and OFFSET and only return say, 5 entries at a time. When those 5 entries arrive on the client, make another Ajax call for 5 more, so from the user's point of view the entries will keep coming in and it will look fluid (instead of waiting 30s and possibly timing out before they see all entries at once). If you do this make sure you display a spiffy web 2.0 turning wheel to let the user know if they should be waiting some more or if it's done.
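For instance, here's a jQuery sketch of that chunked approach. The /search endpoint and its offset/limit parameters are assumptions; on the server they'd map to OFFSET and LIMIT in your query:

```js
// Fetch 5 entries at a time and keep going until the server runs dry,
// so the user sees results trickle in instead of one long wait.
function loadEntries(offset) {
  $("#spinner").show();
  $.getJSON("/search", { offset: offset, limit: 5 })
    .done(function (entries) {
      entries.forEach(function (e) {
        $("#results").append($("<li>").text(e.name));
      });
      if (entries.length === 5) {
        loadEntries(offset + 5); // probably more rows left
      } else {
        $("#spinner").hide();    // short final page: we're done
      }
    })
    .fail(function () {
      $("#spinner").hide();      // a timeout now costs one small chunk
    });
}

loadEntries(0);
```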
I'm currently trying to build an application that inherently needs good time synchronization across the server and every client. There are alternative designs for my application that can do away with this need for synchronization, but my application quickly begins to suck when it's not present.
In case I am missing something, my basic problem is this: firing an event in multiple locations at exactly the same moment. As best I can tell, the only way of doing this requires some kind of time synchronization, but I may be wrong. I've tried modeling the problem differently, but it all comes back to either a) a sucky app, or b) requiring time synchronization.
Let's assume I Really Really Do Need synchronized time.
My application is built on Google AppEngine. While AppEngine makes no guarantees about the state of time synchronization across its servers, usually it is quite good, on the order of a few seconds (i.e. better than NTP), however sometimes it sucks badly, say, on the order of 10 seconds out of sync. My application can handle 2-3 seconds out of sync, but 10 seconds is out of the question with regards to user experience. So basically, my chosen server platform does not provide a very reliable concept of time.
The client part of my application is written in JavaScript. Again we have a situation where the client has no reliable concept of time either. I have done no measurements, but I fully expect some of my eventual users to have computer clocks that are set to 1901, 1970, 2024, and so on. So basically, my client platform does not provide a reliable concept of time.
This issue is starting to drive me a little mad. So far the best thing I can think to do is implement something like NTP on top of HTTP (this is not as crazy as it may sound). This would work by commissioning 2 or 3 servers in different parts of the Internet, and using traditional means (PTP, NTP) to try to ensure their sync is at least on the order of hundreds of milliseconds.
I'd then create a JavaScript class that implemented the NTP intersection algorithm using these HTTP time sources (and the associated roundtrip information that is available from XMLHTTPRequest).
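In rough form, one sample of that would look like this (the /time endpoint, returning the server's epoch milliseconds, is hypothetical):

```js
// One NTP-style sample: offset = ((t1 - t0) + (t2 - t3)) / 2, where
// t0/t3 are the client's send/receive times and t1/t2 the server's
// clock reading (collapsed to one value for a fast handler).
async function sampleOffset() {
  const t0 = Date.now();
  const res = await fetch("/time");    // hypothetical endpoint
  const t3 = Date.now();
  const serverTime = Number(await res.text());
  return serverTime - (t0 + t3) / 2;   // estimated (server - client)
}

// Taking several samples and keeping the ones with the shortest round
// trips is a crude stand-in for the NTP intersection algorithm.
```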
As you can tell, this solution also sucks big time. Not only is it horribly complex, it only solves half of the problem, namely giving the clients a good notion of the current time. I then have to compromise on the server, either by allowing the clients to tell the server the current time according to them when they make a request (a big security no-no, though I can mitigate some of the more obvious abuses), or by having the server make a single request to one of my magic HTTP-over-NTP servers and hoping that request completes speedily enough.
These solutions all suck, and I'm lost.
Reminder: I want a bunch of web browsers, hopefully as many as 100 or more, to be able to fire an event at exactly the same time.
Let me summarize, to make sure I understand the question.
You have an app that has a client and server component. There are multiple servers that can each be servicing many (hundreds) of clients. The servers are more or less synced with each other; the clients are not. You want a large number of clients to execute the same event at approximately the same time, regardless of which server happens to be the one they connected to initially.
Assuming that I described the situation more or less accurately:
Could you have the servers keep certain state for each client (such as the initial time of connection, in server time), and when the time of the event that needs to happen is known, notify the client with a message containing the number of milliseconds that must elapse after that starting point before firing the event?
To illustrate:
client A connects to server S at time t0 = 0
client B connects to server S at time t1 = 120
server S decides an event needs to happen at time t3 = 500
server S sends a message to A:
S->A : {eventName, 500}
server S sends a message to B:
S->B : {eventName, 380}
This does not rely on the client time at all; just on the client's ability to keep track of time for some reasonably short period (a single session).
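A rough sketch of both halves; the ws package on the server, the wire format, and the names are all illustrative:

```js
// Server (Node, using the `ws` package): remember when each client
// connected, then send it a delay relative to that moment.
const { WebSocketServer } = require("ws");
const wss = new WebSocketServer({ port: 8080 });

const connectedAt = new Map();
wss.on("connection", (ws) => {
  connectedAt.set(ws, Date.now());
  ws.on("close", () => connectedAt.delete(ws));
});

function scheduleEvent(eventName, fireAtServerTime) {
  for (const [ws, t0] of connectedAt) {
    ws.send(JSON.stringify({ eventName, afterMs: fireAtServerTime - t0 }));
  }
}

// Client (browser): only needs to count milliseconds since its own
// connection opened -- its wall clock is never consulted.
// const socket = new WebSocket("ws://example.com:8080");
// let openedAt;
// socket.onopen = () => { openedAt = Date.now(); };
// socket.onmessage = (e) => {
//   const { eventName, afterMs } = JSON.parse(e.data);
//   setTimeout(() => fireEvent(eventName), afterMs - (Date.now() - openedAt));
// };
```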
It seems to me like you need to listen for a broadcast event from a server in many different places. Since you can accept 2-3 seconds of variation, you could just put all your clients into long-lived, comet-style requests and have them act on the response from the server. That way the clients wouldn't need to deal with time at all.
You could use Ajax to do this, so you'd avoid any client-side lockups while waiting for new data.
I may be missing something totally here.
If you can assume that the clocks are reasonably stable - that is, they are set wrong but tick at more or less the right rate - then:
Have the servers get their offset from a single defined source (e.g. one of your servers, or a database server or something).
Then have each client calculate its offset from its server (with possible round-trip complications if you want high accuracy).
Store that, then use the combined offset on each client to trigger the event at the right time:
(client-time-to-trigger-event) = (scheduled-time) + (client-to-server-difference) + (server-to-reference-difference)
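In code, that formula is just the two offsets chained together; the numbers below are invented:

```js
// Both differences are measured as (local clock minus upstream clock),
// so they chain:
// client time = reference time + (client - server) + (server - reference).
function clientTriggerTime(scheduledTime, clientMinusServer, serverMinusReference) {
  return scheduledTime + clientMinusServer + serverMinusReference;
}

// Invented example: this client's clock reads 250 ms ahead of its
// server's, and the server's reads 40 ms ahead of the reference clock,
// so the client must aim 290 ms "later" by its own reckoning.
const scheduledTime = Date.now() + 5000; // stand-in for the real schedule
const localFireAt = clientTriggerTime(scheduledTime, 250, 40);
setTimeout(() => console.log("fire!"), localFireAt - Date.now());
```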
Time synchronization is very hard to get right and in my opinion the wrong way to go about it. You need an event system which can notify registered observers every time an event is dispatched (observer pattern). All observers will be notified simultaneously (or as close as possible to that), removing the need for time synchronization.
To accommodate latency, the browser should be sent the timestamp of the event dispatch, and it should wait a little longer than the maximum latency you expect. This way the event will fire at (very nearly) the same time in all browsers.
Google found a way to define time as absolute. That sounds heretical to a physicist and with respect to general relativity: time flows at a different pace depending on your position in space and time, on Earth, in the Universe...
You may want to have a look at Google Spanner database: http://en.wikipedia.org/wiki/Spanner_(database)
I guess it is used now by Google and will be available through Google Cloud Platform.