How to continuously subscribe to portfolio data with the IBrokers package?

I tried to use reqAccountUpdates, but it only gives a snapshot. I would like to monitor the portfolio on a continuous basis. I understand reqAccountUpdates can subscribe to portfolio updates, but I don't know how to receive and handle the data properly. I know this is probably very basic, but I am not familiar with R and haven't found a good example. Can someone show a good way to monitor portfolio data continuously with IBrokers? Thanks.

Related

How to get data without polling?

This is more of a theoretical question.
Imagine that I have two programs that work simultaneously: the main one only does something when it receives a flag set to true from a secondary program. So the main program has a function that keeps asking the secondary program for the value of the flag, and when the value is true, it does something.
What I learned at college is that polling is the simplest way of doing that. But when I started working as a developer, coworkers told me that this method generates overhead and wastes computation by asking for a value at fixed intervals.
I tried to come up with ideas for doing this differently and searched the internet, but didn't find a useful approach.
I read about interrupts and passive approaches in which the main program gets the data only when the secondary program informs it. But how does that work? Won't the main program still need a function that checks for the interrupt, so that it ends up the same as before?
What could I do differently?
There is no magic...
No program can guess when it has new information to read; what you can do is decide between two approaches:
A -> asks -> B
A <- is informed <- B
When to use each? It depends on many other factors, such as:
1- How quickly do you need the data delivered from the moment it is generated? As fast as possible, or can it accumulate for a while?
2- How fast is the data generated?
3- How many simultaneous clients are requesting data from the same server?
4- What type of data do you deal with? Persistent? Fast-changing?
If you are building something like a stock analyzer where you need to ask for the price of stocks every second (and it also changes every second), the approach you mentioned may be the best.
If you are writing a chat app like WhatsApp, where you need to check whether there is a new message for the client and most of the time there won't be, publish/subscribe may be the best.
But all of this is a very superficial look into a high-impact architecture decision; you cannot pick the best option by looking at just one factor.
What I want to show is that
coworkers told me that this method generates overhead and wastes computation
is not a correct statement. It may be in some particular scenario, but overhead will always exist in distributed systems.
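A minimal sketch of the two shapes, in TypeScript purely for illustration (pollFlag, onFlag and raiseFlag are made-up names):

```typescript
// Approach A: A asks B (polling).
// A wakes on a timer and asks B for the flag, whether or not it changed.
function pollFlag(checkFlag: () => Promise<boolean>, intervalMs: number) {
  const timer = setInterval(async () => {
    if (await checkFlag()) {      // ask B for the current value
      clearInterval(timer);       // stop polling once the flag is true
      console.log("flag is true, doing the work");
    }
  }, intervalMs);
}

// Approach B: A is informed by B (push).
// A registers a callback once; B invokes it only when the flag actually flips.
const listeners: Array<() => void> = [];

function onFlag(listener: () => void) {
  listeners.push(listener);       // A subscribes, then goes idle
}

function raiseFlag() {            // called by B at the moment the event happens
  for (const listener of listeners) listener();
}
```

Approach A burns a check every interval even when nothing changed; approach B costs nothing while idle but requires B to know how to reach its listeners.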
The typical way to prevent polling is by using the Publish/Subscribe pattern.
Your client program will subscribe to the server program and when an event occurs, the server program will publish to all its subscribers for them to handle however they need to.
If you flip the order of the requests you end up with something more similar to a standard web API. Your main program (left in your example) would be a server listening for requests. The secondary program would be a client hitting an endpoint on the server to trigger an event.
There are many ways to accomplish this in every language, and it doesn't have to be tied to TCP/IP requests.
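As a rough sketch of the pattern (TypeScript, using Node's built-in EventEmitter; the event name and payload are invented):

```typescript
import { EventEmitter } from "node:events";

// The publisher owns the emitter; any number of clients can subscribe.
const bus = new EventEmitter();

// A subscriber registers once and is woken only when something is published.
bus.on("flag-raised", (payload: { source: string }) => {
  console.log(`notified by ${payload.source}, handling the event`);
});

// The secondary program publishes when, and only when, the event occurs.
bus.emit("flag-raised", { source: "program B" });
```

The same idea scales from an in-process emitter to message brokers and websockets; only the transport changes.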
Well, in most languages you won't implement something at such a low level. But theoretically speaking, there are different waiting strategies, and what you are describing is active waiting (busy-waiting); done naively, it can easily eat all your CPU time.
Most languages provide libraries that let you run a process as a service that waits passively and is triggered when a request comes in.

Handling "rooms" with http-kit and Clojure

I have a nice little WebSocket app using the http-kit server, and I'm feeling pretty good about myself. Now I want to add different "rooms" (the list of which should be dynamic) to my app, but I am having difficulty finding any documentation or example projects. I'm not afraid to roll my own solution, but it's nice to lean on others' experience. Does anyone know of any examples of a similar implementation?
I can think of two approaches:
1) I could just keep the "room" in state along with the channel, then send! to the channels associated with that room. This seems like the easiest approach, but then I'm filtering through every attached channel each time I broadcast a message.
2) I could build a new socket endpoint every time a new room is opened and send the new URL back to the front end (or send the existing URL if the room was already open), which would then drop the old socket and open a new one to the new URL. There is some overhead in building the new endpoint, but then I can just broadcast to every channel subscribed to it.
Any other ideas or input? I'm still pretty new to programming with WebSockets and with Clojure, so I get the feeling there may be a better way.
Both of your solutions are completely fine, though #1 would be slightly improved by maintaining an additional map in the state, so you would have:
a map from chan --> room
another map from room --> vector of chans.
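A minimal sketch of that bookkeeping (in TypeScript rather than Clojure, just to show the shape; join, leave and broadcast are made-up names):

```typescript
// Two maps kept in sync, so a broadcast never scans channels from other rooms.
type Chan = { send: (msg: string) => void };  // stand-in for an http-kit channel

const roomOf = new Map<Chan, string>();       // chan --> room
const chansIn = new Map<string, Set<Chan>>(); // room --> set of chans

function join(chan: Chan, room: string) {
  leave(chan);                                // a chan lives in at most one room
  roomOf.set(chan, room);
  if (!chansIn.has(room)) chansIn.set(room, new Set());
  chansIn.get(room)!.add(chan);
}

function leave(chan: Chan) {
  const room = roomOf.get(chan);
  if (room !== undefined) {
    chansIn.get(room)?.delete(chan);
    roomOf.delete(chan);
  }
}

function broadcast(room: string, msg: string) {
  for (const chan of chansIn.get(room) ?? []) chan.send(msg);
}
```

In Clojure the natural equivalent is a single atom holding both maps, updated together in one swap! so they can't drift apart.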

Live updates on a website - is 1 AJAX request per second bad practice?

I have a website where each user can have several orders, and each order has its own status. A background process keeps updating the status of each order as necessary. I want to inform the user in real time of the status of his orders, so I have developed an API endpoint that returns all the orders of a given user.
On the client side, I've developed a React component that displays the orders; every second an AJAX request is made to the API to fetch all the orders and their statuses, and React auto-updates if necessary.
Is making 1 AJAX call per second to get all orders of a user a bad practice? What are other strategies that I can do?
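For concreteness, the setup described boils down to something like this sketch (TypeScript/React; the /api/orders endpoint and the Order shape are assumptions):

```tsx
import { useEffect, useState } from "react";

type Order = { id: number; status: string };  // assumed shape

function OrderList({ userId }: { userId: number }) {
  const [orders, setOrders] = useState<Order[]>([]);

  useEffect(() => {
    // Poll once per second; React re-renders only when the state changes.
    const timer = setInterval(async () => {
      const res = await fetch(`/api/orders?user=${userId}`);
      setOrders(await res.json());
    }, 1000);
    return () => clearInterval(timer);        // stop polling on unmount
  }, [userId]);

  return (
    <ul>
      {orders.map((o) => (
        <li key={o.id}>{o.status}</li>
      ))}
    </ul>
  );
}
```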
Yes, it is. You can use sockets to accomplish this; take a look at Socket.IO.
Edit: My point is, why use AJAX to simulate something there is a purpose-built feature for? Sockets are made for exactly this kind of thing.
Imagine your user loses their internet connection, for example. With Socket.IO you can handle this very nicely, but I don't think it will be that easy with AJAX.
And thinking about scalability, Socket.IO is designed to be performant with whatever transport it settles on. The way it gracefully degrades based on what connection is possible is great, and it means your server is loaded as little as possible while still reaching as wide an audience as it can.
AJAX will do the trick, but it's not the best design.
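A minimal sketch of that push model with Socket.IO (the order-updated event, the per-user room naming, and notifyOrderChange are all invented for illustration):

```typescript
// server.ts: push a status change only to the user it belongs to.
import { Server } from "socket.io";

const io = new Server(3000);

io.on("connection", (socket) => {
  // Each client joins a room named after its user id, sent as a query param.
  socket.join(`user:${socket.handshake.query.userId}`);
});

// Called by the background process whenever it updates an order.
export function notifyOrderChange(userId: number, orderId: number, status: string) {
  io.to(`user:${userId}`).emit("order-updated", { orderId, status });
}
```

On the client, io(serverUrl, { query: { userId: "42" } }) plus a socket.on("order-updated", ...) handler replaces the once-per-second fetch.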
There is no one-size-fits-all answer to this question.
First off, this is not a chat app; a delay of under a second doesn't change the user experience much, if at all.
So that leaves the technical reasons, and it really depends on many factors:
how many users you have (overall load), how many concurrent users are waiting for their orders, what infrastructure you are using, and whether you have other important things to build or just want to spend more time coding things for fun.
If you have a handful of users, there is nothing wrong with querying once per second: it's easy, there's less maintenance overhead, and you said you have it coded already.
If you have dozens or more concurrent users waiting for their status, it's probably best to use WebSockets.
In terms of infrastructure, too many WebSockets are expensive (some cloud hosts limit the number of open sockets), so keep that in mind if you want to go that route.

Laravel Raffle Project. Is a Queue the best way to achieve this?

I'm creating a raffle site as a small side project. It will handle multiple raffles each with an end time. At the end of each raffle a single winner is chosen.
Are Laravel Jobs the best way to go with this? Do I just create a single forever-repeating job to check if any raffles have ended and need a winner?
If not, what would be the best way to go?
I don't think that forever-repeating scripts are generally a good idea.
I just create a single forever-repeating job
This is almost never a good idea. It has its applications in legacy code bases, but websockets and events are better suited to this job. You also have the benefit of using a really good framework like Laravel, so take advantage of it.
Websockets
If you want people to be notified in real time in the browser.
If all your users subscribe to a websocket channel when they load the page, you can easily send a message through the websocket server to all subscribed clients (i.e. browsers) to let them know who the winner is.
Then, in your client-side code (JavaScript), you can parse that message to determine the winner and render a pop-up that lets the user know.
Events
If you don't mind a bit of a delay, most definitely use events for this.
At the end of every action that might potentially end a raffle (e.g. a name being chosen at random by the computer in a function like chooseName()), fire an event that notifies all participants in the raffle.
https://laravel.com/docs/5.2/events
NB: I've listed the above two as separate options, but they can actually be used together. For example, on the event that a name is chosen at random, determine whether the raffle is over and notify clients via a websocket connection.
Why I wouldn't use delayed Jobs
The crux of the reason: maintainability.
Imagine a scenario where something extends the time of your raffle by a week. This could happen because a raffle was cheated on, or for some other reason (it's hard to enumerate all the cases in that area).
Now, your job has a set delay in place. Is it really good programming practice to have to change two things when only one scenario changed? No. Having something like an onRaffleEnd event in place explicitly looks for the occurrence of an event, and Laravel doesn't care when that event happens.
Using delayed Jobs can work; it's just not a good fit for your scenario, and it limits what you can do in the longer run. It will force you to make extra considerations when unforeseen circumstances come along, as well as when you want to change things. It also decentralizes the logic related to your raffle: while decoupling code is good practice, having related logic sit in completely different places makes maintenance a nightmare.

How to update/migrate data when using CQRS and an EventStore?

So I'm currently diving into the CQRS architecture along with the EventStore "pattern".
It opens applications up to a new dimension of scalability and flexibility, as well as testability.
However I'm still stuck on how to properly handle data migration.
Here is a concrete use case:
Let's say I want to manage a blog with articles and comments.
On the write side I'm using MySQL, and on the read side ElasticSearch. Every time I process a Command, I persist the data on the write side and dispatch an Event to persist it on the read side.
Now let's say I have some sort of ViewModel called ArticleSummary, which contains an id and a title.
I get a new feature request to include the article's tags in my ArticleSummary, so I would add some dictionary to my model to hold the tags.
Given that the tags already exist in my write layer, I would need to update, or use a new, "table" to properly make use of the newly included data.
I'm aware of the EventLog Replay strategy, which consists of replaying all the events to "update" all the ViewModels, but, seriously, is that viable when we have a billion rows?
Are there any proven strategies? Any feedback?
I'm aware of the EventLog Replay strategy, which consists of replaying all the events to "update" all the ViewModels, but, seriously, is that viable when we have a billion rows?
I would say "yes" :)
You are going to write a handler for the new summary feature that updates your query side anyway, so you already have the code. Writing special once-off migration code may not buy you all that much. I would go with migration code when you have to do an initial load of, say, a new system that requires a once-off data transformation, but in this case your infrastructure already exists.
You would only need to send the relevant events to the new handler, so you also wouldn't replay everything.
In any event, if you have a billion rows of data, your servers will probably be able to handle the load :)
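A sketch of that idea (TypeScript; the ArticleTagged event and the projection are invented): the new handler is the same code that will serve live events, so the "migration" is just feeding it the historical events it cares about.

```typescript
// Hypothetical event type introduced for the new tags feature.
type ArticleTagged = { type: "ArticleTagged"; articleId: string; tags: string[] };

// The same projection handles both live dispatch and historical catch-up.
function projectTags(summaries: Map<string, string[]>, e: ArticleTagged) {
  summaries.set(e.articleId, e.tags); // i.e. update the ElasticSearch document
}

// Once-off catch-up: stream only the relevant event type out of the store.
async function catchUp(
  readEventsByType: (type: string) => AsyncIterable<ArticleTagged>,
  summaries: Map<string, string[]>,
) {
  for await (const e of readEventsByType("ArticleTagged")) {
    projectTags(summaries, e);        // no full replay, no special migration code
  }
}
```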
I'm currently using NEventStore by JOliver.
When we started, we replayed our entire store back through our denormalizers/event handlers when the application started up.
We were initially keeping all our data in memory, but we knew this approach wouldn't be viable in the long term.
The approach we currently use is that we can replay an individual denormalizer, which makes things a lot faster since you aren't unnecessarily replaying events through denormalizers that haven't changed.
The trick we found, though, was that we needed another representation of our commits so we could query all the events we handle by event type, a query that cannot be performed against the normal store.
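A sketch of such a side representation (TypeScript; all names invented): keep an index from event type to positions in the append-only log, maintained at write time, so one denormalizer can be replayed without scanning everything.

```typescript
type StoredEvent = { type: string; position: number; payload: unknown };

const log: StoredEvent[] = [];                // the normal append-only store
const byType = new Map<string, number[]>();   // event type --> log positions

function append(type: string, payload: unknown) {
  const position = log.length;
  log.push({ type, position, payload });
  if (!byType.has(type)) byType.set(type, []);
  byType.get(type)!.push(position);           // index maintained on every write
}

// Replay one denormalizer using only the event types it subscribes to.
function replay(types: string[], handle: (e: StoredEvent) => void) {
  const positions = types
    .flatMap((t) => byType.get(t) ?? [])
    .sort((a, b) => a - b);                   // preserve original commit order
  for (const p of positions) handle(log[p]);
}
```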
