Can we initialize one state from another in React Redux? - react-redux

How can we initialize one state from another state in Redux, and is it good practice?
I am trying to do that but don't know whether it is good practice or not.

Related

In Vue JS, how do I store the "data" (true/false) of a component on a page refresh?

Using Vue JS with Laravel, I would like to keep an expanded nav bar open when the user refreshes the page (if they have chosen to open it initially).
The nav bar's open/close status is stored in the component's data as a true/false boolean.
I'm a bit confused about which approach to take, as I have researched various options, so I'm seeking the best advice as a beginner. Ideally there is a simple way to have the 'data' persist in my component rather than it being reset to the default of false on re-render! Therefore I guess it needs storing in the user's "session" state locally, right?
But what do I use and how?
sessionStorage
localStorage
Vuex - https://v2.vuejs.org/v2/guide/state-management.html
a plugin - https://github.com/vuejs/vuex
Laravel session - https://laravel.com/docs/5.5/session
Thanks.
Use Vuex with a plugin such as this one if your application is large or growing in complexity. It's a bit of a learning curve though, so if you only have a few things whose state you need to keep track of, then localStorage would be a good solution.
localStorage has the advantage of being easy to use and widely adopted, and if you need to in the future, it integrates nicely with state management systems like Vuex. The API is quite simple: really just getItem and setItem for the common simple use cases. It also persists across browser sessions.
https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage
sessionStorage is roughly equivalent to localStorage, the key difference being that once you close the browser (or tab), the data gets wiped.
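For instance, persisting the nav bar flag could look something like this rough sketch (the navOpen key and the component shape are made-up examples, not anything your code already has):
// Hypothetical Vue component: keep the nav bar's open/closed flag in
// localStorage so it survives a page refresh. The 'navOpen' key is arbitrary.
export default {
  data() {
    return {
      // localStorage stores strings only, so compare against 'true'
      navOpen: localStorage.getItem('navOpen') === 'true'
    };
  },
  methods: {
    toggleNav() {
      this.navOpen = !this.navOpen;
      localStorage.setItem('navOpen', String(this.navOpen));
    }
  }
};
Swapping localStorage for sessionStorage in the same sketch would give you the wipe-on-close behaviour instead.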

How is state immutability actually used in Redux?

I am trying to understand how immutability is actually (if at all) used in Redux. Every tutorial/article/document I found states that reducers should never alter the state object but instead create a new copy of the changed data (i.e. the reducers must be pure functions). I understand this but I could not find any place where there is an explanation of how this guideline is actually used by Redux internal implementation.
Is this just a strong recommendation, or will something inside Redux break if I make the reducers non-pure?
If it is the latter, then what exactly will break?
I did find several places where Dan says that in some (very rare) cases the reducers may be non-pure, but that it is risky (again, with no explanation of what the risk is exactly).
Redux, when used with React, is typically connected by using connect from react-redux. Wrapping a component with connect makes it subscribe to changes to the Redux store by registering a change handler, which gets invoked after an action is dispatched and the reducers have run. When the change handler is invoked, connect compares the previous state of the store with the new state using an identity comparison (previousState !== newState), which is faster than doing a shallow or deep comparison, and only when the two states are not identical does it update the wrapped component using setState. Mutating the state directly does not change the reference, so the wrapped component will not re-render.
This is how connect determines if it should call setState:
if (!pure || prevStoreState !== storeState) {
  this.hasStoreStateChanged = true
  this.setState({ storeState })
}
connect also provides an option to override this behaviour by specifying that the wrapped component is not pure using:
connect(mapStateToProps, mapDispatchToProps, mergeProps, { pure: false })
Identity comparison is also used by pure components by implementing shouldComponentUpdate to prevent unnecessary calls to render.
TL;DR: a component wrapped with connect will not re-render if a store's state changes due to mutation.
Edit: Nothing in Redux itself will break since it is so minimal that it doesn't try to inspect the state returned by reducers.
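To make the re-render consequence concrete, here is a rough sketch (the todo list shape is invented purely for illustration): a reducer that mutates returns the same reference, so connect's identity check sees no change and the component stays stale, while the pure version returns a new object and triggers a re-render.
// Broken: mutates the existing array, so the returned reference is unchanged
// and connected components will not re-render after ADD_TODO.
function todosMutating(state = [], action) {
  if (action.type === 'ADD_TODO') {
    state.push(action.text);
    return state;
  }
  return state;
}

// Pure: returns a new array, so previousState !== newState and connect updates.
function todos(state = [], action) {
  if (action.type === 'ADD_TODO') {
    return [...state, action.text];
  }
  return state;
}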

Laravel Raffle Project. Is a Queue the best way to achieve this?

I'm creating a raffle site as a small side project. It will handle multiple raffles each with an end time. At the end of each raffle a single winner is chosen.
Are Laravel Jobs the best way to go with this? Do I just create a single forever-repeating job to check if any raffles have ended and need a winner?
If not, what would be the best way to go?
I don't think that forever-repeating scripts are generally a good idea.
I just create a single forever-repeating job
This is almost never a good idea. It has its applications in legacy code bases, but websockets and events are a better fit for this job. Also, you have the benefit of using a really good framework like Laravel, so take advantage of it.
Websockets
If you want people to be notified in real time in the browser.
If you have all your users subscribe to a websocket channel when they load the page, you can easily have the websocket server send a message to all subscribed clients (i.e. browsers) to let them know who the winner is.
Then, in your client-side code (JavaScript), you can parse that message to determine who the winner is and render a pop up that lets the user know.
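As a rough sketch of the client side (the endpoint and message shape are assumptions; with Laravel you would more likely go through a broadcasting helper, but the idea is the same):
// Hypothetical client-side handler: the server is assumed to broadcast JSON
// like {"raffleId": 7, "winner": "Alice"} when a raffle ends.
const socket = new WebSocket('wss://example.com/raffles');

socket.onmessage = (event) => {
  const data = JSON.parse(event.data);
  if (data.winner) {
    // Render a pop up (alert is used here only to keep the sketch short).
    alert('Raffle ' + data.raffleId + ' winner: ' + data.winner);
  }
};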
Events
If you don't mind a bit of a delay, most definitely use events for this.
At the end of every action that might potentially end a raffle (i.e. a name is chosen at random by the computer, say in a chooseName() function), fire an event that notifies all participants in the raffle.
https://laravel.com/docs/5.2/events
NB: I've listed the above two as separate issues, but actually they could be used together. For example, in the event that a name is chosen at random, determine if the raffle is over and notify clients via a websocket connection.
Why I wouldn't use delayed Jobs
The crux of the reason - maintainability
Imagine a scenario where something extends the time of your raffle by a week. This could've happened because a raffle was cheated on or whatever (can't really think of all the use cases in that area).
Now, your job has a set delay in place - is it really a good programming principle to have to change two things when only one scenario changed? Nope. Having something like an event in place - onRaffleEnd - explicitly looks for the occurrence of an event. Laravel doesn't care when that event happens.
Using delayed Jobs can work - it's just not a good programming use case in your scenario and limits what you're able to do in the longer run. It will force you to make more considerations when unforeseen circumstances come along as well as when you want to change things. This also decentralizes the logic related to your raffle. Whilst decoupling code is good practice, having logic sit in completely different places makes maintenance a nightmare.

How to avoid action chains

I'm trying to understand Flux pattern.
I believe that in any good design the app should consist of relatively independent and universal (and thus reusable) components glued together by specific application logic.
In Flux there are domain-specific Stores encapsulating data and domain logic. These could be possibly reused in another application for the same domain.
I assume there should also be application-specific Store(s) holding app state and logic. This is the glue.
Now, I try to apply this to imaginary "GPS Tracker" app:
...
When a user clicks [Stop Tracking] button, corresponding ViewController raises STOP_CLICK.
AppState.on(STOP_CLICK):
  dispatch(STOP_GEOLOCATION)
  dispatch(STOP_TRACKING)
GeolocationService.on(STOP_GEOLOCATION):
  stopGPS(); this.on = false; emit('change')
TrackStore.on(STOP_TRACKING):
  saveTrack(); calcStatistics(); this.tracking = false; emit('change')
  dispatch(START_UPLOAD)
So, I've got an event snowball.
It is said that in Flux one Action should not raise another.
But I do not understand how this could be done.
I think user actions can't go directly to domain Stores as these should be UI-agnostic.
Rather, AppState (or wherever the app logic lives) should translate user actions into domain actions.
How to redesign this the Flux way?
Where should application logic go?
Is that correct to try to keep domain Stores independent of the app logic?
Where is the place for "services"?
Thank you.
All of the application logic should live in the stores. They decide how they should respond to a particular action, if at all.
Stores have no setters. The only way into the stores is via a dispatched action, through the callback the store registered with the dispatcher.
Actions are not setters. Try not to think of them as such. Actions should simply report on something that happened in the real world: the user interacted with the UI in a certain way, the server responded in a certain way, etc.
This looks a lot like setter-thinking to me:
dispatch(STOP_GEOLOCATION)
dispatch(STOP_TRACKING)
Instead, dispatch the thing that actually happened: STOP_TRACKING_BUTTON_CLICKED (or TRACKING_STOPPED, if you want to be UI-agnostic). And then let the stores figure out what to do about it. All the stores will receive that action, and they can all respond to it, if needed. The code you have responding to two different actions should be responding to the same action.
Often, when we find that we want dispatch within a dispatch, we simply need to back up to the original thing that happened and make the entire application respond to that.
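As a rough sketch of that idea (the store shapes and the TRACKING_STOPPED action name are invented for illustration; the Dispatcher is the one from the standard flux package):
import { Dispatcher } from 'flux';

const dispatcher = new Dispatcher();

// Each store registers once and decides for itself how to react to an action.
const GeolocationStore = {
  on: true,
  handleAction(action) {
    if (action.type === 'TRACKING_STOPPED') {
      this.on = false;          // stop the GPS here, then emit('change')
    }
  }
};

const TrackStore = {
  tracking: true,
  handleAction(action) {
    if (action.type === 'TRACKING_STOPPED') {
      // saveTrack(); calcStatistics();
      this.tracking = false;    // then emit('change')
    }
  }
};

dispatcher.register(GeolocationStore.handleAction.bind(GeolocationStore));
dispatcher.register(TrackStore.handleAction.bind(TrackStore));

// The view only reports what actually happened:
dispatcher.dispatch({ type: 'TRACKING_STOPPED' });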

Model View Controller and Callbacks

I'm currently developing a multiplayer card game for Android, with libgdx as the game engine. My question is more general though.
I'm not sure what the best practice is for handling callbacks in this architecture. My controller is a big state machine that checks inputs over and over while being called from the render() method of the game engine.
I have two main callback sources: user input from the GUI, and network callbacks from the Android Google Play Services part.
Currently these callback methods/input listeners just set member variables, which are checked by getter methods from the controller/state machine. For example, I call this from the controller over and over, check if it's != null and proceed if it is:
@Override
public Boolean allPlayersConnected() {
    Boolean allConnected = null;
    if (startGame != null) {
        allConnected = startGame;
        startGame = null;
    }
    return allConnected;
}
The startGame "flag" beeing set by callbacks from the google play services api.
I dont know if this is good practice, doesnt look like.
I could call controller methods from the google play services callbacks that set a controller member variable, and check this in each render loop, but thats just moving the variable.
I could also design the controller as an observer of those events, but what am i going to do in the update method inside the controller thats beeing called if an event happens. i dont think i want change stats in these, even if i can access the currrent state. Im spreading state code all over the place with this, some in different parts of a huge update method and some in the actual state machine code. Just setting a member variable in the update method is quite similar to the think i did above.
Another thing would be, to directly change controller state from the callback methods. That would be less code, less variables and a little faster, but i think i'd destroy the MVP concept, cause i take away the control from the controller and let i.e. the gui change the state of the controller.
Any input on this ?
Edit:
The more I think about it, the more I think a combination of the observer and command patterns is the way to go.
So I could indeed cut a big part of the current state machine and pack it into the observer's update() method. Instead of sending the commands through a big command enum, I could create command objects with the information available and pass them to the observer (controller), where I check that the command is viable and call execute() with the information it needs, e.g. the model interface.
First, I think whether your commands are enums or command objects is independent of the main problem here -- which is how to connect user and network input to state management.
The most common game architecture I've seen is an update loop that checks input, iterates the game simulation, and then renders a frame. In the MVC world, this structure just synchronizes those steps; you still have an encapsulated view and data model, with the controller (the game loop) serving as a bridge between those two worlds.
Input, whether from the local user or one over the net, is generally treated as a request to modify game state. That is, the controller (as the first part of its loop) reads in pending input messages and processes them, modifying state as it goes. This way, the code that changes state is in one place: that controller. You are right, spreading state-modification code throughout the app is a bad practice; basically, it's not MVC.
In other words, all of your callbacks should convert the input to commands and stick them into a queue. You don't want to synchronize the controller -- whose job it is to modify state -- with those input callbacks. You have no idea when input will occur relative to the game loop, so it's best to decouple them. Serializing input processing with game simulation should also make your logic simpler.
You have some choice in how to connect the callbacks to the controller; a shared queue (where one side writes into it and the other reads out from it) is a strong pattern and easy to make thread-safe.
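A minimal sketch of that shared-queue idea, written in JavaScript for brevity (in a Java/libgdx project the same shape works with a thread-safe queue such as ConcurrentLinkedQueue; the command names here are made up):
// Callbacks only enqueue commands; they never touch game state directly.
const inputQueue = [];

// Called from a UI listener or a Google Play Services callback:
function onAllPlayersConnected() {
  inputQueue.push({ type: 'ALL_PLAYERS_CONNECTED' });
}

// Called once per frame from render()/update(); this is the only place that
// mutates game state, so state changes stay serialized with the simulation.
function update(gameState) {
  let command;
  while ((command = inputQueue.shift()) !== undefined) {
    if (command.type === 'ALL_PLAYERS_CONNECTED') {
      gameState.phase = 'DEALING';   // hypothetical state transition
    }
  }
  // ...iterate the simulation, then render a frame
}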
