Change apollo local state whenever a remote mutation is called - react-apollo

I'm studying the usage of Apollo Client Local State Management, and came across this issue: I want to change my local state whenever a specific mutation is called.
From what I've read, the only way would be writing to the cache or calling an @client mutation in every place that calls the remote mutation.
Is there a way to do that in one place and only call the remote mutation in all places?

If anyone gets here: I solved this by creating a local mutation that calls the server mutation and then does whatever else it needs:
resolver(parent, args, context) {
  // do anything else here (e.g. update the local cache)
  return context.client.mutate(...);
}
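To make this concrete, here is a minimal sketch of wiring such a local resolver into Apollo Client so that callers only ever invoke the local mutation. It assumes the resolvers option of @apollo/client (local resolvers); the mutation names addItem/addItemLocal and the lastAddedName field are made up for illustration:

import { ApolloClient, InMemoryCache, gql } from '@apollo/client';

// Hypothetical server mutation.
const ADD_ITEM = gql`
  mutation AddItem($name: String!) {
    addItem(name: $name) { id name }
  }
`;

// Hypothetical local wrapper; note the @client directive.
const ADD_ITEM_LOCAL = gql`
  mutation AddItemLocal($name: String!) {
    addItemLocal(name: $name) @client {
      id
      name
    }
  }
`;

const client = new ApolloClient({
  uri: '/graphql',
  cache: new InMemoryCache(),
  resolvers: {
    Mutation: {
      addItemLocal: async (_parent, args, context) => {
        // Update local state first...
        context.cache.writeQuery({
          query: gql`query LastAdded { lastAddedName @client }`,
          data: { lastAddedName: args.name },
        });
        // ...then forward to the remote mutation.
        const result = await context.client.mutate({
          mutation: ADD_ITEM,
          variables: args,
        });
        return result.data.addItem;
      },
    },
  },
});

// Callers only ever use the local mutation:
// client.mutate({ mutation: ADD_ITEM_LOCAL, variables: { name: 'widget' } });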

Related

FireAndForget call to WebApi from Azure Function

I want to be able to call an HTTP endpoint (that I own) from an Azure Function at the end of the Azure Function request.
- I do not need to know the result of the request.
- If there is a problem in the HTTP endpoint that is called, I will log it there.
- I do not want to hold up the return to the client calling the initial Azure Function.
- Offloading the call of the secondary WebApi onto a background job queue is considered overkill for this requirement.
Do I simply call HttpClient.PutAsync without an await?
I realise that the dependencies I have used up until the point that the call is made may well not be available when the call returns. Is there a safe way to check if they are?
My answer may cause some controversy, but you can always start a background task and execute it that way.
For anyone reading this answer, this is far from recommended. The OP has been very clear that they don't care about exceptions or understanding what sort of result the request is returning ...
Task.Run(async () =>
{
    using (var httpClient = new HttpClient())
    {
        await httpClient.PutAsync(...);
    }
});
If you want to ensure that the call has fired, it may be worth waiting for a second or two after the call is made to ensure it's actually on its way:
await Task.Delay(1000);
If you're worried about dependencies in the call, be sure to construct your payload (i.e. serialise it, etc.) outside the Task.Run; basically, minimise any work the background task does.
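A rough sketch of that shape, with the endpoint URI and payload type left as placeholders (the PutAndForget name is illustrative):

using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public static class FireAndForget
{
    // One shared HttpClient avoids socket exhaustion under load.
    private static readonly HttpClient Http = new HttpClient();

    public static void PutAndForget(Uri endpoint, object payload)
    {
        // Serialise now, while the function's dependencies are still alive.
        var json = JsonSerializer.Serialize(payload);

        _ = Task.Run(async () =>
        {
            try
            {
                using var content = new StringContent(json, Encoding.UTF8, "application/json");
                await Http.PutAsync(endpoint, content);
            }
            catch
            {
                // Deliberately swallowed: the OP does not care about the result,
                // and an unhandled exception in a fire-and-forget task is lost anyway.
            }
        });
    }
}

Bear in mind that the Functions host makes no guarantee that background work finishes once the invocation has returned; that is the trade-off this answer accepts.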

Can a client subscribe to multiple graphql subscriptions in one connection?

Say we have a schema like this:
type Subscription {
  objectAddedA: ObjectA
  objectAddedB: ObjectB
}
Can a GraphQL client subscribe to both the objectAddedA and objectAddedB subscriptions at the same time? I'm having a hard time finding good examples of subscriptions on the web, and the GraphQL docs don't seem to mention them at all unless I'm missing it. We are designing a system that runs in Kubernetes, where a single pod will receive API requests to add/update/delete configuration, and we want to use GraphQL subscriptions to push these changes to any pods that care about them (they would be the GraphQL clients). However, there are going to be lots of different object types, and potentially several different types of events they will want to be notified about at any time, so I'm not sure whether you can subscribe to several different subscriptions at once, or whether you have to design the schema so that a single subscription gives all the possible events you'll need.
It is possible with graphql-python/gql.
An extract from its documentation:
import asyncio

import backoff
from gql import Client, gql
from gql.transport.websockets import WebsocketsTransport

# query1, query2, subscription1 and subscription2 are gql(...) documents defined elsewhere.

# First define all your queries using a session argument:

async def execute_query1(session):
    result = await session.execute(query1)
    print(result)

async def execute_query2(session):
    result = await session.execute(query2)
    print(result)

async def execute_subscription1(session):
    async for result in session.subscribe(subscription1):
        print(result)

async def execute_subscription2(session):
    async for result in session.subscribe(subscription2):
        print(result)

# Then create a coroutine which will connect to your API and run all your queries as tasks.
# We use a `backoff` decorator to reconnect using exponential backoff in case of connection failure.
@backoff.on_exception(backoff.expo, Exception, max_time=300)
async def graphql_connection():
    transport = WebsocketsTransport(url="wss://YOUR_URL")
    client = Client(transport=transport, fetch_schema_from_transport=True)
    async with client as session:
        task1 = asyncio.create_task(execute_query1(session))
        task2 = asyncio.create_task(execute_query2(session))
        task3 = asyncio.create_task(execute_subscription1(session))
        task4 = asyncio.create_task(execute_subscription2(session))
        await asyncio.gather(task1, task2, task3, task4)

asyncio.run(graphql_connection())
Actually, the GraphQL standard explicitly says that
Subscription operations must have exactly one root field.
Python's "graphql-core" library ensures it through a validation rule. Libraries that are based on it (graphene, ariadne and strawberry) would follow this rule as well.
This is what the server says if you attempt multiple subscriptions in one request:
"error": {
"message": "Anonymous Subscription must select only one top level field.",
You can remove this validation rule and see what happens, but remember that you're in no-standards land now, and things usually don't end well there... :D
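If you do need many event types over one subscription, the schema-design route the question mentions is one common answer: expose a single root field whose payload is a union (or interface) over the event types. A sketch, with illustrative type names:

type Subscription {
  objectChanged: ObjectEvent
}

union ObjectEvent = ObjectA | ObjectB

Clients then select per-type fields with inline fragments (... on ObjectA { ... }) inside the one subscription.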

Allow-listing IP addresses using `call.cancel()` from within `EventListener.dnsEnd()` in OkHttp

I am overriding the dnsEnd() function in EventListener:
@Override
public void dnsEnd(Call call, String domainName, List<InetAddress> inetAddressList) {
    inetAddressList.forEach(address -> {
        logger.debug("checking if url ({}) is in allowlist", address.toString());
        if (!allowlist.contains(address)) {
            call.cancel();
        }
    });
}
I know the documentation says not to alter call parameters, etc.:
"All event methods must execute fast, without external locking, cannot throw exceptions, attempt to mutate the event parameters, or be re-entrant back into the client. Any IO - writing to files or network should be done asynchronously."
But, as I don't care about the call if it is trying to reach an address outside the allowlist, I fail to see the issue with this implementation.
I want to know if anyone has experience with this, and why it may be an issue.
I tested this and it seems to work fine.
This is fine and safe. Probably the strangest consequence is that the canceled event will be triggered by the thread already processing the DNS event.
But cancelling is not the best way to constrain permitted IP addresses to a list. You can instead implement the Dns interface: delegate to Dns.SYSTEM, then filter its results to your allowlist. That way you don't have to worry about races on cancellation.
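A minimal sketch of that Dns approach; the AllowlistDns name and the allowlist field are illustrative:

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

import okhttp3.Dns;

public final class AllowlistDns implements Dns {
    private final Set<InetAddress> allowlist;

    public AllowlistDns(Set<InetAddress> allowlist) {
        this.allowlist = allowlist;
    }

    @Override
    public List<InetAddress> lookup(String hostname) throws UnknownHostException {
        // Delegate to the system resolver, then keep only allowlisted addresses.
        List<InetAddress> filtered = Dns.SYSTEM.lookup(hostname).stream()
                .filter(allowlist::contains)
                .collect(Collectors.toList());
        if (filtered.isEmpty()) {
            // Failing the lookup aborts the call cleanly, with no cancellation race.
            throw new UnknownHostException(hostname + " resolved to no allowlisted addresses");
        }
        return filtered;
    }
}

// Usage:
// OkHttpClient client = new OkHttpClient.Builder()
//         .dns(new AllowlistDns(allowlist))
//         .build();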

Redux Local Storage Workflow

I am using Redux and would like to keep some state in local storage.
I only want to store the token I receive from the server; there are other things in the store that I don't want to persist.
The workflow I found on Google is to read from local storage when building the initial store state, and then use store.subscribe to update local storage at a regular interval.
That is valid if we are storing the entire store, but in my case the token only changes when the user logs out or a new user logs in.
I think store.subscribe is overkill here.
I also read that updating local storage in reducers is not the Redux way.
Currently, I am updating local storage in an action before the reducer runs.
Is this the correct flow, or is there a better way?
The example you found was likely about serializing the entire state tree into localStorage on every state change, allowing users to close the tab without worrying about saving, since the state will always be up to date in localStorage.
However, it's clear that this isn't what you are looking for, as you want to cache specific priority data in localStorage, not the entire state tree.
You are also correct that updating localStorage as part of a reducer is an anti-pattern, as all side effects are supposed to be localized to action creators.
Thus you should be reading from and writing to localStorage in your action creators.
For instance, your action creator for retrieving a token could look something like:
const TOKEN_STORAGE_KEY = 'TOKEN';

export function fetchToken() {
  // Assuming you are using redux-thunk for async actions
  return dispatch => {
    const token = localStorage.getItem(TOKEN_STORAGE_KEY);
    if (token && isValidToken(token)) {
      return dispatch(tokenRetrieved(token));
    }
    return doSignIn().then(token => {
      localStorage.setItem(TOKEN_STORAGE_KEY, token);
      dispatch(tokenRetrieved(token));
    });
  };
}

export function tokenRetrieved(token) {
  return {
    type: 'token.retrieved',
    payload: token
  };
}
And then somewhere early in your application boot, such as in one of your root component's mount lifecycle methods (componentDidMount, or useEffect in modern React), you dispatch the fetchToken action.
fetchToken takes care of both checking localStorage for a cached token and storing a new token there when one is retrieved.
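For completeness, a minimal sketch of the boot-time dispatch, assuming a store wired with redux-thunk (the ./reducers and ./actions module paths are illustrative):

import { createStore, applyMiddleware } from 'redux';
import thunk from 'redux-thunk';

import rootReducer from './reducers';   // hypothetical root reducer
import { fetchToken } from './actions'; // the action creator above

const store = createStore(rootReducer, applyMiddleware(thunk));

// Kick off token retrieval once, before rendering the app.
store.dispatch(fetchToken());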

Request/Acknowledge pattern ASP.NET WebAPI

Does WebAPI have any built-in support to fire off some work and return immediately to the caller with an acknowledgement? I am looking to build a data processing server which has some long-running processes that need to be run. The client never expects the results straight away and can query for them later.
With this being the case, I am looking for a way to fire off some work in such a way that won't block the controller from returning.
There's nothing in WebAPI keeping you from starting some background work and returning immediately. So you could have an action implemented like this:
public HttpResponseMessage Post()
{
    Task.Factory.StartNew(() => DoWork());
    return new HttpResponseMessage(HttpStatusCode.Accepted);
}
This is just a simple example, but you would probably want to track the Task in some kind of dictionary, so you could return the results when the client queries for them later.
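A rough sketch of that tracking idea against classic ASP.NET Web API 2; the JobsController name, DoWork, and the in-memory registry are all illustrative, and in-process tasks will not survive an app-pool recycle:

using System;
using System.Collections.Concurrent;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

public class JobsController : ApiController
{
    // Hypothetical in-memory job registry, keyed by an id the client can poll with.
    private static readonly ConcurrentDictionary<Guid, Task<string>> Jobs =
        new ConcurrentDictionary<Guid, Task<string>>();

    public HttpResponseMessage Post()
    {
        var id = Guid.NewGuid();
        Jobs[id] = Task.Run(() => DoWork());

        // 202 Accepted plus the id for later polling.
        return Request.CreateResponse(HttpStatusCode.Accepted, new { id });
    }

    public HttpResponseMessage Get(Guid id)
    {
        if (!Jobs.TryGetValue(id, out var job))
            return Request.CreateResponse(HttpStatusCode.NotFound);

        if (!job.IsCompleted)
            return Request.CreateResponse(HttpStatusCode.Accepted, new { status = "running" });

        return Request.CreateResponse(HttpStatusCode.OK, new { result = job.Result });
    }

    private static string DoWork()
    {
        // The long-running processing goes here.
        return "done";
    }
}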
