I'm trying to render all the transactions for a given item after completing the Plaid Link flow. I call my transaction sync endpoint in my onSuccess method, after the token exchange. However, the first call to the sync endpoint never returns anything to render, and when I add another item through Link, the initial item's transactions render but the current one's don't. I'm wondering why I need to call the sync endpoint twice before the data actually lands in my db. The first call I make returns this response:
{'added': [],
'has_more': False,
'modified': [],
'next_cursor': '',
'removed': [],
'request_id': 'qmJn54LkWHqo4kh'}
I'm assuming that the item isn't ready to sync transactions yet because the cursor is at its initial state, which is why the next time I link an item it syncs. If that is the case, where should I call the sync endpoint? I'm currently calling it in the onSuccess method (I also tried calling the endpoint on the "HANDOFF" event, with the same result). How would I know when the item's transactions are ready to sync?
This is my onSuccess method; I'm calling the sync endpoint after the token exchange:
const onSuccess = useCallback<PlaidLinkOnSuccess>((publicToken, metadata) => {
  fetch('/item/public_token/exchange', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ public_token: publicToken, id: uid, metadata: metadata })
  })
    .then((r) => {
      fetch('/transactions/sync', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ uid: uid })
      })
    })
}, []);
This is the method the sync endpoint hits:
@app.route('/transactions/sync', methods=['POST'])
def transactions_sync():
    id = request.json["uid"]
    items = collection.find_one({"_id": ObjectId(id)}, {"items": 1})
    print(items)
    for item in items["items"]:
        item_id = item["item_id"]
        access_token = item["access_token"]
        cursor = item['cursor']
        added = []
        modified = []
        removed = []
        has_more = True
        while has_more:
            transaction_request = TransactionsSyncRequest(
                access_token=access_token,
                cursor=cursor
            )
            print(transaction_request)
            response = client.transactions_sync(transaction_request)
            print(response)
            added.extend(response["added"])
            modified.extend(response["modified"])
            removed.extend(response["removed"])
            has_more = response["has_more"]
            cursor = response["next_cursor"]
        for add in added:
            res = collection.update_one({"_id": ObjectId(id)}, {"$push": {
                "transactions": {
                    "account_id": add["account_id"],
                    "transaction_id": add["transaction_id"],
                    "amount": add["amount"],
                    "name": add["name"],
                    # "date": add["date"],
                    "category": add["category"]
                }
            }})
        print(cursor)
        collection.update_one({
            "_id": ObjectId(id),
            "items.item_id": item_id
        }, {"$set": {
            "items.$.cursor": cursor
        }})
    return "Success", 200
I tried syncing the item's transactions in the onSuccess event for Plaid Link, but nothing synced into my database until I added another item through Link (essentially it took two calls to the endpoint to sync). I expected the newly added item's transactions to sync to my database with just one call to the sync endpoint, so I can render them on the frontend.
This sounds like a timing issue. From the /transactions/sync API reference:
"Note that for newly created Items, data may not be immediately available to /transactions/sync. Plaid begins preparing transactions data when the Item is created, but the process can take anywhere from a few seconds to several minutes to complete, depending on the number of transactions available."
If you call /transactions/sync immediately after the token exchange, it won't return any transactions, because they aren't available yet. By the time the new Item has been added, enough time has passed that the transactions have been loaded on the original Item.
After calling /transactions/sync for the first time on an Item, the Item is registered for the SYNC_UPDATES_AVAILABLE webhook, which you can then use to tell when to call /transactions/sync again. For more info, see the documentation for that webhook.
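The backend in the question is Flask, but the webhook dispatch logic looks the same in any language. A minimal JavaScript sketch, where handlePlaidWebhook and syncTransactions are illustrative names (only the webhook_type, webhook_code, and item_id fields come from Plaid's webhook payload):

```javascript
// Dispatch an incoming Plaid webhook payload: trigger a transactions
// sync only once SYNC_UPDATES_AVAILABLE says the data is ready.
// `syncTransactions` is a stand-in for the /transactions/sync logic.
function handlePlaidWebhook(payload, syncTransactions) {
  if (
    payload.webhook_type === 'TRANSACTIONS' &&
    payload.webhook_code === 'SYNC_UPDATES_AVAILABLE'
  ) {
    return syncTransactions(payload.item_id);
  }
  return null; // ignore other webhook codes in this sketch
}
```

With something like this in place, the frontend no longer has to guess when the data is ready: the server syncs when Plaid says so, and the frontend can simply re-fetch.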
Related
While deleting comments, I can delete two comments back to back, but when I try to delete the next (third) comment, the console shows the error "Rate limited due to excessive requests." After a few seconds, deleting works fine again for the next two comments. I have tried using a "wait" function for a few seconds, but the results are inconsistent: sometimes it works and sometimes it doesn't.
My code as follows,
function deleteComment(MessagePostId) {
var result = confirm("Are you sure you want to delete this Comment?");
if (result) {
yam.platform.request({
url: "https://api.yammer.com/api/v1/messages/" + MessagePostId,
method: "DELETE",
async: false,
beforeSend: function (xhr) { xhr.setRequestHeader('Authorization', token) },
success: function (res) {
alert("The Comment has been deleted.");
//Code to remove item from array and display rest of the comment to screen
},
error: function (res) {
alert("Please try again after some time.");
}
})
}
}
You are hitting rate limits, which prevent a regular user from making many deletion requests in quick succession. The API is designed for client applications where you perhaps make an occasional deletion, but aren't deleting in bulk.
To handle rate limits, you need to update your code to check the response value in the res variable. If it's an HTTP 429 response, you are being rate limited and need to wait before retrying the original request.
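One common way to implement that wait-and-retry is exponential backoff. A minimal sketch, assuming `request` is a function returning a promise that resolves to an object with a numeric `status` field; the helper name, retry count, and delays are illustrative, not part of the Yammer SDK:

```javascript
// Retry a request while the server answers HTTP 429 (rate limited),
// doubling the delay between attempts; after `retries` retries the
// last response is surfaced to the caller as-is.
async function requestWithRetry(request, retries = 3, delayMs = 1000) {
  let res = await request();
  for (let attempt = 0; attempt < retries && res.status === 429; attempt++) {
    await new Promise((resolve) => setTimeout(resolve, delayMs * 2 ** attempt));
    res = await request();
  }
  return res;
}
```

In the deleteComment code above, the yam.platform.request call would be wrapped in a helper like this (adapted to its callback style) instead of a fixed wait.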
I am having some trouble with my GraphQL optimistic rendering. I'm using Apollo Client, like so:
const history = useHistory();
const [addItem] = useMutation(ADD_ITEM_QUERY);

const onClick = () => {
  addItem({
    variables: {
      id: '16b1119a-9d96-4bc8-96a3-12df349a0c4d',
      name: 'Foo Bar'
    },
    optimisticResponse: {
      addItem: {
        id: '16b1119a-9d96-4bc8-96a3-12df349a0c4d',
        name: 'Foo Bar',
        __typename: 'Item'
      }
    },
    update: () => {
      // Update item cached queries
      history.push('/items');
    }
  });
};
The issue comes from the redirect. As far as I understand it, the update function is called twice: first with the optimisticResponse, and a second time with the network response (which should normally be the same).
However, let's consider the following scenario:
I create a new item,
I receive the optimistic response, and I'm redirected to the list of items,
I click on "Add an item" to be redirected to the form
I receive the server response, hence I'm redirected again to the list.
What is the correct way to prevent this double redirection?
I thought about checking the current cache value: if the value is already the latest one, I won't apply the redirect. Not sure if there is a better way? How would you handle such a scenario?
You should call history.push outside of the update function. You can do:
addItem({...}).then(() => history.push('/items'))
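A minimal sketch of that idea, with addItem and history stubbed so only the control flow matters (saveThenRedirect is an illustrative name, not an Apollo API):

```javascript
// Redirect after the mutation's promise resolves, i.e. after the real
// server response. update() still runs twice (optimistic + network),
// but the navigation happens exactly once.
function saveThenRedirect(addItem, history, variables) {
  return addItem({ variables }).then(() => {
    history.push('/items');
  });
}
```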
Background
I use 3 back-end servers to provide fault tolerance for one of my online SaaS applications. All important API calls, such as getting user data, contact all 3 servers and use the value of the first successfully resolved response, if any.
export function getSuccessValueOrThrow$<T>(
observables$: Observable<T>[],
tryUntilMillies = 30000,
): Observable<T> {
return race(
...observables$.map(observable$ => {
return observable$.pipe(
timeout(tryUntilMillies),
catchError(err => {
return of(err).pipe(delay(5000), mergeMap(_err => throwError(_err)));
}),
);
})
);
}
getSuccessValueOrThrow$ gets called as follows:
const shuffledApiDomainList = ['server1-domain', 'server2-domain', 'server3-domain'];
const sessionInfo = await RequestUtils.getSuccessValueOrThrow$(
  shuffledApiDomainList.map(shuffledDomain =>
    this.http.get<SessionDetails>(`${shuffledDomain}/file/converter/comm/session/info`)
  ),
).toPromise();
Note: if one request resolves faster than the others (usually the case), the rxjs race function will cancel the other two requests. On the Chrome dev tools network tab it will look like below, where the first request sent out was cancelled due to being too slow.
Question:
I use /file/converter/comm/session/info (let's call it Endpoint 1) to get some data related to a user. This request is dispatched to all 3 back-end servers. If one resolves, the remaining 2 requests will be cancelled, i.e. they will return null.
On my Cypress E2E test I have
cy.route('GET', '/file/converter/comm/session/info').as('getSessionInfo');
cy.visit('https://www.ps2pdf.com/compress-mp4');
cy.wait('@getSessionInfo').its('status').should('eq', 200)
This sometimes fails, since the getSessionInfo alias may hook onto a request that ultimately gets cancelled by getSuccessValueOrThrow$ because it wasn't the request that succeeded. The image below shows how 1 out of 3 requests aliased with getSessionInfo succeeded, but the test still failed because the first request failed.
In Cypress, how do I wait for a successful i.e. status = 200 request?
Approach 1
Use .should() callback and repeat the cy.wait call if status was not 200:
function waitFor200(routeAlias, retries = 2) {
  cy.wait(routeAlias).then(xhr => {
    if (xhr.status === 200) return; // OK
    else if (retries > 0) waitFor200(routeAlias, retries - 1); // wait for the next response
    else throw new Error("All requests returned non-200 response");
  });
}
// Usage example.
// Note that no assertions are chained here,
// the check has been performed inside this function already.
waitFor200('@getSessionInfo');
// Proceed with your test
cy.get('button').click(); // ...
Approach 2
Revise what it is that you want to test in the first place.
Chances are - there is something on the page that tells the user about a successful operation. E.g. show/hide a spinner or a progress bar, or just that the page content is updated to show new data fetched from the backend.
So in this approach you would remove cy.wait() altogether, and focus on what the user sees on the page - do some assertions on the actual page content.
cy.wait() yields an object containing the HTTP request and response properties of the XHR. The error you're getting is because you're looking for property status in the XHR object, but it is a property of the Response Object. You first have to get to the Response Object:
cy.wait('@getSessionInfo').should(xhr => {
expect(xhr.response).to.have.property('status', 200);
});
Edit: Since our backend uses GraphQL, all calls go through the single /graphql endpoint, so I had to come up with a way to differentiate each call.
I did that by using the onResponse() method of cy.route() and accumulating the data in Cypress environment object:
cy.route({
  method: 'GET',
  url: '/file/converter/comm/session/info',
  onResponse(xhr) {
    if (xhr.status === 200) {
      Cypress.env('sessionInfo200', xhr);
    }
  }
})
You can then use it like this:
cy.wrap(Cypress.env()).should('have.property', 'sessionInfo200');
I wait like this:
const isOk = cy.wait("@getSessionInfo").then((xhr) => {
  return (xhr.status === 200);
});
Today I've received bug reports that some of our application's POST requests are being duplicated.
These requests result in the creation of objects on the remote database, such as Tasks, Meetings, etc., so duplicating them results in Tasks being created with the same name, same due date, same user, and so on.
I've tried to replicate this, but it seems to be random behavior, and the only consistent symptom reported is that the request takes longer than normal to complete.
I'm currently using the stack React + Redux + Axios + ASP.NET Web API, and I have come to the following considerations in order to understand and solve the issue.
Any tips on any of these topics are really appreciated.
Root cause identification
React + Redux:
The action creator that dispatches the request is called just once, on an onClick event. Apparently there's no state change or page refresh that could cause multiple calls of this function.
handleClick(event) {
var postData = {
title: this.state.text,
dueDate: this.state.date,
assignedUser: this.state.user.id
};
this.props.contentPost(postData);
}
Axios:
One of my suspicions is that, for some unknown reason, the users' requests fail or take too long to complete, and then axios supposedly resends the request. I've checked its documentation and issue tracker and found nothing like automatic retries or request duplication.
export const contentPost = (props) => {
return function (dispatch) {
dispatch({ type: CONTENT_POST_ERROR, payload: null })
dispatch({ type: CONTENT_POST_LOADING, payload: true });
dispatch({ type: CONTENT_POST_SUCCESS, payload: null })
const request = axios({
method: 'post',
url: `/api/content`,
headers: auth.getHttpHeader(),
validateStatus: function (status) {
return status >= 200 && status < 300; // default
},
data: props
}).then((res) => {
dispatch({ type: CONTENT_POST_LOADING, payload: false });
dispatch({ type: CONTENT_POST_SUCCESS, payload: true, data: res })
}).catch(err => {
dispatch({ type: CONTENT_POST_ERROR, payload: err.code })
dispatch({ type: CONTENT_POST_LOADING, payload: false });
dispatch({ type: CONTENT_POST_SUCCESS, payload: false, data: null })
});
}
};
WebApi
The Controller's method doesn't have any sort of throttling or "uniqueness token" to identify and prevent duplicated requests. Since this is random behavior, I would not bet that the routing or any other part of my server-side application bears any responsibility for this.
Solutions so far...
Axios:
Throttle the axios request in my action creator, to prevent it from being called too often in a given time interval.
Send a client-generated "uniqueness token" in the request body
WebApi
Implement request filtering/throttling, in order to prevent duplicated requests based on body contents within a given time interval.
Receive and handle the "uniqueness token" to prevent post duplicity.
Even with everything described above, these solutions look like overkill, while the root cause still seems a little too subjective.
I've found out that late loading-state feedback for users was causing this behavior.
Some people still double-click things on websites. It was naive of me not to have prevented this in code.
There are a lot of ways to provide a better user experience in this scenario. I chose to completely change the button while the user's POST request is in flight, and we are also implementing throttling on our backend.
{this.props.content_post_loading &&
<a className="btn btn-primary disabled">Creating...</a>
||
<a className="btn btn-primary" onClick={this.handleCreateClick}>Create</a>
}
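Besides the visual change, the click handler itself can be guarded so duplicate clicks are dropped while a request is still pending. A minimal sketch (guardInFlight is an illustrative helper, not part of React or axios):

```javascript
// Wrap an async action so calls made while a previous call is still
// pending are ignored instead of firing a second request.
function guardInFlight(action) {
  let inFlight = false;
  return async (...args) => {
    if (inFlight) return null; // drop the duplicate click
    inFlight = true;
    try {
      return await action(...args);
    } finally {
      inFlight = false;
    }
  };
}
```

handleClick would then invoke a guarded version of this.props.contentPost rather than the raw action creator.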
Our application handles and manages record changes on the client side. We use the ExtJS 5 Data Session mechanism.
A session tracks records that need to be updated, created or destroyed
on the server. It can also order these operations to ensure that newly
created records properly reference other newly created records by
their new, server-assigned id.
Let me introduce short use case.
A user opens and fills in a form. Behind the scenes, fields are bound to an entity object which is tracked by the session. When the user clicks Submit, the session is synchronized, i.e. Ext sends requests to the server and parses the response. Here I've encountered a problem.
The server returns the following object, but Ext does not recognize it:
[{"success": false, errorMessage: "error"}]
Ext prints warning:
[W] Ignoring server record: {"success":false}
or
[W] Ignoring server record: {"success":true}
My question is: what should the server response look like in order to indicate that a record was not accepted/saved by the backend?
The source code where above warning is printed: http://docs-origin.sencha.com/extjs/5.0/apidocs/source/Operation.html (in function doProcess)
Below is a snippet showing how I start a batch operation (sync the session):
var session = this.getViewModel().getSession(),
saveBatch = session.getSaveBatch();
saveBatch.on('complete', function (batch, operation, eOpts) {
// whole batch processing has been completed
/*...*/
});
saveBatch.on('exception', function (batch, operation, eOpts) {
// an exception has occurred (possible for each operation, e.g. an HTTP 500)
/*...*/
});
saveBatch.on('operationcomplete', function (batch, operation, eOpts) {
// single operation has been completed
// now, every operation is marked as successful
/*...*/
});
saveBatch.start();
update 26.09.2014
A Sencha developer suggested including the id of the object in the response, so I've modified the server response to:
[{"id": 10, "success": false}]
but this does not solve the problem.
I spent some time debugging the Ext.data.operation.Operation.doProcess method and analyzed sample code from Sencha support. Ultimately I found the solution.
Here is my proxy config:
proxy: {
type: 'rest',
// ...
reader: {
type: 'json',
rootProperty: 'data',
totalProperty: 'totalCount',
successProperty: 'success',
messageProperty: 'errorMessage'
}
}
Server response when an error occurred:
{
"success": false,
"errorMessage": "<error message>"
}
Server response when data was successfully saved:
The minimal form, for deleting or updating a record without changing data:
{
    "success": true
}
The extended form, for creating or updating a record when the record data changes:
{
"success": true,
"data": [{
clientId: 5, // clientIdProperty
id: 5,
// [optional] fields to change, e.g.:
name: 'new name'
}]
}
I modified a demo which I have from sencha support:
https://fiddle.sencha.com/#fiddle/bem
In the proxy config, use data2.json for the error response and data3.json for the success response.