onLoadMoreItemsRequested is not fired after an API query:
I am pulling data asynchronously with an API query for ListView items. After the loading is done, onLoadMoreItemsRequested is never called. By contrast, if I work with the local database, it works fine.
I followed the NativeScript docs, but it looks like I am missing something.
New to Elixir/Phoenix and GraphQL. I have created a simple API that retrieves "drawings" from a PostgreSQL database. The table consists of an "id" (uuid) and a "drawing_json" (text) column, and holds one row with a JSON string of about 77 KB. My schema and queries are defined using Absinthe. I have one query, "all_drawings", whose resolver reaches out to the Repo and pulls in all drawings. When using Postman to call this API, the following query works fine:
{
  allDrawings {
    id
  }
}
However, when I try to return the json field as well, the Postman request times out and I get a "socket hang up" error.
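For reference, the failing query was presumably something like this (drawingJson is my guess at the field name, since Absinthe conventionally camelizes the drawing_json column):
{
  allDrawings {
    id
    drawingJson
  }
}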
Looking at the Debug Console in Visual Studio Code, I can see the query gets the data from the db just fine, and almost immediately. Something seems to happen, though, in returning it to the client, that I can't detect. No errors are thrown. Any ideas? Not sure what information will help, but happy to provide more.
A colleague helped me find the fix. Not sure why this is (maybe someone can elaborate), but the issue was with trying to run it in Visual Studio Code. Previously, I had set up the run task in launch.json and would click the run button to start debugging my application. This caused the aforementioned crash every time. When I went to the terminal and ran the "mix phx.server" command, everything worked fine. Not sure if the hang-up is with Visual Studio Code or the ElixirLS extension when using the IDE to debug, but starting the application through the command line allowed me to use Postman to hit the API and retrieve the data just fine. I'm very new to the Elixir/Phoenix world, so I'm unable to draw a more intelligent conclusion as to why this happens.
I'm building an Expo mobile app using AWS AppSync and Apollo, and I've got an intermittent but very serious issue with the cache getting corrupted, or at least not being updated properly. Unfortunately, because I'm using AppSync and want the offline capability, I can't upgrade to the latest Apollo client. As a result, the data is stored in Redux, under four top-level keys: offline, rehydrated, appsync and appsync-metadata.
This is what I expect to happen:
GraphQL query for a "project" returns the correct data
This data is written into the cache. In particular, I'm expecting that in appsync.ROOT_QUERY there'll be an entry for the project, something like getProject({"input":{"id":"project-7"}}), plus a top-level entry in appsync for the project with all of its properties.
When I execute a mutation, I'm expecting the project entry to be updated.
Since the project is updated, I'm expecting the UI to refresh, reflecting the updated data.
Most of the time, this happens exactly as above. However, sometimes something happens to the cache. I'm not sure exactly what, but it gets into a weird state and I can't fix it.
Here are the symptoms:
When I start the app, the cache is initialised to an "old" state, one that doesn't include the query for project-7 even though I had queried for it just moments before killing the app.
When I do a search for project-7, it then adds the getProject...project-7 query to the cache and an entry for project-7, but for some reason it doesn't seem to have all the fields.
When I do a mutation, there's an AAS_WRITE_CACHE which actually removes the getProject...project-7 entry from the query cache! The mutation succeeds though, I can see that the data in the AppSync server is updated, and the client doesn't log any errors anywhere.
The UI does NOT update.
I tried adding an update function to the mutation so that I could update the cache myself, but when I execute const data = proxy.readQuery({ query: ProjectQuery } ... ) (specifying project-7), it throws an exception saying it can't find that query, so I can't update the project. If I manually re-fetch the project, then everything works again until the next mutation.
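Roughly, the update function I tried looks like the sketch below; the query and mutation documents and the variables shape are illustrative reconstructions, not the exact code:
import gql from "graphql-tag";

// Illustrative documents, not the exact ones from the app.
const ProjectQuery = gql`
  query GetProject($input: GetProjectInput!) {
    getProject(input: $input) {
      id
      name
    }
  }
`;

const UpdateProjectMutation = gql`
  mutation UpdateProject($input: UpdateProjectInput!) {
    updateProject(input: $input) {
      id
      name
    }
  }
`;

// `client` is the AWSAppSyncClient instance created elsewhere in the app.
client.mutate({
  mutation: UpdateProjectMutation,
  variables: { input: { id: "project-7", name: "Renamed" } },
  update: (proxy, { data }) => {
    const vars = { input: { id: "project-7" } };
    // readQuery throws if this exact query + variables pair is not in the
    // cache, which is exactly what happens once the cache is in the bad state.
    const cached: any = proxy.readQuery({ query: ProjectQuery, variables: vars });
    cached.getProject = { ...cached.getProject, ...data.updateProject };
    proxy.writeQuery({ query: ProjectQuery, variables: vars, data: cached });
  },
});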
What's really difficult is that once my app is in this state, I can't work out how to fix it. I've tried client.resetStore(), but it just gets rehydrated. I've tried calling AsyncStorage.clear() and then stopping and restarting the app, but that doesn't work either. How can this be? Where is it storing the data?
It's worth saying again that on most of the devices I've tested (both Android and iOS), it works for days without any problems, but on one Android device in particular, it happens once every day or two. Twice I've been able to fix it using "Clear Async storage" in the React Native Debugger, but now even that doesn't seem to fix it.
So, here are my questions:
Can anyone suggest what might be causing the cache to get into this weird state, or how I can track down the problem?
Where is it storing the data that it then puts back? There are snapshots of the cache in appsync-metadata, but surely those should also be removed when I clear all of AsyncStorage?
I'm really stuck!!
PS Here are the relevant (I think) packages I'm using:
"apollo-client": "^2.5.1",
"aws-amplify": "^1.1.27",
"aws-amplify-react-native": "^2.1.11",
"aws-appsync": "^1.7.2",
"aws-appsync-react": "^1.2.7",
"expo": "^33.0.0",
"graphql": "^14.3.0",
"graphql-tag": "^2.10.1",
"react": "16.8.3",
"react-apollo": "^2.5.8",
I'm using the SonarQube API in a Java tool to process issues and add comments to them / change the issue status (e.g. "won't fix").
The api/issues/search function has a maximum page size of 500. I have more than 500 issues and need to read them all. I thought of performing multiple queries, but the issue keys are not numerical, so I cannot just increment and query for the next 500.
Is there any way I can handle more than 500 issues from the API? I thought a workaround would be to get the list of issue keys from the API and query in batches, but this doesn't seem possible.
Short answer: no, it's not possible to get more than 500 issues in a single web service call.
Long answer: you should use a hook (either via a plugin or a webhook) that is triggered on each project analysis. You'll then be able to browse all of the project's issues using pagination: api/issues/search?componentKeys=PROJECT_KEY&ps=500&p=1, then api/issues/search?componentKeys=PROJECT_KEY&ps=500&p=2, etc.
The total number of issues can be found in the response, under "paging" -> "total".
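A minimal sketch of that pagination loop (TypeScript for brevity; the asker's tool is Java, and the host below is a placeholder):
// Page through api/issues/search 500 issues at a time until paging.total is reached.
const BASE_URL = "https://sonarqube.example.com/api/issues/search"; // placeholder host

async function fetchAllIssues(projectKey: string): Promise<any[]> {
  const issues: any[] = [];
  let page = 1;
  let total = Number.MAX_SAFE_INTEGER;

  while (issues.length < total) {
    const res = await fetch(`${BASE_URL}?componentKeys=${projectKey}&ps=500&p=${page}`);
    const body = await res.json();
    total = body.paging.total;           // "paging" -> "total" from the response
    if (body.issues.length === 0) break; // safety stop if the server returns no more
    issues.push(...body.issues);
    page += 1;
  }
  return issues;
}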
We have an integration with Google Picker (read-only scope, Docs view). It used to work fine, but recently some users get a blank screen as soon as the popup shows. Once they select some filter, everything starts working fine; no problems after that.
Using developer tools, I see all APIs returning 200 for that first request,
but there were no docs in the response (I believe 'https://docs.google.com/picker/pvr' is the API responsible for bringing docs into the Picker).
When no docs are returned by the API above, Google calls another API, which I assume is probably there to log errors (//docs.google.com/picker/ohnoes).
This API call has the following error params in it:
&error=Cached and requested query mismatch
&line=Not available
&viewToken=["all",null,{"query":null}]
&ms=97
&transferDocs=false
&numErrors=1
Has anybody else faced a similar problem?
What does the error "Cached and requested query mismatch" mean in the context of Drive docs?
FYI: most of the accounts facing this problem seem to be on a company domain, e.g. "jondoe@company.org" (a Google account with a company domain).
[Screenshot of the filters]
Thanks for your help.
Not sure, but it looks like the issue may be related to a Google bug:
https://issuetracker.google.com/issues/64825685
For me, the code that was not working was:
addView(google.picker.ViewId.DOCS)
I replaced it with the code below, which works as expected:
var view = new google.picker.DocsView();
view.setIncludeFolders(true).setOwnedByMe(true).setParent('root');
addView(view)
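For context, here is roughly how that view plugs into the builder (the token and callback are placeholders, not part of the original fix):
// oauthToken and onPicked are placeholders for whatever the app already uses.
const view = new google.picker.DocsView()
    .setIncludeFolders(true)
    .setOwnedByMe(true)
    .setParent('root');

const picker = new google.picker.PickerBuilder()
    .addView(view)
    .setOAuthToken(oauthToken)
    .setCallback(onPicked)
    .build();

picker.setVisible(true);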
I'm using a WebView in my app that loads a remote web page, which in turn uses socket.io (node.js) via xhr-polling.
The problem is that I can't disable caching of the data received through socket.io.
For example, every 10 seconds my node server does io.emit, and my WebView receives it and saves it in:
/data/data/...../webviewCache
I do not want my WebView to save anything, because over time the number of those files just keeps rising, and they aren't helping my app run faster...
I've tried:
browser.getSettings().setCacheMode(WebSettings.LOAD_NO_CACHE); // LOAD_NO_CACHE == 2
browser.getSettings().setAppCacheEnabled(false);
but neither of those works. My WebView is still saving files to the cache folder.
For the moment, I've set up a timer that empties the cache folder every 60 seconds, but that's not a solution I would like to release in production...
Am I missing something here, or is there a bug with disabling the cache on Android?
UPDATE 1: After a whole day of debugging, I've found something interesting.
Logcat shows two interesting things: saveCacheFile and getCacheFile
Then I decided once again to try turning off the cache...
browser.getSettings().setCacheMode(android.webkit.WebSettings.LOAD_NO_CACHE);
That actually stopped the WebView from loading files from the cache, but it was still saving them. Logcat says something like this:
saveCacheFile for url .../socket.io/1/xhr-polling/BLNN28E7S4PZJsy2pWaF?t=13537
So I believe the actual question is: how do I prevent the WebView from SAVING cache files on every request?
How about adding a random string in the query part of your URL? This trick works in some cases.
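For example (a sketch; the function name and URL are illustrative):
// Append a throwaway query parameter so each request looks unique to the cache.
function bustCache(url: string): string {
  return url + (url.includes("?") ? "&" : "?") + "_=" + Date.now();
}
// bustCache("https://example.com/data") -> "https://example.com/data?_=1690000000000"
Note that socket.io's polling transport already appends a t= timestamp (visible in the log line above), so this may not help in this particular case.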
The only solution I found was to send "Cache-Control: no-store" in the HTTP response header.
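A minimal sketch of that on the node side (plain Node http here; how best to attach the header to socket.io's own polling responses depends on the socket.io version):
import * as http from "http";

const server = http.createServer((req, res) => {
  // "no-store" tells the WebView not to write the response into its cache folder.
  res.setHeader("Cache-Control", "no-store");
  res.end("ok");
});

server.listen(8080);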