I'm using Play Framework 2.3.8 (was on 2.2.4 with the same issue, upgrading didn't help).
I'm not using Play Framework's Cache API, but GET results still seem to be cached by Play somehow.
For example, if I hit GET /api/GetPurchases, I'll get 2 results. Play logs that the GetPurchases API was hit (I also override OnRouteRequest in Global.java and log it there). Then I'll hit POST /api/CreatePurchase and confirm that there are now 3 purchase objects in the database.
If I then call GET /api/GetPurchases again, I still get 2 results, and the logs show no sign that /api/GetPurchases was ever hit.
This only happens when my app is deployed to Heroku; locally everything works perfectly. But I've spoken with Heroku support, and they confirmed that Heroku is platform-only and would never cause Play to act differently.
We eventually found that Play Framework was caching GET results.
We found that by default, Play does not cache results in development mode (which is how the app runs locally), but it does cache them by default in production mode (which is how it runs on Heroku).
To change this, we added the following line in each action method whose results we didn't want Play to cache:
response().setHeader(CACHE_CONTROL, "no-cache");
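For context, here's roughly what that looks like in a Play 2.3 Java action (the controller and method names below are made up for illustration, not from our actual app):

    import play.mvc.Controller;
    import play.mvc.Result;

    public class PurchaseController extends Controller {

        public static Result getPurchases() {
            // Controller implements Http.HeaderNames, so CACHE_CONTROL resolves here.
            // Ask Play (and anything caching in front of it) not to reuse a stored copy.
            response().setHeader(CACHE_CONTROL, "no-cache");
            return ok(); // replace with the real purchases payload
        }
    }

Setting the header per action means endpoints that are safe to cache stay cacheable, and only the ones you opt out are always fetched fresh.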
It can also be helpful to mimic a production deployment locally by running foreman start.
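For that, you need a Procfile like the one Heroku runs; for a staged Play 2.3 app it looks something like this (the binary name is just a placeholder, use whatever activator stage generates for your project):

    web: target/universal/stage/bin/your-app -Dhttp.port=$PORT

Run activator stage first, then foreman start will boot the app in production mode the same way Heroku does.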
Thanks to Salem and millhouse in the comments above.
I'm using AWS AppSync, Apollo, and React Native. One of the great advantages of using these together is that I get good offline behaviour: in my app, I can make changes while offline and they all get queued up and executed when I get back online.
However, I'd like to be able to show the user if there are mutations which haven't been sent to the server yet. Just some little icon or something which goes away when everything is up-to-date.
Can anyone point me in the right direction? I've looked at the offline configuration for AWSAppSyncClient, and can see there's a callback I can hook into, but it doesn't give me enough information as far as I can tell.
Thanks!
Have you looked into using the Amplify library (https://aws-amplify.github.io/docs/android/start)?
When you make a mutation while the device is offline, it gets added to a local queue (persisted in SQLite). Mutations are read from this queue and sent to the server serially once the device is back online.
So while offline, your app code can query the local datastore to determine which mutations are still pending.
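I'm not aware of a documented API for inspecting that queue directly, so one simple approach is to track it at the app level: count the mutations you've handed to the client whose completion callbacks haven't fired yet. A minimal sketch, assuming you funnel all mutations through one place (the class and method names are hypothetical, not part of the SDK):

    import java.util.concurrent.atomic.AtomicInteger;

    // Hypothetical app-level tracker: wrap every mutation you hand to the client
    // so you know how many are enqueued but not yet confirmed by the server.
    public class PendingMutationTracker {
        private final AtomicInteger pending = new AtomicInteger(0);

        // Call just before handing a mutation to the AppSync client.
        public void mutationEnqueued() {
            pending.incrementAndGet();
        }

        // Call from the mutation callback's onResponse/onFailure.
        public void mutationSettled() {
            pending.decrementAndGet();
        }

        // Drive the "unsynced changes" icon off this.
        public boolean hasPendingMutations() {
            return pending.get() > 0;
        }
    }

While offline, the count stays above zero, since the callbacks should only fire once the queued mutations actually reach the server. One caveat: the in-memory count resets if the app restarts while mutations are still queued (the SQLite queue survives), so this is an approximation rather than a ground-truth view of the queue.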
Read more here:
https://aws-amplify.github.io/docs/android/api#client-architecture
https://aws-amplify.github.io/docs/android/api#offline-mutations
I have some Web API applications that use OWIN for authentication. Currently they are hooked up to Google and Facebook. I have them installed in multiple environments (local, dev, test, etc.). Recently, all of my applications in my development environment started failing: when trying to authenticate, I get back an "access_denied" response. The URL looks like this:
https://{mydevserver}/{mywebapiapp}/#error=access_denied
The same code base works locally as well as in my test environment.
I tried using the same project (just adding the redirect URIs and origins) as well as creating a new project.
I also updated my test environment to use the dev project's ID and secret.
Nothing seems to have changed on the server recently, but the problem appears to be environment-specific, because multiple applications are affected, as well as multiple providers.
Are there any logging techniques I can use to drill down to a more detailed error message? Any tips or hints for what to try next?
The fix was a bit of an odd one: I had to log into the server, open a browser, and load a web page (any page). After doing so, authentication started working again.
I have basically two URLs: http://xyzwebsite.com (for development testing) and http://abcwebsite.com (for production). I have a simple login mechanism where a user can click a Google Plus icon to log in rather than using a username and password. I created one project for development and another for production, each with its own client ID.
But I tested both of the URLs above with the development project's client ID and it worked fine. So I'm wondering: why is there any need for multiple projects in the Google API Console?
There is no particular need; a single project can have several URLs and client IDs in use.
Some reasons you might use multiple projects include:
Changing project settings in dev without worrying about breaking production
A development script that gets into an endless loop (or similar) might use up all of the quota, and the production app might start throwing errors
You might want clear branding on the dev app that explicitly identifies it as not being production.
Some unknown reason I can't think of.
I just started using Heroku for one of my node apps.
When I run the heroku logs command, the output is so cluttered that I can't pick out the data I want from all the other information I don't need.
Is there a way to clean up that log output so it's more human-friendly?
It's like it just dumps a wall of text at me.
Thanks!
I am using the Papertrail add-on on Heroku for viewing logs.
It has a free plan which is enough for a small application. It gives you the flexibility to search your logs by text and by time. Papertrail provides a browser URL to view the logs, which is also convenient to access from a mobile device. Adding this add-on to your application is quite simple; no app changes are required. The filters below are available out of the box on its dashboard:
All events
Deploys
Dyno state changes
Platform errors
Web app output
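For completeness, installing the add-on is one command (heroku addons:create papertrail). And if you want something quick without any add-on, the stock heroku CLI's filter flags cut the noise down a bit too (run heroku logs --help for the full list):

    heroku logs --tail --source app   # only your app's own output, streamed live
    heroku logs --tail --dyno web     # only output from the web dynos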
I generated a self-signed cert, exported it, and uploaded it to the subscription. When I deploy to staging, all works well: when I navigate to the app (MVC), it redirects to HTTPS appropriately, and although I get the certificate warning, everything works perfectly when I continue to the site. When I deploy the same package to production, none of my roles responds to web requests or TCP, and I can't even RDP into the VMs.
Any ideas?
EDIT: I'm going to say this is closed. I gave it an hour (watched TV to clear my head) and it started functioning correctly. Perhaps it just took an extra-long time to spin up the VMs.
Sometimes it takes longer than expected for the role to start. If you are interested in knowing why, I have explained it in the following SO question:
Is there a way to reduce time between Azure deployment start and role OnStart() code being invoked?
If you want to know more, let me know and I would love to explain in much finer detail.