I have a very simple plaintext web app on a free, unverified account (which is how I would prefer to leave it). It only has 1 web dyno and works perfectly after I update the git repository, but when I refresh the app's page (without pushing to git), the content of the website does not update.
How do I make it rerun the command on each page refresh (or even at all)? I understand that ideally I would use 2 web dynos so it would never go to sleep, but the delay on waking isn't detrimental in my case. I just don't get how heroku ps can show a running web dyno that isn't doing anything new.
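To illustrate the difference I mean (my stack isn't necessarily Node; this is just a hypothetical sketch): content computed once at dyno boot stays the same until the next deploy or restart, whereas content computed inside the request handler is regenerated on every refresh.

```js
// Hypothetical Node/Express sketch -- the actual stack isn't stated above.
const express = require('express');
const { execSync } = require('child_process');

const app = express();

// Computed once, at dyno boot: stays stale until the next deploy/restart.
const bootOutput = execSync('date').toString();
app.get('/stale', (req, res) => res.type('text/plain').send(bootOutput));

// Computed on every request: refreshing the page reruns the command.
app.get('/', (req, res) => {
  res.type('text/plain').send(execSync('date').toString());
});

app.listen(process.env.PORT || 3000);
```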
Related
I have created a new grid editor and deployed it to my production server. On my development machine, a change to grid.editors.config.js is reflected immediately.
However, on my production server, a change to grid.editors.config.js has no effect.
After some research, I have found that the issue is probably the client dependency cache. I have tried the following:
Removing the files from App_Data/TEMP/ClientDependency
Incrementing the version number in Config/ClientDependency.config
Recycling the application pool
Clearing the browser cache
Restarting the server
What am I missing? When I add a query string, e.g. https://mywebsite/config/grid.editors.config.js?v=1, then the changes are shown, which means the file has definitely been updated on the server.
What do I need to do to update the file?
Are you using any expiration headers for caching js on your website?
You could try to delete the following files:
App_Data/TEMP/DistCache
App_Data/TEMP/PluginCache
I find that it's usually a simple case of the browser caching your assets locally. You can usually force a refresh by pressing CTRL + F5, or by holding CTRL and clicking refresh in your web browser, and the changes are then visible.
As it turns out, the issue was caused by a third party that provides DDoS protection to the site - content was being cached by that third party, so changes to files were not being reflected.
I am working on a trip planning application that used to implement offline support with a combination of appcache and localStorage. As soon as Service Worker became a viable option we started using it. The transition went without a hitch in Chrome (Chromium, Opera, etc.), but Firefox (both 44 and 45) is posing some issues: Firefox registers the new service worker but still loads pages in its scope from the outdated appcache.
In other words, if you are lucky enough to never have stumbled upon our website, and if you open it for the first time in FF 44/45, you are going to get a new shiny service worker that takes care of all your offline needs. Life is great.
However, if you had the misfortune to use Firefox before Service Worker was enabled, you'd still have our website's older version in your appcache.
you go to the welcome page - the service worker gets activated and it (supposedly) takes care of handling everything for the entire scope
you log in, which redirects you to one of the pages in SW scope (/ui) - it should still be handled by the service worker, but instead Firefox suddenly realizes that it has that old appcache and, without even trying to load anything from the network, it loads the old content from the appcache
I would be OK with that (although my reading of the Mozilla docs tells me that appcache should be ignored in a scope controlled by a service worker) if it happened only once. Sadly, Firefox does not even try to GET the manifest to check whether that old appcache is up to date. If it did attempt to GET the manifest, it would have received a 404, which would have invalidated the appcache (as it did in Chrome). I do not see anything like that on the wire (or on the server side).
To add insult to injury, the Firefox console proudly announces: The Application Cache API (AppCache) is deprecated and will be removed at a future date. Please consider using ServiceWorker for offline support. :-)
A simple refresh (F5) does load the current version of the page through the service worker. Sadly, it only works once. After closing and reopening the tab the whole dance replays itself: the service worker takes care of all the pages in the scope, with the exception of the ones that used to have an appcache manifest declaration.
Clearing the appcache (appcache clear in the developer console, or through the Settings UI) does remedy the situation, but I cannot possibly suggest it to all our Firefox users.
I tried to find something in Firefox's Bugzilla without much luck. If someone can find a relevant issue, that would be great.
For now we just had to disable SW support for Firefox.
Is there any way of signaling to Firefox that it should ignore the old appcache when in Service Worker scope?
Why can't you just modify your appCache .manifest file? Provide an empty file, or garbage link, or some dummy page, just to get the new .manifest accepted, and the old one should be history.
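If it helps, here's a rough client-side sketch (using the standard window.applicationCache API; nothing here is specific to your app) of forcing a manifest re-check and reacting to either a replaced or a removed manifest:

```js
// Run on pages that still carry the old manifest declaration.
if (window.applicationCache) {
  var appCache = window.applicationCache;

  // Fired when the manifest now returns 404/410 -- the old cache is discarded.
  appCache.addEventListener('obsolete', function () {
    window.location.reload();
  });

  // Fired when a new (e.g. empty/dummy) manifest has been downloaded.
  appCache.addEventListener('updateready', function () {
    if (appCache.status === appCache.UPDATEREADY) {
      appCache.swapCache();
      window.location.reload();
    }
  });

  // Ask the browser to re-fetch the manifest instead of waiting for it to decide.
  try {
    appCache.update();
  } catch (e) {
    // update() throws if there is no cache to update; safe to ignore.
  }
}
```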
I'm developing a web app in Dart, packaged in tomcat 6 as a deployable .war. This app is used by a bunch of clients, all with Google Chrome.
Every time I republish a new version, every single client must clear their browser cache before seeing the updated files: this is very annoying, and I can't find any solution other than broadcasting an email to everyone saying "Please clear your browser cache".
The desirable solution is not to disable caching completely, but to have the browser keep caching everything so it stays as fast as it can, while letting me control when the cache is refreshed.
I'm not sure what your question is about exactly.
There is nothing specific to Dart. Caching is handled by the browser depending on the expires headers the server returns with a response to a request.
What you can do is something like what's explained in Force browser to clear cache or Forcing cache expiration from a JavaScript file: make the client application poll the server frequently for updates and then redirect to the new URL. You could implement some kind of redirection on the server, or simply ignore the version URL query parameter, so that the actual resource names can stay the same.
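A minimal sketch of that polling idea, assuming a hypothetical /version.json endpoint that reports the latest deployed version:

```js
// Poll a (hypothetical) version endpoint and force a cache-busting reload
// when the server reports a newer build than the one currently loaded.
const CURRENT_VERSION = '1.0.0'; // baked into the deployed client

function checkForUpdate() {
  fetch('/version.json', { cache: 'no-store' })
    .then((res) => res.json())
    .then((info) => {
      if (info.version !== CURRENT_VERSION) {
        // The query parameter only busts the cache; the server can ignore it.
        window.location.replace(window.location.pathname + '?v=' + info.version);
      }
    })
    .catch(() => { /* network error: try again on the next poll */ });
}

setInterval(checkForUpdate, 60 * 1000); // poll once a minute
```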
Another possibility is to use AppCache and serve the manifest file with immediate expiration. When you have an updated version, modify the manifest file, which makes the client reload the resources listed in the manifest (https://stackoverflow.com/a/13107058/217408, https://developer.mozilla.org/en-US/docs/Web/HTML/Using_the_application_cache, http://alistapart.com/article/application-cache-is-a-douchebag#section4).
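For the "immediate expiration" part, the manifest just needs to be served with no-cache headers; a minimal sketch, with a Node/Express server standing in for whatever actually serves your files (file and directory names are placeholders):

```js
const express = require('express');
const app = express();

// Serve the appcache manifest itself with immediate expiration so the browser
// revalidates it on every load; the resources listed in it can keep a long expiry.
app.get('/app.appcache', (req, res) => {
  res.set('Content-Type', 'text/cache-manifest');
  res.set('Cache-Control', 'no-cache, must-revalidate');
  res.set('Expires', '0');
  res.sendFile(__dirname + '/app.appcache');
});

// Everything else can be cached aggressively.
app.use(express.static(__dirname + '/web', { maxAge: '30d' }));

app.listen(8080);
```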
I have an application running in Azure (trial account). So far so good, everything has been nice, except for long deploy times (10-15 minutes).
I've done a deploy recently and got a lot of weird bugs I cannot trace. For example, if I log in and thus a cookie is created (I use FormsAuthentication), all I get from the application is a blank page; as in, absolutely nothing is sent to the browser. The application works fine in the ASP.NET Web Dev Server, IIS Express, even the Azure Emulator!
What could be the issue? Searching the web hasn't been much help, with only a couple of unrelated issues.
I tried logging into the site (if I correctly understood from one of the comments, the URL is versulo.com) and I didn't get a blank page or a 404 status code.
However, there is another problem I spotted. Your site seems to be implementing caching inappropriately. The main page, the one from which you trigger the login and which is dynamic in nature, contains an Expires header set to 5 minutes after the page's first load. That means each call or redirect to that page within 5 minutes of it first being loaded will be served from the browser's cache.
Because of that, after I log in to your application I am redirected back to the home page, which looks as if I am not logged in. If I force an F5 refresh in the browser, the page does show me as logged in.
If, instead of refreshing, I try to log in again (which is what I did in my first attempts, since it looked like the login didn't work the first time), I get an error page with the following message:
Sorry, there has been an error on the server.
500
The page looks like an application error page, and even though it displays the 500 number, it is actually served with an HTTP 200.
So, while I am not 100% sure whether this is also the cause of the problem you described, you should remove the Expires headers from the dynamic pages your application serves.
This can happen when you're combining Forms Authentication with multiple instances. Are you using multiple instances? If so, could you:
Try to change it to 1 instance. Does this fix the issue?
Try to make the following change to the web.config (configure machineKey): http://msdn.microsoft.com/en-us/library/ff649308.aspx
some partial views are not rendered at all;
Do you mean some pages are working fine, but others are not? It would help if you could point out a pattern in what's working and what's not. For now, please make sure all referenced assemblies (except for default .NET assemblies and the Windows Azure runtime) have Copy Local set to true. For example, the MVC assemblies are considered extensions to .NET, so please set Copy Local to true for them. In addition, you can also use Fiddler to monitor the requests and see what's returned from the server.
Best Regards,
Ming Xu.
Could you provide a link to the application, or perhaps some source code?
When you say 'blank page', what is actually returned, a 404 / 500?
Have you inspected the IIS logs, or added some trace information to your code?
Have you tried accessing the service using its IP address rather than the domain name?
What do I have to do to add ?_escaped_fragment_= support to my server? I want Google to be able to crawl my AJAX site. My hashes are already in #! form.
But I have no idea how to tell my server that when mywebsite.com/?_escaped_fragment_=section is requested, it should serve the same content as mywebsite.com/#!section.
thanks
Simple answer - my method (soon to be used for a site with ca. 50,000 AJAX-generated URLs) is to have a node.js server using a headless environment (try zombie, phantomjs, or any other) to load the site, making sure it's able to execute JavaScript and read the DOM. Then, at runtime, if it's Google requesting the fragment, fire a request to the node.js server, which loads the site, executes the JavaScript, waits for the response, and delivers back the HTML, which is output to the browser.
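A stripped-down sketch of the routing half of that idea (the snapshot-server address and its /render endpoint are placeholders, and the headless-browser part is left out entirely):

```js
// Minimal Express front that detects Google's _escaped_fragment_ requests
// and proxies them to a (placeholder) node.js snapshot server.
const express = require('express');
const http = require('http');

const app = express();
const SNAPSHOT_SERVER = 'http://localhost:8000'; // placeholder: the headless renderer

app.use((req, res, next) => {
  const fragment = req.query._escaped_fragment_;
  if (fragment === undefined) return next(); // a normal user: serve the app as usual

  // Rebuild the hash-bang URL the crawler is actually asking about,
  // e.g. /?_escaped_fragment_=section  ->  /#!section
  const target = SNAPSHOT_SERVER + '/render?url=' +
    encodeURIComponent('https://mywebsite.com/#!' + fragment);

  http.get(target, (upstream) => {
    res.set('Content-Type', 'text/html');
    upstream.pipe(res); // return the rendered HTML snapshot to the crawler
  }).on('error', () => res.status(500).send('Snapshot server unavailable'));
});

// ...the rest of the app (static files, etc.) goes below.
app.listen(process.env.PORT || 3000);
```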
If that sounds like a lot of work - I'm about 90% finished on the code that does it all for you, where you'd simply drop one line of (PHP) code at the top of your site/app and it does the rest for you, using a remote node.js server.
The code will be open source so if you want to set it up yourself on a node server, you can - or if it's a PITA to set it up yourself, I'll probably have a live server up and running which your app/website would fire ?_escaped_fragment_ requests to, and get the html snapshot back. It also implements caching so that these are only requested once every X days.
Watch this space - just got a few kinks to work out, and it'll be on my site (josscrowcroft.com) and I'll put it in a github repo too.