I have a site where I am getting:
This all worked fine previously, but I had to do a full re-install; the site is published to Azure. I re-imported my publish settings from Azure and see this. That looks good too, so I assumed we were back to normal.
Except that despite the "Publish Succeeded" message, when I visit the actual URL I publish to (which I had to blur), none of my changes are there.
Any ideas?
I'm fully checked-in on the git branch and this runs fine locally.
From your comments I understand you're using FTP to make changes to your Azure project. That's not really the best way of deploying an application in 2020, but for this particular issue it doesn't matter which method you use.
The most likely scenario is that when you visit the URL you are being given a cached version of your website.
That can happen for multiple reasons:
1) Your browser stored a cached version of the website
2) You are using a CDN (content delivery network) such as Cloudflare, which usually comes with caching enabled so that your users get your static pages lightning fast
3) Your web application implements one or more caching procedures
If none of those applies (i.e. you have tried incognito mode, you don't use a CDN, and you haven't implemented a caching strategy), then you might need to double-check that you have pushed to the correct branch and that the commits contain your recent changes.
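A quick way to tell whether some cache is in play at all is to compare a plain request against one with a throwaway query string and look at the cache-related response headers. Here is a minimal sketch, assuming Python 3; the URL is a placeholder for your published site:

```python
# Minimal sketch: compare a plain request with a cache-busting one and print
# the cache-related response headers. The URL is a placeholder for your site.
import time
import urllib.request

URL = "https://example.azurewebsites.net/"  # replace with the URL you publish to

def fetch_headers(url):
    with urllib.request.urlopen(url) as resp:
        return {k.lower(): v for k, v in resp.getheaders()}

plain = fetch_headers(URL)
busted = fetch_headers(URL + "?nocache=" + str(int(time.time())))  # bypasses most shared caches

for name in ("cache-control", "age", "etag", "last-modified", "x-cache"):
    print(f"{name}: plain={plain.get(name)!r}  busted={busted.get(name)!r}")
```

If the cache-busted request shows your change while the plain one doesn't, some cache between you and the app is serving the stale copy.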
EDIT: if you have actually checked everything, including that your Git repo is properly synchronized, then it might be worth trying a different deployment method. Normally that shouldn't affect the end result, but there is the possibility that the Azure platform has a hidden bug here.
Have you thoroughly checked that the directories and settings are correct? Most of the time, issues like this come down to minor errors: caching, wrong directories, stale output from previous files, and so on.
Related
We've recently launched a new website in Azure (i.e. Azure Websites) and as is typical with new launches we've had to deploy a few tweaks to fix minor issues shortly after launch.
We want to use slots in the long run but this is not possible at the moment, hence we are deploying to the live site. It's a fairly busy site with a good amount of traffic and we obviously want to keep downtime to a minimum.
We are using Visual Studio to publish file changes to Azure, but we have noticed that even if we publish a relatively insignificant single file, the whole site goes down and struggles to come back up. I was assuming that publishing a single file would literally just replace that file on the file system, but it behaves more like it recycles the application pool (or the Azure equivalent) for the site. The files I've been publishing are Razor views, which would not typically cause a recycle.
Does anyone know what actually happens under the hood of VS Publish and if there is a way to avoid this happening?
Thanks.
I just tried this using a basically clean new MVC app (https://github.com/KuduApps/Dev14_Net46_Mvc5), and I did not see this behavior. The Index.cshtml view has a hit count based on a static field, which tells us whether the app or the page got restarted (or whether that specific page got recompiled).
Then the test is to publish it, make a change to some other view (about.cshtml), and publish again. When doing this and hitting Index.cshtml, the count keeps going up, and there is minimal slowdown.
If you see it getting restarted after a view change, I suggest using the Kudu console to look at the files in site\wwwroot before and after the publish, and check what has a newer timestamp (e.g. web.config, the bin folder, ...).
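If you would rather script that check than click around the console, Kudu also exposes a read-only file API on the site's .scm host. The sketch below assumes that endpoint shape and uses a placeholder site name and deployment credentials, so verify both against your setup:

```python
# Sketch: list file timestamps under site/wwwroot via Kudu's VFS API.
# The site name and deployment credentials are placeholders, and the /api/vfs/
# endpoint shape is an assumption to verify against your Kudu instance.
import base64
import json
import urllib.request

SITE = "yoursite"                      # -> https://yoursite.scm.azurewebsites.net
USER, PASSWORD = "$yoursite", "xxxx"   # deployment credentials

url = f"https://{SITE}.scm.azurewebsites.net/api/vfs/site/wwwroot/"
token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
req = urllib.request.Request(url, headers={"Authorization": "Basic " + token})

with urllib.request.urlopen(req) as resp:
    for entry in json.load(resp):
        print(entry.get("mtime"), entry.get("name"))
```

Running it before and after a publish should make it obvious which files were actually touched.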
This is related to my problem here:
Editing velocity template of Liferay changes not showing or takes a while
I've tried to investigate the caching problem and tried different things. Here's what I found out.
I just found out that my CSS is also delayed in showing updates. I tweaked my CSS file by adding a comment and tested it in my browser. Requesting the plain URL gives me an older version of the file, but if I add a version query string (e.g. mystyle.css?v2) it pulls the latest file and remembers it. I can even see the different versions, e.g. mystyle.css?v1 and mystyle.css?v2. I've cleared my browser cache, so it's definitely on the server side (I think).
Is there a way I can clear my cache?
Thank You!
You can activate Liferay's developer settings by including the properties found in ROOT/WEB-INF/classes/portal-developer.properties.
Explanation: Liferay minifies and caches CSS and JavaScript; once that is done, it will not examine those files for changes. The developer settings disable that.
However, you don't want this setting active in production, because it would mean loading dozens of files instead of a few combined, minified, well-cached ones.
I assume you are using this for development, not in production.
If you're having these problems in production, you should rather work with a proper theme plugin and redeploy that.
I'm using Publish/Web Deploy to deploy an ASP.NET application from Visual Studio 2010. It works perfectly, but there is a problem: if the new release is not working as expected, the old version has already been replaced by the new one and there is no easy way to roll back to the working version. How is this best solved? I wish it were possible to keep the old version on the server so I could just switch back if needed.
With Web Deploy there is no built-in rollback feature, so once you've deployed, that's it.
There are a number of hand-rolled strategies you could put in place, for example:
Limited Access (e.g. Shared Hosting):
Where you don't have full access to the machine:
Back up the live site beforehand by downloading it.
Keep copies of what you deployed so you can push the previous version should something break.
Full Access:
Maintain two sets of folders for the application and map your site to one or the other of them. When you come to deploy, switch the IIS site's physical path to the other folder, then deploy. If the new release fails, just point the site back at the original folder. Each successful deploy alternates between the two folders.
For things like user-uploaded content you'd need to map virtual directories to a location on the file system that is always the same, because you don't want to be copying that content around on each deploy.
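As a rough illustration of the folder-swap idea, a deploy script can ask IIS which folder the site currently points at and then re-point it after copying the new build in. This is only a sketch: the site name, folder paths and the exact appcmd arguments are assumptions to check against your IIS version.

```python
# Rough sketch of the two-folder swap described above. The site name, folder
# paths and exact appcmd arguments are assumptions; check them against your
# IIS version before relying on this.
import subprocess

SITE = "Default Web Site"
SLOTS = (r"C:\Sites\MyApp_A", r"C:\Sites\MyApp_B")
APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"

def current_path() -> str:
    # Ask IIS which folder the site's root virtual directory points at.
    out = subprocess.check_output(
        [APPCMD, "list", "vdir", f"{SITE}/", "/text:physicalPath"], text=True)
    return out.strip()

def swap() -> None:
    target = SLOTS[1] if current_path() == SLOTS[0] else SLOTS[0]
    # Copy/deploy the new build into `target` here, then re-point the site.
    subprocess.check_call([APPCMD, "set", "vdir", f"{SITE}/", f"/physicalPath:{target}"])
    print("Site now serves from", target)

if __name__ == "__main__":
    swap()
```

Because the old folder is left untouched, rolling back is just running the swap again without deploying anything new.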
You're not the only one who has encountered these issues. Have a look at this article by Rob Conery and his observations about the state of affairs regarding ASP.NET deployment.
ASP.NET Deployment Needs To Be Fixed
Getting Constructive On ASP.NET Deployment
Using some form of source control would be another option. We use Subversion, so if the publish goes bad we can just update back to the last good revision and publish that. Even if you're the only developer, using source control can be very useful.
I'll try to make this as straightforward as possible.
Currently our team has a VSS database where our projects are stored.
Developers grab the code, place it on their local machines, and develop locally.
A designated developer grabs the latest version and pushes it to the development server.
The problem is, when a file is removed from the project (by deleting it in VS2008), the next time another developer (not the one who deleted it) checks in, VSS prompts them to check in those deleted files because they still have copies on their local machine.
Is there a way around this? Can VSS instruct the client machines to remove these files instead of prompting to check them back in? What is the preferred approach for this?
Edit Note(s):
I agree SVN is better than VSS
I agree Web Application project is better than Web Site project
Problem: the same thing happens with files that are removed from class libraries.
Your number one way around this is to stop using Web Site Projects. Web Site Projects cause Visual Studio to automatically add anything it finds in the project path to the project.
Instead, move to Web Application Projects, which don't have this behavior.
Web Site Projects are fine for single-person development.
UPDATE:
VB shops from days gone by had similar issues, in that whatever they had installed affected the build process. You might take a page from their playbook and have a "clean" build machine: prior to doing a deployment, delete all of the project folders, then do a get-latest. That way you can be sure that the only thing deployed is what you have in source control.
Incidentally, this is also how the TFS build server works: it deletes the workspace, then creates a new one and downloads the necessary project files.
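In script form, that clean-build step is essentially "wipe the workspace, then fetch everything fresh". A minimal sketch, with the workspace path and repository URL as placeholders and an svn checkout standing in for whatever get-latest command your source control tool uses:

```python
# Sketch of a "clean build" step: wipe the workspace, then fetch fresh.
# The workspace path and repository URL are placeholders; swap the svn
# checkout for whatever get-latest command your source control tool uses.
import shutil
import subprocess
from pathlib import Path

WORKSPACE = Path(r"C:\Builds\MyProject")
REPO_URL = "https://server/svn/MyProject/trunk"

# 1. Remove anything left over from previous builds.
if WORKSPACE.exists():
    shutil.rmtree(WORKSPACE)
WORKSPACE.mkdir(parents=True)

# 2. Get the latest sources into the empty workspace.
subprocess.check_call(["svn", "checkout", REPO_URL, str(WORKSPACE)])
```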
Further, you might consider using something like Cruise Control to handle builds.
Maybe the devs should take care to only check in or add things that they have actually been working on. It's kind of sloppy if they are adding things they weren't even using.
Your best solution would be to switch to a better version control system, like SVN.
At my job we recently acquired a project from an outsourcing company that used VSS as its version control. We were able to import all of the change history from VSS into SVN and get up and running with SVN pretty quickly.
And with SVN you can set up ignores for files and folders, so the files in your web projects that shouldn't be versioned don't get put into SVN, and the ignore attributes are checked out onto each developer's machine.
I believe we used VSSMigrate to do the migration to SVN http://www.poweradmin.com/sourcecode/vssmigrate.aspx
VSS is an awful versioning system and you should switch to SVN, but that has nothing to do with the crux of the problem. The project file contains references to the files that are actually part of the project. If the Visual Studio project file isn't checked in along with the changes to it, there's no way for any other developer to be fully updated, hence the prompts about deleted files when they grab the latest from VSS. From there you've got multiple choices...
Make the vbproj part of the repository. Any project-level changes will then be part of the commit and other developers can pick them up. The problem here is that it's also going to be on the dev server. Ideally you could use nearly the same process to deploy to dev as you would for a release, which leads into the other approach...
SVN gives you hooks for almost all major events, and hooks are literally just a properly named batch file or exe. For your purposes, you could use a post-commit hook to push the appropriate files to the server, say via FTP, on every commit. File problem solved, and, more importantly, you're closer to the concept of continuous integration.
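For illustration, a post-commit hook along those lines might look like the sketch below. The FTP host, credentials and remote root are placeholders, it assumes the remote directories already exist, and you'd want real error handling before using anything like it:

```python
#!/usr/bin/env python
# Sketch of an SVN post-commit hook: push the files changed in this commit
# to a server via FTP. SVN invokes the hook with the repository path and the
# revision number; the FTP host, credentials and remote root are placeholders.
import io
import subprocess
import sys
from ftplib import FTP

repo, rev = sys.argv[1], sys.argv[2]

FTP_HOST, FTP_USER, FTP_PASS = "dev.example.com", "deploy", "secret"
REMOTE_ROOT = "/site/wwwroot"

# Ask svnlook which paths changed in this revision (lines like "U   dir/file").
changed = subprocess.check_output(["svnlook", "changed", "-r", rev, repo], text=True)

with FTP(FTP_HOST, FTP_USER, FTP_PASS) as ftp:
    for line in changed.splitlines():
        parts = line.split(None, 1)
        if len(parts) != 2:
            continue
        status, path = parts
        if status.startswith("D") or path.endswith("/"):
            continue  # skip deletions and directories in this simple sketch
        # Pull the committed file content out of the repository and upload it.
        content = subprocess.check_output(["svnlook", "cat", "-r", rev, repo, path])
        ftp.storbinary(f"STOR {REMOTE_ROOT}/{path}", io.BytesIO(content))
```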
Something you may want to consider doing:
1) Get Latest (Recursive)
2) Check In ...
It's a manual process, but it may give you the desired result; plus, if VS reports deleted files, you know they should be deleted from the local machine in step 1.
I am the web guy for a large TV station. Our site is cached by Akamai. Pages render perfectly in our testing environment (not cached) and on our "origin" server (again, not cached), but when they are viewed on our live environment (the cached site), they do not render exactly as I coded them. Maybe it's a tiny bit of spacing, maybe it's a CSS element (backgrounds especially) not displaying, and worst of all, forget about floating DIVs. It's insane how much table-based layout I have to do because the floats fail.
Does anyone else have experience with caching like this? Is there a tool I can use to see the changes in rendering?
There is no one I can go to for support, because the company doesn't believe the problem exists. Please assist if you can.
The site is built on a VB.Net backend that I do not have access to. I only have access to the front end.
I've been working on sites behind Akamai and can honestly say they don't mess with your code, so that's not the issue. It's more than likely one of the following:
You have a cache latency issue: you updated your HTML and CSS, and one of the two has updated while the other is still cached by Akamai. There are several solutions here, including clearing the cache via Akamai's control panel, using timestamps to version dependent files, or more programmatic approaches in your code. Cache headers can also be used, though that's not really the preferred way.
Absolute URLs: relative URLs are best when testing on multiple environments, to ensure you're pointing everything at the same environment.
This is definitely an environment issue not an Akamai issue.
Are stylesheets, JavaScript files, etc. all loading correctly from Akamai?
Can you save a copy of a page retrieved directly from your "origin" server and a copy saved using Akamai, then use diff to look for changes?
And, most importantly, have you asked Akamai about it? It's not really a programming question :)
Download all the files as static files from development and then from production, and use a tool like WinMerge to see the differences.
Also, does the problem go away if you press Ctrl-F5 to force-refresh the browser?
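If you want that comparison without saving files by hand, a small script can fetch the same page from the origin and from the Akamai-fronted hostname and print a unified diff; both URLs below are placeholders:

```python
# Sketch: fetch the same page from the origin and from the Akamai-fronted
# hostname and print a unified diff. Both URLs are placeholders.
import difflib
import urllib.request

ORIGIN_URL = "http://origin.example.com/some-page"
LIVE_URL = "http://www.example.com/some-page"

def fetch_lines(url):
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace").splitlines()

diff = difflib.unified_diff(
    fetch_lines(ORIGIN_URL), fetch_lines(LIVE_URL),
    fromfile="origin", tofile="live", lineterm="")
print("\n".join(diff))
```

An empty diff for the HTML but a non-empty one for the CSS (or vice versa) would point straight at a cache latency problem rather than a rendering one.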
Perhaps Akamai isn't seeing the updated versions of the CSS files that are <link />'d in your HTML? It might be a good idea to embed a version number in the URL, so that when you release an updated version of the HTML it always asks Akamai for a new version of the CSS as well (this applies to images too, I suppose).
Theoretically, Akamai should respect the updated caching headers your web server sends, but I've never worked at a job where we didn't have to have some counter-measures in place to force Akamai to refresh its cached version.
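One common way to do that versioning is to derive the query string from the file's contents, so the URL only changes when the file actually changes. A minimal sketch; the static folder and URL prefix are placeholders:

```python
# Sketch: build a cache-busting URL for a static asset by appending a short
# hash of its contents, so the URL changes only when the file does.
# The static folder and URL prefix are placeholders.
import hashlib
from pathlib import Path

STATIC_ROOT = Path("wwwroot")   # local folder holding css/js/images
URL_PREFIX = "/static"          # how those files are referenced in the HTML

def versioned_url(relative_path: str) -> str:
    digest = hashlib.md5((STATIC_ROOT / relative_path).read_bytes()).hexdigest()[:8]
    return f"{URL_PREFIX}/{relative_path}?v={digest}"

print(versioned_url("css/mystyle.css"))   # e.g. /static/css/mystyle.css?v=3f2a9c1b
```

Templates would then call something like versioned_url() when emitting <link> and <img> tags, so a deploy that changes mystyle.css automatically produces a new URL for the CDN to fetch.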