Picked node in Umbraco found on local server, but not live - umbraco7

I am in a strange situation whereby I can't access the Umbraco admin panel on the live server (the server has Url Scanner installed, which blocks access to folders containing '.'; this cannot be changed right now because the impact it would have on other applications is currently unknown).
So, my setup: the local build connects to the live database, and a page is set up on the live site that I can visit to republish once changes have been made. All has worked perfectly, until now.
The Problem:
I have a node in Umbraco to which I have added a content picker, and I have selected a node within the website. The template for the page in question has an if statement that checks for the picker's value and outputs markup accordingly: if no node is selected, the markup is output without anchor tags. This all works fine locally.
On live, the markup is output as though no node is selected. I have published a number of times and other changes are visible. I checked the file itself and the code is identical to my local build. I also checked the umbraco.config file, searched for the node in question, and it shows that a node has indeed been selected. So, if the property on the live server HAS got a node selected, why is the code not working there when the exact same code works locally?
This would be so much easier if I could look at Umbraco in the live instance to see what's going on but at the moment, that's out of the question.
Has anyone experienced anything similar? I understand my setup is quite strange, so I won't hold my breath, but I'm at a loss.

Maybe it's a caching issue? Take a look here to see if clearing the cache helps: Should I delete TEMP folder when publishing Umbraco?

Related

Odoo - How To Manage & Update Static Files

Static Files in Odoo
I'm new to Odoo and am developing a custom theme for a client. I've worked through the theme tutorial despite the many errors and omissions in that documentation (I'm going to make a pull request to update it after I'm done). My latest struggle is dealing with static files in Odoo, specifically images in the theme.
The Setup
Running Odoo 13.0.20200323 on Ubuntu 18.04 in VirtualBox managed by Vagrant and provisioned with Ansible
The Problem
Changes to image files in the static folder are not reflected on the website. This is the case even after updating the theme in the website theme settings (the update function seems to update everything else). I've changed image names and image content, and moved images into other folders, and have not been able to get any of these changes to show up on the website. I've restarted the server; that doesn't change anything. I've updated the theme as stated above; that doesn't work either. The only way I've been able to get any changes reflected on the front end is to completely destroy and rebuild the server.
Questions
What am I missing? Is there a function I'm not running to trigger Odoo to update what it serves from the static folder?
How does Odoo work with static files in general? On-the-fly updates to files in the static folder don't seem to produce any changes on the front end. Are the files in the static folder copied somewhere else on install and then served from that other location?
Understanding
I understand that having images and other files change in a folder called static doesn't make much sense functionally, and that's not my intention here. Since I'm in development I need to make changes to files, like SCSS, JS, and images, and have those updates reflected on the front end without having to destroy and rebuild the server every time. To be clear, changes to SCSS and JS files that I've registered in .xml files and bundled with the various Odoo bundles update just fine when I make changes to them and then update the theme on the backend in the theme management view.
My desire is to understand how Odoo handles the files in the static folder in general, how to update those files properly, and how to manage them while developing and for release.
Answers
I've figured out an answer to the first part of the question, as to what I'm doing wrong. It appears to be a browser caching issue. When performing a hard reload, an "empty cache and hard reload", or visiting the site in an incognito window, the image changes are reflected.
As far as I can tell Odoo is just serving files from the static folder directly. Please correct me if this is not the case.
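A quick way to confirm this from the browser console is to request a theme asset directly with the HTTP cache bypassed and look at the response; the theme name and image path below are hypothetical placeholders, so substitute your own:
// Browser-console sketch: fetch an asset straight from the module's static folder,
// skipping the HTTP cache, and print the status plus a couple of caching-related headers.
// '/theme_custom/static/src/img/banner.jpg' is a made-up path -- use one from your theme.
fetch('/theme_custom/static/src/img/banner.jpg', { cache: 'reload' })
  .then((res) => console.log(res.status, res.headers.get('content-type'), res.headers.get('last-modified')));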
Follow-up
Does anyone have a good solution for working with changes like this and dealing with browser caching issues?
Answer: I've set up a couple of gulp tasks that use gulp-rev (which I will replace with gulp-rev-all soon), rev-del and rev-rewrite to handle cache busting by appending content hashes to the file names.
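A minimal sketch of that kind of revisioning task, showing only the gulp-rev part (the folder names below are assumed placeholders, not the actual project layout):
// gulpfile.js -- hash static assets and write a manifest mapping original -> hashed names.
const gulp = require('gulp');
const rev = require('gulp-rev');

function revAssets() {
  return gulp
    // 'theme_custom/static/src' is a placeholder source folder
    .src('theme_custom/static/src/**/*.{css,js,png,jpg,svg}', { base: 'theme_custom/static/src' })
    .pipe(rev())                                  // append a content hash to each file name
    .pipe(gulp.dest('theme_custom/static/dist'))  // write the hashed files
    .pipe(rev.manifest())                         // emit rev-manifest.json
    .pipe(gulp.dest('theme_custom/static/dist'));
}

exports.rev = revAssets;
The generated rev-manifest.json is then what rev-rewrite uses to update references to the renamed files.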
I'm going to try setting up Browser-sync in proxy mode to see how that deals with changes to files on reload. I'll report what I find!
Update: Browser-sync has worked well so far, as expected, but it was kind of useless until I worked out a fix for the problem below.
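For reference, a minimal Browser-sync proxy setup along those lines, assuming Odoo is running on its default port 8069 and the theme assets live under a hypothetical theme_custom folder:
// bs.js -- proxy the local Odoo instance and reload the browser when theme files change.
const browserSync = require('browser-sync').create();

browserSync.init({
  proxy: 'localhost:8069',              // local Odoo instance (default port; adjust if needed)
  files: ['theme_custom/static/**/*'],  // watch theme assets and trigger a reload on change
  open: false,                          // don't auto-open a browser tab
});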
Does anyone know how to automate Odoo rebuilding its SCSS, JS, etc. bundles, so that on file change the theme is updated and the results can be seen without having to manually update the theme on the backend?
Answer: The main task was figuring out how to get live HTML/XML updates working. That meant building Odoo from source and not making any updates or changes to the theme on the backend or frontend from within the Odoo interface. Passing the option --dev xml to Odoo when starting it with odoo-bin allows the XML code to be evaluated directly and makes live updates possible. But this extremely helpful (almost necessary) functionality breaks when you make any updates to the theme from within Odoo. I'll report on any workarounds, but for now, as long as I don't touch the theme from within Odoo (update it or make edits with their editor), it works great. Also, I initially had to bypass bundling my CSS and JS with their bundler to get those updates working live, but I may be able to go back and rebundle them now that the code is being evaluated directly.
⭐️Boilerplate and Tutorial Series ⭐️
I'm going to get my whole process for theme building dialed in, and then I'll be sharing the boilerplate and build tools on GitHub and also writing and filming a tutorial series on it, since the built-in documentation on that front is straight-up error-filled and omits critical information.

Publish success but no changes on site?

I have a site where publishing reports "Publish Succeeded".
I had to do a full re-install where this previously all worked fine, and this is deploying to Azure. I re-imported my publish settings from Azure, and that looks good too, so I assumed we were back to normal.
Except that despite the "Publish Succeeded" message, when I visit the actual URL I publish to (which I had to blur), none of my changes are there.
Any ideas?
I'm fully checked-in on the git branch and this runs fine locally.
From your comments I understand you're using FTP to make changes to your Azure project. That's not really the best way of deploying an application in 2020, but for this particular issue that you're facing it doesn't matter what method you use.
The most likely scenario is that when you visit the URL you are being given a cached version of your website.
That can happen for multiple reasons:
1) Your browser stored a cached version of the website
2) You are using a CDN (content delivery network) such as Cloudflare, which most often comes with an enabled cache feature that ensures your users will get your static pages lightning fast
3) Your web application implements one or more caching procedures
If none of those is the case (ie: you have tried using incognito mode, you don't use a CDN and you haven't implemented a caching strategy) then you might need to double-check you have pushed to the correct branch and that the commits contain your recent changes.
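A quick way to check points 1-3 is to request the deployed page from the browser console on the site itself and look for caching evidence in the response headers (the URL below is a placeholder):
// Rough sketch: fetch the page fresh and print headers that reveal browser/CDN/app caching.
fetch('https://your-app.azurewebsites.net/', { cache: 'no-store' })
  .then((res) => {
    ['cache-control', 'age', 'x-cache', 'cf-cache-status', 'etag', 'last-modified']
      .forEach((h) => console.log(h, '=>', res.headers.get(h)));
  });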
EDIT: if you actually have everything checked, including that your Git repo is properly synchronized, then it might be worth trying a different deployment method -- normally it shouldn't affect the end result, but there is the possibility that the Microsoft Azure platform has certain hidden bugs - this being one of them.
Have you thoroughly checked the directories and everything else to make sure they're correct? Most of the time, issues like this come down to minor errors such as caching, wrong directories, stale output from previous files, etc.

Does Visual Studio Publish to Azure Website Cause Whole Site to Recycle?

We've recently launched a new website in Azure (i.e. Azure Websites) and as is typical with new launches we've had to deploy a few tweaks to fix minor issues shortly after launch.
We want to use Slots in the long run but this is not possible at the moment, hence we are deploying to the live site. It's a fairly busy site with a good amount of traffic and we obviously want to keep downtime to a minimum.
We are using Visual Studio to publish file changes to Azure, but have noticed that even if we publish a relatively insignificant single file, the whole site goes down and struggles to come back up. I was assuming that publishing a single file would literally just replace that file on the file system, but it behaves more like it recycles the application pool (or the Azure equivalent) for the site. The files I've been publishing have been Razor views, which would not typically cause a recycle.
Does anyone know what actually happens under the hood of VS Publish and if there is a way to avoid this happening?
Thanks.
I just tried this using a basically clean new MVC app (https://github.com/KuduApps/Dev14_Net46_Mvc5), and I did not see this behavior. The Index.cshtml view has a hit count based on a static variable, which tells us whether the app or the page got restarted (or whether that specific page got recompiled).
The test is then to publish it, make a change to some other view (about.cshtml), and publish again. When doing this and hitting Index.cshtml, the count keeps going up, and there is minimal slowdown.
If you see it getting restarted after a view change, I suggest using Kudu Console to look at the files in site\wwwroot before/after the publish, and check what has a newer timestamp (e.g. check web.config, bin folder, ...).

ClickOnce Error "different computed hash than specified in manifest" when transferring published files

I am in an interesting situation where I maintain the code for a program that is used and distributed primarily by our sister company. We are ready to distribute the program to all of the 3rd-party users, and since it is technically our sister company's program, we want to host it on their website. (In the interest of anonymity, I'll use 'program' everywhere instead of the actual application name, and 'www.SisterCompany.com' instead of their actual URL.)
So I get everything ready to go: I set up the Publish settings to check for updates at program start and the minimum required version, and I set the Installation Folder URL and Update Location to "http://www.SisterCompany.com/apps/program/", with the actual Publishing Folder Location as "C:\LocalProjects\Program\Publish\". Everything else is pretty standard.
After publish, I confirm that everything installs and works correctly when running directly from the publish location on my C: drive. So I put everything on our FTP server, and the guy at our sister company pulls it down and places everything in the '/apps/program/' directory on their webserver.
This is where it goes bad. When I try to install it from their site, I get the "File, Program.exe.config, has a different computed hash than specified in manifest" error. I tested it a bit, and I even get that error when trying to install from any network location on our network other than my local C: drive.
After doing the initial publish in visual studio, I have changed no files (which is the answer/reason I've found by doing some searching about this error).
What could be causing this? Is it because I set the Installation Folder URL to a location that it isn't initially published to?
Let me know if any additional info is needed.
Thanks.
After bashing my head against this all weekend, I have finally found the answer. After unsigning the project and removing the hash on the offending file (an XML file), I got the program to install, but it was giving me 'Windows Side by Side' errors. I drilled down into the app cache where the file was, and instead of a config .xml file, it was one of the HTML files from the website the ClickOnce installer was hosted on. It turns out the web server didn't seem to like serving up an .xml (or, it turns out, .mdb) file.
This MSDN article ended up giving me the final solution:
I had to make sure that the 'Use ".deploy" file extension' was selected so that the web server wouldn't mangle files with extensions it didn't like.
I couldn't figure out why that one file's hash would be different. Turns out it wasn't even the same file at all.
Is it possible that one of the FTP transfers is happening in text mode rather than binary?
For me the problem was that the .config transformations were done after generating the manifest.
To anyone else who's still having trouble, five years later:
The first problem was configuring the MIME type, which on nginx (/etc/nginx/mime.types) should look like this:
application/x-ms-manifest application
See Click Once Server and Client Configuration.
The weirder problem to me was that I was using git to handle the push to the server, i.e.
git remote add live ssh://user@mybox/path/to/publish
git commit -am "committing..."; git push live master
Works great for most things, but it was probably being registered as a "change," which prevented the app from installing locally. Once I started using scp instead:
scp -r * user@mybox:/path/to/dir/
It worked without a hitch.
It is unfortunate that there is not a lot of helpful information out there about this.

magento - $ is not a function. But only on local dev server

I took a backup of my live Magento site yesterday (zipped up the files and took a DB dump then created the site from those dumps).
Oddly though, on my local machine I get a Firebug error that states "$ is not a function", and this error occurs every 500ms or so. So after a minute or two I have thousands of identical errors in the console.
The site is an exact replica of my live site and I don't get the error on that so I'm stumped!
Usually I would think this is a prototype/jquery conflict, but it only seems to happen on my local machine.
Any one have a clue what might be going on?
Thanks
1) Load a page where you see the error.
2) View the source of the page.
3) Find the line that's supposed to load prototype.js by searching for the string prototype.js, e.g. http://magento.example.com/js/prototype/prototype.js
4) Discover that, for one of myriad reasons, the file isn't loading (wrong URL, permissions, corrupt file, etc.).
5) Address the problem discovered above.
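A rough console sketch for the first couple of steps; it only reports whether Prototype and jQuery actually loaded and what currently owns $:
// Paste into the browser console on a page showing the error.
console.log('Prototype loaded:', typeof window.Prototype !== 'undefined' && window.Prototype.Version);
console.log('jQuery loaded:', typeof window.jQuery !== 'undefined' && window.jQuery.fn.jquery);
console.log('typeof $:', typeof window.$);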
Ok so this was the problem:
The reason it worked on live and not on dev was that I had Merge JS enabled on live and not on dev. Live was therefore serving an old cached JS bundle. Disabling Merge JS on live showed that the problem did in fact occur on the live site as well.
This knowledge allowed me to debug further and I discovered that the problem lay with my jquery.hove.intent.js file. I updated this to the latest version and it solved everything! :)
Thanks all for your help and input though.
