My Continuous Integration is running very slowly.
After launching the restore ("ContinuousIntegration.exe -r") it hangs after "Restoring objects…" and before "Optimizing file repository…". It can stay in that state for up to an hour. In the end everything is imported correctly...
With a profiler I've found that most of the time is consumed by CMS.DataEngine.TranslationHelper.
Does anyone have any ideas about what is wrong?
If you have a lot of custom objects, or a lot of data within the out-of-the-box or custom objects with relationships, doing a -r restore can take a long time to update your local instance. Simply put, it is rebuilding the whole database from the structure in the CI files. Also, the documentation states:
To ensure that the restore process works correctly, you need to stop your Kentico application before running the restore process. Otherwise you may encounter the following problems:
Deadlocks or data inconsistencies if the system attempts to write to the CIRepository folder while data is being restored from the files
Outdated content in the application's cache if you restore without restarting (can cause inconsistencies in the Kentico administration interface or the website's content)
So be sure to stop your instance when restoring to help with the performance.
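In practice that just means: stop the site, run the restore, start the site again. A minimal sketch of that sequence is below; the IIS site name and both paths are assumptions for your environment, and appcmd is just a stand-in for however you normally stop the site.

    # Sketch only: stop the Kentico site, run the CI restore, then start the site again.
    # SITE and the two paths are placeholders; adjust them to your environment.
    import subprocess

    SITE = "KenticoSite"  # hypothetical IIS site name
    APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"
    CI_EXE = r"C:\inetpub\wwwroot\Kentico\CMS\bin\ContinuousIntegration.exe"  # adjust to your instance

    # Stop the application so nothing writes to the CIRepository folder during the restore.
    subprocess.run([APPCMD, "stop", "site", f"/site.name:{SITE}"], check=True)
    try:
        # -r restores the objects from the CI repository files into the database.
        subprocess.run([CI_EXE, "-r"], check=True)
    finally:
        # Bring the site back up whether or not the restore succeeded.
        subprocess.run([APPCMD, "start", "site", f"/site.name:{SITE}"], check=True)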
We've recently launched a new website in Azure (i.e. Azure Websites) and as is typical with new launches we've had to deploy a few tweaks to fix minor issues shortly after launch.
We want to use Slots in the long run but this is not possible at the moment, hence we are deploying to the live site. It's a fairly busy site with a good amount of traffic, and we obviously want to keep downtime to a minimum.
We are using Visual Studio to publish file changes to Azure, but have noticed that even if we publish a relatively insignificant single file, the whole site goes down and struggles to come back up. I was assuming that publishing a single file would literally just replace that file on the file system, but it behaves more like it recycles the application pool (or the Azure equivalent) for the site. The files I've been publishing have been Razor views, which would not typically cause a recycle.
Does anyone know what actually happens under the hood of VS Publish and if there is a way to avoid this happening?
Thanks.
I just tried this using a basically clean new MVC app (https://github.com/KuduApps/Dev14_Net46_Mvc5), and I did not see this behavior. The Index.cshtml view has a hit count based on a static, which would tell us if the app or the page got restarted (or if that specific page got recompiled).
The test is then to publish it, make a change to some other view (about.cshtml), and publish again. When doing this and hitting Index.cshtml, the count keeps going up, and there is minimal slowdown.
If you see it getting restarted after a view change, I suggest using Kudu Console to look at the files in site\wwwroot before/after the publish, and check what has a newer timestamp (e.g. check web.config, bin folder, ...).
I have created a database project in Visual Studio 2013 from an existing database. I have since made a lot of changes in the database project: modified stored procedures, the post-deployment script, table structures, etc. Now the database project is ready to deploy, but I am worried that if any script fails I won't be able to get back to the original state, even though the project builds properly.
Please suggest how, if any query fails, I can roll back all the changes that I have made in the database project.
Firstly you need to trust your tools and either believe they will work or find other tools.
While you are building that trust, I would add a database backup to the pre-deployment script, or run a backup before you deploy; then if anything goes wrong you can restore and figure out what went wrong.
As David said, to roll back you would get the previously deployed dacpac and generate a new deployment script from that, but fixing forward is almost always the correct thing to do rather than rolling back to a previous version.
ed
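As a rough sketch of the backup-then-deploy idea above (not a prescribed process): the server, database and file paths are placeholders, and it assumes pyodbc and SqlPackage.exe are available on the machine doing the deployment.

    # Sketch: take a full backup first, then publish the dacpac; if the publish fails,
    # the .bak taken here is your way back. All names and paths are placeholders.
    import subprocess
    import pyodbc

    SERVER, DATABASE = "localhost", "MyDatabase"            # hypothetical target
    BACKUP_FILE = r"C:\Backups\MyDatabase_pre_deploy.bak"
    DACPAC = r"C:\Build\MyDatabase.dacpac"                  # output of the database project

    # BACKUP DATABASE cannot run inside a transaction, hence autocommit=True.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=" + SERVER + ";DATABASE=master;Trusted_Connection=yes;",
        autocommit=True,
    )
    conn.execute(f"BACKUP DATABASE [{DATABASE}] TO DISK = N'{BACKUP_FILE}' WITH INIT")
    conn.close()

    # Publish the project output; restore the backup above if this step goes wrong.
    subprocess.run([
        "SqlPackage.exe", "/Action:Publish",
        f"/SourceFile:{DACPAC}",
        f"/TargetServerName:{SERVER}",
        f"/TargetDatabaseName:{DATABASE}",
    ], check=True)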
Have you been checking your changes into version control? If so, all you need to do is to revert back to the last known good version.
Or... simply work out why it's failing now and fix the root cause?
I used DB projects some time ago and, as far as I remember, the deployment script is wrapped in a transaction. It is possible to generate the SQL script without executing it; that setting is somewhere in the DB project settings. You can take a look inside that script and make sure it will roll back on error.
Doing a backup would still be a recommended practice, especially when you deploy to production.
When working on important scripts I developed a habit of always starting a transaction and commenting out the commit.
If you accidentally run the script, it won't take effect. The commented-out commit only comes out when the work is done.
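To illustrate the habit (just a sketch; the helper and the UPDATE statement are made up):

    # Wrap an ad-hoc script in a transaction whose COMMIT is commented out, so an
    # accidental run changes nothing permanent until you uncomment it deliberately.
    def guard(script_body: str) -> str:
        return (
            "BEGIN TRANSACTION;\n"
            "\n"
            f"{script_body}\n"
            "\n"
            "-- COMMIT TRANSACTION;  -- uncomment only once the results look right\n"
        )

    print(guard("UPDATE dbo.Customer SET IsActive = 0 WHERE LastOrderDate < '2010-01-01';"))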
While this answer indicates that you CAN revert in source control (assuming SSDT at this point), it would be nice to get a pointer to the exact process for doing this. On a file-by-file basis the history works the same, but how to revert the entire database at once isn't immediately obvious.
My application is a combination of Spring/Hibernate/JPA. Recently my development environment was migrated to RAD 7 with WAS7. Previously I was using v6 of RAD and WAS.
The problem is,
when I make a Java change, the server publishes for a long time; sometimes it takes up to 10 minutes for a single line of change to take effect. Even JSP changes alone take a long time to publish!!
This was not the case in WAS6. Publishing Java changes was not even a concern there: the changes took effect immediately, as the publish process finished within a few seconds.
This publishing process keeps running over and over as I make changes to my code, and I have to wait (for long intervals during work hours) until it completes to verify/test my changes at runtime. This is horrible!!
Is there a way to make WAS7 publish JSP/Java changes in a few seconds, as WAS6 did? Is there a fix/refresh pack for this?
Can someone help me with this?
Thanks in advance.
This problem can be overcome if you take control of publishing rather than letting the server publish automatically. You can wait until you have made all your changes and then publish.
To do that:
In the Servers view, double-click on the server that you are working on and, under Publishing, check the option "Never publish automatically".
Also, if you can use the option "Run server with resources within the workspace", that will reduce the time spent copying files from your workspace to the server space while publishing.
I just discovered (the hard way) that if you deploy your application to a device after doing a "Rebuild" or a "Clean -> Build" from Visual Studio your app is first uninstalled and then reinstalled resulting in the isolated storage files being wiped.
The Application Deployment Tool always seems to do uninstall - reinstall irrespective of whether it was an incremental build or not.
Has anybody found a workaround to this? Of course, the most obvious one is never to rebuild your application, but what if you accidentally do? Currently I don't have all the generated files under source control, so if I were to try to build the app on another computer it would be a rebuild. (Maybe I will add all the generated junk to source control if no one has a workaround.)
If I can suggest an alternative approach... I think you will find it beneficial in other situations as well if you can introduce a little process to the generation of your test data, so that it is easier to either a) restore or b) generate.
You could, for example, have a debug-build-only feature to upload/download the files on the device to a WCF service running locally on your PC (a simplified version of what Rongchaua did here).
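The PC-side receiver doesn't have to be WCF; purely as a stand-in sketch (the port, folder name and the PUT-per-file convention are all assumptions), something like this would accept the uploaded files:

    # Stand-in for the PC-side service: accepts an HTTP PUT per file from the device
    # and writes it into DUMP_DIR. Port and folder name are placeholders.
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    DUMP_DIR = "isolated_storage_dump"

    class UploadHandler(BaseHTTPRequestHandler):
        def do_PUT(self):
            name = os.path.basename(self.path)                  # device PUTs to /<filename>
            length = int(self.headers.get("Content-Length", 0))
            os.makedirs(DUMP_DIR, exist_ok=True)
            with open(os.path.join(DUMP_DIR, name), "wb") as f:
                f.write(self.rfile.read(length))
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8000), UploadHandler).serve_forever()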
Or, if you are willing to put in more work for even more benefit, you could build some automated testing capability into your app, starting with the generation of initial test data. Here's something you could look at to get started down that path.
Claus Konrad Blog: WP7: How to unit test a MVVM Light WP7-application
Granted, these would take a bit of effort, but it's an approach that gives you some independence from manually generated test data, which in my experience invariably turns out to be a hassle at various times. And once solved, you'll find all sorts of reasons to thank yourself for doing it, whether it be saved time or more robust testing, because you can afford to be more aggressive with your test data/test execution and manage multiple test data configurations.
There is a workaround:
open the solution configuration manager
next to build is a deploy column, uncheck your project
press F5
This will launch the app that is already on the device without overwriting it (and deleting its storage).
I have a fairly large PHP codebase (10k files) that I work with using Eclipse 3.4/PDT 2 on a windows machine, while the files are hosted on a Debian fileserver. I connect via a mapped drive on windows.
Despite having a 1 Gbit Ethernet connection, doing an Eclipse project refresh is quite slow, up to 5 minutes, and I am blocked from working while it happens.
This normally wouldn't be such a problem, since Eclipse theoretically shouldn't have to do a full refresh very often. However, I also use the Subclipse plugin, which triggers a full refresh each time it completes a switch/update.
My hunch is that the slowest part of the process is eclipse checking the 10k files one by one for changes over samba.
There is a large number of files in the codebase that I would never need to access from eclipse, so I don't need it to check them at all. However I can't figure out how to prevent it from doing so. I have tried marking them 'derived'. This prevents them from being included in the build process etc. But it doesn't seem to speed up the refresh process at all. It seems that Eclipse still checks their changed status.
I've also removed the unneeded folders from PDT's 'build path'. This does speed up the 'building workspace' process but again it doesn't speed up the actual refresh that precedes building (and which is what takes the most time).
Thanks all for your suggestions. Basically, JW was on the right track. Work locally.
To that end, I discovered a plugin called FileSync:
http://andrei.gmxhome.de/filesync/
This automatically copies the changed files to the network share. Works fantastically. I can now do a complete update/switch/refresh from within Eclipse in a couple of seconds.
Do you have to store the files on a share? Maybe you can set up some sort of automatic mirroring, so you work with the files locally, and they get automatically copied to the share. I'm in a similar situation, and I'd hate to give up the speed of editing files on my own machine.
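If you don't want a dedicated plugin for the mirroring, even a crude polling copy of newer files from the local working copy to the share does the job; a sketch (both paths and the interval are placeholders):

    # Minimal one-way mirror: copy any file that is newer locally than on the share.
    # LOCAL and SHARE are placeholders for the local working copy and the mapped drive.
    import os
    import shutil
    import time

    LOCAL = r"C:\work\project"
    SHARE = r"Z:\project"

    def sync_once():
        for root, _dirs, files in os.walk(LOCAL):
            for name in files:
                src = os.path.join(root, name)
                dst = os.path.join(SHARE, os.path.relpath(src, LOCAL))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
                    shutil.copy2(src, dst)  # copies contents and timestamps

    while True:
        sync_once()
        time.sleep(5)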
Given it's subversioned, why not have the files locally, and use a post commit hook to update to the latest version on the dev server after every commit? (or have a specific string in the commit log (eg '##DEPLOY##') when you want to update dev, and only run the update when the post commit hook sees this string).
Apart from refresh speed-ups, the advantage of this technique is that you can have broken files that you are working on in eclipse, and the dev server is still ok (albeit with an older version of the code).
The disadvantage is that you have to do a commit to push your saved files onto the dev server.
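A sketch of such a post-commit hook (the dev-server checkout path is a placeholder, and it assumes the machine running the hook has svnlook and a working copy it is allowed to update):

    #!/usr/bin/env python
    # post-commit hook: Subversion passes the repository path and revision as arguments.
    # It updates the dev checkout only when the log message contains the ##DEPLOY## marker.
    import subprocess
    import sys

    repos, rev = sys.argv[1], sys.argv[2]
    DEV_CHECKOUT = "/var/www/dev-site"  # hypothetical working copy on the dev server

    log = subprocess.run(
        ["svnlook", "log", "-r", rev, repos],
        capture_output=True, text=True, check=True,
    ).stdout

    if "##DEPLOY##" in log:
        subprocess.run(["svn", "update", DEV_CHECKOUT], check=True)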
I solved this problem by changing "File Transfer Buffer Size" at:
Window -> Preferences -> Remote Systems -> Files
and changing the "File transfer buffer size" Download (KB) and Upload (KB) values to a higher value. I set them to 1000 KB; by default they are 40 KB.
Use the offline folders feature in Windows by right-clicking the share and selecting "Make available offline".
It can save a lot of time and round-trip delay in the file-sharing protocol.
Using svn externals with the revision flag for the non-changing stuff might prevent Subclipse from refreshing those files on update. Then again, it might not. Since you'd have to make some changes to the structure of your Subversion repository to get it working, I would suggest you do some simple testing before doing it for real.
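For reference, pinning the non-changing folders to a fixed revision would look something like this (folder names, relative URLs and the revision number are made up; as above, test on a copy first):

    # Sketch: define svn:externals entries pinned to a fixed revision so that updates
    # leave those folders alone. Folder names, URLs and revision are placeholders.
    import subprocess

    EXTERNALS = "\n".join([
        "-r 1234 ^/vendor vendor",
        "-r 1234 ^/framework framework",
    ])
    subprocess.run(["svn", "propset", "svn:externals", EXTERNALS, "."], check=True)
    subprocess.run(["svn", "commit", "-m", "Pin stable folders via externals", "."], check=True)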