Scenario:
Version 1.0.0.0 of my App uses certain IsolatedStorageSettings, say Key = "ID" and Value being an object holding the numbers 1, 2 and 3. Now I update my App to version 1.1.0.0, and the logic in the new version assumes four numbers for ID: number 3 becomes 4, and a new operation is mapped to the new number 3.
This calls for a data migration in the IsolatedStorageSettings at the time of App update.
My question is: is there any standard way of doing such a migration, since this seems like a standard scenario?
(If there is none, then I am planning to write logic in the Application class constructor that checks whether the Isolated Storage version number (stored as another setting) matches the current App version, and if not, runs the migration logic required for the current version.
Is this approach correct?)
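For illustration, this is roughly what I have in mind; the "AppVersion" key name and the MigrateTo110 helper are just placeholders, not a standard API:
var settings = IsolatedStorageSettings.ApplicationSettings;
string currentVersion = "1.1.0.0";

string storedVersion;
if (!settings.TryGetValue("AppVersion", out storedVersion) || storedVersion != currentVersion)
{
    MigrateTo110(settings);              // e.g. remap 3 -> 4 and insert the new 3
    settings["AppVersion"] = currentVersion;
    settings.Save();                     // persist the migrated settings
}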
I am not sure about any standard procedure, but you could put the following code in the Application_Launching method of your app.
if (IsolatedStorageSettings.ApplicationSettings.Count == 3)
{
    // remove all 3 old settings
    // e.g. IsolatedStorageSettings.ApplicationSettings.Remove("gps");
    // add the new settings
    // e.g. IsolatedStorageSettings.ApplicationSettings["offers"] = "5";
    // then persist them: IsolatedStorageSettings.ApplicationSettings.Save();
}
This isn't a question I've seen around; usually it's 'EPiServer isn't clearing the output cache'. I'm trying to achieve the opposite. Each time a page is published the entire cache is dropped, and as the client publishes several times a day, this is frustrating.
I'm using the [ContentOutputCache] attribute and have tried to implement an httpCacheVaryByCustom rule with an accompanying scheduled task in EPiServer to invalidate the cache when we decide to, i.e. bundling updates together and invalidating at a predetermined time.
I've tested this rule and it works using:
public override string GetVaryByCustomString(HttpContext context, string custom)
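The body looks roughly like this (a sketch; the "invalidateSiteCache" case matches the config below, and the "SiteCacheToken" cache key is just my placeholder name):
public class Global : System.Web.HttpApplication
{
    public override string GetVaryByCustomString(HttpContext context, string custom)
    {
        if (custom == "invalidateSiteCache")
        {
            // The scheduled task writes a new value under this key whenever we
            // decide to invalidate, so the vary-by string changes and cached
            // responses are regenerated on the next request.
            return HttpRuntime.Cache["SiteCacheToken"] as string ?? "default";
        }
        return base.GetVaryByCustomString(context, custom);
    }
}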
I was under the impression that using this type of caching rule would stop EPiServer from dumping my cache whenever something is published or media is uploaded.
It doesn't, though. Is there a way to stop this from happening?
I've had success using the standard [OutputCache] attribute with the same custom string rule; the only problem with this is that editors will always see a cached version of the page they are editing.
The application settings I have in my web.config for EPiServer are:
<applicationSettings globalErrorHandling="Off" operationCompatibility="DynamicProperties" uiSafeHtmlTags="b,i,u,br,em,strong,p,a,img,ol,ul,li" disableVersionDeletion="false"
httpCacheability="Public" uiEditorCssPaths="~/assets/css/styles.css, ~/assets/css/editor.css" urlRebaseKind="ToRootRelative"
pageUseBrowserLanguagePreferences="false" uiShowGlobalizationUserInterface="false" subscriptionHandler="EPiServer.Personalization.SubscriptionMail,EPiServer"
uiMaxVersions="20" pageValidateTemplate="false" utilUrl="~/util/"
uiUrl="~/EPiServer/CMS/" httpCacheExpiration="01:00:00" httpCacheVaryByCustom="invalidateSiteCache" />
A custom GetVaryByCustomString function will determine when the cache is invalidated, but any request for content that is using the ContentOutputCache is checked against a master cache key, Episerver.DataFactoryCache.Version. This version number is incremented any time content is published, updated, etc., and the cache is invalidated if the version number has changed.
To understand what you need to do, I recommend using a decompiler (e.g. DotPeek) and looking at the ContentOutputCacheAttribute and OutputCacheHandler classes in the Episerver dll.
You will need to:
Derive a new handler from EPiServer.Web.OutputCacheHandler
Create an alternative method to ValidateOutputCache(...) that still calls OutputCacheHandler.UseOutputCache(...) but ignores the cache version number
Derive a new attribute from ContentOutputCacheAttribute
Override the method OnResultExecuting(ResultExecutingContext filterContext) using the same logic as the current method (this is where a decompiler is useful), but that adds a callback to your new validate method instead of the current one. Unfortunately we can't inject the new handler because the validate method is passed statically.
e.g.
public override void OnResultExecuting(ResultExecutingContext filterContext)
{
    // Rest of method
    filterContext.HttpContext.Response.Cache.AddValidationCallback(new HttpCacheValidateHandler(CustomOutputCacheHandler.CustomValidateOutputCache), (object) tuple);
}
Use the new attribute in place of [ContentOutputCache]
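To make the shape of the handler concrete, here is a hedged sketch; the real body must be copied from the decompiled ValidateOutputCache for your Episerver version, minus the version check, so treat this as an outline rather than working code:
public class CustomOutputCacheHandler : EPiServer.Web.OutputCacheHandler
{
    // The signature must match System.Web.HttpCacheValidateHandler, because
    // the attribute registers it via Response.Cache.AddValidationCallback.
    public static void CustomValidateOutputCache(HttpContext context, object data, ref HttpValidationStatus validationStatus)
    {
        // Mirror the decompiled ValidateOutputCache logic here (it ends up
        // calling OutputCacheHandler.UseOutputCache(...)), but skip the
        // comparison against Episerver.DataFactoryCache.Version so that
        // publishing content no longer invalidates the cached entry.
    }
}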
I've got the task of updating a CRM plugin for a system migrating from CRM 2013 to 2016. The plugin fails because it tries to set the opportunity state to Won simply by updating the field, and you need to use the WinOpportunityRequest to do so.
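That is, something along these lines (a sketch; 3 is the default status reason value for Won, so check your customizations):
var winRequest = new WinOpportunityRequest
{
    OpportunityClose = new OpportunityClose
    {
        OpportunityId = new EntityReference(Opportunity.EntityLogicalName, opportunityId)
    },
    Status = new OptionSetValue(3) // default "Won" status reason
};
service.Execute(winRequest);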
The logic is as follows:
1. When the opportunity is won, the plugin executes and runs on the opportunityclose entity.
2. The plugin creates a new custom entity record (project) and updates several other records.
3. It gets the current opportunity by using the opportunityid of the opportunityclose entity.
4. It updates a field on the opportunity with a reference to the newly created project record.
5. That update is done through the Update() method.
At step 5 it fails, because when the plugin gets the current opportunity at step 3, that opportunity already has the state Won, and if you try to update the record with a new state it fails.
My question is: how can I get the opportunity when acting on the opportunityclose entity and update only that one single field? I do not need to set the state, as this is done by the standard CRM flow.
--Edit
The line of code that fetches the opportunity:
Xrm.Opportunity currentOpportunityObjectToUpdate = serviceContext.CreateQuery<Xrm.Opportunity>().First(x => x.Id == entityRef.Id);
The platform allows you to update closed opportunities, I just tried it to verify. What is the error you are getting?
In step 5, make sure you're only sending the attributes you're trying to update (opportunityid and the lookup to project). So, when you issue the update, don't reuse any pre-existing opportunity object that you either retrieved or created: doing so sends all attributes that are on the object, and the platform will process each attribute as if it were being changed, even if the value is unchanged. Instead, create a new opportunity object with just the id and project specified, something like this:
var opportunity = new Opportunity
{
    Id = idOfOpportunity,            // you may have to specify the id both here...
    OpportunityId = idOfOpportunity, // ...and here; it doesn't hurt to set both.
    new_ProjectId = new EntityReference("new_project", idOfProject) // lookups take an EntityReference; "new_project" is the assumed logical name
};
// Attach + UpdateObject makes SaveChanges issue an Update; AddObject would try to Create a new record.
context.Attach(opportunity);
context.UpdateObject(opportunity);
context.SaveChanges();
If you get stuck, you always have an easy workaround option: take the logic from #4 and move it to an async plugin on create of project (even a workflow should work).
Is there a way to define a connection to a new Solr core on the fly, based on dynamic data?
We have a scenario where our Solr installation has multiple Cores/Indexes for the same type of document, separated by date (so a given week's documents will be on Index 1, the previous week's on Index 2, etc).
So when I receive my query, I check to see the required date range, and based on it, I want to query a specific core. I don't know in advance, at startup, which cores I will have, since new ones can be created dynamically during runtime.
Using the built-in ServiceLocation provider, there's no way to link two different Cores to the same document class. But even if I use a different DI container (currently Autofac in my case), I still need to specify all Core URLs in advance, during component registration.
Is there a way to bypass this other than always creating a new Autofac container, resolving the ISolrOperations<> instance from it, and discarding it until the next time I need to connect to a core?
A comment from Mauricio Scheffer (the developer of SolrNet) confirmed that there's no built-in support for connecting to different index URLs on the fly. So instead of instantiating the internal objects myself, I used a hack on top of my existing Autofac-based DI container:
public ISolrOperations<TDocument> ConnectToIndex<TDocument>(string indexUrl)
{
    // Create a new Autofac container environment.
    ContainerBuilder builder = new ContainerBuilder();
    // Autofac-for-SolrNet config element.
    var cores = new SolrServers
    {
        new SolrServerElement
        {
            Id = indexUrl,
            DocumentType = typeof (TDocument).AssemblyQualifiedName,
            Url = indexUrl,
        }
    };
    // Create the Autofac container.
    builder.RegisterModule(new SolrNetModule(cores));
    var container = builder.Build();
    // Resolve the SolrNet object for the URL.
    return container.Resolve<ISolrOperations<TDocument>>();
}
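Note that this builds a brand-new container on every call and never disposes it. If that becomes a problem, one possible refinement (my own sketch, assuming each core URL serves a single document type; ConcurrentDictionary is from System.Collections.Concurrent) is to cache one container per URL:
private static readonly ConcurrentDictionary<string, IContainer> Containers =
    new ConcurrentDictionary<string, IContainer>();

public ISolrOperations<TDocument> GetOperationsFor<TDocument>(string indexUrl)
{
    // Build the container only the first time a given core URL is seen.
    var container = Containers.GetOrAdd(indexUrl, url =>
    {
        var builder = new ContainerBuilder();
        builder.RegisterModule(new SolrNetModule(new SolrServers
        {
            new SolrServerElement
            {
                Id = url,
                DocumentType = typeof (TDocument).AssemblyQualifiedName,
                Url = url,
            }
        }));
        return builder.Build();
    });
    return container.Resolve<ISolrOperations<TDocument>>();
}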
So I'm using Entity Framework Code First Migrations.
I make a change to my model, add a new manual migration and it gets the up script wrong.
So I delete the migration, having decided at the same time that I'm not going to make the change the way I thought. After deleting the migration class and resetting the model (i.e. setting it back as it was), I then change my model again.
When I generate a new migration, this migration acts as if it is migrating from the one that I deleted.
How does Entity Framework Code First know the last model state if you clean out and delete a migration?
And how do you reset this?
In your database, under "Tables / System Tables" (assuming you use SQL Server Management Studio), edit the table __MigrationHistory.
This had me stumped too, after I had deleted all migration *.cs files and VS still "knew" about old migrations!
You probably didn't delete the Designer file underneath it that contains information about automatic migrations up until that point.
http://msdn.microsoft.com/en-US/data/jj554735
Run the Add-Migration AddBlogRating command...
The migration also has a code-behind file that captures some metadata. This metadata will allow Code First Migrations to replicate the automatic migrations we performed before this code-based migration. This is important if another developer wants to run our migrations or when it’s time to deploy our application.
The code-behind is a file like 201206292305502_AddBlogRating.Designer.cs, underneath the manual migration class you created. It looks like:
public sealed partial class AddBlogRating : IMigrationMetadata
{
    string IMigrationMetadata.Id
    {
        get { return "201206292305502_AddBlogRating"; }
    }

    string IMigrationMetadata.Source
    {
        get { return "H4sIAAAAAAAEAOy9B2AcSZ...=="; }
    }

    string IMigrationMetadata.Target
    {
        get { return "H4sIAAAAAAAEAOy9B2AcSZ...=="; }
    }
}
Those two strings are base64-encoded dumps of your entire model prior to the migration and after it. The idea is that anything prior to the first manual migration logged was automatic, so when you apply all this to a fresh DB, it can look and say:
Manual1
Manual2
Check Source to determine the goal model before Manual1 and use the automatic approach to get there; apply Manual1; check Source on Manual2 and use the automatic approach to get there; apply Manual2; finally, use the automatic approach to get from there to the current compiled model state.
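As an aside, you can confirm that Source and Target are just compressed model dumps: "H4sI" is the GZip magic number in base64. A small hypothetical helper to peek inside one (not part of EF itself; GZipStream lives in System.IO.Compression):
static string DecodeModel(string base64)
{
    // The strings are base64-encoded, GZip-compressed XML dumps of the model.
    byte[] bytes = Convert.FromBase64String(base64);
    using (var gzip = new GZipStream(new MemoryStream(bytes), CompressionMode.Decompress))
    using (var reader = new StreamReader(gzip))
        return reader.ReadToEnd(); // EDMX-style XML you can diff by eye
}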
I am using EF in VS2010. I first created a DB and then chose "create model from DB". Then I went and chose "Add Code Generation Item"; everything looks good so far. Now I added a new table to my DB and chose "update model from DB". This is still OK. How do I tell VS2010 to generate a model file for that new table?
I end up deleting everything and repeating the steps over and over every time I make a change to the database. Any suggestions?
When you click on Add Code Generation Item inside the *.edmx designer, it will create two files:
YourModel.Context.tt (produces a strongly typed ObjectContext for the YourModel.edmx)
YourModel.tt (responsible for generating a file for each EntityType and ComplexType in the YourModel.edmx)
When you update your *.edmx you just need to right click on YourModel.tt and choose Run Custom Tool.
More info:
Because you're using this approach, I would recommend that you move the YourModel.tt file into a separate Class Library project (hold the Shift key and drag it to move it).
Modify:
string inputFile = @"YourModel.edmx";
to
string inputFile = @"..\YourNamespaceWhereEdmxIS\YourModel.edmx";
in your YourModel.tt.
Change the Custom Tool Namespace for your YourModel.Context.tt in the Properties window to match YourClassLibraryName.
Regards.