I have a Windows Phone app that relies on an XML data file that comes packaged with the app. When the app is run for the first time on a phone, I load the file into isolated storage, and from then on the app uses the isolated storage version of the data. In the next version of my app (the Marketplace update), the XML file will have more elements. How do I update the data file once per app update (new version on the Marketplace)?
I thought I could change the file name in the isolated storage, but that would leave trash behind. I could also check for exceptions when I load the XML file, but are there any other, more elegant ways? I do not want to check for the old file in the isolated storage every time my app runs.
The ideal scenario would be to put in a piece of code that would be executed once when the new version of the app is loaded onto the phone. Is there a way to do that?
To my knowledge there isn't an "out of the box" event that will run a single time at the first run of an app after it was installed/updated.
You'd have to flag the run yourself, as you already suggest (save the current version, and compare it against the running version at each start of the app to see if the app was updated).
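A minimal sketch of that flag, assuming Windows Phone's IsolatedStorageSettings (the "LastSeenVersion" key name is arbitrary):
// Requires using System.IO.IsolatedStorage;
var settings = IsolatedStorageSettings.ApplicationSettings;

string lastSeenVersion;
settings.TryGetValue("LastSeenVersion", out lastSeenVersion);

string currentVersion = GetCurrentlyRunningAppVersion(); // however you derive it

if (lastSeenVersion != currentVersion)
{
    RefreshResources(); // your once-per-update work
    settings["LastSeenVersion"] = currentVersion;
    settings.Save();
}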
I think I now understand what you want.
Add the XML file as a resource.
Use GetResourceStream to get the content of the XML.
Note that the name for the resource would be something like /DllName;component/Folder/ResourceName
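For example, a sketch assuming a file Data/AppData.xml with Build Action = Resource in an assembly named MyApp:
// Requires using System.Windows; and using System.Xml.Linq;
var resource = Application.GetResourceStream(
    new Uri("/MyApp;component/Data/AppData.xml", UriKind.Relative));
using (var stream = resource.Stream)
{
    var doc = XDocument.Load(stream);
    // overwrite the stale copy in isolated storage with this content
}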
Here is what I did:
In the constructor method of my DataLayer class, I added the following code:
private bool AppIsOld
{
    get
    {
        string storedVersion = GetStoredAppVersion(); // previously "seen" version, persisted between runs
        string currentVersion = GetCurrentlyRunningAppVersion();
        return storedVersion != currentVersion;
    }
}

private string GetCurrentlyRunningAppVersion()
{
    // FullName looks like "MyApp, Version=1.0.0.0, Culture=neutral, ...",
    // so take the value of the Version pair. Requires using System.Reflection;
    var asm = Assembly.GetExecutingAssembly();
    var parts = asm.FullName.Split(',');
    return parts[1].Split('=')[1];
}
And then I run the following check:
if (AppIsOld)
RefreshResources(); //do whatever to refresh resources
The code for the GetCurrentlyRunningAppVersion() function is taken from here.
This solution is not quite what I had in mind, because it runs every time the class constructor is called, while I wanted something that would run once upon a version update.
I have a Core Data implementation where the stack is loaded using an NSPersistentContainer. During setup, I set the NSPersistentHistoryTrackingKey option on the NSPersistentStoreDescription:
description.setOption(true as NSNumber, forKey: NSPersistentHistoryTrackingKey)
I'm trying to implement progressive migrations along the lines of https://williamboles.me/progressive-core-data-migration/ (fantastic article, by the way!).
The first problem I ran into was forcing checkpointing in the WAL. The code is pretty straightforward:
func forceWALCheckpointingForStore(at storeURL: URL) {
    // metadata(at:) and compatibleModelForStoreMetadata(_:) are helper
    // extensions from the linked article, not Core Data API.
    guard let metadata = NSPersistentStoreCoordinator.metadata(at: storeURL),
          let currentModel = NSManagedObjectModel.compatibleModelForStoreMetadata(metadata) else {
        return
    }

    do {
        let persistentStoreCoordinator = NSPersistentStoreCoordinator(managedObjectModel: currentModel)
        // Opening the store with journal_mode=DELETE forces SQLite to checkpoint
        // and remove the -wal/-shm files.
        let options = [NSSQLitePragmasOption: ["journal_mode": "DELETE"]]
        let store = try persistentStoreCoordinator.addPersistentStore(
            ofType: NSSQLiteStoreType, configurationName: nil, at: storeURL, options: options)
        try persistentStoreCoordinator.remove(store)
    } catch {
        fatalError("failed to force WAL checkpointing, error: \(error)")
    }
}
The problem occurs when the NSPersistentStoreCoordinator runs addPersistentStore. I get the following error:
Store opened without NSPersistentHistoryTrackingKey but previously had been opened with NSPersistentHistoryTrackingKey - Forcing into Read Only mode store at url...
This makes perfect sense. The Core Data framework created additional tables (NSPersistentHistoryToken, NSPersistentHistoryTransaction, etc.) that you can access to manage changes in history. If you "open" or "access" the database without the history tracking option, the Core Data framework puts the database in read-only mode to avoid data integrity issues.
As far as I can see in the documentation, the NSPersistentHistoryTrackingKey can only be set on the container, and not on the NSPersistentStoreCoordinator directly (via options).
Trying to stay "within the boundaries of the Container", I decided to call addPersistentStore (with the Pragmas Option) on the "persistentStoreCoordinator" property of the Container. The problem with this approach is that the container is instantiated with the latest version NSManagedObjectModel. Because we're smack bang in the middle of a migration process, the migration hasn't happened yet. Attempting to manipulate the store via the store coordinator inside the container in any way results in this error:
The model used to open the store is incompatible with the one used to create the store.
I therefore had to instantiate a container with the existing version of the model. Further research also revealed that I can force a WAL checkpoint via the NSPersistentContainer (and avoid having to use the coordinator) by setting the following pragma option on the container's description:
description.setValue("DELETE" as NSObject, forPragmaNamed: "journal_mode")
I created a temporary pre-migration container, instantiated it with the current (old) version of the managed object model, and set the above pragmas option to force a WAL checkpoint. It worked like a charm! The existing database was now ready to be migrated to the new model version(s).
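A sketch of that pre-migration container, assuming storeURL and oldModel (the NSManagedObjectModel version currently on disk) are already in hand; "MyModel" is a placeholder name:
let description = NSPersistentStoreDescription(url: storeURL)
description.setOption(true as NSNumber, forKey: NSPersistentHistoryTrackingKey)
// Force the WAL checkpoint by switching the journal mode while the store loads.
description.setValue("DELETE" as NSObject, forPragmaNamed: "journal_mode")

let container = NSPersistentContainer(name: "MyModel", managedObjectModel: oldModel)
container.persistentStoreDescriptions = [description]
container.loadPersistentStores { _, error in
    if let error = error {
        fatalError("pre-migration checkpoint failed: \(error)")
    }
}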
The migration process kicks off and I hit a wall here:
try migrationManager.migrateStore(from: currentURL, sourceType: NSSQLiteStoreType, options: nil, with: mappingModel, toDestinationURL: destinationURL, destinationType: NSSQLiteStoreType, destinationOptions: nil)
Once again, we're back to the original problem:
Store opened without NSPersistentHistoryTrackingKey but previously had been opened with NSPersistentHistoryTrackingKey - Forcing into Read Only mode store at url...
I want to migrate my database WITH the history tracking option enabled. After all, the history tracking tables created by the Core Data framework must also be migrated to the new version of the database. But I don't know how to achieve this with the available Core Data classes. It is always best to stay as close to the vendor-recommended implementations as possible and not resort to weird workarounds.
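One idea I have not verified (treat this as an untested sketch; migrationManager is an NSMigrationManager instance): migrateStore accepts options dictionaries for the source and destination stores, so the history-tracking key could conceivably be passed through there:
// Untested: pass the history-tracking key as a store option on both sides.
let historyOptions: [AnyHashable: Any] = [NSPersistentHistoryTrackingKey: true as NSNumber]
try migrationManager.migrateStore(from: currentURL,
                                  sourceType: NSSQLiteStoreType,
                                  options: historyOptions,
                                  with: mappingModel,
                                  toDestinationURL: destinationURL,
                                  destinationType: NSSQLiteStoreType,
                                  destinationOptions: historyOptions)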
Here's what I know:
With lightweight migration options set on my Container, I can create new model versions to my heart's content!
With the NSPersistentHistoryTrackingKey also set on the same container, the Core Data framework automatically migrates my store from one model version to the next without missing a beat!
Therefore, if I now want to migrate my database manually with all the options set on the container, I should be able to do it; if Core Data can do it, so can I... yes?
The documentation on these issues is a bit light. One of two things is happening here. Either the documentation is not updated OR ... I'm trying to do the weirdest thing known to man and it should never be done .... ever...
I am writing an app which uses CoreData using NSPersistentContainer to save data.
While I am developing the app, I would like to:
examine the data directly
back up the data
see what happens when I change the bundle id
I assume the data is physically stored somewhere, but I’m not sure where to look.
By default NSPersistentContainer stores the database inside the app container, under the directory Library/Application Support.
To locate the full path in the simulator, you can print the applicationSupportDirectory using the urls(for:in:) function of the default FileManager:
print(FileManager.default.urls(for: .applicationSupportDirectory, in: .userDomainMask).first?.path ?? "nil")
If you are running your app on an actual device you can download the application container following this answer.
For sandboxed (macOS) apps the location looks like this:
~/Library/Containers/…/Data/Library/Application Support/…
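You can also ask the container itself where its store lives; a quick sketch, where container is your already-loaded NSPersistentContainer:
// Print the on-disk location of the container's first persistent store.
if let storeURL = container.persistentStoreDescriptions.first?.url {
    print("Core Data store: \(storeURL.path)")
}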
I make heavy use of unit tests for my developer needs (POCs, etc.). One particular test method had a line that went...
var file = @"D:\data\file.eml";
So I am referencing some file on my file system.
Now, in a team, when other people try to run my "personal" tests (POCs or whatever), they don't have that file at that path... hence the tests fail. The way we'd normally make this work is to provide the test data and let each user modify the test code so that it runs on their computer.
Is there any Visual Studio way to manage this particular problem?
What's the benefit of this? Well, people can review the test data (an email in my case) as well as the method I wrote for testing, and can raise defects in TFS (the source control system) relating to it if need be.
One way I often handle data files for unit test projects is to set the data files as Resources. (Note that this link is for VS2010, but I have used this approach through VS2015 RC.)
In the project with the data file: Project -> Properties -> Resources, and choose to add a resource file if the project doesn't already have one. Select Files in the resource pane and click Add Resource, or just drag and drop your data files onto the resource manager. By default resources are marked internal, so to access the resources from another project you have several options:
In the assembly with the data files, add the following to your AssemblyInfo.cs file; this allows only the specified assemblies to access the internal resources:
[assembly: InternalsVisibleTo("NameSpace.Of.Other.Assembly.To.Access.Resources")]
Create a simple provider class to abstract away the entire Resource mechanism, such as:
public static class DataProvider
{
    public static string GetDataFile(int id)
    {
        // Resource files are keyed by name; the contents come back as a string.
        return Properties.Resources.ResourceManager.GetString(
            string.Format("resource_file_name_{0}", id));
    }
}
Change the resources' access modifier to public (not an approach I have used).
You can then access the data file (now a resource) from a unit test such as:
[TestCase(1)]
public void X_Does_Y(int id)
{
    //Arrange
    var dataAsAString = Assembly_With_DataFile.DataProvider.GetDataFile(id);
    //Act
    var result = classUnderTest.X(dataAsAString);
    //Assert
    Assert.NotNull(result);
}
Note that when using data files as resources, the ResourceManager handles the file I/O and returns the file contents as strings.
Update: The test method in the example above is from an NUnit project and is not meant to imply process, but a mechanism by which a data file can be accessed from another project.
What you'd normally do is add the file to your project and check it into TFS. Then make sure the item's settings are:
Build action: Content
Copy to output: If newer
Then put an attribute on your Test method or Test class:
[DeploymentItem("file.eml")]
You can optionally specify an output directory:
[DeploymentItem("file.eml", "Directory to place the item")]
If you put the files in subdirectories of your test project, then adjust the attribute accordingly:
[DeploymentItem(@"testdata\file.eml")]
The file will be copied to the working directory of your test project and that makes it easy to access from your test code. Either load the file directly, or pass the path to any method that needs it.
If your tests expect the files in a specific location, you can use a simple System.IO.File.Copy() or System.IO.File.Move() to put the item in the place you need it to be.
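For instance, a hypothetical MSTest sketch (the C:\inbox target path is made up; requires using System.IO; and Microsoft.VisualStudio.TestTools.UnitTesting):
[TestMethod]
[DeploymentItem("file.eml")]
public void Parser_Reads_Eml_File()
{
    // DeploymentItem copied file.eml into the test's working directory;
    // relocate it if the code under test expects a fixed path.
    File.Copy("file.eml", @"C:\inbox\file.eml", overwrite: true);

    var contents = File.ReadAllText(@"C:\inbox\file.eml");
    Assert.IsFalse(string.IsNullOrEmpty(contents));
}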
The process is explained here on MSDN.
I suppose the most straightforward way is to simply add whatever you need to the project and set the correct value for Copy to Output Directory. In other words, say your data is in a text file:
Add text file to your test project
Right-click to access properties window
Set the Copy to Output Directory field to Always or Copy if newer.
Now if you build the test project, the file gets copied to your output directory. This lets you write unit test code of this fashion:
var dataFile = File.OpenRead("data.txt");
I have had some experience writing container-bound scripts, but am totally new to web apps.
How do I debug a web app (e.g. look at variable values, step through code, etc.)? In a container-bound script it was easy because I could set breakpoints and use the Apps Script debugger - how do I go about this in a web app, e.g. when I execute a doPost?
In his excellent book "Google Script", James Ferreira advocates setting up your own development environment with three browser windows; one for the code, one for the live view (in Publish, Deploy as web app, you are provided with a "latest code" link that will update the live view to the latest save when it is refreshed), and one for a spreadsheet that logs errors (using try/catch wrapped around bits of code you want to keep an eye on).
In Web Apps, even the most basic debugging of variables through Logger.log() does not work!
A great solution to have at least simple variable logging available is Peter Herrmann's BetterLog for Apps Script. It allows you to log into a spreadsheet (the same as your working spreadsheet or a separate one).
Installation is very simple - just add an external resource (see the Github readme) and a single line of code to override the standard Logger object:
Logger = BetterLog.useSpreadsheet('your-spreadsheet-key-goes-here');
Remember that the spreadsheet you give here as a parameter will be used for the logging output, and thus must be writable by anybody!
BetterLog will create a new sheet called "Log" in the given spreadsheet and will write each log call into a separate row of that sheet.
So, for me, I debug the front end using the browser inspector. I haven't found a way to step through code there yet, but you can use the debugger statement in your JavaScript (along with console.log) to stop the code and check variables.
To debug the back end, what I've been doing is writing my functions like this:
function test_doSomething() {
  var payload = '{"item1": 100, "item2": 200}'; // <- copy/paste from the execution log (must be valid JSON)
  backend_doSomething(payload);
}

function backend_doSomething(payload) {
  Logger.log(payload);
  var params = JSON.parse(payload);
  ...
}
Then, after refreshing your project, you can look at the back end's executions, grab the payload from the log, and paste it into your test_doSomething() function.
From there, you are re-creating the call that you want to debug and you can run that, stepping through the backend code as usual.
My company currently builds separate MSIs for all of our clients, even though the app is 100% the same across the board (with a single exception: an ID in the app.config).
I would like to show them that we can publish in one place with ClickOnce and simply add a query string parameter for each client's installer.
Example: http://mysite.com/setup.exe?ID=1234-56-7890
The issue that I'm having is that the above ("ID=1234...") is not being passed along to the "myapplication.application". What is happening instead is, the app is being installed successfully, and it is running the first time with an activation context, but the "ActivationUri" does not contain any query string values.
Is there a way to pass query string values FROM THE INSTALLER URL to the application's launch URL? If so, how?
After much searching (and discussing), the answer is simply that the current version of ClickOnce doesn't work that way. The installer does not pass the URL on to the application upon its first run.
Here is what I have done for a workaround (and it works great).
Change my setup package to have all of the required files uncompressed and loose (as opposed to using a CAB file, or embedding them in the installer).
Make an ASP.NET application (using Routing for URL handling) that listens for a request to "mysite.com/Installer/00123/Setup.exe"
Note: the route should listen for "/Installer/{ID}/*" where {ID} is 5 digits.
There is actually no directory called "00123"; rather, I'm using ASP.NET Routing to pick up those requests and map them to the actual directory that has the installer files in it.
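A hypothetical sketch of such a route registration (ASP.NET MVC style; the controller and action names are invented):
// Matches /Installer/{id}/<any file>, where {id} is exactly 5 digits.
routes.MapRoute(
    name: "Installer",
    url: "Installer/{id}/{*fileName}",
    defaults: new { controller = "Installer", action = "GetFile" },
    constraints: new { id = @"\d{5}" });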
I then hijack the request: I parse the setup.exe to find the embedded URL that tells the installer program where to find the rest of the files, then replace "/00000/" with the request URL that the user went to - in this case "00123".
As each file is being requested, I know which "version" of the file to send, because the ClickOnce Installer will be looking for "mysite.com/Installer/00123/SomeFile.dll" (or whatever).
Instead of using a 5-digit ID, you could use a GUID... it's up to you.
This solution works great for our organization... we currently have 37 clients who require unique customizations to their installer package, but we only have to actually build and publish ONE installer package and simply use the hijack method above.
At this point we have placeholders that we swap out so that it's easy to customize installers for as many clients as we want.
Example: in the app.config file we have displayName="{OrgName}" which is automatically replaced by one of the values in the database.
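A hypothetical sketch of that swap (templatePath and the client lookup are invented names):
// Replace placeholders in the served config with per-client values from the database.
string config = File.ReadAllText(templatePath);
config = config.Replace("{OrgName}", client.OrgName);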
For me, "http://mysite.com/myapplication.application?id=1234-56-7890" seems to do the trick.
I know this is outdated, but I just wanted to provide the current solution.
To retrieve querystring parameters in a ClickOnce application:
Point the app/download/setup link to the application (with .application extension), not "setup.exe"
Add this function to your ClickOnce application to retrieve the querystring parameter collection:
private NameValueCollection GetQueryStringParameters()
{
    var nameValueTable = new NameValueCollection();
    if (ApplicationDeployment.IsNetworkDeployed)
    {
        // ActivationUri is null when the app wasn't launched via its URL,
        // so guard before reading the query string.
        Uri activationUri = ApplicationDeployment.CurrentDeployment.ActivationUri;
        if (activationUri != null)
        {
            nameValueTable = HttpUtility.ParseQueryString(activationUri.Query);
        }
    }
    return nameValueTable;
}
Then to get a querystring param value:
var querystringParams = GetQueryStringParameters();
string param_value = querystringParams["param_name"];
Don't forget the Usings:
using System.Collections.Specialized;
using System.Deployment.Application;
using System.Web;
Source: https://learn.microsoft.com/en-us/visualstudio/deployment/how-to-retrieve-query-string-information-in-an-online-clickonce-application?view=vs-2019