Migrating data with a pre-populated database (Android Room)

My app mostly works with a local database. For this I am using Room's support for pre-populating the database from an asset in my project, together with Hilt and other Jetpack components, and I use DB Browser for SQLite to create the db file:
Room.databaseBuilder(appContext, AppDatabase::class.java, "Sample.db")
.createFromAsset("database/myapp.db")
//.fallbackToDestructiveMigration()
//.addMigrations(MIGRATION_1_2)
.build()
The database is simple, with 3 columns: id, name and isAlive. isAlive is a boolean that the user toggles between true and false.
Suppose that initially there are only 10 rows. In the next update there will be 5 more rows and 1 new column, strength, whose value will be different for each of the 15 rows. How do I migrate this without losing the previous data while still adding the new data?
If I use .fallbackToDestructiveMigration(), all 15 rows with the 4 columns become visible, but the isAlive values are lost.
If I use .addMigrations(MIGRATION_1_2), my 5 new rows are not visible and the strength column is set to 0:
val MIGRATION_1_2 = object : Migration(1, 2) {
    override fun migrate(database: SupportSQLiteDatabase) {
        database.execSQL("ALTER TABLE user ADD COLUMN strength INTEGER NOT NULL DEFAULT 0")
    }
}

You could, in the Migration (as you need both the existing data and the new asset data), in addition to ALTERing the structure, open the asset as a database, extract its data and then, on a per-row basis:
try to insert the data, but IGNORE the insert if the row already exists (i.e. a duplicate id);
if the id exists, instead update only the columns that are not user-changeable (i.e. leave the isAlive column as-is).
I believe that you would have issues/complexities if you attempted this via Room.
The following is a working solution that instead uses the good old SQLiteDatabase to handle the prepackaged database and do the above.
@Database(entities = [User::class], exportSchema = false, version = 2 /* CHANGED FOR V2 */)
abstract class AppDatabase : RoomDatabase() {
    abstract fun getTheDAOs(): TheDAOs

    companion object {
        private var instance: AppDatabase? = null
        const val workingCopyExtension = "_wc" /* FOR MIGRATION */
        const val databaseName = "Sample.db" /* ADDED/SUGGESTED as value is used more than once */
        const val assetFileName = "database/myapp.db" /* ADDED/SUGGESTED as value is used more than once */
        lateinit var appContext: Context

        fun getInstance(context: Context): AppDatabase {
            appContext = context
            if (instance == null) {
                instance = Room.databaseBuilder(context, AppDatabase::class.java, databaseName)
                    .allowMainThreadQueries() /* For brevity and convenience of demo */
                    .addMigrations(MIGRATION_1_2)
                    .createFromAsset(assetFileName)
                    .build()
            }
            return instance as AppDatabase
        }

        val MIGRATION_1_2 = object : Migration(1, 2) {
            @SuppressLint("Range")
            override fun migrate(database: SupportSQLiteDatabase) {
                database.execSQL("ALTER TABLE user ADD COLUMN strength INTEGER NOT NULL DEFAULT 0")
                /* copy the asset file */
                getAssetFileCopy(context = appContext, assetFileName, databaseName)
                /* open the asset copy as an SQLiteDatabase */
                val asset = SQLiteDatabase.openDatabase(appContext.getDatabasePath(databaseName + workingCopyExtension).path, null, 0)
                /* Extract all the data from the asset (now a database) */
                val csr = asset.query("user", null, null, null, null, null, null, null)
                val cv = ContentValues()
                /* Loop through the extracted asset data */
                while (csr.moveToNext()) {
                    /* Prepare to INSERT OR IGNORE the row as per the Cursor */
                    cv.clear()
                    cv.put("id", csr.getInt(csr.getColumnIndex("id")))
                    cv.put("name", csr.getString(csr.getColumnIndex("name")))
                    cv.put("isAlive", csr.getInt(csr.getColumnIndex("isAlive")))
                    cv.put("strength", csr.getInt(csr.getColumnIndex("strength")))
                    /* Do the INSERT OR IGNORE, testing the returned value (-1 if not inserted, else 1 or greater) */
                    /* if inserted, the row did not exist, so the values are as per the asset and nothing more is needed */
                    /* if not inserted, the row existed, so only the strength should be applied, i.e. update as per the asset */
                    if (database.insert("user", SQLiteDatabase.CONFLICT_IGNORE, cv) <= 0) {
                        cv.clear()
                        cv.put("strength", csr.getInt(csr.getColumnIndex("strength")))
                        database.update("user", SQLiteDatabase.CONFLICT_IGNORE, cv, "id=?", arrayOf(csr.getString(csr.getColumnIndex("id"))))
                    }
                }
                /* Clean up (close the Cursor and the asset, delete the working copy) */
                csr.close()
                asset.close() /* not too worried about closing it as it has not been changed */
                File(appContext.getDatabasePath(databaseName + workingCopyExtension).path).delete()
            }
        }

        /* As it says, create a copy of the asset in the device's databases folder */
        fun getAssetFileCopy(context: Context, assetFileName: String, databaseName: String) {
            val assetInputStream = context.assets.open(assetFileName)
            val workingCopy = File(context.getDatabasePath(databaseName + workingCopyExtension).path)
            if (workingCopy.exists()) {
                workingCopy.delete()
            }
            assetInputStream.copyTo(workingCopy.outputStream())
        }
    }
}
Note that to reduce the possibility of typos, repeated names/values have largely been assigned to vals.
Note that queries are run on the main thread (via .allowMainThreadQueries()) for the convenience of testing.
Testing
V1 was originally loaded from the asset (screenshot of the original data omitted).
After loading, the isAlive values were all flipped using @Query("UPDATE user SET isAlive = NOT isAlive"), so the database then held the flipped values (App Inspection screenshot omitted), i.e. isAlive true became false and vice versa, reflecting user changes.
The asset was then changed to add the 5 new rows and the strength column (screenshot omitted); note that in the asset, isAlive is as per the original (even rows are true, i.e. not flipped).
Using the changed asset, the new strength column and V2, the resultant database (via App Inspection) showed:
5 new rows added
the 10 existing rows with the flipped isAlive
the 10 existing rows with the strength applied as per the asset
As a final test, the App was uninstalled and rerun at V2. The resulting database held the data as per the asset (i.e. for a new install, the isAlive values are not flipped).
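The merge rule at the heart of the answer (insert missing rows whole; for rows that already exist, apply only the non user-changeable strength column) is language-neutral. A minimal sketch in plain Java, with maps standing in for the two tables (UserRow and the merge method are hypothetical names, not part of the solution above):

```java
import java.util.Map;

// Hypothetical row type standing in for a "user" table row.
class UserRow {
    final int id;
    final String name;
    boolean isAlive;
    int strength;
    UserRow(int id, String name, boolean isAlive, int strength) {
        this.id = id; this.name = name; this.isAlive = isAlive; this.strength = strength;
    }
}

public class MergeSketch {
    /** Merge asset rows into the live table: insert missing rows whole,
     *  but for existing rows copy only the strength column, leaving isAlive untouched. */
    static void merge(Map<Integer, UserRow> liveTable, Iterable<UserRow> assetRows) {
        for (UserRow asset : assetRows) {
            UserRow existing = liveTable.get(asset.id);
            if (existing == null) {
                // Equivalent of the INSERT OR IGNORE succeeding: take the asset row as-is.
                liveTable.put(asset.id, asset);
            } else {
                // Row exists: apply strength from the asset, keep the user's isAlive.
                existing.strength = asset.strength;
            }
        }
    }
}
```

The same two-way decision is what the Kotlin migration makes per cursor row, with the insert's return value telling it which branch applies.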

Related

Entity Framework Core - Upsert entities from other database encounters tracking problems

I have a flatfile from a different database. I import it and map it to my application's entities. Because the flatfile does not contain ids, I cannot be sure that the entries I handle are not duplicates of what has already been added to my database earlier, or of what is in my context at this moment.
The error message I get is:
The instance of entity type 'Car' cannot be tracked because another
instance with the same key value for {'Make', 'Model'} is already
being tracked. When attaching existing entities, ensure that only one
entity instance with a given key value is attached. Consider using
'DbContextOptionsBuilder.EnableSensitiveDataLogging' to see the
conflicting key values.
An example:
Data rows from flatfile
Volvo V70 Steve
Volvo V70 John
Having mapped these rows, I try to put them in the db:
foreach (var row in flatFileRows)
{
    Car existingCar = null;
    if (dbContext.Cars.Any(c => c.Make == row.Make && c.Model == row.Model))
    {
        existingCar = dbContext.Cars
            .SingleOrDefault(c => c.Make == row.Make && c.Model == row.Model);
    }
    //I also do the same for existingDriver
    var car = existingCar != null
        ? existingCar
        : new Car()
        {
            Make = row.Make,
            Model = row.Model,
            Drivers = new List<Driver>()
        };
    var driver = new Driver()
    {
        CarId = existingCar != null ? existingCar.Id : 0,
        Name = row.Name
    };
    car.Drivers.Add(driver);
    dbContext.Cars.Update(car); //Second time we hit this the error is thrown
}
dbContext.SaveChanges();
Make and Model are set to keys in the schema because I don't want duplicate entries of the car models.
The above example is simplified.
What I want is to check whether I have already put a car with these attributes in the db, and then build from that entity according to my schema. I don't care to track any entries, disconnected or otherwise, because I just need to populate the database.
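The tracking error above ultimately comes from materializing two distinct in-memory instances that share the same (Make, Model) key. The de-duplication idea (keep exactly one instance per key and reuse it across rows) can be sketched language-neutrally; the Car type and field names here are hypothetical stand-ins for the question's entities:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for the Car entity in the question.
class Car {
    final String make, model;
    final List<String> drivers = new ArrayList<>();
    Car(String make, String model) { this.make = make; this.model = model; }
}

public class UpsertSketch {
    /** Build one Car instance per (make, model) key, attaching every driver to it.
     *  Each row is a hypothetical {make, model, driverName} triple. */
    static Map<String, Car> dedupe(List<String[]> flatFileRows) {
        Map<String, Car> carsByKey = new LinkedHashMap<>();
        for (String[] row : flatFileRows) {
            String make = row[0], model = row[1], driver = row[2];
            // computeIfAbsent guarantees a single shared instance per key,
            // which is what a change tracker requires of attached entities.
            Car car = carsByKey.computeIfAbsent(make + "|" + model,
                    k -> new Car(make, model));
            car.drivers.add(driver);
        }
        return carsByKey;
    }
}
```

Applied to the flatfile example, the two Volvo V70 rows resolve to one Car carrying both drivers, so the same object would be attached once rather than twice.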

Copy Document/Page excluding field/column or setting new value

I'm using version 8 of Kentico and I have a custom document/page that has a unique numeric identity field. Unfortunately this data comes from an existing source, and because I cannot set the primary key ID of the page's coupled data when using the API, I was forced to have this separate field.
I ensure the field is new and unique during the DocumentEvents.Insert.Before event using node.SetValue("ItemIdentifier", newIdentifier); if the node's class name matches, etc. So that workflow is handled as well, I also implemented the same method for WorkflowEvents.SaveVersion.Before.
This works great when creating a new item; however, if we attempt to Copy an existing node, the source identifier remains unchanged. I was hoping I could exclude the field from being copied, but have yet to find an example of that.
So I went ahead and implemented a solution to ensure a new identifier is created when a node is being copied by handling the DocumentEvents.Copy.Before and DocumentEvents.Copy.After.
Unfortunately in my case the e.Node from these event args is useless; I could not for the life of me get the field modified. When I opened ILSpy I realized why: the node copy method always grabs a fresh copy of the node from the database! This renders DocumentEvents.Copy.Before useless if you want to modify fields before a node is copied.
So I instead pass the identifier along in a RequestStockHelper, which the Insert handler, further down the cycle, uses to generate a new identifier for the cloned node.
Unfortunately, unbeknownst to me, if we copy a published node the value in the database is correct, but its NodeXML value is not.
This IMO sounds like a Kentico bug: it's either retaining the source node's NodeXML/version, or for some reason node.SetValue("ItemIdentifier", newIdentifier); is not working properly in WorkflowEvents.SaveVersion.Before since it's a published and workflowed node.
Anyone come across a similar issue to this? Is there any other way I can configure a field to be a unique numeric identity field, that is not the primary key, and is automatically incremented when inserted? Or exclude a field from the copy procedure?
As a possible solution, could you create a new document in DocumentEvents.Copy.Before and copy the values over from the copied document, then cancel the copy event itself?
OK, it turns out this is not a Kentico issue but the way versions are saved.
If you want to compute a unique value in DocumentEvents.Insert.Before, you need to pass it along to WorkflowEvents.SaveVersion.Before, because the node sent to the latter is the same as the original from the former. In other words, whatever changes you make to the node in Insert are not carried over to SaveVersion; you need to handle this manually.
So here's the pseudo code that handles the copy scenario and insert of a new item of compiled type CineDigitalAV:
protected override void OnInit()
{
    base.OnInit();
    DocumentEvents.Insert.Before += Insert_Before;
    DocumentEvents.Copy.Before += Copy_Before;
    WorkflowEvents.SaveVersion.Before += SaveVersion_Before;
}

private void Copy_Before(object sender, DocumentEventArgs e)
{
    if (e.Node != null)
    {
        SetCopyCineDigitalIdentifier(e.Node);
    }
}

private void SaveVersion_Before(object sender, WorkflowEventArgs e)
{
    if (e.Document != null)
    {
        EnsureCineDigitalIdentifier(e.Document);
    }
}

private void Insert_Before(object sender, DocumentEventArgs e)
{
    if (e.Node != null)
    {
        EnsureCineDigitalIdentifier(e.Node);
    }
}

private void SetCopyCineDigitalIdentifier(TreeNode node)
{
    int identifier = 0;
    if (node.ClassName == CineDigitalAV.CLASS_NAME)
    {
        identifier = node.GetValue<int>("AVCreation_Identifier", 0);
        // flag the next insert to create a new identifier
        if (identifier > 0)
            RequestStockHelper.Add("Copy-Identifier-" + identifier, true);
    }
}

private void EnsureCineDigitalIdentifier(TreeNode node)
{
    int identifier = 0;
    if (node.ClassName == CineDigitalAV.CLASS_NAME)
    {
        identifier = node.GetValue<int>("AVCreation_Identifier", 0);
    }
    if (identifier == 0 || (identifier != 0 && RequestStockHelper.Contains("Copy-Identifier-" + identifier)))
    {
        // generate a new identifier for new items or those being copied
        RequestStockHelper.Remove("Copy-Identifier-" + identifier);
        int newIdentifier = GetNewCineDigitalIdentifierAV(node.NodeSiteName);
        node.SetValue("AVCreation_Identifier", newIdentifier);
        // store the new identifier so that SaveVersion includes it
        RequestStockHelper.Add("Version-Identifier-" + identifier, newIdentifier);
    }
    else if (RequestStockHelper.Contains("Version-Identifier-" + identifier))
    {
        // handle SaveVersion with the value from the Insert
        int newIdentifier = ValidationHelper.GetInteger(RequestStockHelper.GetItem("Version-Identifier-" + identifier), 0);
        RequestStockHelper.Remove("Version-Identifier-" + identifier);
        node.SetValue("AVCreation_Identifier", newIdentifier);
    }
}

private int GetNewCineDigitalIdentifierAV(string siteName)
{
    return (DocumentHelper.GetDocuments<CineDigitalAV>()
        .OnSite(siteName)
        .Published(false)
        .Columns("AVCreation_Identifier")
        .OrderByDescending("AVCreation_Identifier")
        .FirstObject?
        .AVCreation_Identifier ?? 0) + 1;
}

Bind 2 TableViews and a LineChart together in JavaFX 2

My requirement is to use 2 tables and 1 chart to visualize my data set. Each data element contains its (unique) name and a bunch of data belonging to it. The first table shows the name of every dataset I have, and the second table shows the data belonging to the dataset (row) selected in the first table. The second table and the chart show the same data, both belonging to the dataset (row) selected in the first table. I have achieved half of this behavior (linking both tables) with the code below.
The problem I have now is: I can't figure out how to make the chart display the same data as the second table. My idea is to set the chart data in the ChangeListener, but the table's data model is likely not compatible with the chart. The readingData field in the TableDataModel class is an ObservableList, which is the type the chart accepts, but it is an ObservableList of ReadData, not of XYChart.Data. Is there any way I can use the XYChart.Data in the ReadData class?
My main class:
ObservableList<TableDataModel> tableData = FXCollections.observableArrayList();
// Other code omitted

/* Create the first table */
TableView<TableDataModel> myTable = new TableView<TableDataModel>();
TableColumn nameColumn = new TableColumn("Name");
nameColumn.setCellValueFactory(new PropertyValueFactory<TableDataModel, String>("name"));
// Other columns omitted
myTable.setItems(tableData);
myTable.getColumns().addAll(nameColumn, ...);

// When the user selects any row, update the second table's items
myTable.getSelectionModel().selectedItemProperty().addListener(new ChangeListener<TableDataModel>()
{
    @Override
    public void changed(ObservableValue<? extends TableDataModel> arg0, TableDataModel arg1, TableDataModel arg2)
    {
        dataTable.setItems(arg2.readingData);
    }
});

/* The second table */
TableView<ReadData> dataTable = new TableView<ReadData>();
TableColumn valueColumn = new TableColumn("Value");
valueColumn.setCellValueFactory(new PropertyValueFactory<ReadData, Integer>("value"));
// Other columns omitted
dataTable.setItems(null);
dataTable.getColumns().addAll(valueColumn, ...);
TableDataModel.java:
private final SimpleStringProperty name;
// Other SimpleStringProperty and its get and set method omitted
public final ObservableList<ReadData> readingData = FXCollections.observableArrayList();
ReadData.java:
// I use XYChart.Data here because I think that this might be useful when I want to show this on the chart
private SimpleObjectProperty<XYChart.Data<Integer, Integer>> value;
// Other property
// Provide this to make below line work
// valueColumn.setCellValueFactory(new PropertyValueFactory<ReadData, Integer>("value"));
public int getValue()
{
    return value.get().getYValue();
}
AFAIK you need to put your XYChart.Data objects into an XYChart.Series first, then put that series into the chart via chart.getData().add(series). I think you can do this in your myTable selection change listener: create a new series there (or modify an existing, previously created and added one) and add all the value.get() values from the ReadData objects. See the LineChart example in the documentation.
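A minimal sketch of building such a series, assuming ReadData exposes its XYChart.Data through some accessor (the accessor is passed in here as a function, since its exact name in the question's class is not shown):

```java
import java.util.List;
import java.util.function.Function;
import javafx.scene.chart.XYChart;

public class ChartSyncSketch {
    /** Build a series from the selected dataset's readings so the chart can
     *  mirror the second table; pointOf is whatever accessor the row type
     *  provides for its XYChart.Data (hypothetical here). */
    static <T> XYChart.Series<Integer, Integer> toSeries(
            String name, List<T> readings,
            Function<T, XYChart.Data<Integer, Integer>> pointOf) {
        XYChart.Series<Integer, Integer> series = new XYChart.Series<>();
        series.setName(name);
        for (T r : readings) {
            series.getData().add(pointOf.apply(r));
        }
        return series;
    }
}
```

In the selection listener you would then call something like chart.getData().setAll(toSeries(...)) alongside dataTable.setItems(arg2.readingData); alternatively keep one series added to the chart once and only reset its getData() contents, so the chart does not need to be repopulated.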

Subsonic Single WHERE clause

Is it possible to apply a WHERE clause to a SubSonic query?
For example, I can get a single record based on id...
db.Single<Storage>(id);
But how can I get a single record based on a simple WHERE clause?
db.Single<Storage>(WHERE columnname == "value");
that's possible:
// Will return a Storage instance with property IsNew = true if the record does not exist,
// since an object created with new can never be null
var storage1 = new Storage(1); // id = 1
var storage2 = new Storage(Storage.Columns.ColumnName, "value");

// Will return null if the record is not found (subsonic3 only)
var storage3 = (from s in Storage
                where s.ColumnName == "value"
                select s).SingleOrDefault();

// Will throw an exception if the record is not found (subsonic3 only)
var storage4 = (from s in Storage
                where s.ColumnName == "value"
                select s).Single();
Since db is a partial class you can extend it. Just create a new file within the same namespace (but another folder in your solution). This applies to SubSonic 2 but will be similar in SubSonic 3, I think.
public static partial class DB
{
    public static T Single<T>(String columnName, Object columnValue) where T : RecordBase<T>, new()
    {
        return Select().From<T>()
            .Where(columnName).IsEqualTo(columnValue)
            .ExecuteSingle<T>();
    }
}
Thanks for the above, this was a help, and eventually I simplified it to the below...
db.Single<Storage>(s => s.ColumnName == "value");

Auditing in Entity Framework

After going through Entity Framework I have a couple of questions on implementing auditing in Entity Framework.
I want to store each column value that is created or updated in a separate audit table.
Right now I am calling SaveChanges(false) to save the records in the DB (the changes in the context are still not reset). Then I get the added/modified records and loop through GetObjectStateEntries. But I don't know how to get the values of columns that are filled by a stored proc, i.e. createdate, modifieddate etc.
Below is the sample code I am working on it.
// Get the changed entries (i.e. records)
IEnumerable<ObjectStateEntry> changes = context.ObjectStateManager.GetObjectStateEntries(EntityState.Modified);
// Iterate each ObjectStateEntry (for each record in the updated/modified collection)
foreach (ObjectStateEntry entry in changes)
{
    // Iterate the columns in each record and get their old and new values respectively
    foreach (var columnName in entry.GetModifiedProperties())
    {
        string oldValue = entry.OriginalValues[columnName].ToString();
        string newValue = entry.CurrentValues[columnName].ToString();
        // Do some auditing by sending entityname, columnname, oldvalue, newvalue
    }
}

changes = context.ObjectStateManager.GetObjectStateEntries(EntityState.Added);
foreach (ObjectStateEntry entry in changes)
{
    if (entry.IsRelationship) continue;
    var columnNames = (from p in entry.EntitySet.ElementType.Members
                       select p.Name).ToList();
    foreach (var columnName in columnNames)
    {
        string newValue = entry.CurrentValues[columnName].ToString();
        // Do some auditing by sending entityname, columnname, value
    }
}
Here you have two basic options:
Do it at the database level
Do it in the C# code
Doing it at the database level means using triggers. In that case there is no difference whether you are using Enterprise Library or another data access technology.
To do it in the C# code you would add a log table to your data model and write the changes to the log table. When you call SaveChanges, both the changes to the data and the information you wrote to the log table are saved together.
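The log-table idea is language-agnostic: before saving, diff each modified entity's original and current column values and queue one audit row per changed column in the same unit of work. A minimal sketch in plain Java (all names hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;

// Hypothetical audit row: one per changed column.
record AuditEntry(String entity, String column, Object oldValue, Object newValue) {}

public class AuditSketch {
    /** Diff the original and current column snapshots of one entity and
     *  produce the audit rows to be saved alongside the data changes. */
    static List<AuditEntry> diff(String entity,
                                 Map<String, ?> original,
                                 Map<String, ?> current) {
        List<AuditEntry> log = new ArrayList<>();
        for (Map.Entry<String, ?> col : current.entrySet()) {
            Object before = original.get(col.getKey());
            if (!Objects.equals(before, col.getValue())) {
                log.add(new AuditEntry(entity, col.getKey(), before, col.getValue()));
            }
        }
        return log;
    }
}
```

This mirrors what the questioner's loop over OriginalValues/CurrentValues does; the point of the log-table option is that the resulting audit rows are persisted in the same save as the data itself.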
Are you inserting the new record using a stored proc? If not (i.e. you are newing up an object, setting values, inserting on submit and then saving changes), the new object's id will be automatically loaded into the id property of the object you created. If you are using a stored proc to do the insert, then you need to return the @@IDENTITY from the proc as a return value.
EX:
StoreDataContext db = new StoreDataContext(connString);
Product p = new Product();
p.Name = "Hello Kitty Back Scratcher";
p.CategoryId = 5;
db.Products.Add(p);
try
{
    db.SaveChanges();
    //p.Id is now set
    return p.Id;
}
finally
{
    db.Dispose();
}
