With version 2.3.0-alpha03, Room has a PrepackagedDatabaseCallback. The documentation says:-
This callback will be invoked after the pre-package DB is copied but before Room had a chance to open it and therefore before the RoomDatabase.Callback methods are invoked. This callback can be useful for updating the pre-package DB schema to satisfy Room's schema validation.
So how could I use this to circumvent the Invalid Schema expected .... found ....?
Could I use this to introduce triggers, as Room doesn't have annotations for the creation of triggers?
Note: this is intended as an ask-and-answer-your-own-question post.
So how could I use this to circumvent the Invalid Schema expected .... found ....?
The following, albeit long-winded, shows an example that corrects a schema accordingly.
Could I use this to introduce triggers, as Room doesn't have annotations for the creation of triggers?
Yes. Although this hasn't actually been tested, the example caters for the creation of the triggers.
Notes regarding the example
First, it should be noted that when the callback was initially being looked at, there were issues. These have now been addressed, BUT this requires 2.4.0-beta02 or greater. The issues were:-
Not allowing version 0 in the pre-packaged database (often this would be the case)
Empty Database passed to PrePackagedDatabaseCallback
Database updates applied in PrePackagedDatabaseCallback Lost/Undone
The fix applied was:-
Correctly open pre-package database during PrepackagedDatabaseCallback invocation.
The actual pre-packaged database was not being opened because only the file name was being used when creating the temporary open helper instead of the full path. The framework open helper will only open an existing file if the name of the database is a path (starts with '/'); otherwise it creates a new file. This change fixes it by using the absolute file path as the name.
The Flow
When control is passed to the callback, the asset database has already been copied. So, in theory, it's just a matter of ALTERing the tables and views (if any) to match the schema that Room expects.
Matching what Room expects with what Room finds often frustrates people. The example below overcomes minor/small/often-missed issues. It works by:-
Dropping all non-SQLite and non-Android components other than the tables (i.e. the views, indexes and triggers from the copied asset database),
Renaming the tables to allow the ones that room expects to be created,
Creating new tables using Room's expected schema,
as copied from the Java that Room generates
Copying the data from the renamed (asset) tables into the newly created tables
INSERT OR IGNORE is used so constraint conflicts, such as NOT NULL, UNIQUE and CHECK, will not result in an exception (change INSERT OR IGNORE to just INSERT to fail instead)
Creating the other components that Room expects (views, indexes and triggers (note Room only has triggers for FTS tables)).
Dropping the now redundant renamed tables
Doing a VACUUM to clean up the database,
and finally adding any triggers.
It does expect that the source (asset) has the columns in the correct order and that NULLs are not included in columns on which Room has a NOT NULL constraint. However, as INSERT OR IGNORE is used, such rows will simply not be inserted rather than resulting in an exception.
Other than copying some code from the generated Java, the process is automated and should cope with most assets without modification.
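The rename → create → copy → drop core of the flow can be sketched as plain SQL generation. The table name and CREATE statement below are hypothetical; in the real callback the CREATE statement is pasted from Room's generated Java:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the per-table SQL issued by the callback:
// rename the asset table aside, create Room's expected table,
// copy the rows (skipping constraint conflicts), drop the renamed table.
public class MigrationSql {
    static final String PREFIX = "asset_";

    static List<String> statementsFor(String table, String roomCreateSql) {
        List<String> sql = new ArrayList<>();
        sql.add("ALTER TABLE " + table + " RENAME TO " + PREFIX + table);
        sql.add(roomCreateSql); // pasted from the generated Java
        sql.add("INSERT OR IGNORE INTO " + table + " SELECT * FROM " + PREFIX + table);
        sql.add("DROP TABLE IF EXISTS " + PREFIX + table);
        return sql;
    }
}
```

The real code below generalises this over every table found in sqlite_master.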
The Code
The vast majority of the code is in the @Database class OtherDatabase. Note that this should cope with many databases with a little tailoring (see the comments for changes to make):-
@SuppressLint({"Range"}) /*<<<<< used due to bug/issue with getColumnIndex introduced with SDK 31 */
@Database(
entities = {Person.class, Company.class, CompanyPersonMap.class}, //<<<<< CHANGE ACCORDINGLY
version = OtherDatabase.DATABASE_VERSION, /* note due to shared usage of DBVERSION OtherDatabase used <<<<< CHANGE ACCORDINGLY */
exportSchema = false /* change to suit */
)
@TypeConverters({AllTypeConverters.class})
abstract class OtherDatabase extends RoomDatabase {
public static final String DATABASE_NAME = "otherprepackageddatabasecallbacktest.db"; //<<<<< CHANGE AS REQUIRED
public static final int DATABASE_VERSION = 1; //<<<<< CHANGE AS REQUIRED
public static final String ASSET_FILE_NAME = "prepackageddatabasecallbacktest.db"; //<<<<< CHANGE AS REQUIRED
/**
* sqlite_master table and column names !!!!!DO NOT CHANGE!!!!!
*/
private static final String SQLITE_MASTER_TABLE_NAME = "sqlite_master";
private static final String SQLITE_MASTER_COL_NAME = "name";
private static final String SQLITE_MASTER_COL_TYPE = "type";
private static final String SQLITE_MASTER_COL_TABLE_NAME = "tbl_name";
private static final String SQLITE_MASTER_COL_SQL = "sql";
abstract AllDao getAllDao(); //<<<<< CHANGE ACCORDINGLY
private static volatile OtherDatabase instance = null; //<<<<< CHANGE ACCORDINGLY
public static OtherDatabase /*<<<< CHANGE ACCORDINGLY */ getInstance(Context context) {
if (instance == null) {
instance = Room.databaseBuilder(context, OtherDatabase.class, OtherDatabase.DATABASE_NAME)
.allowMainThreadQueries() /*<<<<< USED FOR BREVITY CONVENIENCE */
.createFromAsset(ASSET_FILE_NAME, prePkgDbCallback) /* the 2nd parameter is THE CALLBACK to invoke */
.build();
}
return instance;
}
/* THE CALLBACK */
static final PrepackagedDatabaseCallback prePkgDbCallback = new PrepackagedDatabaseCallback() {
final static String assetTablePrefix = "asset_"; /* prefix used for renamed tables - should not need to be changed */
private List<SQLiteMasterComponent> sqLiteMasterComponentArray; /* store for sqlite_master extract */
@Override
public void onOpenPrepackagedDatabase(@NonNull SupportSQLiteDatabase db) {
super.onOpenPrepackagedDatabase(db);
sqLiteMasterComponentArray = buildComponentsList(db); /* gets relevant rows from sqlite_master */
dropNonTableComponents(sqLiteMasterComponentArray,db); /* everything except the tables */
renameTableComponents(sqLiteMasterComponentArray, assetTablePrefix,db); /* rename the tables using prefix */
/*<<<<< TAILOR (the db.execSQL's below) AS APPROPRIATE - SEE COMMENTS THAT FOLLOW >>>>>*/
/* copied from the @Database class's generated Java e.g. leave indexes till later
_db.execSQL("CREATE TABLE IF NOT EXISTS `Person` (`personid` INTEGER, `firstName` TEXT, `lastName` TEXT, `middleNames` TEXT, `dateOfBirth` INTEGER, PRIMARY KEY(`personid`))");
_db.execSQL("CREATE INDEX IF NOT EXISTS `index_Person_firstName` ON `Person` (`firstName`)");
_db.execSQL("CREATE INDEX IF NOT EXISTS `index_Person_lastName` ON `Person` (`lastName`)");
_db.execSQL("CREATE TABLE IF NOT EXISTS `company` (`companyid` INTEGER, `companyName` TEXT, `city` TEXT, `state` TEXT, `country` TEXT, `notes` TEXT, PRIMARY KEY(`companyid`))");
_db.execSQL("CREATE TABLE IF NOT EXISTS `company_person_map` (`companyid_map` INTEGER NOT NULL, `personid_map` INTEGER NOT NULL, PRIMARY KEY(`companyid_map`, `personid_map`), FOREIGN KEY(`companyid_map`) REFERENCES `company`(`companyid`) ON UPDATE CASCADE ON DELETE CASCADE , FOREIGN KEY(`personid_map`) REFERENCES `Person`(`personid`) ON UPDATE CASCADE ON DELETE CASCADE )");
_db.execSQL("CREATE INDEX IF NOT EXISTS `index_company_person_map_personid_map` ON `company_person_map` (`personid_map`)");
*/
/* Create the tables as per Room definitions - ***** CREATE TABLES COPIED FROM GENERATED JAVA *****
only indexes, views, triggers (for FTS) should be done after the data has been copied so :-
data is loaded faster as no index updates are required.
triggers don't get triggered when loading the data which could result in unexpected results
*/
db.execSQL("CREATE TABLE IF NOT EXISTS `Person` (`personid` INTEGER, `firstName` TEXT, `lastName` TEXT, `middleNames` TEXT, `dateOfBirth` INTEGER, PRIMARY KEY(`personid`));");
db.execSQL("CREATE TABLE IF NOT EXISTS `company` (`companyid` INTEGER, `companyName` TEXT, `city` TEXT, `state` TEXT, `country` TEXT, `notes` TEXT, PRIMARY KEY(`companyid`))");
db.execSQL("CREATE TABLE IF NOT EXISTS `company_person_map` (`companyid_map` INTEGER NOT NULL, `personid_map` INTEGER NOT NULL, PRIMARY KEY(`companyid_map`, `personid_map`), FOREIGN KEY(`companyid_map`) REFERENCES `company`(`companyid`) ON UPDATE CASCADE ON DELETE CASCADE , FOREIGN KEY(`personid_map`) REFERENCES `Person`(`personid`) ON UPDATE CASCADE ON DELETE CASCADE )");
copyData(sqLiteMasterComponentArray, assetTablePrefix,db); /* copy the data from the renamed asset tables to the newly created Room tables */
/* Create the other Room components - ***** CREATE ? COPIED FROM GENERATED JAVA *****
Now that data has been copied create other Room components indexes, views and triggers
(triggers would only be for FTS (full text search))
again sourced from generated Java
*/
db.execSQL("CREATE INDEX IF NOT EXISTS `index_Person_firstName` ON `Person` (`firstName`)");
db.execSQL("CREATE INDEX IF NOT EXISTS `index_Person_lastName` ON `Person` (`lastName`)");
db.execSQL("CREATE INDEX IF NOT EXISTS `index_company_person_map_personid_map` ON `company_person_map` (`personid_map`)");
dropRenamedTableComponents(sqLiteMasterComponentArray, assetTablePrefix,db); /* done with the renamed tables so drop them */
db.execSQL("VACUUM"); /* cleanup the database */
createTriggers(sqLiteMasterComponentArray,db); /* create any triggers */
}
};
static int dropNonTableComponents(List<SQLiteMasterComponent> components, SupportSQLiteDatabase db) {
int rv = 0;
for(SQLiteMasterComponent c: components) {
if (!c.type.equals("table") ) {
db.execSQL("DROP " + c.type + " IF EXISTS " + c.name);
rv++;
}
}
return rv;
}
static int dropRenamedTableComponents(List<SQLiteMasterComponent> components, String prefix, SupportSQLiteDatabase db) {
int rv = 0;
int maxForeignKeyCount = 0;
for (SQLiteMasterComponent c: components) {
if (c.type.equals("table") && c.foreignKeyCount > maxForeignKeyCount) {
maxForeignKeyCount = c.foreignKeyCount;
}
}
for (int i= maxForeignKeyCount; i >= 0; i--) {
for (SQLiteMasterComponent c: components) {
if (c.type.equals("table") && c.foreignKeyCount == i) {
db.execSQL("DROP " + c.type + " IF EXISTS " + prefix + c.name);
rv++;
}
}
}
return rv;
}
static int renameTableComponents(List<SQLiteMasterComponent> components, String prefix, SupportSQLiteDatabase db) {
int rv = 0;
db.execSQL("PRAGMA foreign_keys = ON"); // Just in case turn foreign keys on
for(SQLiteMasterComponent c: components) {
if (c.type.equals("table")) {
db.execSQL("ALTER TABLE " + c.name + " RENAME TO " + prefix + c.name);
rv++;
}
}
return rv;
}
/*
NOTE tables with fewest Foreign Key definitions done first
NOTE makes an assumption that this will not result in FK conflicts
TODO should really be amended to ensure that a table with FKs is only attempted when all of its parent tables have been loaded
*/
static int copyData(List<SQLiteMasterComponent> components, String prefix, SupportSQLiteDatabase db) {
int rv = 0;
int maxForeignKeyCount = 0;
for (SQLiteMasterComponent c: components) {
if (c.type.equals("table") && c.foreignKeyCount > maxForeignKeyCount) {
maxForeignKeyCount = c.foreignKeyCount;
}
}
for (int i=0; i <= maxForeignKeyCount; i++) {
for (SQLiteMasterComponent c: components) {
if (c.type.equals("table") && c.foreignKeyCount == i) {
db.execSQL("INSERT OR IGNORE INTO " + c.name + " SELECT * FROM " + prefix + c.name + ";");
rv++;
}
}
}
return rv;
}
static int createTriggers(List<SQLiteMasterComponent> components, SupportSQLiteDatabase db) {
int rv = 0;
for (SQLiteMasterComponent c: components) {
if (c.type.equals("trigger")) {
//TODO should really check if the SQL includes IF NOT EXISTS and if not add IF NOT EXISTS
db.execSQL(c.sql);
rv++;
}
}
return rv;
}
/**
* Build the list of required SQLiteMasterComponents to save having to access
* sqlite_master many times.
@param db the SupportSQLiteDatabase to access
@return the list of SQLiteMasterComponents extracted
*/
@SuppressLint("Range")
static List<SQLiteMasterComponent> buildComponentsList(SupportSQLiteDatabase db) {
final String FOREIGN_KEY_FLAG_COLUMN = "foreign_key_flag";
ArrayList<SQLiteMasterComponent> rv = new ArrayList<>();
Cursor csr = db.query("SELECT *," +
/* Column to indicate whether or not FK constraints appear to have been defined
* NOTE!! can be fooled
* e.g. if a column is defined as `my badly named FOREIGN KEY column ` ....
* */
"(" +
"instr(" + SQLITE_MASTER_COL_SQL + ",'FOREIGN KEY') > 0) + " +
"(instr(" + SQLITE_MASTER_COL_SQL + ",' REFERENCES ')> 0) " +
"AS " + FOREIGN_KEY_FLAG_COLUMN + " " +
"FROM " + SQLITE_MASTER_TABLE_NAME + " " +
/* do not want any sqlite tables or android tables included */
"WHERE lower(" + SQLITE_MASTER_COL_NAME + ") NOT LIKE 'sqlite_%' AND lower(" + SQLITE_MASTER_COL_NAME + ") NOT LIKE 'android_%'");
while (csr.moveToNext()) {
SQLiteMasterComponent component = new SQLiteMasterComponent(
csr.getString(csr.getColumnIndex(SQLITE_MASTER_COL_NAME)),
csr.getString(csr.getColumnIndex(SQLITE_MASTER_COL_TYPE)),
csr.getString(csr.getColumnIndex(SQLITE_MASTER_COL_TABLE_NAME)),
csr.getString(csr.getColumnIndex(SQLITE_MASTER_COL_SQL)),
csr.getInt(csr.getColumnIndex(FOREIGN_KEY_FLAG_COLUMN)),
0);
if (csr.getInt(csr.getColumnIndex(FOREIGN_KEY_FLAG_COLUMN)) > 0) {
component.foreignKeyCount = component.getForeignKeyCount(db);
}
rv.add(component);
}
csr.close();
return rv;
}
/**
* Class to hold a row from sqlite_master
*/
private static class SQLiteMasterComponent {
private String name;
private String type;
private String owningTable;
private String sql;
private int foreignKeyFlag = 0;
private int foreignKeyCount = 0;
SQLiteMasterComponent(String name, String type, String owningTable, String sql, int foreignKeyFlag, int foreignKeyCount) {
this.name = name;
this.type = type;
this.owningTable = owningTable;
this.sql = sql;
this.foreignKeyFlag = foreignKeyFlag;
this.foreignKeyCount = foreignKeyCount;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getType() {
return type;
}
public void setType(String type) {
this.type = type;
}
public String getOwningTable() {
return owningTable;
}
public void setOwningTable(String owningTable) {
this.owningTable = owningTable;
}
public String getSql() {
return sql;
}
public void setSql(String sql) {
this.sql = sql;
}
public int getForeignKeyFlag() {
return foreignKeyFlag;
}
public void setForeignKeyFlag(int foreignKeyFlag) {
this.foreignKeyFlag = foreignKeyFlag;
}
public boolean isForeignKey() {
return this.foreignKeyFlag > 0;
}
public int getForeignKeyCount() {
return foreignKeyCount;
}
public void setForeignKeyCount(int foreignKeyCount) {
this.foreignKeyCount = foreignKeyCount;
}
/**
* Retrieve the number of rows returned by PRAGMA foreign_key_list
@param db The SupportSQLiteDatabase to access
@return The number of rows i.e. the number of Foreign Key constraints
*/
private int getForeignKeyCount(SupportSQLiteDatabase db) {
int rv =0;
Cursor csr = db.query("SELECT count(*) FROM pragma_foreign_key_list('" + this.name + "');");
if (csr.moveToFirst()) {
rv = csr.getInt(0);
}
csr.close();
return rv;
}
}
}
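The TODO noted in copyData (only load a table once all of its parent tables have been loaded) could be sketched as a simple dependency-ordered pass. The table-to-parents mapping here is hypothetical; in practice it would be derived from PRAGMA foreign_key_list:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hedged sketch of dependency-ordered loading: a table becomes loadable
// only when every table it references has already been loaded.
public class LoadOrder {
    // parents: table -> tables it references via FOREIGN KEY constraints
    static List<String> orderForLoad(Map<String, Set<String>> parents) {
        List<String> order = new ArrayList<>();
        Set<String> loaded = new HashSet<>();
        while (loaded.size() < parents.size()) {
            boolean progressed = false;
            for (Map.Entry<String, Set<String>> e : parents.entrySet()) {
                if (!loaded.contains(e.getKey()) && loaded.containsAll(e.getValue())) {
                    order.add(e.getKey());
                    loaded.add(e.getKey());
                    progressed = true;
                }
            }
            // no table became loadable: the FK graph must be circular
            if (!progressed) throw new IllegalStateException("circular foreign keys");
        }
        return order;
    }
}
```

With the example schema this would load company and Person before company_person_map, whereas the FK-count heuristic above only approximates that ordering.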
Working Example
The Asset database
The asset database contains 3 tables, person, company and company_person_map
person DDL is
CREATE TABLE person (personid INTEGER PRIMARY KEY, firstName TEXT, lastName TEXT, middleNames TEXT, dateOfBirth DATE);
Room expects:-
CREATE TABLE IF NOT EXISTS `Person` (`personid` INTEGER, `firstName` TEXT, `lastName` TEXT, `middleNames` TEXT, `dateOfBirth` INTEGER, PRIMARY KEY(`personid`))
note the DATE vs INTEGER type for the dateOfBirth column, which Room would not accept, resulting in an expected ... found .... error
company DDL is
CREATE TABLE company (companyid INTEGER PRIMARY KEY, companyName TEXT, city TEXT, state TEXT, country TEXT, notes TEXT);
Room expects:-
CREATE TABLE IF NOT EXISTS `company` (`companyid` INTEGER, `companyName` TEXT, `city` TEXT, `state` TEXT, `country` TEXT, `notes` TEXT, PRIMARY KEY(`companyid`))
both are comparable, so Room would accept the asset table
company_person_map DDL is
CREATE TABLE company_person_map (
companyid_map INTEGER NOT NULL REFERENCES company(companyid) ON DELETE CASCADE ON UPDATE CASCADE,
personid_map INTEGER NOT NULL REFERENCES person(personid) ON DELETE CASCADE ON UPDATE CASCADE,
PRIMARY KEY (companyid_map, personid_map));
Room expects:-
CREATE TABLE IF NOT EXISTS `company_person_map` (
`companyid_map` INTEGER NOT NULL,`personid_map` INTEGER NOT NULL,
PRIMARY KEY(`companyid_map`, `personid_map`),
FOREIGN KEY(`companyid_map`) REFERENCES `company`(`companyid`) ON UPDATE CASCADE ON DELETE CASCADE ,
FOREIGN KEY(`personid_map`) REFERENCES `Person`(`personid`) ON UPDATE CASCADE ON DELETE CASCADE )
although they look significantly different, they are comparable, so Room would accept the asset table
The asset tables contain the following data:-
person
company
company_person_map
person 11 is not mapped to a company.
AllDao is
Note that none of the inserts are used, and not all of the queries either.
The invoking Activity is :-
public class MainActivity extends AppCompatActivity {
OtherDatabase dbOther;
AllDao daoOther;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
dbOther = OtherDatabase.getInstance(this);
daoOther = dbOther.getAllDao();
for(Person p: daoOther.getAllPeople()) {
Log.d("DBINFO",
"Person is " + p.firstName
+ " " + p.middleNames
+ " " + p.lastName
+ " Date of Birth is " + p.dateOfBirth
+ " ID is " + p.id
);
}
for(CompanyWithPeople cwp: daoOther.getCompanyWithPeople()) {
logCompany(cwp.company,"DBINFO","");
for(Person p: cwp.person) {
logPerson(p,"DBINFO","\n\t");
}
}
}
void logCompany(Company c, String tag, String lineFeed) {
Log.d(tag,lineFeed + "Company is " + c.companyName
+ " Location is " + c.city + ", " + c.state + ", " + c.country
+ " ID is " + c.companyId
+ "\n\t Notes: " + c.notes
);
}
void logPerson(Person p, String tag, String lineFeed) {
Log.d(tag, lineFeed + p.firstName + " " + p.middleNames
+ " " + p.lastName + " Date of Birth is " + p.dateOfBirth
+ " ID is " + p.id
);
}
}
Result
2021-11-28 19:56:01.928 D/DBINFO: Person is Robert John Smith Date of Birth is Sat Dec 27 17:46:33 GMT+10:00 1969 ID is 1
2021-11-28 19:56:01.929 D/DBINFO: Person is Julie Mary Armstrong Date of Birth is Mon Jan 12 23:22:04 GMT+10:00 1970 ID is 2
2021-11-28 19:56:01.929 D/DBINFO: Person is Andrea Susan Stewart Date of Birth is Mon Jan 05 04:56:09 GMT+10:00 1970 ID is 3
2021-11-28 19:56:01.929 D/DBINFO: Person is Mary Belinda Allway Date of Birth is Mon Jan 12 00:15:21 GMT+10:00 1970 ID is 4
2021-11-28 19:56:01.929 D/DBINFO: Person is Lisa Elizabeth Brooks Date of Birth is Sat Jan 03 03:51:21 GMT+10:00 1970 ID is 5
2021-11-28 19:56:01.930 D/DBINFO: Person is Stephen Colin Cobbs Date of Birth is Tue Jan 06 14:01:55 GMT+10:00 1970 ID is 6
2021-11-28 19:56:01.930 D/DBINFO: Person is Breane Cath Davidson Date of Birth is Thu Jan 01 22:30:14 GMT+10:00 1970 ID is 7
2021-11-28 19:56:01.930 D/DBINFO: Person is Trevor Arthur Frankston Date of Birth is Sat Jan 10 03:47:02 GMT+10:00 1970 ID is 8
2021-11-28 19:56:01.930 D/DBINFO: Person is George Howard Erksine Date of Birth is Sun Jan 11 00:47:02 GMT+10:00 1970 ID is 9
2021-11-28 19:56:01.930 D/DBINFO: Person is Uriah Stanley Jefferson Date of Birth is Mon Dec 29 19:11:31 GMT+10:00 1969 ID is 10
2021-11-28 19:56:01.931 D/DBINFO: Person is Valerie Alana Singleton Date of Birth is Thu Jan 01 11:45:07 GMT+10:00 1970 ID is 11
2021-11-28 19:56:01.931 D/DBINFO: Person is Vladimir Oscar Whitworth Date of Birth is Sat Jan 10 00:29:45 GMT+10:00 1970 ID is 12
2021-11-28 19:56:01.936 D/DBINFO: Company is Allbright Construction Location is Sydney, NSW, Australia ID is 1
Notes:
2021-11-28 19:56:01.936 D/DBINFO: Julie Mary Armstrong Date of Birth is Mon Jan 12 23:22:04 GMT+10:00 1970 ID is 2
2021-11-28 19:56:01.936 D/DBINFO: Mary Belinda Allway Date of Birth is Mon Jan 12 00:15:21 GMT+10:00 1970 ID is 4
2021-11-28 19:56:01.937 D/DBINFO: Stephen Colin Cobbs Date of Birth is Tue Jan 06 14:01:55 GMT+10:00 1970 ID is 6
2021-11-28 19:56:01.937 D/DBINFO: Trevor Arthur Frankston Date of Birth is Sat Jan 10 03:47:02 GMT+10:00 1970 ID is 8
2021-11-28 19:56:01.937 D/DBINFO: Uriah Stanley Jefferson Date of Birth is Mon Dec 29 19:11:31 GMT+10:00 1969 ID is 10
2021-11-28 19:56:01.937 D/DBINFO: Vladimir Oscar Whitworth Date of Birth is Sat Jan 10 00:29:45 GMT+10:00 1970 ID is 12
2021-11-28 19:56:01.937 D/DBINFO: Company is Dextronics Location is Slough, Berkshire, England ID is 2
Notes:
2021-11-28 19:56:01.937 D/DBINFO: Robert John Smith Date of Birth is Sat Dec 27 17:46:33 GMT+10:00 1969 ID is 1
2021-11-28 19:56:01.938 D/DBINFO: Andrea Susan Stewart Date of Birth is Mon Jan 05 04:56:09 GMT+10:00 1970 ID is 3
2021-11-28 19:56:01.938 D/DBINFO: Lisa Elizabeth Brooks Date of Birth is Sat Jan 03 03:51:21 GMT+10:00 1970 ID is 5
2021-11-28 19:56:01.938 D/DBINFO: Breane Cath Davidson Date of Birth is Thu Jan 01 22:30:14 GMT+10:00 1970 ID is 7
2021-11-28 19:56:01.938 D/DBINFO: George Howard Erksine Date of Birth is Sun Jan 11 00:47:02 GMT+10:00 1970 ID is 9
Related
I'm fetching rows from an Excel sheet in my application that holds attendance records from the biometric machine. In order to get the best result I have to remove the redundant data. For that I have to manage check-in and check-out timings at regular intervals: first a check-in time for entering, then a check-out time for lunch, then again a check-in for returning, and a last check-out for going home. Meanwhile the rows in Excel contain multiple check-ins and check-outs, as employees tend to punch more than once for both.
I have managed to get the records from Excel and add them to a DataTable. Now I'm struggling with the sequencing and sorting part to achieve my desired result. Below is my code.
protected void btnSaveAttendance_Click(object sender, EventArgs e)
{
try
{
if (FileUpload1.HasFile && Path.GetExtension(FileUpload1.FileName) == ".xls")
{
using (var excel = new OfficeOpenXml.ExcelPackage(FileUpload1.PostedFile.InputStream))
{
var tbl = new DataTable();
var ws = excel.Workbook.Worksheets.First();
var hasHeader = true; // adjust accordingly
// add DataColumns to DataTable
foreach (var firstRowCell in ws.Cells[1, 1, 1, ws.Dimension.End.Column])
tbl.Columns.Add(hasHeader ? firstRowCell.Text
: String.Format("Column {0}", firstRowCell.Start.Column));
// add DataRows to DataTable
int startRow = hasHeader ? 2 : 1;
for (int rowNum = startRow; rowNum <= ws.Dimension.End.Row; rowNum++)
{
var wsRow = ws.Cells[rowNum, 1, rowNum, ws.Dimension.End.Column];
DataRow row = tbl.NewRow();
foreach (var cell in wsRow)
row[cell.Start.Column - 1] = cell.Text;
tbl.Rows.Add(row);
}
var distinctNames = (from row in tbl.AsEnumerable()
select row.Field<string>("Employee Code")).Distinct();
DataRow[] dataRows = tbl.Select().OrderBy(u => u["Employee Code"]).ToArray();
var ss = dataRows.Where(p => p.Field<string>("Employee Code") == "55").ToArray();
}
}
}
catch (Exception ex) { }
}
The result I'm getting is:
Employee Code Employee Name Date Time In / Out
55 Alex 12/27/2018 8:59 IN
55 Alex 12/27/2018 8:59 IN
55 Alex 12/27/2018 13:00 OUT
55 Alex 12/27/2018 13:00 OUT
55 Alex 12/27/2018 13:48 IN
55 Alex 12/27/2018 13:49 IN
55 Alex 12/27/2018 18:08 OUT
And I want to have first IN, then OUT, then IN, then OUT, iterating four times to generate the result.
Expected result is:
Employee Code Employee Name Date Time In / Out
55 Alex 12/27/2018 8:59 IN
55 Alex 12/27/2018 13:00 OUT
55 Alex 12/27/2018 13:48 IN
55 Alex 12/27/2018 18:08 OUT
You can try a GroupBy on the result to collapse rows that share the same timestamp, like below (column names taken from your sample):
ss = ss.GroupBy(x => new { Date = x["Date"], Time = x["Time"] }).Select(g => g.First()).ToArray();
Then build logic for the case where the result still has two successive INs or OUTs, for example like below.
Here `In` is assumed to be the name of the field holding the IN/OUT value.
var tt = new List<DataRow>();
foreach (DataRow row in ss)
{
    // keep a row only when its In/Out value differs from the last kept row
    if (tt.Count == 0 || !Equals(tt[tt.Count - 1]["In"], row["In"]))
        tt.Add(row);
}
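Although the question is C#, the collapse logic itself is language-neutral: keep a row only when its direction differs from the last kept row. A minimal sketch in Java over just the IN/OUT markers (the real rows would also carry employee, date and time):

```java
import java.util.ArrayList;
import java.util.List;

// Collapse runs of repeated punches: the first punch of each
// consecutive IN or OUT run is kept, the rest are dropped.
public class PunchFilter {
    static List<String> collapse(List<String> directions) {
        List<String> kept = new ArrayList<>();
        for (String d : directions) {
            if (kept.isEmpty() || !kept.get(kept.size() - 1).equals(d)) {
                kept.add(d);
            }
        }
        return kept;
    }
}
```

Because the first punch of each run is kept, the sample data (8:59 IN, 8:59 IN, 13:00 OUT, 13:00 OUT, 13:48 IN, 13:49 IN, 18:08 OUT) reduces to the expected four rows.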
I want to create a "select" such that, when selecting a year in another "select", it loads all the weeks of that year, from Monday to Friday:
For example, if I select 2018 my "select" would be something like this:
1 - 01/01/18 - 01/07/18
2 - 01/08/18 - 01/14/18
3 - 01/15/18 - 01/21/18
......
52 - 12/24/18 - 12/30/18
The data is prepared in the back-end with Java and then sent as an array of strings to the front-end.
Avoid legacy date-time classes
The modern solution uses java.time classes. These supplanted the terribly flawed legacy classes such as Date & Calendar.
Looping
Determine the first day of year. And the first day of the following year, as our limit.
Year year = Year.of( 2018 );
LocalDate firstOfYear = year.atDay( 1 );
LocalDate firstOfFollowingYear = year.plusYears( 1 ).atDay( 1 );
Use a TemporalAdjuster to move through time. The TemporalAdjusters class (note the plural) happens to offer an adjuster implementation for our needs, previousOrSame. Use that to move back in time to get a Monday, if the 1st of year is not already Monday.
TemporalAdjuster adjuster = TemporalAdjusters.previousOrSame( DayOfWeek.MONDAY );
LocalDate firstMondayOfYear = firstOfYear.with( adjuster );
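For example, since 2018-01-01 is itself a Monday, previousOrSame leaves it unchanged, while a date later in the week is pulled back to the preceding Monday; a quick sketch:

```java
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.temporal.TemporalAdjusters;

// previousOrSame(MONDAY) moves back to the nearest Monday,
// or stays put when the date is already a Monday.
public class AdjusterDemo {
    static LocalDate mondayOnOrBefore(LocalDate d) {
        return d.with(TemporalAdjusters.previousOrSame(DayOfWeek.MONDAY));
    }
}
```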
Define a list to hold our weekly report.
List < String > weeks = new ArrayList <>( 53 );
Define a formatter to generate the text for our weekly report. The DateTimeFormatter class can automatically localize, so we need not hard-code a specific format.
Locale locale = Locale.forLanguageTag( "es-ES" ); // Spain locale.
DateTimeFormatter formatter = DateTimeFormatter.ofLocalizedDate( FormatStyle.SHORT ).withLocale( locale );
Loop through the weeks of the year. We increment one week at a time, until hitting the following year.
int nthWeekOfYear = 0;
LocalDate localDate = firstMondayOfYear;
while ( localDate.isBefore( firstOfFollowingYear ) ) {
nthWeekOfYear++;
String output = nthWeekOfYear + " - " + localDate.format( formatter ) + " - " + localDate.plusDays( 6 ); // Using Fully-Closed weeks, where both beginning and end are inclusive. In contrast to Half-Open.
weeks.add( output );
// Set up next loop.
localDate = localDate.plusWeeks( 1 );
}
Convert from a List to an array, per your directions in the Question.
String[] results = weeks.toArray( new String[ weeks.size() ] );
Dump to console.
System.out.println( "results = " + Arrays.toString( results ) );
Streams
We could rewrite this code by using streams and lambdas. This requires Java 9+.
The LocalDate#datesUntil method produces a Stream of LocalDate objects.
While I do not necessarily recommend this code, I found it interesting that we can write the entire routine as a one-liner.
int inputYear = 2018;
List < String > weeks =
Year
.of( inputYear )
.atDay( 1 )
.with( TemporalAdjusters.previousOrSame( DayOfWeek.MONDAY ) )
.datesUntil( Year.of( inputYear ).plusYears( 1 ).atDay( 1 ) , Period.ofWeeks( 1 ) )
.map( localDate -> localDate.format( DateTimeFormatter.ofLocalizedDate( FormatStyle.SHORT ).withLocale( Locale.forLanguageTag( "es-ES" ) ) ) + " - " + localDate.plusDays( 6 ).format( DateTimeFormatter.ofLocalizedDate( FormatStyle.SHORT ).withLocale( Locale.forLanguageTag( "es-ES" ) ) ) )
.toList(); // Before Java 16, change to .collect( Collectors.toList()); or .collect( Collectors.toUnmodifiableList());
When run.
weeks = [1/1/18 - 7/1/18, 8/1/18 - 14/1/18, 15/1/18 - 21/1/18, 22/1/18 - 28/1/18, 29/1/18 - 4/2/18, 5/2/18 - 11/2/18, 12/2/18 - 18/2/18, 19/2/18 - 25/2/18, 26/2/18 - 4/3/18, 5/3/18 - 11/3/18, 12/3/18 - 18/3/18, 19/3/18 - 25/3/18, 26/3/18 - 1/4/18, 2/4/18 - 8/4/18, 9/4/18 - 15/4/18, 16/4/18 - 22/4/18, 23/4/18 - 29/4/18, 30/4/18 - 6/5/18, 7/5/18 - 13/5/18, 14/5/18 - 20/5/18, 21/5/18 - 27/5/18, 28/5/18 - 3/6/18, 4/6/18 - 10/6/18, 11/6/18 - 17/6/18, 18/6/18 - 24/6/18, 25/6/18 - 1/7/18, 2/7/18 - 8/7/18, 9/7/18 - 15/7/18, 16/7/18 - 22/7/18, 23/7/18 - 29/7/18, 30/7/18 - 5/8/18, 6/8/18 - 12/8/18, 13/8/18 - 19/8/18, 20/8/18 - 26/8/18, 27/8/18 - 2/9/18, 3/9/18 - 9/9/18, 10/9/18 - 16/9/18, 17/9/18 - 23/9/18, 24/9/18 - 30/9/18, 1/10/18 - 7/10/18, 8/10/18 - 14/10/18, 15/10/18 - 21/10/18, 22/10/18 - 28/10/18, 29/10/18 - 4/11/18, 5/11/18 - 11/11/18, 12/11/18 - 18/11/18, 19/11/18 - 25/11/18, 26/11/18 - 2/12/18, 3/12/18 - 9/12/18, 10/12/18 - 16/12/18, 17/12/18 - 23/12/18, 24/12/18 - 30/12/18, 31/12/18 - 6/1/19]
A little more reasonable would be extracting the formatter and the Year.
Locale locale = Locale.forLanguageTag( "es-ES" ); // Spain locale.
DateTimeFormatter formatter = DateTimeFormatter.ofLocalizedDate( FormatStyle.SHORT ).withLocale( locale );
Year year = Year.of( 2018 );
List < String > weeks =
year
.atDay( 1 )
.with( TemporalAdjusters.previousOrSame( DayOfWeek.MONDAY ) )
.datesUntil( year.plusYears( 1 ).atDay( 1 ) , Period.ofWeeks( 1 ) )
.map( localDate -> localDate.format( formatter ) + " - " + localDate.plusDays( 6 ).format( formatter ) )
.toList(); // Before Java 16, change to .collect( Collectors.toList()); or .collect( Collectors.toUnmodifiableList());
ISO 8601
If your definition of week complies with the ISO 8601 definition of week:
First day of week is Monday
Week # 1 of year contains the first Thursday of the calendar year
… then I suggest adding the ThreeTen-Extra library to your project to utilize the YearWeek class.
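For reference, java.time's built-in WeekFields.ISO already applies these two ISO-8601 rules, so week numbers can be checked without any extra dependency; a minimal sketch:

```java
import java.time.LocalDate;
import java.time.temporal.WeekFields;

// WeekFields.ISO: weeks start on Monday and week 1 of the
// week-based year contains the first Thursday of January.
public class IsoWeeks {
    static int weekNumber(LocalDate date) {
        return date.get(WeekFields.ISO.weekOfWeekBasedYear());
    }
    static int weekBasedYear(LocalDate date) {
        return date.get(WeekFields.ISO.weekBasedYear());
    }
}
```

Note that the week-based year can differ from the calendar year at the edges: Monday 2018-12-31 already belongs to week 1 of 2019.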
public class Weeks {
public static void main(String[] args) {
int year = 2018;
int weeks = getNumWeeksForYear(year);
for (int i = 1; i <= weeks; i++) {
Calendar calendar = Calendar.getInstance();
calendar.clear();
calendar.set(Calendar.WEEK_OF_YEAR, i);
calendar.set(Calendar.YEAR, year);
Date monday = calendar.getTime();
calendar.add(Calendar.DATE, 6);
Date sunday = calendar.getTime();
System.out.println("week" + i);
System.out.println(monday);
System.out.println(sunday);
}
}
public static int getNumWeeksForYear(int year) {
Calendar c = Calendar.getInstance();
c.set(year, 0, 1);
return c.getMaximum(Calendar.WEEK_OF_YEAR);
}
}
I'm a new Pig user.
I have an existing schema which I want to modify. My source data is as follows with 6 columns:
Name Type Date Region Op Value
-----------------------------------------------------
john ab 20130106 D X 20
john ab 20130106 D C 19
jphn ab 20130106 D T 8
jphn ab 20130106 E C 854
jphn ab 20130106 E T 67
jphn ab 20130106 E X 98
and so on. Each Op value is always C, T or X.
I basically want to split my data in the following way into 7 columns:
Name Type Date Region OpX OpC OpT
----------------------------------------------------------
john ab 20130106 D 20 19 8
john ab 20130106 E 98 854 67
Basically, split the Op column into 3 columns, one for each Op value. Each of these columns should contain the appropriate value from the Value column.
How can I do this in Pig?
One way to achieve the desired result:
IN = load 'data.txt' using PigStorage(',') as (name:chararray, type:chararray,
date:int, region:chararray, op:chararray, value:int);
A = order IN by op asc;
B = group A by (name, type, date, region);
C = foreach B {
bs = STRSPLIT(BagToString(A.value, ','),',',3);
generate flatten(group) as (name, type, date, region),
bs.$2 as OpX:chararray, bs.$0 as OpC:chararray, bs.$1 as OpT:chararray;
}
describe C;
C: {name: chararray,type: chararray,date: int,region: chararray,OpX:
chararray,OpC: chararray,OpT: chararray}
dump C;
(john,ab,20130106,D,20,19,8)
(john,ab,20130106,E,98,854,67)
Update:
If you want to skip the order by, which adds an additional reduce phase to the computation, you can prefix each value with its corresponding op in the tuple v, then sort the tuple fields with a custom UDF to get the desired OpX, OpC, OpT order:
register 'myjar.jar';
A = load 'data.txt' using PigStorage(',') as (name:chararray, type:chararray,
date:int, region:chararray, op:chararray, value:int);
B = group A by (name, type, date, region);
C = foreach B {
v = foreach A generate CONCAT(op, (chararray)value);
bs = STRSPLIT(BagToString(v, ','),',',3);
generate flatten(group) as (name, type, date, region),
flatten(TupleArrange(bs)) as (OpX:chararray, OpC:chararray, OpT:chararray);
}
where TupleArrange in myjar.jar is something like this:
import java.io.IOException;
import java.util.Arrays;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;
import org.apache.pig.impl.logicalLayer.schema.Schema;
public class TupleArrange extends EvalFunc<Tuple> {
private static final TupleFactory tupleFactory = TupleFactory.getInstance();
#Override
public Tuple exec(Tuple input) throws IOException {
try {
Tuple result = tupleFactory.newTuple(3);
Tuple inputTuple = (Tuple) input.get(0);
String[] tupleArr = new String[] {
(String) inputTuple.get(0),
(String) inputTuple.get(1),
(String) inputTuple.get(2)
};
Arrays.sort(tupleArr); //ascending
result.set(0, tupleArr[2].substring(1));
result.set(1, tupleArr[0].substring(1));
result.set(2, tupleArr[1].substring(1));
return result;
}
catch (Exception e) {
throw new RuntimeException("TupleArrange error", e);
}
}
#Override
public Schema outputSchema(Schema input) {
return input;
}
}
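The core trick in TupleArrange can be illustrated standalone: because every value was prefixed with its op letter (C, T or X), a plain lexicographic sort yields C, T, X order, and substring(1) strips the prefix again. A minimal sketch (the `arrange` helper name is made up here):

```java
import java.util.Arrays;

public class OpSortDemo {
    // Sorts op-prefixed values lexicographically (C.. < T.. < X..),
    // then strips the one-letter prefix, emitting OpX, OpC, OpT order.
    static String[] arrange(String[] prefixed) {
        String[] copy = prefixed.clone();
        Arrays.sort(copy); // ascending: C854, T67, X98
        return new String[] {
            copy[2].substring(1), // OpX
            copy[0].substring(1), // OpC
            copy[1].substring(1)  // OpT
        };
    }

    public static void main(String[] args) {
        // Values from the sample row, arriving in arbitrary order
        String[] out = arrange(new String[] {"X98", "C854", "T67"});
        System.out.println(Arrays.toString(out)); // [98, 854, 67]
    }
}
```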
We know String.format() accepts formats in this style:
%[argument_index$][flags][width][.precision]conversion
which is the well-known C printf() style, introduced in Java 1.5.
My task is generating complex formats that include repeated or optional parameters.
For example, with the format:
select %1 from %2 where %3
and the substitutions %1->'*', %2->'users', %3->'age>20', it returns:
select * from users where age>20
This kind of format is already supported by String.format().
However, I expect a format similar to:
select %1{, } from (%2(as %3)){,} (where %4 (and %5))?
When %1->'*', %2->'users', %3->null, %4->'age>20', %5->null, it returns:
select * from users where age>20
When %1->String{'name','age'}, %2->'users', %3->'u', %4->'age>20', %5->null, it returns:
select name, age from users as u where age>20
When %1->String{'name','age'}, %2->'users', %3->'u', %4->null, %5->null, it returns:
select name, age from users as u
When %1->String{'name','age'}, %2->String{'users','orders'}, %3->String{'u','o'}, %4->'age>20', %5->'u.id=o.userid', it returns:
select name, age from users as u, orders as o where age>20 and u.id=o.userid
I think you now understand what I mean. Is there a mature library for this kind of complex 'anti-regexp' work?
Maybe you are looking for a custom format provider (IFormatProvider combined with ICustomFormatter)?
class SqlFormatter : IFormatProvider, ICustomFormatter
{
    public object GetFormat(Type formatType)
    {
        return this;
    }

    public string Format(string format, object arg, IFormatProvider formatProvider)
    {
        StringBuilder concat = new StringBuilder();
        // e.g. "sqlfield:," splits into the kind ("sqlfield") and the separator (",")
        string[] formatParts = format.Split(':');
        switch (formatParts[0])
        {
            case "sqlfield":
                string sep = "";
                if (formatParts.Length > 1)
                {
                    sep = formatParts[1];
                }
                string[] fields = (string[]) arg;
                concat.Append(fields[0]);
                for (int fieldIndex = 1; fieldIndex < fields.Length; fieldIndex++)
                {
                    concat.Append(sep);
                    concat.Append(fields[fieldIndex]);
                }
                break;
            case "sqltables":
                concat.Append(arg.ToString());
                break;
            default:
                break;
        }
        return concat.ToString();
    }
}
Use it like this:
String sql = String.Format(
new SqlFormatter()
, "select {0:sqlfield:,} from {1:sqltables} "
, new string[]{"name","lname"}
, "person" );
Will give:
"select name,lname from person "
I'll leave the rest of the implementation (and robustness etc) to you...
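Since the original question mentions String.format() and Java 1.5, here is a comparable sketch in Java. The helper names joinOrEmpty and opt are invented for this illustration (not from any library); they show the two building blocks the template needs: a repeated group rendered with a separator, and an optional group that disappears when its value is null:

```java
import java.util.Arrays;
import java.util.List;
import java.util.StringJoiner;

public class SqlTemplate {
    // Repeated group: joins values with a separator; "" for null/empty lists
    static String joinOrEmpty(String sep, List<String> vals) {
        if (vals == null || vals.isEmpty()) return "";
        StringJoiner j = new StringJoiner(sep);
        for (String v : vals) j.add(v);
        return j.toString();
    }

    // Optional group: rendered only when the value is non-null
    static String opt(String prefix, String val) {
        return val == null ? "" : prefix + val;
    }

    public static void main(String[] args) {
        // Mirrors: select %1{, } from %2(as %3) (where %4)?
        String sql = "select " + joinOrEmpty(", ", Arrays.asList("name", "age"))
                + " from users" + opt(" as ", "u")
                + opt(" where ", "age>20");
        System.out.println(sql);
        // select name, age from users as u where age>20
    }
}
```

A full solution would parse the template string itself rather than hard-code the concatenation, but the same two primitives cover the repeated and optional cases in the question.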
I am using LINQ to SQL. My database table has three columns: Ref, Due_amount and Due_Date.
The data may look like this, for example:
10 02/08/2009 00:00:00 175.0000
10 02/09/2009 00:00:00 175.0000
10 02/10/2009 00:00:00 175.0000
10 02/11/2009 00:00:00 175.0000
10 02/12/2009 00:00:00 175.0000
10 02/01/2010 00:00:00 175.0000
My code below returns 6 elements and works; however, the Date is always 02/08/2009. If I change row 2's amount to 150.0000, it then returns the correct date of 02/09/2009.
Any ideas?
private static void PopulateInstalments(string Ref, ResponseMessage responseMsg)
{
    using (DAO dbContext = new DAO())
    {
        IEnumerable<profile> instalments = from instalment in dbContext.profile
                                           where instalment.ref == Ref
                                           select instalment;
        foreach (profile instalment in instalments)
        {
            if (responseMsg.Instalments == null)
                responseMsg.Instalments = new ArrayOfInstalments();

            Instalment tempInstalment = new Instalment();
            tempInstalment.DueAmount = instalment.Due_amount;
            tempInstalment.DueDate = instalment.Due_date == null ? "" : instalment.Due_date.ToString();
            responseMsg.Instalments.Add(tempInstalment);
        }
    }
}
Thanks
Richard
Ensure the primary key column is set correctly in the source (a SQL Server database in this case). When the mapped key does not uniquely identify each row, LINQ to SQL's identity tracking treats rows with the same key values as the same entity and hands back the first cached row for all of them, which is why every instalment showed the first row's date until a differing amount made the rows distinct.