ASP.NET, C#, MVC 3, Code First project
I'm trying to import data from an Excel spreadsheet. I've formatted all cells as Text.
A sample row in the Import worksheet is as follows.
Account Card ThreeCode Route
04562954830287127 32849321890233127 183 154839254
04562954830287128 32849321890233128 233
04562954830287129 32849321890233129 082
04562954830287130 32849321890233130 428
When I run in debug and drill down into the ds DataSet, the Account and Card columns are imported as strings, while the ThreeCode and Route columns are imported as doubles. The problem arises with the three-digit number starting with 0 (082) in data row 3: it is imported as a System.DBNull and comes through empty. I need to be able to import three-digit codes with leading zeros. Is there a way to force the import to be all strings, or another way to approach this problem? I searched the web and haven't found a solution. This will run from a browser, so anything to do with the registry, DLL, or INI files on the local machine is not an option. The import code is below. Thank you in advance for any help.
public ActionResult ExcelToDS(string Path = @"C:\File.xls")
{
    string strConn = "Provider=Microsoft.Jet.OLEDB.4.0;" +
                     "Data Source=" + Path + ";" +
                     "Extended Properties=Excel 8.0;";
    OleDbConnection conn = new OleDbConnection(strConn);
    conn.Open();
    string strExcel = "select * from [Import$]";
    OleDbDataAdapter myCommand = new OleDbDataAdapter(strExcel, conn);
    DataTable dt = new DataTable();
    DataSet ds = new DataSet();
    myCommand.Fill(ds, "table1");
Ah yes, the joys of the Excel driver. What happens is it determines each column's data type from the first few rows (eight by default), and anything outside of that format becomes null.
Solutions are to use a more robust third-party driver (usually costing something), or to set the registry key so that all of the rows are sampled rather than the default 8.
Check out the link here for TypeGuessRows
http://www.connectionstrings.com/excel
HKLM\Software\Wow6432Node\Microsoft\Jet\4.0\Engines\Excel
Set the TypeGuessRows value to zero so that the driver samples all of the rows.
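If touching the registry is not an option (as in the question), one connection-string-only tweak that is often suggested is adding IMEX=1 to the Extended Properties, which tells the driver to treat columns with intermixed types as text. This is only a sketch of that idea rather than a guaranteed fix, since the driver still samples rows to decide whether a column is mixed:
string strConn =
    "Provider=Microsoft.Jet.OLEDB.4.0;" +
    "Data Source=" + Path + ";" +
    // HDR=Yes assumes the first row holds headers; IMEX=1 reads mixed-type columns as text
    "Extended Properties=\"Excel 8.0;HDR=Yes;IMEX=1\";";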
There doesn't appear to be a way to make this work consistently. The workaround I came up with is to add 1000 to the ThreeCode column in the Excel workbook. You are then able to import the data into a dataset. When the data is read out, you simply strip off the "1" prefix. Here is my extension method to do that.
public static string last3(this string instring)
{
    int len = instring.Length - 3;
    return instring.Substring(len, 3);
}
Which you can call in the code with:
card.ThreeDig = code.last3();
'card' and 'ThreeDig' are the class and field being populated. 'code' is the four-digit value from the dataset.
Related
Our project recently updated to the newer Oracle.ManagedDataAccess DLLs (v4.121.2.0) and this error has been cropping up intermittently. We've fixed it a few times without really knowing what we did to fix it.
I'm fairly certain it's caused by CLOB fields being mapped to strings in Entity Framework, and then being selected in LINQ statements that pull entire entities instead of just a limited set of properties.
Error:
Value cannot be null.
Parameter name: byteArray
Stack Trace:
at System.BitConverter.ToString(Byte[] value, Int32 startIndex, Int32 length)
at OracleInternal.TTC.TTCLob.GetLobIdString(Byte[] lobLocator)
at OracleInternal.ServiceObjects.OracleDataReaderImpl.CollectTempLOBsToBeFreed(Int32 rowNumber)
at Oracle.ManagedDataAccess.Client.OracleDataReader.ProcessAnyTempLOBs(Int32 rowNumber)
at Oracle.ManagedDataAccess.Client.OracleDataReader.Read()
at System.Data.Entity.Core.Common.Internal.Materialization.Shaper`1.StoreRead()
Suspect Entity Properties:
'Mapped to Oracle CLOB Column'
<Column("LARGEFIELD")>
Public Property LargeField As String
But I'm confident this is the proper way to map the fields per Oracle's matrix:
ODP.NET Types Overview
There is nothing obviously wrong with the generated SQL statement either:
SELECT
...
"Extent1"."LARGEFIELD" AS "LARGEFIELD",
...
FROM ... "Extent1"
WHERE ...
I have also tried this Fluent code per Ozkan's suggestion, but it does not seem to affect my case.
modelBuilder.Entity(Of [CLASS])().Property(
Function(x) x.LargeField
).IsOptional()
Troubleshooting Update:
After extensive testing, we are quite certain this is actually a bug, not a configuration problem. It appears to be the contents of the CLOB that cause the problem under a very specific set of circumstances. I've cross-posted this on the Oracle Forums, hoping for more information.
After installing the Oracle 12 client we ran into the same problem.
In machine.config (C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config) I removed all entries with Oracle.ManagedDataAccess.
In directory C:\Windows\Microsoft.NET\assembly\GAC_MSIL I removed both Oracle.ManagedDataAccess and Policy.4.121.Oracle.ManagedDataAccess.
Then my C# program started working as usual, using the Oracle.ManagedDataAccess DLL in its own directory.
We ran into this problem in our project an hour ago and found a solution. It generates this error because of null values in the CLOB column. We have a CLOB column that is nullable in the database, but in the Entity Framework model it was a String that was not nullable. We changed the column's Nullable property to True in the EF model and it fixed the problem.
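For a Code First or fluent mapping, the equivalent of flipping the Nullable flag in the designer is marking the string property as optional. A minimal C# sketch (the entity and property names are placeholders, and this mirrors the VB fluent call quoted in the question):
// Requires: using System.Data.Entity;
// "MyEntity" and "LargeField" are placeholder names for your own CLOB-mapped type.
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<MyEntity>()
        .Property(x => x.LargeField)
        .IsOptional(); // lets EF materialize NULL CLOB values into the string property
}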
We have this problem as well on some computers, and we are running the latest Oracle.ManagedDataAccess.dll (4.121.2.20150926 ODAC RELEASE 4).
We found a solution to our problem, and I just wanted to share.
This is the code that caused the problem on some computers.
Using connection As New OracleConnection(yourConnectionString)
Dim command As New OracleCommand(yourQuery, connection)
connection.Open()
Using reader As OracleDataReader = command.ExecuteReader()
Dim clobField As String = CStr(reader.Item("CLOB_FIELD"))
End Using
connection.Close()
End Using
And here's the solution that made it work on all computers.
Using connection As New OracleConnection(yourConnectionString)
Dim command As New OracleCommand(yourQuery, connection)
connection.Open()
Using reader As OracleDataReader = command.ExecuteReader()
Dim clobField As String = reader.GetOracleClob(0).Value
End Using
connection.Close()
End Using
I spent a lot of time trying to decipher this and found bits and pieces here and there on the internet, but nowhere had everything in one place, so I'd like to post what I've learned and how I resolved it. It is much like Ragowit's answer, but I've got the C# code for it.
Background
The error: So I had this error on my while (dr.Read()) line:
Value cannot be null. \r\nParameter name: byteArray
I ran into very little on the internet about this, except that it was an error with the CLOB field when it was null, and was supposedly fixed in the latest ODAC release, according to this: https://community.oracle.com/thread/3944924
My take on this -- NOT TRUE! It hasn't been updated since October 5, 2015 (http://www.oracle.com/technetwork/topics/dotnet/utilsoft-086879.html), and the 12c package I'm using was downloaded in April 2016.
Full stack trace by someone else with the error that pretty much mirrored mine: http://pastebin.com/24AfFDnq
Value cannot be null.
Parameter name: byteArray
at System.BitConverter.ToString(Byte[] value, Int32 startIndex, Int32 length)
at OracleInternal.TTC.TTCLob.GetLobIdString(Byte[] lobLocator)
at OracleInternal.ServiceObjects.OracleDataReaderImpl.CollectTempLOBsToBeFreed(Int32 rowNumber)
at Oracle.ManagedDataAccess.Client.OracleDataReader.ProcessAnyTempLOBs(Int32 rowNumber)
at Oracle.ManagedDataAccess.Client.OracleDataReader.Read()
at System.Data.Entity.Core.Common.Internal.Materialization.Shaper`1.StoreRead()
'Mapped to Oracle CLOB Column'
<Column("LARGEFIELD")>
Public Property LargeField As String
'Mapped to Oracle BLOB Column'
<Column("IMAGE")>
Public Property FileContents As Byte()
How I encountered it: It was while reading an 11-column table of about 3000 rows. One of the columns was actually an NCLOB (so apparently that is just as susceptible as CLOB), which allowed nulls in the database, and some of its values were empty - it was an optional "Notes" field, after all. Oddly, I didn't get this error on the first or even second row that had an empty Notes field. It didn't error until row 768 finished and it was about to start row 769, according to an int counter variable (starting at 0) that I set up and checked against how many rows my DataTable had so far. I found I got the error if I used:
DataSet ds = new DataSet();
OracleDataAdapter adapter = new OracleDataAdapter(cmd);
adapter.Fill(ds);
as well as if I used:
DataTable dt = new DataTable();
OracleDataReader dr = cmd.ExecuteReader();
dt.Load(dr);
or if I used:
OracleDataReader dr = cmd.ExecuteReader();
if (dr.HasRows)
{
while (dr.Read())
{
....
}
}
where cmd is the OracleCommand, so it made no difference.
Resolution
The following is basically the code I used to parse through the OracleDataReader's values in order to assign them to a DataTable. It's actually not as refined as it could be: in all cases I just assign dr[i] to the DataRow, except when the value is null, or when it is the eleventh column (index = 10, because indexing starts at 0) and a particular query has been executed, so that I know where my NCLOB column is.
public static DataTable GetDataTableManually(string query)
{
OracleConnection conn = null;
DataSet ds = new DataSet(); // declared at method scope so it is still available after the try/catch/finally
try
{
string connString = ConfigurationManager.ConnectionStrings["MyConn"].ConnectionString;
conn = new OracleConnection(connString);
OracleCommand cmd = new OracleCommand(query, conn);
conn.Open();
OracleDataReader dr = cmd.ExecuteReader(CommandBehavior.CloseConnection);
DataTable dtSchema = dr.GetSchemaTable();
DataTable dt = new DataTable();
List<DataColumn> listCols = new List<DataColumn>();
List<string> listTypes = new List<string>(); // column type names as strings
if (dtSchema != null)
{
foreach (DataRow drow in dtSchema.Rows)
{
string columnName = System.Convert.ToString(drow["ColumnName"]);
DataColumn column = new DataColumn(columnName, (Type)(drow["DataType"]));
listCols.Add(column);
listTypes.Add(drow["DataType"].ToString()); // necessary in order to record nulls
dt.Columns.Add(column);
}
}
// Read rows from DataReader and populate the DataTable
if (dr.HasRows)
{
int rowCount = 0;
while (dr.Read())
{
string fieldType = String.Empty;
DataRow dataRow = dt.NewRow();
for (int i = 0; i < dr.FieldCount; i++)
{
if (!dr.IsDBNull(i))
{
fieldType = dr.GetFieldType(i).ToString(); // example only, this is the same as listTypes[i], and neither help us distinguish NCLOB from NVARCHAR2 - both will say System.String
// This is the magic
if (query == "SELECT * FROM Orders" && i == 10)
dataRow[((DataColumn)listCols[i])] = dr.GetOracleClob(i); // <-- our new check!!!!
// Found if you have null Decimal fields, this is
// also needed, and GetOracleDecimal and GetDecimal
// will not help you - only GetFloat does
else if (listTypes[i] == "System.Decimal")
dataRow[((DataColumn)listCols[i])] = dr.GetFloat(i);
else
dataRow[((DataColumn)listCols[i])] = dr[i];
}
else // value was null; we can't always assign dr[i] if DBNull, such as when it is a number or decimal field
{
byte[] nullArray = new byte[0];
switch (listTypes[i])
{
case "System.String": // includes NVARCHAR2, CLOB, NCLOB, etc.
dataRow[((DataColumn)listCols[i])] = String.Empty;
break;
case "System.Decimal":
case "System.Int16": // Boolean
case "System.Int32": // Number
dataRow[((DataColumn)listCols[i])] = 0;
break;
case "System.DateTime":
dataRow[((DataColumn)listCols[i])] = DBNull.Value;
break;
case "System.Byte[]": // Blob
dataRow[((DataColumn)listCols[i])] = nullArray;
break;
default:
dataRow[((DataColumn)listCols[i])] = String.Empty;
break;
}
}
}
dt.Rows.Add(dataRow);
}
ds.Tables.Add(dt);
}
}
catch (Exception ex)
{
// handle error
}
finally
{
conn.Close();
}
// After everything is closed
if (ds.Tables.Count > 0)
return ds.Tables[0]; // there should only be one table if we got results
else
return null;
}
In the same way that I assign specific kinds of null based on the column type found in the schema-table loop, you could add conditions to the "not null" side of the if...then and make various GetOracle... calls there. I found it was only necessary for this NCLOB instance, though.
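For instance, here is a hedged sketch of a small helper in the same spirit for a BLOB column; the helper name is made up for illustration and is not part of the code above:
// Sketch only: materialize a BLOB column explicitly via GetOracleBlob,
// returning an empty array when the value is NULL.
private static byte[] ReadBlobOrEmpty(OracleDataReader dr, int i)
{
    return dr.IsDBNull(i) ? new byte[0] : dr.GetOracleBlob(i).Value;
}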
To give credit where credit is due, the original codebase is based on the answer given by sarathkumar at Populate data table from data reader .
For me it was simple! I had this error with ODAC v4.121.1.0. I just updated Oracle.ManagedDataAccess to 4.121.2.0 with NuGet and now it is working.
Have you tried uninstalling and reinstalling Oracle.ManagedDataAccess with NuGet?
Upgrading Oracle.ManagedDataAccess.dll to version 4.122.1.0 solved it for me.
If you are using VS 2017, you can update via NuGet.
I am using the format
type = "$###,###,##0.00"
for currency and assigning the format type to the worksheet cells,
e.g.
wrkSheet.Cells[0].Style.Numberformat.Format = formatType;
But this is inserted as a text type in Excel.
I want this to be inserted as Currency or Number in order to continue doing analysis on the inserted values (sort, sum, etc.).
Currently, since it is a text type, validations do not work correctly.
Is there any way to force the type in which the formatted values can be inserted?
Your formatting is correct. You need to convert the values to their native types.
Use this code, it should work:
using (var package = new ExcelPackage())
{
var worksheet = package.Workbook.Worksheets.Add("Sales list - ");
worksheet.Cells[1, 1].Style.Numberformat.Format = "$###,###,##0.00";
worksheet.Cells[1, 1].Value =Convert.ToDecimal(24558.4780);
package.SaveAs(new FileInfo(path));
}
Indices start from 1 in Excel.
This code
using (var package = new ExcelPackage())
{
var worksheet = package.Workbook.Worksheets.Add("Sales list - ");
worksheet.Cells[1, 1].Style.Numberformat.Format = "$###,###,##0.00";
worksheet.Cells[1, 1].Value = 24558.4780;
package.SaveAs(new FileInfo(path));
}
produces $24 558,48 for me
I'm using the LinqToExcel library. It's working great so far, except that I need to start the query at a specific row. This is because the Excel spreadsheet from the client has some images and "header" information at the top of the file before the data actually starts.
The data itself will be simple to read and is fairly generic, I just need to know how to tell the ExcelQueryFactory to start at a specific row.
I am aware of the WorksheetRange<Company>("B3", "G10") option, but I don't want to specify an ending row, just where to start reading the file.
I am using the latest version of LinqToExcel with C#.
I just tried this code and it seemed to work just fine:
var book = new LinqToExcel.ExcelQueryFactory(@"E:\Temporary\Book1.xlsx");
var query =
from row in book.WorksheetRange("A4", "B16384")
select new
{
Name = row["Name"].Cast<string>(),
Age = row["Age"].Cast<int>(),
};
I only got back the rows with data.
I suppose you have already solved this, but maybe for others: it looks like you can use
var excel = new ExcelQueryFactory(path);
var allRows = excel.WorksheetNoHeader();
// start from the 3rd row (zero-based indexing)
int length = allRows.Count(); // or a computed range of rows you want
for (int i = 2; i < length; i++)
{
    RowNoHeader row = allRows.ElementAtOrDefault(i);
    // process the row - access columns as you want (also zero-based indexing)
}
Not as simple as specifying a range like ("B3", ...), but it is another way.
Hope this helps at least somebody ;)
I tried this and it works fine for my scenario.
//get the sheets info
var faceWrksheet = excel.Worksheet(facemechSheetName);
// get the total rows count.
int _faceMechRows = faceWrksheet.Count();
// append with End Range.
var faceMechResult = excel.WorksheetRange<ExcelFaceMech>("A5", "AS" + _faceMechRows.ToString(), facemechSheetName).
Where(i => i.WorkOrder != null).Select(x => x).ToList();
Have you tried WorksheetRange<Company>("B3", "G")?
Unfortunately, at this moment and iteration of the LinqToExcel framework, there does not appear to be any way to do this.
To get around this we are requiring the client to put the data to be uploaded in its own "sheet" within the Excel document, with the header row as the first row and the data under it. If they want any "meta data" they will need to include it in another sheet. Below is an example from the LinqToExcel documentation of how to query a specific sheet.
var excel = new ExcelQueryFactory("excelFileName");
var oldCompanies = from c in excel.Worksheet<Company>("US Companies") //worksheet name = 'US Companies'
where c.LaunchDate < new DateTime(1900, 1, 1)
select c;
Where can I get a good tutorial on Entity Framework with stored procedures in the MVC framework?
Is it better to use the Enterprise Library in this case, when I have almost everything written in stored procedures?
Note: I am using stored procedures because they are really very complex and some of them are over 1,000 lines.
MVC is not related in this case; the way you call a stored procedure from EF stays the same. I guess you want to use stored procedures without actually using entities and LINQ to Entities (the main EF features), don't you? Generally you need:
An EDMX file (ADO.NET Entity Data Model) where you run Update Model from Database and add all the stored procedures you want to use. The EDMX file also generates a derived ObjectContext and all entities by default.
Next you must go to the Model Browser and create a Function Import for each procedure. A Function Import creates a method on the derived ObjectContext that allows you to call the stored procedure like any other .NET method.
During the function import you will have to create a complex type (this can happen automatically) for the result set returned from the stored procedure, as sketched below.
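As a rough illustration only (the context, the function import name, and the complex type below are hypothetical placeholders for what the EDMX designer would generate, not API taken from the question):
// Sketch: calling a Function Import on the designer-generated ObjectContext.
// "MyEntities", "GetOrdersByCustomer", and "OrderSummary" are placeholder names.
public List<OrderSummary> LoadOrders(int customerId)
{
    using (var context = new MyEntities())
    {
        // The function import exposes the stored procedure as an ordinary method;
        // rows are materialized into the complex type created during the import.
        return context.GetOrdersByCustomer(customerId).ToList();
    }
}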
You also don't have to use function imports at all; you can execute procedures directly by calling either of the following (a short sketch of both calls appears after this list):
objectContext.ExecuteStoreCommand("storedProcedureName", SqlParameters) for SPs not returning a record set
objectContext.ExecuteStoreQuery<ResultType>("storedProcedureName", SqlParameters) for SPs returning a record set. ResultType must have properties with the same names as the columns in the result set, and it works only with flat types (no nested objects).
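Here is a minimal hedged sketch of both calls; the procedure names, parameters, and result type are made-up placeholders, and MyEntities stands in for the EDMX-generated ObjectContext:
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;

// Placeholder flat result type: property names must match the columns returned by the procedure.
public class CustomerSummary
{
    public int CustomerId { get; set; }
    public string CustomerName { get; set; }
}

public class StoredProcedureExamples
{
    public List<CustomerSummary> Run(MyEntities context) // MyEntities = EDMX-generated ObjectContext
    {
        // SP that returns no record set.
        context.ExecuteStoreCommand(
            "EXEC dbo.ArchiveOldOrders @cutoff",
            new SqlParameter("@cutoff", new DateTime(2010, 1, 1)));

        // SP that returns a record set, materialized into the flat type above.
        return context.ExecuteStoreQuery<CustomerSummary>(
            "EXEC dbo.GetCustomerSummaries @regionId",
            new SqlParameter("@regionId", 5)).ToList();
    }
}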
There are some limitations when using stored procedures:
Entity Framework doesn't like stored procedures that return dynamic result sets (where the result set has different columns depending on some condition).
Entity Framework doesn't support stored procedures that return multiple result sets - there is EFExtensions, which does, but it is more like doing ADO.NET directly.
If you are using Entity Framework Code First, this is how you can use your stored procedure. In this example I have four input parameters.
var startDateTY = masterSales.PolicyStartDate;
var endateTY = masterSales.PolicyEndDate;
var startDatePY = masterSales.PolicyStartDate.Value.AddYears(-1);
var endatePY = masterSales.PolicyEndDate.Value.AddYears(-1);
var spParameters = new object[4];
spParameters[0] = new SqlParameter()
{
ParameterName = "startDateTY",
Value = startDateTY
};
spParameters[1] = new SqlParameter()
{
ParameterName = "endateTY",
Value = endateTY
};
spParameters[2] = new SqlParameter()
{
ParameterName = "startDatePY",
Value = startDatePY
};
spParameters[3] = new SqlParameter()
{
ParameterName = "endatePY",
Value = endatePY
};
var datalist = objContext.Database.SqlQuery<vMasterSalesAgentReport>("dbo.usp_GetSalesAgentReport @startDateTY,@endateTY,@startDatePY,@endatePY", spParameters).ToList();
store = "sp_selectmark #regid='" + id + "'";
var st = db.ExecuteStoreQuery<Sp>("exec " + store).ToList();
GridView1.DataSource = st;
GridView1.DataBind();
or
string store = "";
store = "sp_inserttbreg #name='" + regobj.name + "',#age='" + regobj.age + "',#place='" + regobj.place + "',#gender='" + regobj.gender + "',#email='" + regobj.email + "',#fon='" + regobj.fon + "'";
I have run into a problem inserting multiple rows in a batch with SubSonic 3. My development environment includes:
1. Visual Studio 2010, but using .NET 3.5
2. Active Record mode in SubSonic 3.0.0.4
3. SQL Server 2005 Express
4. Northwind sample database
I am using Active Record mode to insert multiple "Product" rows into the "Products" table. If I insert the rows one by one, either calling "aProduct.Add()" or calling "Insert.Execute()" multiple times (just like the code below), it works fine.
private static Product[] CreateProducts(int count)
{
Product[] products = new Product[count];
for (int index = 0; index < products.Length; ++index)
{
products[index] = new Product
{
ProductName = string.Format("cheka-test-{0}", index.ToString()),
Discontinued = (index % 2 == 0),
};
}
return products;
}
private static void SucceedByMultiExecuteInsert()
{
Product[] products = CreateProducts(2);
// -------------------------------- prepare batch
NorthwindDB db = new NorthwindDB();
var inserts = from prod in products
select db.Insert.Into<Product>(x => x.ProductName, x => x.Discontinued).Values(prod.ProductName, prod.Discontinued);
// -------------------------------- batch insert
var selectAll = Product.All();
Console.WriteLine("--- before total rows = {0}", selectAll.Count().ToString());
foreach (Insert insert in inserts)
insert.Execute();
Console.WriteLine("+++ after inserting {0} rows, now total rows = {1}",
products.Length.ToString(), selectAll.Count().ToString());
}
But if I use "BatchQuery" like the code below,
private static void FailByBatchInsert()
{
Product[] products = CreateProducts(2);
// -------------------------------- prepare batch
NorthwindDB db = new NorthwindDB();
BatchQuery batchquery = new BatchQuery(db.Provider, db.QueryProvider);
var inserts = from prod in products
select db.Insert.Into<Product>(x => x.ProductName, x => x.Discontinued).Values(prod.ProductName, prod.Discontinued);
foreach (Insert insert in inserts)
batchquery.Queue(insert);
// -------------------------------- batch insert
var selectAll = Product.All();
Console.WriteLine("--- before total rows = {0}", selectAll.Count().ToString());
batchquery.Execute();
Console.WriteLine("+++ after inserting {0} rows, now total rows = {1}",
products.Length.ToString(), selectAll.Count().ToString());
}
then it fails with the exception:
"
Unhandled Exception: System.Data.SqlClient.SqlException: Must declare the scalar variable "@ins_ProductName".
Must declare the scalar variable "@ins_ProductName".
"
Please give me some help to solve this problem. Many thanks.
I ran into this problem as well. If you look at the query it's attempting to run, you'll see it doing something like this (this isn't actual code but you'll get the point):
exec sp_executesql N'insert into MyTable (SomeField) Values (@ins_SomeField)',N'@0 varchar(32)','@0=SomeValue'
For some reason it defines the parameters in the query as "@ins_" + FieldName but then passes the parameters as ordinals. I have yet to determine the pattern for why/when it does this, but I've lost enough time futzing with SubSonic during this dev cycle that I'm not going to diagnose the problem properly.
The workaround I implemented involves downloading the 3.0.0.4 source from GitHub and making a change on line 179 of Insert.cs.
Where it reads
ParameterName = _provider.ParameterPrefix + "ins_" + columnName.ToAlphaNumericOnly(),
Changing it to
ParameterName = _provider.ParameterPrefix + Inserts.Count.ToString(),
seemed to do the trick for me. I make no warranties about this solution for you, expressed or implied. It did work for me but your mileage may vary.
I should also note that there's similar logic around the "update" statements as well in Update.cs on lines 181 and 194 but I haven't had these give me problems... yet.
Honestly, I don't think SubSonic is ready for primetime, and that's a shame because I really like how Rob set it up. That said, it's in my product for better or worse now, so you make the best of what you've got.