How to insert a row and return the autoincrement value in SQLite on Windows Phone 7?

I have an app using the SQLite client. In the app I insert data into a table and I need the Id, which is autoincrement. Is there any way to get the id from ExecuteNonQuery, something like SqlParameter?
I am using the following method to insert data, and I thought the rec variable would hold the id, but it is always rec = 1 and I don't know what that value is good for.
int x = (System.Windows.Application.Current as App).db.Insert<CsWidget>(ObjWidget, @"Insert into Tbl_Widget (Name) values (@Name)");

public int Insert<T>(T obj, string statement) where T : new()
{
    Open();
    SQLiteCommand cmd = db.CreateCommand(statement);
    int rec = cmd.ExecuteNonQuery(obj); // this is the number of rows affected, not the new id
    return rec;
}

Try the ExecuteScalar method instead of ExecuteNonQuery. It can work that way depending on your SQLite wrapper. If not, a separate query after the insert is your only way.

As @gleb.kudr said, a separate query after the insert is the only way.
cmd.ExecuteNonQuery();
cmd.CommandText = "SELECT LAST_INSERT_ROWID()";
object r = cmd.ExecuteScalar();

int id = 0;
int.TryParse(r.ToString(), out id);
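Combining the two answers with the asker's helper, a minimal sketch of an Insert<T> that returns the new row id instead of the affected-row count (assuming the question's wrapper exposes ExecuteScalar alongside ExecuteNonQuery; InsertReturningId is an illustrative name):

public int InsertReturningId<T>(T obj, string statement) where T : new()
{
    Open();
    SQLiteCommand cmd = db.CreateCommand(statement);
    cmd.ExecuteNonQuery(obj); // run the INSERT first

    // last_insert_rowid() is scoped to the connection, so reuse the same one
    cmd = db.CreateCommand("SELECT last_insert_rowid()");
    object r = cmd.ExecuteScalar();

    int id = 0;
    int.TryParse(r.ToString(), out id);
    return id;
}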

Related

Oracle Entity Framework Core pass table parameter to stored procedure

I am trying to pass a parameter to a stored procedure using the Oracle.EntityFrameworkCore package like this:
DataTable table = new DataTable();
table.Columns.Add("keyColumn", typeof(string));
table.Columns.Add("valueColumn", typeof(string));
var row = table.NewRow();
row.ItemArray = new object[]
{
    entry.KeyColumn,
    entry.ValueColumn
};
table.Rows.Add(row);

var parameter = new OracleParameter("entries", table);
parameter.UdtTypeName = "entry_type_list";

return context.Database.ExecuteSqlCommandAsync(
    new RawSqlString(@"EXEC set_entry_list (:entries)"),
    parameter);
The stored procedure and type are defined like this:
CREATE OR REPLACE TYPE entry_type AS OBJECT
(
    "keyColumn"   NVARCHAR2(3),
    "valueColumn" NVARCHAR2(3)
);

CREATE OR REPLACE TYPE entry_type_list AS TABLE OF entry_type;

CREATE OR REPLACE PROCEDURE set_entry_list (entries entry_type_list) AS
BEGIN
    NULL; -- doing stuff
END;
But I get an error:
System.ArgumentException: Value does not fall within the expected range.
at Oracle.ManagedDataAccess.Client.OracleParameter..ctor(String parameterName, Object obj)
The only source I could find is an answer describing how to do this with SQL Server, but nothing for Oracle with EF Core. The issue seems to be that Oracle only accepts an OracleParameter, whereas others use SqlParameter.
If I use the SqlParameter type like this:
var parameter = new SqlParameter("entries", SqlDbType.Structured);
parameter.TypeName = "entry_type_list";
parameter.Value = table;
I get this error:
System.InvalidCastException: Unable to cast object of type 'System.Data.SqlClient.SqlParameter' to type 'Oracle.ManagedDataAccess.Client.OracleParameter'.
I also tried setting parameter.OracleDbType to different values like Blob, RefCursor, Clob or XmlType, setting parameter.DbType to Object, and setting CollectionType to PLSQLAssociativeArray, with no success. Passing a list or an array of objects instead of a table did not succeed either.
I currently have no idea what else I could try.
Any method to pass a large number of entities to a stored procedure in a performant way would help. I use them with the MERGE command, so I need to be able to convert those parameters to a table.
I found a solution using a temporary table as my input parameter.
Since I can't pass a complete table, only arrays of simple values, I fill this table by passing one array per column:
var keyColumn = new OracleParameter("keyColumn", OracleDbType.Decimal);
keyColumn.Value = values.Select(c => c.KeyColumn).ToArray();
var valueColumn = new OracleParameter("valueColumn", OracleDbType.Decimal);
valueColumn.Value = values.Select(c => c.ValueColumn).ToArray();

using (var transaction = this.dbContext.Database.BeginTransaction(IsolationLevel.ReadCommitted))
{
    var connection = this.dbContext.Database.GetDbConnection() as OracleConnection;
    OracleCommand cmd = connection.CreateCommand();
    cmd.CommandText = @"
        INSERT INTO TMP_TABLE
        (
            ""keyColumn"",
            ""valueColumn""
        )
        VALUES (
            :keyColumn,
            :valueColumn)";
    cmd.Parameters.Add(keyColumn);
    cmd.Parameters.Add(valueColumn);
    cmd.ArrayBindCount = values.Length;
    var insertCount = await cmd.ExecuteNonQueryAsync();

    cmd = connection.CreateCommand();
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.CommandText = "dbo.stored_procedure";
    var result = await cmd.ExecuteNonQueryAsync();

    transaction.Commit();
}
I created the temp table like this:
CREATE GLOBAL TEMPORARY TABLE "dbo"."TMP_TABLE"
ON COMMIT DELETE ROWS
AS SELECT * FROM "dbo"."REAL_TABLE" WHERE 0=1;
And changed my stored procedure to use it:
CREATE OR REPLACE PROCEDURE stored_procedure AS
BEGIN
    NULL; -- use the "dbo"."TMP_TABLE"
END;
This answer helped me with the approach of bulk inserting with one array per column. The thread also contains some further discussion about the topic and a more generic approach.

EF + ODP.NET + CLOB = Value Cannot be Null - Parameter name: byteArray?

Our project recently updated to the newer Oracle.ManagedDataAccess DLLs (v 4.121.2.0), and this error has been cropping up intermittently. We've fixed it a few times without really knowing what we did to fix it.
I'm fairly certain it's caused by CLOB fields being mapped to strings in Entity Framework and then selected in LINQ statements that pull entire entities instead of just a limited set of properties.
Error:
Value cannot be null.
Parameter name: byteArray
Stack Trace:
at System.BitConverter.ToString(Byte[] value, Int32 startIndex, Int32 length)
at OracleInternal.TTC.TTCLob.GetLobIdString(Byte[] lobLocator)
at OracleInternal.ServiceObjects.OracleDataReaderImpl.CollectTempLOBsToBeFreed(Int32 rowNumber)
at Oracle.ManagedDataAccess.Client.OracleDataReader.ProcessAnyTempLOBs(Int32 rowNumber)
at Oracle.ManagedDataAccess.Client.OracleDataReader.Read()
at System.Data.Entity.Core.Common.Internal.Materialization.Shaper`1.StoreRead()
Suspect Entity Properties:
'Mapped to Oracle CLOB Column'
<Column("LARGEFIELD")>
Public Property LargeField As String
But I'm confident this is the proper way to map the fields per Oracle's matrix:
ODP.NET Types Overview
There is nothing obviously wrong with the generated SQL statement either:
SELECT
...
"Extent1"."LARGEFIELD" AS "LARGEFIELD",
...
FROM ... "Extent1"
WHERE ...
I have also tried this Fluent code per Ozkan's suggestion, but it does not seem to affect my case.
modelBuilder.Entity(Of [CLASS])().Property(
Function(x) x.LargeField
).IsOptional()
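For reference, the same fluent configuration expressed in C# (MyEntity stands in for the mapped class):

modelBuilder.Entity<MyEntity>()
    .Property(x => x.LargeField)
    .IsOptional();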
Troubleshooting Update:
After extensive testing, we are quite certain this is actually a bug, not a configuration problem. It appears to be the contents of the CLOB that cause the problem under a very specific set of circumstances. I've cross-posted this on the Oracle Forums, hoping for more information.
After installing the Oracle 12 client we ran into the same problem.
In machine.config (C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config) I removed all entries with Oracle.ManagedDataAccess.
In the directory C:\Windows\Microsoft.NET\assembly\GAC_MSIL I removed both Oracle.ManagedDataAccess and Policy.4.121.Oracle.ManagedDataAccess.
Then my C# program started working as usual, using the Oracle.ManagedDataAccess DLL in its own directory.
We hit this problem in our project an hour ago and found a solution. The error is generated because of null values in a CLOB column. We have a CLOB column that is nullable in the database, but in the Entity Framework model it was String and not nullable. We changed the column's Nullable property to True in the EF model and that fixed the problem.
We have this problem as well on some computers, and we are running the latest Oracle.ManagedDataAccess.dll (4.121.2.20150926 ODAC RELEASE 4).
We found a solution to our problem, and I just wanted to share it.
This is the code that caused the problem on some computers:
Using connection As New OracleConnection(yourConnectionString)
    Dim command As New OracleCommand(yourQuery, connection)
    connection.Open()
    Using reader As OracleDataReader = command.ExecuteReader()
        Dim clobField As String = CStr(reader.Item("CLOB_FIELD"))
    End Using
    connection.Close()
End Using
And here's the solution that made it work on all computers.
Using connection As New OracleConnection(yourConnectionString)
    Dim command As New OracleCommand(yourQuery, connection)
    connection.Open()
    Using reader As OracleDataReader = command.ExecuteReader()
        Dim clobField As String = reader.GetOracleClob(0).Value
    End Using
    connection.Close()
End Using
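For C# projects, a rough equivalent of the working pattern above (yourConnectionString and yourQuery are placeholders, and the CLOB is assumed to be the first column of the result):

using (var connection = new OracleConnection(yourConnectionString))
using (var command = new OracleCommand(yourQuery, connection))
{
    connection.Open();
    using (OracleDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // read the CLOB through the provider-specific accessor
            // instead of the plain indexer
            string clobField = reader.GetOracleClob(0).Value;
        }
    }
}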
I spent a lot of time trying to decipher this and found bits and pieces here and there on the internet, but nowhere had everything in one place, so I'd like to post what I've learned and how I resolved it, which is much like Ragowit's answer, but I've got the C# code for it.
Background
The error: So I had this error on my while (dr.Read()) line:
Value cannot be null.
Parameter name: byteArray
I ran into very little on the internet about this, except that it was an error with the CLOB field when it was null, and was supposedly fixed in the latest ODAC release, according to this: https://community.oracle.com/thread/3944924
My take on this -- NOT TRUE! It hasn't been updated since October 5, 2015 (http://www.oracle.com/technetwork/topics/dotnet/utilsoft-086879.html), and the 12c package I'm using was downloaded in April 2016.
Full stack trace by someone else with the error that pretty much mirrored mine: http://pastebin.com/24AfFDnq
Value cannot be null.
Parameter name: byteArray
at System.BitConverter.ToString(Byte[] value, Int32 startIndex, Int32 length)
at OracleInternal.TTC.TTCLob.GetLobIdString(Byte[] lobLocator)
at OracleInternal.ServiceObjects.OracleDataReaderImpl.CollectTempLOBsToBeFreed(Int32 rowNumber)
at Oracle.ManagedDataAccess.Client.OracleDataReader.ProcessAnyTempLOBs(Int32 rowNumber)
at Oracle.ManagedDataAccess.Client.OracleDataReader.Read()
at System.Data.Entity.Core.Common.Internal.Materialization.Shaper`1.StoreRead()
'Mapped to Oracle CLOB Column'
<Column("LARGEFIELD")>
Public Property LargeField As String
'Mapped to Oracle BLOB Column'
<Column("IMAGE")>
Public Property FileContents As Byte()
How I encountered it: It was while reading an 11-column table of about 3000 rows. One of the columns was actually an NCLOB (so apparently this is just as susceptible as CLOB), which allowed nulls in the database, and some of its values were empty - it was an optional "Notes" field, after all. It's funny that I didn't get this error on the first or even second row that had an empty Notes field. It didn't error until row 768 finished and it was about to start row 769, according to an int counter variable that started at 0 that I set up and saw after checking how many rows my DataTable had thus far. I found I got the error if I used:
DataSet ds = new DataSet();
OracleDataAdapter adapter = new OracleDataAdapter(cmd);
adapter.Fill(ds);
as well as if I used:
DataTable dt = new DataTable();
OracleDataReader dr = cmd.ExecuteReader();
dt.Load(dr);
or if I used:
OracleDataReader dr = cmd.ExecuteReader();
if (dr.HasRows)
{
    while (dr.Read())
    {
        ....
    }
}
where cmd is the OracleCommand, so it made no difference.
Resolution
The following is basically the code I used to step through an OracleDataReader's values and assign them to a DataTable. It's actually not as refined as it could be - I just return dr[i] to the data row in all cases except when the value is null, or when it is the eleventh column (index 10, because it starts at 0) and a particular query has been executed, so that I know where my NCLOB column is.
public static DataTable GetDataTableManually(string query)
{
    OracleConnection conn = null;
    DataSet ds = new DataSet(); // holds the result table; declared here so it is visible after the finally block
    try
    {
        string connString = ConfigurationManager.ConnectionStrings["MyConn"].ConnectionString;
        conn = new OracleConnection(connString);
        OracleCommand cmd = new OracleCommand(query, conn);
        conn.Open();
        OracleDataReader dr = cmd.ExecuteReader(CommandBehavior.CloseConnection);
        DataTable dtSchema = dr.GetSchemaTable();
        DataTable dt = new DataTable();
        List<DataColumn> listCols = new List<DataColumn>();
        List<string> listTypes = new List<string>();
        if (dtSchema != null)
        {
            foreach (DataRow drow in dtSchema.Rows)
            {
                string columnName = System.Convert.ToString(drow["ColumnName"]);
                DataColumn column = new DataColumn(columnName, (Type)(drow["DataType"]));
                listCols.Add(column);
                listTypes.Add(drow["DataType"].ToString()); // necessary in order to record nulls
                dt.Columns.Add(column);
            }
        }

        // Read rows from DataReader and populate the DataTable
        if (dr.HasRows)
        {
            int rowCount = 0;
            while (dr.Read())
            {
                string fieldType = String.Empty;
                DataRow dataRow = dt.NewRow();
                for (int i = 0; i < dr.FieldCount; i++)
                {
                    if (!dr.IsDBNull(i))
                    {
                        fieldType = dr.GetFieldType(i).ToString(); // example only: this is the same as listTypes[i], and neither helps distinguish NCLOB from NVARCHAR2 - both say System.String

                        // This is the magic
                        if (query == "SELECT * FROM Orders" && i == 10)
                            dataRow[((DataColumn)listCols[i])] = dr.GetOracleClob(i); // <-- our new check!!!!
                        // Found if you have null Decimal fields, this is
                        // also needed, and GetOracleDecimal and GetDecimal
                        // will not help you - only GetFloat does
                        else if (listTypes[i] == "System.Decimal")
                            dataRow[((DataColumn)listCols[i])] = dr.GetFloat(i);
                        else
                            dataRow[((DataColumn)listCols[i])] = dr[i];
                    }
                    else // value was null; we can't always assign dr[i] if DBNull, such as when it is a number or decimal field
                    {
                        byte[] nullArray = new byte[0];
                        switch (listTypes[i])
                        {
                            case "System.String": // includes NVARCHAR2, CLOB, NCLOB, etc.
                                dataRow[((DataColumn)listCols[i])] = String.Empty;
                                break;
                            case "System.Decimal":
                            case "System.Int16": // Boolean
                            case "System.Int32": // Number
                                dataRow[((DataColumn)listCols[i])] = 0;
                                break;
                            case "System.DateTime":
                                dataRow[((DataColumn)listCols[i])] = DBNull.Value;
                                break;
                            case "System.Byte[]": // Blob
                                dataRow[((DataColumn)listCols[i])] = nullArray;
                                break;
                            default:
                                dataRow[((DataColumn)listCols[i])] = String.Empty;
                                break;
                        }
                    }
                }
                dt.Rows.Add(dataRow);
                rowCount++; // the counter mentioned above, used to find the failing row
            }
            ds.Tables.Add(dt);
        }
    }
    catch (Exception ex)
    {
        // handle error
    }
    finally
    {
        if (conn != null)
            conn.Close();
    }

    // After everything is closed
    if (ds.Tables.Count > 0)
        return ds.Tables[0]; // there should only be one table if we got results
    else
        return null;
}
In the same way that I assign specific types of null based on the column type found in the schema-table loop, you could add conditions to the "not null" side of the if...then and make various GetOracle... calls there. I found it was only necessary for this NCLOB instance, though.
To give credit where credit is due, the original codebase is based on the answer given by sarathkumar at Populate data table from data reader.
For me it was simple! I had this error with ODAC v 4.121.1.0. I just updated Oracle.ManagedDataAccess to 4.121.2.0 with NuGet and now it is working.
Have you tried uninstalling and reinstalling Oracle.ManagedDataAccess with NuGet?
Upgrading Oracle.ManagedDataAccess.dll to version 4.122.1.0 solved it. If you are using VS 2017, you can update via NuGet.

Google Cloud SQL + RETURN_GENERATED_KEYS

I've got a table in my Google Cloud SQL database with an auto-incrementing column.
How do I execute an INSERT query via google-apps-script/JDBC and get back the value for the newly incremented column?
For example, my column is named ticket_id. I want to INSERT and have the new ticket_id value be returned in the result set.
In other words, given the following structure, what would I need to modify so that I can do something like rs = stmt.getGeneratedKeys()?
var conn = Jdbc.getCloudSqlConnection("jdbc:google:rdbms:.......
var stmt = conn.createStatement();
//build my INSERT sql statement
var sql = "insert into ......
var rs = stmt.executeUpdate(sql);
I see that there is a JDBC Statement class with a member called RETURN_GENERATED_KEYS, but I have so far not been smart enough to figure out how to make use of it. Is RETURN_GENERATED_KEYS a constant? Is it an attribute? How do I use it?
It seems like the documentation for the Apps Script JDBC service is a bit lacking. I've created an internal task item for that. Thankfully, the Apps Script JDBC API follows the Java JDBC API pretty closely. The key is to get the result set back using the stmt.getGeneratedKeys() call.
I built a sample table using the animals example from the MySQL docs, and the sample below works nicely against it, logging the next incremented ID.
function foo() {
  var conn = Jdbc.getCloudSqlConnection("jdbc:google:rdbms://<instance>/<db>");
  var stmt = conn.createStatement();
  var sql = "INSERT INTO animals (name) VALUES ('dog')";
  var count = stmt.executeUpdate(sql, 1); // pass in any int to get auto-inc IDs back
  var rs = stmt.getGeneratedKeys();
  // if you are only expecting one row back, no need for a while loop -
  // just call rs.next()
  while (rs.next()) {
    Logger.log(rs.getString(1));
  }
  rs.close();
  stmt.close();
  conn.close();
}

Best Practice Checking for duplicate rows before inserting list of items

I have an array of objects that I want to insert into the database.
My method call looks like this:
public void Add(CardElement[] cardElements)
{
    foreach (var cardElement in cardElements)
    {
        Data.Entry(cardElement).State = System.Data.EntityState.Added;
    }
    Data.SaveChanges();
}
The database table resembles this:
MS SQL: Table mytable, Columns a, b, c, d, e, f
Unique constraint on a, b, c
The data I want to insert resembles this:
var obj = new[] {
    new MyObject() { a = 1, b = 1, c = 1 },
    new MyObject() { a = 1, b = 1, c = 2 },
    new MyObject() { a = 1, b = 1, c = 3 }
};
So, I want to check the database for these three rows before I add them.
I could do something like the following, but I assume it would cause extra trips to the database:
private bool checkExists(CardElement[] cardElements)
{
    foreach (var cardElement in cardElements)
    {
        var exists = (from ce in Data.CardElements
                      where ce.CardId == cardElement.CardId
                      where ce.Area == cardElement.Area
                      where ce.ElementName == cardElement.ElementName
                      select ce).Any();
        if (exists) return true;
    }
    return false;
}
So, how could I handle this more gracefully?
Is it even worth trying to accomplish this using linq?
Should I write some stored procedures for performance?
I agree that you should let the db make the decision.
Have a look at using UPSERT, as stated in this post.
Why not just attempt the insert and let the database tell you if any unique constraint violations have occurred (using try/catch)?
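A minimal sketch of that approach, assuming EF 5/6 against the MS SQL table above, where unique-constraint violations surface as SqlException numbers 2627 or 2601:

public void Add(CardElement[] cardElements)
{
    foreach (var cardElement in cardElements)
    {
        Data.Entry(cardElement).State = System.Data.EntityState.Added;
    }
    try
    {
        Data.SaveChanges();
    }
    catch (DbUpdateException ex) // System.Data.Entity.Infrastructure
    {
        var sqlEx = ex.GetBaseException() as SqlException;
        if (sqlEx != null && (sqlEx.Number == 2627 || sqlEx.Number == 2601))
        {
            // duplicate key: skip, update, or report as appropriate
        }
        else
        {
            throw;
        }
    }
}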
The problem is that even if you query the data first, somebody else can insert the record between your query and saving changes. You will still have to handle the unique-constraint violation despite your additional queries - and yes, every check makes an additional trip to the database.
If your main concern is performance, use a stored procedure, where you can additionally use a table hint to lock the table against inserts during the initial existence check.
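If you do want the up-front check anyway, it can at least be collapsed into a single round trip. A sketch using the question's type names (narrowing on CardId keeps the query translatable by LINQ to Entities):

private bool checkExists(CardElement[] cardElements)
{
    // one query: fetch the existing key triples for the affected cards
    var cardIds = cardElements.Select(ce => ce.CardId).Distinct().ToArray();
    var existing = Data.CardElements
        .Where(ce => cardIds.Contains(ce.CardId))
        .Select(ce => new { ce.CardId, ce.Area, ce.ElementName })
        .ToList();

    // compare in memory against the rows about to be inserted
    return cardElements.Any(ce => existing.Any(ex =>
        ex.CardId == ce.CardId &&
        ex.Area == ce.Area &&
        ex.ElementName == ce.ElementName));
}

Even so, the race window pointed out above remains, so the constraint-violation handling is still needed.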

LINQ Select with multiple tables fields being writeable

I'm new to LINQ and I was doing pretty well until now, but I'm stuck on this.
I have a LINQ object bound to a DataGridView to let the user edit its contents.
For a simple one-table query it works fine, but how do I build a LINQ query over multiple tables so that the result is still read/write?
Here is an example of what I mean:
GMR.Data.GMR_Entities GMR = new GMR.Data.GMR_Entities();

var dt = from Msg in GMR.tblMessages
         join lang in GMR.tblDomVals on 1 equals 1 // on Msg.pLangueID equals lang.ID
         select Msg;
         // select new { lang.DescrFr, Msg.Message, Msg.pLangueID };

this.dataGridView1.DataSource = dt;
In this simple query, if I return only Msg from the select statement, the grid can be edited. But if I replace the select with select new { lang.DescrFr, Msg.Message, Msg.pLangueID };, the grid becomes read-only.
I can easily understand that this is because the query result is an anonymous type.
But is there a way to keep the tblMessage table writable?
Try creating your own class, for example:
public class MsgLangInfo
{
    public string langDescFr { get; set; }
    public string message { get; set; }
    public int pLangueID { get; set; }
}
And at the select statement, create an object of this class with new, like below:
select new MsgLangInfo
{
    langDescFr = lang.DescrFr,
    message = Msg.Message,
    pLangueID = Msg.pLangueID
};
This way you can avoid the anonymous type problem.
You need to select the original rows and explicitly set the grid columns.
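A sketch of that suggestion, using the names from the question (the grid wiring and column names are illustrative): bind the tracked entities so edits flow back through the context, and surface the joined description as a separate read-only column.

var messages = GMR.tblMessages.ToList();
// pre-load the language descriptions once, keyed by id
var descById = GMR.tblDomVals.ToDictionary(l => l.ID, l => l.DescrFr);

this.dataGridView1.DataSource = messages; // editable: these are tracked entities

// add a display-only column for the joined description
var langCol = new DataGridViewTextBoxColumn();
langCol.Name = "DescrFr";
langCol.HeaderText = "Langue";
langCol.ReadOnly = true;
this.dataGridView1.Columns.Add(langCol);

this.dataGridView1.CellFormatting += (s, e) =>
{
    if (e.RowIndex >= 0 && e.RowIndex < messages.Count
        && this.dataGridView1.Columns[e.ColumnIndex].Name == "DescrFr")
    {
        string desc;
        if (descById.TryGetValue(messages[e.RowIndex].pLangueID, out desc))
            e.Value = desc;
    }
};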
