I'm trying to create a DataTable in Microsoft AX X++.
I need to return some information to a web service. The problem is I'm having trouble getting this code to work: it runs up to the point where I try to add a column, because apparently that call doesn't work.
Does anyone have complete code for creating a DataTable in X++? This code works up until the column code. Basically, I'm querying some information and returning it to a web service. Alternatively, is there another way to return multiple pieces of information to a web service that uses C#?
System.Data.DataTable dt = new System.Data.DataTable("MyTable");
System.Data.DataColumnCollection columns = dt.get_Columns();
System.Data.DataColumn ProductName;
System.Data.DataColumn QtyOrdered;
System.Data.DataColumn ProductID;
System.Data.DataRow row;
ProductID = new System.Data.DataColumn("ProductID", System.Type::GetType("System.Int32"));
ProductName = new System.Data.DataColumn("ProductName", System.Type::GetType("System.String"));
QtyOrdered = new System.Data.DataColumn("QtyOrdered", System.Type::GetType("System.String"));
// dt.Columns.Add(ProductID);
// dt.Columns.Add(ProductName);
// dt.Columns.Add(QtyOrdered);
row = dt.NewRow();
Did you try replacing the lines
// dt.Columns.Add(ProductID);
// dt.Columns.Add(ProductName);
// dt.Columns.Add(QtyOrdered);
with
columns.Add(ProductID);
columns.Add(ProductName);
columns.Add(QtyOrdered);
?
P.S. To populate the row:
System.Data.DataRowCollection rows = dt.get_Rows();
...
row = dt.NewRow();
row.set_Item("ProductID", 1);
row.set_Item("ProductName", "PN");
row.set_Item("QtyOrdered", "QO");
rows.Add(row);
I suggest you read some MSDN articles, such as ".NET Interop from X++" and "How To Use X++ Syntax for CLR Arrays" - they should help you understand how to use .NET assemblies from X++.
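On the C# side, the DataTable returned by the AX service can then be consumed directly. Here is a minimal sketch; the proxy class AxProductService and its GetProducts() method are illustrative assumptions, not part of the original post:
using System;
using System.Data;

class Program
{
    static void Main()
    {
        // Hypothetical service proxy; replace with your generated web service reference
        var service = new AxProductService();
        DataTable products = service.GetProducts();
        foreach (DataRow row in products.Rows)
        {
            Console.WriteLine("{0}: {1} (qty {2})",
                row["ProductID"], row["ProductName"], row["QtyOrdered"]);
        }
    }
}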
I am not able to fetch the max value from a number field in AppMaker. The field is filled with unique integers from 1 and up. In SQL I would have done it like this:
SET @tKey = (SELECT MAX(ID) FROM GiftCard);
In AppMaker I have done the following (with a bit of help from other contributors in this forum), and it returns tKey = "NaN":
var tKey = google.script.run.MaxID();
function MaxID() {
var ID_START_FROM = 11000;
var lock = LockService.getScriptLock();
lock.waitLock(3000);
var query = app.models.GiftCard.newQuery();
query.sorting.ID._descending();
query.limit = 1;
var records = query.run();
var next_id = records.length > 0 ? records[0].ID : ID_START_FROM;
lock.releaseLock();
return next_id;
}
There is also a maxValue() function in AppMaker. However, it does not seem to work the way I am using it. If maxValue() is the better choice, please show how. :-)
It seems that you are looking in the direction of auto-incremented fields. The right way to achieve this would be to use a Cloud SQL database. MySQL will give you more flexibility in configuring your IDs:
ALTER TABLE GiftCard AUTO_INCREMENT = 11000;
In case you strongly want to stick with Drive Tables, you can try to fix your script as follows. Note that google.script.run calls are asynchronous and return no value directly (which is why your tKey came back as NaN); the result must be read in a success handler:
google.script.run
.withSuccessHandler(function(maxId) {
var tKey = maxId;
})
.withFailureHandler(function(error) {
// TODO: handle error
})
.MaxID();
As a side note, I would also recommend setting your ID in the onBeforeCreate model event as an extra security layer, instead of passing it to the client and reading it back, since it can be modified by a malicious user.
You can try using Math.max(). Consider the example below:
function getMax() {
var query = app.models.GiftCard.newQuery();
var allRecords = query.run();
var allIds = [];
for (var i = 0; i < allRecords.length; i++) {
allIds.push(allRecords[i].ID);
}
var maxId = Math.max.apply(null, allIds);
return maxId;
}
Hope it helps!
Thank you for the examples! Math.max returned an undefined value. Since this simple case is turning into a "big" issue, I will solve it another way. This value is only meant as a starting value for a sequence anyway. An SQL database is better, yes!
Our project recently updated to the newer Oracle.ManagedDataAccess DLLs (v 4.121.2.0), and this error has been cropping up intermittently. We've fixed it a few times without really knowing what we did to fix it.
I'm fairly certain it's caused by CLOB fields being mapped to strings in Entity Framework, and then being selected in LINQ statements that pull entire entities instead of just a limited set of properties.
Error:
Value cannot be null.
Parameter name: byteArray
Stack Trace:
at System.BitConverter.ToString(Byte[] value, Int32 startIndex, Int32 length)
at OracleInternal.TTC.TTCLob.GetLobIdString(Byte[] lobLocator)
at OracleInternal.ServiceObjects.OracleDataReaderImpl.CollectTempLOBsToBeFreed(Int32 rowNumber)
at Oracle.ManagedDataAccess.Client.OracleDataReader.ProcessAnyTempLOBs(Int32 rowNumber)
at Oracle.ManagedDataAccess.Client.OracleDataReader.Read()
at System.Data.Entity.Core.Common.Internal.Materialization.Shaper`1.StoreRead()
Suspect Entity Properties:
'Mapped to Oracle CLOB Column'
<Column("LARGEFIELD")>
Public Property LargeField As String
But I'm confident this is the proper way to map the fields per Oracle's matrix:
ODP.NET Types Overview
There is nothing obviously wrong with the generated SQL statement either:
SELECT
...
"Extent1"."LARGEFIELD" AS "LARGEFIELD",
...
FROM ... "Extent1"
WHERE ...
I have also tried this Fluent code per Ozkan's suggestion, but it does not seem to affect my case.
modelBuilder.Entity(Of [CLASS])().Property(
Function(x) x.LargeField
).IsOptional()
Troubleshooting Update:
After extensive testing, we are quite certain this is actually a bug, not a configuration problem. It appears to be the contents of the CLOB that cause the problem under a very specific set of circumstances. I've cross-posted this on the Oracle Forums, hoping for more information.
After installing the Oracle 12 client, we ran into the same problem.
In machine.config (C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config) I removed all entries with Oracle.ManagedDataAccess.
In directory C:\Windows\Microsoft.NET\assembly\GAC_MSIL I removed both Oracle.ManagedDataAccess and Policy.4.121.Oracle.ManagedDataAccess.
Then my C# program started working as usual, using the Oracle.ManagedDataAccess.dll in its own directory.
We ran into this problem in our project an hour ago and found a solution. The error occurs because of null values in a CLOB column. We have a CLOB column that is nullable in the database, but in the Entity Framework model it was String and not nullable. We changed the column's Nullable property to True in the EF model, and that fixed the problem.
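For reference, the same fix expressed in C# fluent configuration would look something like this (a minimal sketch; MyEntity and LargeField are placeholder names):
using System.Data.Entity;

public class MyDbContext : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Mark the CLOB-backed string as optional so null database values are allowed
        modelBuilder.Entity<MyEntity>()
            .Property(x => x.LargeField)
            .IsOptional();
    }
}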
We have this problem as well on some computers, and we are running the latest Oracle.ManagedDataAccess.dll (4.121.2.20150926 ODAC RELEASE 4).
We found a solution to our problem, and I just wanted to share.
This was the code that caused the problem on some computers:
Using connection As New OracleConnection(yourConnectionString)
Dim command As New OracleCommand(yourQuery, connection)
connection.Open()
Using reader As OracleDataReader = command.ExecuteReader()
Dim clobField As String = CStr(reader.Item("CLOB_FIELD"))
End Using
connection.Close()
End Using
And here's the version that made it work on all computers:
Using connection As New OracleConnection(yourConnectionString)
Dim command As New OracleCommand(yourQuery, connection)
connection.Open()
Using reader As OracleDataReader = command.ExecuteReader()
Dim clobField As String = reader.GetOracleClob(0).Value
End Using
connection.Close()
End Using
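For anyone working in C#, an equivalent of the working version might look like this (a sketch under the same assumptions: yourConnectionString and yourQuery are supplied by you, and the CLOB is the first column of the query; check IsDBNull first if the column is nullable):
using (var connection = new OracleConnection(yourConnectionString))
using (var command = new OracleCommand(yourQuery, connection))
{
    connection.Open();
    using (OracleDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // GetOracleClob avoids the code path that failed on temp LOBs
            string clobField = reader.IsDBNull(0) ? null : reader.GetOracleClob(0).Value;
        }
    }
}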
I spent a lot of time trying to decipher this and found bits of this and that here and there on the internet, but nowhere had everything in one place, so I'd like to post what I've learned and how I resolved it, which is much like Ragowit's answer, but I've got the C# code for it.
Background
The error: So I had this error on my while (dr.Read()) line:
Value cannot be null. \r\nParameter name: byteArray
I ran into very little on the internet about this, except that it was an error with the CLOB field when it was null, and was supposedly fixed in the latest ODAC release, according to this: https://community.oracle.com/thread/3944924
My take on this -- NOT TRUE! It hasn't been updated since October 5, 2015 (http://www.oracle.com/technetwork/topics/dotnet/utilsoft-086879.html), and the 12c package I'm using was downloaded in April 2016.
Full stack trace by someone else with the error that pretty much mirrored mine: http://pastebin.com/24AfFDnq
Value cannot be null.
Parameter name: byteArray
at System.BitConverter.ToString(Byte[] value, Int32 startIndex, Int32 length)
at OracleInternal.TTC.TTCLob.GetLobIdString(Byte[] lobLocator)
at OracleInternal.ServiceObjects.OracleDataReaderImpl.CollectTempLOBsToBeFreed(Int32 rowNumber)
at Oracle.ManagedDataAccess.Client.OracleDataReader.ProcessAnyTempLOBs(Int32 rowNumber)
at Oracle.ManagedDataAccess.Client.OracleDataReader.Read()
at System.Data.Entity.Core.Common.Internal.Materialization.Shaper`1.StoreRead()
'Mapped to Oracle CLOB Column'
<Column("LARGEFIELD")>
Public Property LargeField As String
'Mapped to Oracle BLOB Column'
<Column("IMAGE")>
Public Property FileContents As Byte()
How I encountered it: It was while reading an 11-column table of about 3000 rows. One of the columns was actually an NCLOB (so apparently that is just as susceptible as CLOB), which allowed nulls in the database, and some of its values were empty - it was an optional "Notes" field, after all. Oddly, I didn't get this error on the first or even second row that had an empty Notes field. It didn't error until row 768 finished and row 769 was about to start, according to an int counter I had set up (starting at 0) and checked against how many rows my DataTable held so far. I found I got the error if I used:
DataSet ds = new DataSet();
OracleDataAdapter adapter = new OracleDataAdapter(cmd);
adapter.Fill(ds);
as well as if I used:
DataTable dt = new DataTable();
OracleDataReader dr = cmd.ExecuteReader();
dt.Load(dr);
or if I used:
OracleDataReader dr = cmd.ExecuteReader();
if (dr.HasRows)
{
while (dr.Read())
{
....
}
}
where cmd is the OracleCommand, so it made no difference.
Resolution
The following is basically the code I used to parse through an OracleDataReader's values in order to assign them to a DataTable. It's actually not as refined as it could be - I just assign dr[i] to the data row in all cases except when the value is null, or when it is the eleventh column (index 10, since indexing starts at 0) and a particular query has been executed, so that I know where my NCLOB column is.
public static DataTable GetDataTableManually(string query)
{
DataSet ds = new DataSet(); // declared up front so it is still in scope after the try/finally
OracleConnection conn = null;
try
{
string connString = ConfigurationManager.ConnectionStrings["MyConn"].ConnectionString;
conn = new OracleConnection(connString);
OracleCommand cmd = new OracleCommand(query, conn);
conn.Open();
OracleDataReader dr = cmd.ExecuteReader(CommandBehavior.CloseConnection);
DataTable dtSchema = dr.GetSchemaTable();
DataTable dt = new DataTable();
List<DataColumn> listCols = new List<DataColumn>();
List<string> listTypes = new List<string>();
if (dtSchema != null)
{
foreach (DataRow drow in dtSchema.Rows)
{
string columnName = System.Convert.ToString(drow["ColumnName"]);
DataColumn column = new DataColumn(columnName, (Type)(drow["DataType"]));
listCols.Add(column);
listTypes.Add(drow["DataType"].ToString()); // necessary in order to record nulls
dt.Columns.Add(column);
}
}
// Read rows from DataReader and populate the DataTable
if (dr.HasRows)
{
int rowCount = 0;
while (dr.Read())
{
string fieldType = String.Empty;
DataRow dataRow = dt.NewRow();
for (int i = 0; i < dr.FieldCount; i++)
{
if (!dr.IsDBNull(i))
{
fieldType = dr.GetFieldType(i).ToString(); // example only, this is the same as listTypes[i], and neither help us distinguish NCLOB from NVARCHAR2 - both will say System.String
// This is the magic
if (query == "SELECT * FROM Orders" && i == 10)
dataRow[((DataColumn)listCols[i])] = dr.GetOracleClob(i); // <-- our new check!!!!
// Found if you have null Decimal fields, this is
// also needed, and GetOracleDecimal and GetDecimal
// will not help you - only GetFloat does
else if (listTypes[i] == "System.Decimal")
dataRow[((DataColumn)listCols[i])] = dr.GetFloat(i);
else
dataRow[((DataColumn)listCols[i])] = dr[i];
}
else // value was null; we can't always assign dr[i] if DBNull, such as when it is a number or decimal field
{
byte[] nullArray = new byte[0];
switch (listTypes[i])
{
case "System.String": // includes NVARCHAR2, CLOB, NCLOB, etc.
dataRow[((DataColumn)listCols[i])] = String.Empty;
break;
case "System.Decimal":
case "System.Int16": // Boolean
case "System.Int32": // Number
dataRow[((DataColumn)listCols[i])] = 0;
break;
case "System.DateTime":
dataRow[((DataColumn)listCols[i])] = DBNull.Value;
break;
case "System.Byte[]": // Blob
dataRow[((DataColumn)listCols[i])] = nullArray;
break;
default:
dataRow[((DataColumn)listCols[i])] = String.Empty;
break;
}
}
}
dt.Rows.Add(dataRow);
}
ds.Tables.Add(dt);
}
}
catch (Exception ex)
{
// handle error
}
finally
{
conn.Close();
}
// After everything is closed
if (ds.Tables.Count > 0)
return ds.Tables[0]; // there should only be one table if we got results
else
return null;
}
In the same way that I have it assigning specific types of null based on the column type found in the schema table loop, you could add the conditions to the "not null" side of the if...then and do various GetOracle... statements there. I found it was only necessary for this NCLOB instance, though.
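For completeness, a minimal usage sketch (the query text here matches the hard-coded check inside the method; adjust both together):
DataTable orders = GetDataTableManually("SELECT * FROM Orders");
if (orders != null)
    Console.WriteLine("Rows read: " + orders.Rows.Count);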
To give credit where credit is due, the original codebase is based on the answer given by sarathkumar at Populate data table from data reader.
For me it was simple! I had this error with ODAC v 4.121.1.0. I just updated Oracle.ManagedDataAccess to 4.121.2.0 with NuGet, and now it is working.
Have you tried uninstalling and reinstalling Oracle.ManagedDataAccess with NuGet?
Upgrading Oracle.ManagedDataAccess.dll to version 4.122.1.0 solved it for us. If you are using VS 2017, you can update via NuGet.
I'm using the LinqToExcel library. It's working great so far, except that I need to start the query at a specific row. This is because the Excel spreadsheet from the client uses some images and "header" information at the top of the file before the data actually starts.
The data itself will be simple to read and is fairly generic, I just need to know how to tell the ExcelQueryFactory to start at a specific row.
I am aware of the WorksheetRange<Company>("B3", "G10") option, but I don't want to specify an ending row, just where to start reading the file.
I'm using the latest version of LinqToExcel with C#.
I just tried this code and it seemed to work just fine:
var book = new LinqToExcel.ExcelQueryFactory(@"E:\Temporary\Book1.xlsx");
var query =
from row in book.WorksheetRange("A4", "B16384")
select new
{
Name = row["Name"].Cast<string>(),
Age = row["Age"].Cast<int>(),
};
I only got back the rows with data.
I suppose that you have already solved this, but maybe for others - it looks like you can use:
var excel = new ExcelQueryFactory(path);
var allRows = excel.WorksheetNoHeader();
//start from the 3rd row (zero-based indexing); use allRows.Count() or a computed range of rows you want
int length = allRows.Count();
for (int i = 2; i < length; i++)
{
RowNoHeader row = allRows.ElementAtOrDefault(i);
//process the row - access columns as you want - also zero-based indexing
}
Not as simple as specifying a Range("B3", ...), but it is another way to do it.
Hope this helps at least somebody ;)
I tried this, and it works fine for my scenario.
//get the sheets info
var faceWrksheet = excel.Worksheet(facemechSheetName);
// get the total rows count.
int _faceMechRows = faceWrksheet.Count();
// append with End Range.
var faceMechResult = excel.WorksheetRange<ExcelFaceMech>("A5", "AS" + _faceMechRows.ToString(), facemechSheetName)
.Where(i => i.WorkOrder != null).Select(x => x).ToList();
Have you tried WorksheetRange<Company>("B3", "G")?
Unfortunately, at this moment and iteration in the LinqToExcel framework, there does not appear to be any way to do this.
To get around this, we are requiring the client to put the data to be uploaded in its own sheet within the Excel document: the header row as the first row, with the data under it. If they want any "meta data", they will need to include it in another sheet. Below is an example from the LinqToExcel documentation of how to query a specific sheet.
var excel = new ExcelQueryFactory("excelFileName");
var oldCompanies = from c in excel.Worksheet<Company>("US Companies") //worksheet name = 'US Companies'
where c.LaunchDate < new DateTime(1900, 1, 1)
select c;
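Combining the two ideas above (querying a specific sheet and starting at an explicit row), something like this should also work - the overload taking a worksheet name is the one used in an earlier answer; the sheet name and range here are illustrative:
var excel = new ExcelQueryFactory("excelFileName");
var companies = (from c in excel.WorksheetRange<Company>("A4", "G65536", "US Companies")
                 select c).ToList(); // rows without data are not returned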
An old question for LINQ to Entities. I'm just asking it again, in case someone has come up with a solution.
I want to perform a query that does this:
UPDATE dbo.Products SET Category = 5 WHERE Category = 1
And I want to do it with Entity Framework 4.3.1.
This is just an example; I have tons of records and I want just one column to change value, nothing else. Loading them into a DbContext with Where(...).Select(...), changing all the elements, and then saving with SaveChanges() does not work well for me.
Should I stick with ExecuteCommand and send the direct query as written above (made reusable, of course), or is there another nice way to do it from LINQ to Entities / Fluent?
Thanks!
What you are describing isn't actually possible with Entity Framework. You have a few options:
You can write it as a string and execute it via EF with .ExecuteSqlCommand (on the context)
You can use something like Entity Framework Extended (although from what I've seen it doesn't have great performance)
You can update an entity without first fetching it from the db, as below (a concrete sketch follows the snippet)
using (var context = new DBContext())
{
context.YourEntitySet.Attach(yourExistingEntity);
// Update fields
context.SaveChanges();
}
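To make that concrete, here is a minimal sketch assuming a Product entity with an Id key and a Category property (names are illustrative); nothing is fetched, and SaveChanges issues a single UPDATE:
using (var context = new DBContext())
{
    // Stub entity carrying only the primary key; no SELECT is issued
    var product = new Product { Id = 42 };
    context.Products.Attach(product); // tracked as Unchanged
    product.Category = 5; // snapshot change tracking flags this property as Modified
    context.SaveChanges(); // sends a single UPDATE for the changed column
}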
If you have set-based operations, then SQL is better suited than EF.
So, yes - in this case you should stick with ExecuteCommand.
I don't know if this suits you, but you can try creating a stored procedure that will perform the update, and then add that procedure to your model as a function import. Then you can perform the update in a single database call:
using(var dc = new YourDataContext())
{
dc.UpdateProductsCategory(1, 5);
}
where UpdateProductsCategory would be the name of the imported stored procedure.
Yes, ExecuteCommand() is definitely the way to do it without fetching all the rows' data and letting the ChangeTracker sort it out. Just to provide an example:
This will result in all rows being fetched and an UPDATE performed for each row changed:
using (YourDBContext yourDB = new YourDBContext()) {
yourDB.Products.Where(p => p.Category == 1).ToList().ForEach(p => p.Category = 5);
yourDB.SaveChanges();
}
Just a single update:
using (YourDBContext yourDB = new YourDBContext()) {
var sql = "UPDATE dbo.Products WHERE Category = #oldcategory SET Category = #newcategory";
var oldcp = new SqlParameter { ParameterName = "oldcategory", DbType = DbType.Int32, Value = 1 };
var newcp = new SqlParameter { ParameterName = "newcategory", DbType = DbType.Int32, Value = 5 };
yourDB.Database.ExecuteSqlCommand(sql, oldcp, newcp);
}
I have run into a problem inserting multiple rows in a batch with SubSonic 3. My development environment includes:
1. Visual Studio 2010, but using .NET 3.5
2. Active Record Mode in SubSonic 3.0.0.4
3. SQL Server 2005 express
4. Northwind sample database
I am using Active Record mode to insert multiple "Product" rows into the table "Products". If I insert the rows one by one, either by calling "aProduct.Add()" or by calling "Insert.Execute()" multiple times (just like the code below), it works fine.
private static Product[] CreateProducts(int count)
{
Product[] products = new Product[count];
for (int index = 0; index < products.Length; ++index)
{
products[index] = new Product
{
ProductName = string.Format("cheka-test-{0}", index.ToString()),
Discontinued = (index % 2 == 0),
};
}
return products;
}
private static void SucceedByMultiExecuteInsert()
{
Product[] products = CreateProducts(2);
// -------------------------------- prepare batch
NorthwindDB db = new NorthwindDB();
var inserts = from prod in products
select db.Insert.Into<Product>(x => x.ProductName, x => x.Discontinued).Values(prod.ProductName, prod.Discontinued);
// -------------------------------- batch insert
var selectAll = Product.All();
Console.WriteLine("--- before total rows = {0}", selectAll.Count().ToString());
foreach (Insert insert in inserts)
insert.Execute();
Console.WriteLine("+++ after inserting {0} rows, now total rows = {1}",
products.Length.ToString(), selectAll.Count().ToString());
}
But if I use "BatchQuery", like the code below,
private static void FailByBatchInsert()
{
Product[] products = CreateProducts(2);
// -------------------------------- prepare batch
NorthwindDB db = new NorthwindDB();
BatchQuery batchquery = new BatchQuery(db.Provider, db.QueryProvider);
var inserts = from prod in products
select db.Insert.Into<Product>(x => x.ProductName, x => x.Discontinued).Values(prod.ProductName, prod.Discontinued);
foreach (Insert insert in inserts)
batchquery.Queue(insert);
// -------------------------------- batch insert
var selectAll = Product.All();
Console.WriteLine("--- before total rows = {0}", selectAll.Count().ToString());
batchquery.Execute();
Console.WriteLine("+++ after inserting {0} rows, now total rows = {1}",
products.Length.ToString(), selectAll.Count().ToString());
}
then it failed with the exception:
Unhandled Exception: System.Data.SqlClient.SqlException: Must declare the scalar variable "@ins_ProductName".
Must declare the scalar variable "@ins_ProductName".
Please give me some help to solve this problem. Many thanks.
I ran into this problem as well. If you look at the query it's attempting to run, you'll see it doing something like this (this isn't actual code but you'll get the point):
exec_sql N'insert into MyTable (SomeField) Values (@ins_SomeField)', N'@0 varchar(32)', '@0=SomeValue'
For some reason it defines the parameters in the query as "@ins_" + FieldName but then passes the parameters as ordinals. I have yet to determine the pattern for why/when it does this, but I've lost enough time during this dev cycle futzing with SubSonic to try to diagnose the problem properly.
The work-around I implemented involves downloading the 3.0.0.4 source from GitHub and making a change on line 179 of Insert.cs.
Where it reads
ParameterName = _provider.ParameterPrefix + "ins_" + columnName.ToAlphaNumericOnly(),
Changing it to
ParameterName = _provider.ParameterPrefix + Inserts.Count.ToString(),
seemed to do the trick for me. I make no warranties about this solution for you, expressed or implied. It did work for me but your mileage may vary.
I should also note that there's similar logic around the "update" statements as well in Update.cs on lines 181 and 194 but I haven't had these give me problems... yet.
Honestly, I don't think SubSonic is ready for primetime, and that's a shame, because I really like how Rob set it up. That said, it's in my product for better or worse now, so you make the best of what you've got.