Below is the code where I try to run a simple LINQ to Entities query, and I want to access the results one by one:
inctDomainContext innn = new inctDomainContext();
var exx = from c in innn.cordonnes select c;
foreach (var i in exx) {
    // doing something here, but the program doesn't enter the loop
}
Why doesn't the program enter the foreach loop?
It appears you're working with WCF RIA Services in Silverlight. This is quite different from how things work when using Entity Framework directly. In your case, you have to "load" the data before being able to access it.
To do this, call the "Load" method on the domain context and pass in the query you need (in your case GetCoordonneQuery()), along with a callback to be executed when the asynchronous load call has finished. The callback will have access to the results of the query. Here's an example:
....
context.Load(GetCoordonneQuery(), OnLoadCoordonneCompleted, null);
....
void OnLoadCoordonneCompleted(LoadOperation<Coordonne> loadOp)
{
    foreach (var coordonne in loadOp.Entities)
    {
        // do something with the data
    }
}
When OnLoadCoordonneCompleted is called (i.e. when the asynchronous load call has finished), context.Coordonnes will be loaded and will contain the data you want.
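Putting it together with the names from the question (a sketch only; the exact query method name is generated by RIA Services from your domain service, so it may differ):

inctDomainContext innn = new inctDomainContext();

// Load() only starts the asynchronous fetch; the lambda runs
// once the entities have actually arrived from the server.
innn.Load(innn.GetCoordonneQuery(), loadOp =>
{
    foreach (var coordonne in loadOp.Entities)
    {
        // the entities are only usable here, after the load completes
    }
}, null);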
Hope this helps
Are you sure there is data in there?
Try this:
inctDomainContext innn = new inctDomainContext();
bool exxAny = innn.cordonnes.Any();
Then, if exxAny is false, there is no data in the collection, and hence the foreach does nothing.
I have some data stored on the client side via Session.set(...) (which is then rendered into a template).
This data changes dynamically on the server side. How can I synchronize it, so that the client updates its templates any time the data changes on the server? The best method would be publish/subscribe, but it's designed for use with a database.
This is what I have ended up with so far:
if (Meteor.isClient) {
  Session.setDefault('dynamicArray', [{text: "item1"}, {text: "item2"}]);

  Template.body.helpers({
    dynamicData: function () {
      return Session.get('dynamicArray');
    }
  });

  // place for code to sync dynamicArray with server
}

if (Meteor.isServer) {
  Meteor.startup(function () {
    var dynamicArray = [{text: "item3"}, {text: "item4"}, {text: "item5"}];
    // place for code to publish dynamicArray for client
  });
}
Regarding your comment, you will need to create a DynamicData collection first, located outside the .isClient and .isServer conditionals. From there, .find() will allow you to fetch data from the server in the form of a cursor, which can be iterated over using {{#each dynamicData}}. An example of how you might set up the collection and the helper is as follows:
DynamicData = new Mongo.Collection('dynamicData'); // sets up the new collection

if (Meteor.isClient) {
  Template.body.helpers({
    dynamicData: function () {
      // return a cursor, restricted to the dynamicArray field
      return DynamicData.find({}, {fields: {dynamicArray: 1}});
    }
  });
}
Of course, this depends on how the document(s) you are retrieving are structured and what you are using them for. For instance, if you're only looking to return a single dynamicArray, you might be better off using:
return DynamicData.findOne({}, {fields: {dynamicArray: 1}}).dynamicArray;
...since this will return the array [item1, item2, item3] directly. This seems to be what you're looking for; I used the same method to replace an initial over-reliance on session data for syncing information. The key point is to make server info available to the client through the helpers, which bypasses the need to sync via session data. Hope this helps.
I'm experimenting with MongoDB in Node.js using the node-mongodb-native driver. One problem I'm running into is the number of nested callbacks. I'm trying to simplify a few things by reducing the code required for a query.
Instead of this ...
db.collection("test", function(err, collection) {
  collection.find(...).toArray(function(err, results) {
    // ...
  });
});
... I was thinking of building an object which acts as a cache of collections, so that the first callback is not necessary. I'm using the following code to build the object:
var collections = {};
["test", "foo"].forEach(function(name) {
  db.collection(name, function(err, coll) {
    collections[name] = coll;
  });
});
With it, I'm able to clean up the first code snippet to:
collections.test.find(...).toArray(function(err, results) {
// ...
});
I was wondering whether this is good practice. It works just fine, but I guess the callback for getting a collection is there for a reason. Does it make sense to build a collection cache as I'm doing now?
That completely depends on what a collection object is.
- Is it live?
- Is it connected to the database?
- Does it do any internal caching?
- Does it reflect new data?
Without knowing those details, I recommend you create a lazy evaluation proxy.
Mongo.collection("test").find(...).toArray(function(err, results) {
// ...
});
The idea here is that you internally store the find command, and when toArray is called you get the collection, invoke the find command on it, and then invoke toArray. This means you're getting a new collection every time and avoid the "is caching safe" problem, but still have a nice API.
I'm using LINQ and having trouble doing something that I believe should be trivial. I want to return data from one layer so it can be used independently of LINQ in another layer.
Suppose I have a data access layer. It knows about the Entity Framework and how to interact with it. But it doesn't care who accesses it. The one interesting requirement I have is that the queries in the Entity Framework return projected data that is not part of the entity model itself. Please don't ask me to change this part of the requirement and make POCOs for each return type, as that is not the best design given the problem I am trying to solve. Below is an example.
public class ChartData
{
    public <<returnType??>> GetData()
    {
        MyEntities context = new MyEntities();
        var results = from v in context.vManyColumnsOfData
                      where v.CompanyName == "acme"
                      select new { Year = v.SalesYear, Income = v.Income };
        return ??;
    }
}
Then, I would like to have an ASP.NET UI layer be able to call into the data access layer to get the data in order to bind it to a control. The UI layer should have no notion of where the data came from. It should only know that it has the data it needs to bind. Below is an example.
protected void chart_Load(object sender, EventArgs e)
{
    // set some chart properties
    chart.Skin = "Default";
    ...

    // Set the data source
    ChartData dataMgr = new ChartData();
    <<returnType?>> data = dataMgr.GetData();
    chart.DataSource = data;
    chart.DataBind();
}
What is the best way to send LINQ-projected data back to another layer?
If you don't need to use the projected type statically, just return IEnumerable<object>.
Please don't ask me to change this part of the requirement and make POCOs for each return type, as it is not the best design given the problem I am trying to solve.
I feel like I should rightly ignore this, as the best thing to do is to return a defined type. Anonymous types are useful when they are wholly contained within the method that creates them. Once you start passing them around, it is time to go ahead and give them the proper class treatment.
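For illustration, here is a minimal version of that "proper class treatment", reusing the names from the question (the property types are guesses):

// a small DTO in place of the anonymous type
public class YearlyIncome
{
    public int Year { get; set; }
    public decimal Income { get; set; }
}

public IEnumerable<YearlyIncome> GetData()
{
    MyEntities context = new MyEntities();
    return (from v in context.vManyColumnsOfData
            where v.CompanyName == "acme"
            select new YearlyIncome { Year = v.SalesYear, Income = v.Income }).ToList();
}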
However, to live within your imposed limitations, you can return IEnumerable<object> from the method and use that or var at the call site, relying on the dynamic binding of the control to get at the data. It's not going to help you if you need to deal with the objects programmatically, but it will serve fine for data binding.
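A sketch of that variant (again with the question's names): anonymous types are reference types, so on .NET 4 and later covariance makes the conversion to IEnumerable<object> implicit.

public IEnumerable<object> GetData()
{
    MyEntities context = new MyEntities();
    var results = from v in context.vManyColumnsOfData
                  where v.CompanyName == "acme"
                  select new { Year = v.SalesYear, Income = v.Income };
    // List<anonymous type> converts to IEnumerable<object> via covariance
    return results.ToList();
}

Data binding then resolves Year and Income by reflection, so chart.DataSource = dataMgr.GetData() works unchanged.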
You cannot return an anonymous type, so basically for this you will need POCOs even though you don't want them.
"not the best design given the problem I am trying to solve"
Could you explain what you are trying to achieve a little more? It might be possible to return some type of list containing a dictionary of items (i.e. rows and columns). Think something like an untyped DataSet (yuck).
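For what it's worth, a sketch of that dictionary idea with the question's names (each row becomes a map of column name to value):

public List<Dictionary<string, object>> GetData()
{
    MyEntities context = new MyEntities();
    var rows = from v in context.vManyColumnsOfData
               where v.CompanyName == "acme"
               select new { Year = v.SalesYear, Income = v.Income };
    // materialize, then shape each row as column-name/value pairs
    return rows.AsEnumerable()
               .Select(r => new Dictionary<string, object>
               {
                   { "Year", r.Year },
                   { "Income", r.Income }
               })
               .ToList();
}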
Your GetData method can use IEnumerable (the "old" non-generic interface) as its return type.
Any dynamic resolution (e.g. ASP.NET or XAML bindings) should work as expected, which seems to be what you want to do.
However, if you want to use the results in your code, you will probably have to resort to .NET 4's dynamic keyword.
The following example can be run in LINQPad (in "C# Program" mode) and illustrates this:
void Main()
{
    var v = GetData();

    foreach (dynamic element in v)
    {
        ((string)element.Name).Dump();
    }
}

public IEnumerable GetData()
{
    return from i in Enumerable.Range(1, 10)
           select new
           {
               Name = "Item " + i,
               Value = i
           };
}
Keep in mind that, design-wise, coding like this will make most people frown, and it can affect performance.
I need to search CouchDB based on several criteria entered in a form: name, an array of tags, and so on. I would then need various views to index on these fields. Ultimately, all the results will be collated in data.js and provided to mustache.html. Say there are 3 views - docsByName, docsByTags, docsById.
What I don't know is how to query all these views in query.js. Can this be done, and how?
Or should the approach be to write one view that makes multiple emits for each search?
Thank you.
From what you say, I assume you are using Evently, so I will quote from the Evently primer:
The async function is the main star, which in this case makes an Ajax request (but it can do anything it wants). Another important thing to note is that the first argument to the async function is a callback which you use to tell Evently when you are done with your asynchronous action. [...] Whatever you pass to the callback function then becomes the first item passed to the data function.
In short: put your Ajax requests in async.js.
As a side note: Evently is only one of the possible choices to write a couchapp and it is not clear if it is maintained. However it works and it is easy to rearrange the code to not use it.
EDIT: Here is a sample async function (cut & paste from an old program):
function(cb, e) {
  var app = $$(this).app;

  app.db.openDoc('SOMEDOCID', {
    error: function(code, error, reason) {
      alert("Error(" + code + " " + error + "): " + reason);
    }
    , success: function(doc) {
      app.view('SOMEVIEWNAME', {
        include_docs: true
        , error: function(code, error, reason) {
          alert("Error(" + code + " " + error + "): " + reason);
        }
        , success: function(resp) {
          resp.doc = doc;
          cb(resp);
        }
      });
    }
  });
}
Can someone weigh in with their opinion on the pros and cons of wrapping the DataContext in a using statement or not in LINQ to SQL, in terms of factors such as performance, memory usage, ease of coding, the right thing to do, etc.?
Update: In one particular application, I found that without wrapping the DataContext in a using block, memory usage kept increasing because live objects were not released for GC. In the example below, if I hold the reference to the List<Table> object and access the entities of q, I create an object graph that is not released for GC.
DataContext with using:
using (DBDataContext db = new DBDataContext())
{
    var q =
        from x in db.Tables
        where x.Id == someId
        select x;

    return q.ToList();
}
DataContext without using and kept alive:
DBDataContext db = new DBDataContext();

var q =
    from x in db.Tables
    where x.Id == someId
    select x;

return q.ToList();
Thanks.
A DataContext can be expensive to create, relative to other things. However, if you're done with it and want connections closed ASAP, this will do that, releasing any cached results from the context as well. Remember, you're creating it no matter what; in this case you're just letting the garbage collector know sooner that there's more free stuff to get rid of.
The DataContext is made to be a short-lived object: use it, get the unit of work done, get out... that's precisely what you're doing with a using.
So the advantages:
- Quicker closed connections
- Freed memory from the dispose (cached objects in the context)
Downside - more code? But that shouldn't be a deterrent; you're using using properly here.
Look here at the Microsoft answer: http://social.msdn.microsoft.com/Forums/en-US/adodotnetentityframework/thread/2625b105-2cff-45ad-ba29-abdd763f74fe
The short version of whether you need to use using/.Dispose():
The short answer; no, you don't have to, but you should...
Well, it's an IDisposable, so I guess it's not a bad idea. The folks at MSFT have said that they made DataContexts as lightweight as possible so that you may create them with reckless abandon, so you're probably not gaining much, though...
The first time, the DataContext will get the object from the DB.
The next time you fire a query to get the same object (same parameters), you'll see the query in the profiler, but the object in your DataContext will not be replaced with the new one from the DB!
Not to mention that behind every DataContext sits an identity map of all the objects you have asked for from the DB (you don't want to keep this around).
The entire idea of the DataContext is a unit of work with optimistic concurrency.
Use it for a short transaction (one submit only) and dispose of it.
The best way to not forget to dispose is using().
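A small sketch of that caching behavior, reusing the DBDataContext and Tables names from the question (someId is assumed to identify an existing row):

using (var db = new DBDataContext())
{
    var first = db.Tables.Single(x => x.Id == someId);

    // suppose another process updates this row in the database here...

    var second = db.Tables.Single(x => x.Id == someId);

    // The second query is sent to the DB (you'll see it in the profiler),
    // but the identity map hands back the same tracked instance, so the
    // updated values are not reflected.
    bool sameInstance = ReferenceEquals(first, second); // true
}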
It depends on the complexity of your data layer. If every call is a simple single query, then each call can be wrapped in a using like in your question, and that would be fine.
If, on the other hand, your data layer can expect multiple sequential calls from the business layer, then you'd wind up repeatedly creating and disposing the DataContext for each larger sequence of calls. Not ideal.
What I've done is to make my data layer object IDisposable. When it's created, the DataContext is created (or really, once the first call to a method is made), and when the data layer object is disposed, it closes and disposes the DataContext.
Here's what it looks like:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Configuration;

namespace PersonnelDL
{
    public class PersonnelData : IDisposable
    {
        #region DataContext management
        /// <summary>
        /// Create common datacontext for all data routines to the DB
        /// </summary>
        private PersonnelDBDataContext _data = null;
        private PersonnelDBDataContext Data
        {
            get
            {
                if (_data == null)
                {
                    _data = new PersonnelDBDataContext(ConfigurationManager.ConnectionStrings["PersonnelDB"].ToString());
                    _data.DeferredLoadingEnabled = false; // no lazy loading
                    //var dlo = new DataLoadOptions(); // dataload options go here
                }
                return _data;
            }
        }

        /// <summary>
        /// close out data context
        /// </summary>
        public void Dispose()
        {
            if (_data != null)
                _data.Dispose();
        }
        #endregion

        #region DL methods
        public Person GetPersonByID(string userid)
        {
            return Data.Persons.FirstOrDefault(p => p.UserID.ToUpper().Equals(userid.ToUpper()));
        }

        public List<Person> GetPersonsByIDlist(List<string> useridlist)
        {
            var ulist = useridlist.Select(u => u.ToUpper().Trim()).ToList();
            return Data.Persons.Where(p => ulist.Contains(p.UserID.ToUpper())).ToList();
        }

        // more methods...
        #endregion
    }
}
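Usage from the business layer might then look like this (a sketch; the user IDs are made up):

using (var dl = new PersonnelData())
{
    var person = dl.GetPersonByID("jdoe");
    var team = dl.GetPersonsByIDlist(new List<string> { "jdoe", "asmith" });
    // ...any number of sequential calls share the one DataContext...
} // the DataContext is disposed here, along with the data layer object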
In one particular application, I found that without wrapping the DataContext in a using block, memory usage kept increasing because live objects were not released for GC. In the example below, if I hold the reference to the List<Table> object and access the entities of q, I create an object graph that is not released for GC.
DBDataContext db = new DBDataContext();

var qs =
    (from x in db.Tables
     where x.Id == someId
     select x).ToList();

foreach (var q in qs)
{
    process(q);
    // cannot dispose of the datacontext here, as the 2nd iteration
    // would throw a "DataContext already disposed" exception while
    // accessing the entities of q in the process() function
    //db.Dispose();
}

void process(Table q)
{
    // accesses an entity of q, which uses deferred loading;
    // if the datacontext has already been disposed, a
    // "DataContext already disposed" exception is thrown
}
Given this example, I cannot dispose of the DataContext, because all the Table instances in the list variable qs share the same DataContext. After Dispose(), accessing the entities of q in process(Table q) throws a "DataContext already disposed" exception.
The ugly kludge, for me, was to remove all the entity references from the q objects after the foreach loop. The better way is, of course, to use the using statement.
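For completeness, one way to keep the using statement in this situation is to eager-load whatever process() touches before the context goes away. A sketch (Table.RelatedEntity is a hypothetical association that process() needs):

List<Table> qs;
using (var db = new DBDataContext())
{
    var options = new DataLoadOptions(); // System.Data.Linq
    options.LoadWith<Table>(t => t.RelatedEntity); // hypothetical child to preload
    db.LoadOptions = options;

    qs = (from x in db.Tables
          where x.Id == someId
          select x).ToList();
} // safe to dispose: everything process() needs is already loaded

foreach (var q in qs)
{
    process(q); // no deferred loads, so no "already disposed" exception
}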
As far as my experience goes, I would say use the using statement.