I'm using the C# driver for MongoDB and trying to edit some MongoDB elements. When deserializing BSON, I'm using the [IgnoreExtraElements] tag to filter out fields I don't really care about editing. The problem is when I'm trying to serialize the elements back into the Mongo database. Instead of changing only the fields I've edited, serializing elements back overwrites the whole object.
For example, I'm changing a Word element with C# properties:
[BsonId]
public ObjectId _id;
public string word;
In MongoDB it also has the element "conjugations", which is a kind of complicated array I don't want to serialize or mess with. But when I try something similar to the below code, it blanks out the conjugations array.
MongoWord word = collection.FindOneAs<MongoWord>(Query.EQ("word","hello"));
word.word = "world";
collection.Save(word);
How can I avoid overwriting extra fields in the Mongo database? Right now I'm trying to write an Update.Set builder query and only update the fields I've changed using Reflection/Generics. Is there something easy like a reverse [IgnoreExtraElements] or an update setting that I'm missing here?
That is why [IgnoreExtraElements] must be specified manually: you are essentially opting in to potentially losing data.
The correct way to handle this is to actually support extra elements. You can see this section in the documentation for how to do this: https://mongodb.github.io/mongo-csharp-driver/2.12/reference/bson/mapping/#supporting-extra-elements
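Applied to the class in the question, supporting extra elements looks roughly like this (a sketch based on that documentation; the catch-all member must be a BsonDocument or an IDictionary<string, object>):
public class MongoWord
{
    [BsonId]
    public ObjectId _id;

    public string word;

    // elements with no matching member (e.g. "conjugations") are captured
    // here on deserialization and written back out when you Save
    [BsonExtraElements]
    public BsonDocument ExtraElements;
}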
This is the function I ended up using. I didn't want to use [BsonExtraElements] since it seemed unnecessary to pull down ExtraElements just to save them again if they hadn't been edited. Instead I'm using the mongo UpdateBuilder to only update the fields that I've changed/brought down from mongo. The solution is problematic if I need to clear fields by setting them to null.
/// <summary>
/// Update a Mongo object without overwriting fields that C# thinks are null or doesn't know about
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="model">The mongo object to update</param>
/// <param name="collection">The collection the object should go in</param>
/// <param name="objectId">The Bson ObjectId of the existing object in collection</param>
public void UpdateNotNullColumns<T>(T model, MongoCollection<T> collection, ObjectId objectId)
{
    if (objectId == default(ObjectId))
    {
        return;
    }

    // Build an update query with all the non-null fields using reflection
    Type type = model.GetType();
    FieldInfo[] fields = type.GetFields();
    UpdateBuilder builder = new UpdateBuilder();
    foreach (var field in fields)
    {
        BsonValue bsonValue = BsonValue.Create(field.GetValue(model));
        if (bsonValue != null)
        {
            builder.Set(field.Name, bsonValue);
        }
    }

    // Actually update!
    collection.Update(Query.EQ("_id", objectId), builder);
}
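In use it mirrors the question's own code (a hypothetical flow, reusing MongoWord from above):
MongoWord word = collection.FindOneAs<MongoWord>(Query.EQ("word", "hello"));
word.word = "world";
// builds $set entries only for the non-null public fields,
// so "conjugations" is left untouched
UpdateNotNullColumns(word, collection, word._id);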
Using ElasticSearch.Net v6.0.2
Given the Indexed item
{
    "PurchaseFrequency": 76,
    "purchaseFrequency": 80
}
and the POCO Object
public class Product
{
    public int PurchaseFrequency { get; set; }
}
and the setting
this.DefaultFieldNameInferrer(x => x);
Nest is returning a PurchaseFrequency = 80 even though this is the wrong field.
How can I get NEST to pull the correct cased field from ElasticSearch?
I don't think that this is going to be easily possible because this behaviour is defined in Json.NET, which NEST uses internally (not a direct dependency in 6.x, it's IL-merged into the assembly).
For example,
JsonConvert.DeserializeAnonymousType("{\"a\":1, \"A\":2}", new { a = 0 })
deserializes the anonymous type property a value to 2. But
JsonConvert.DeserializeAnonymousType("{\"A\":2, \"a\":1}", new { a = 0 })
deserializes the anonymous type property a value to 1; i.e. the order in which properties appear in the returned JSON has a bearing on the final value assigned to a property on an instance of a type.
If you can, avoid JSON property names that differ only in case. If you can't, then you'd need to hook up the JsonNetSerializer in the NEST.JsonSerializer nuget package and write a custom JsonConverter for your type which only honours the exact casing expected.
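A rough sketch of such a converter, using the Product type from the question (wiring it into NEST via JsonNetSerializer is covered by the package docs and not shown here):
using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

public class ExactCaseProductConverter : JsonConverter
{
    public override bool CanConvert(Type objectType)
    {
        return objectType == typeof(Product);
    }

    public override bool CanWrite
    {
        get { return false; }
    }

    public override object ReadJson(JsonReader reader, Type objectType,
        object existingValue, JsonSerializer serializer)
    {
        var json = JObject.Load(reader);
        var product = new Product();
        JToken value;
        // honour only the exact casing; "purchaseFrequency" is ignored
        if (json.TryGetValue("PurchaseFrequency", StringComparison.Ordinal, out value))
            product.PurchaseFrequency = value.Value<int>();
        return product;
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        throw new NotSupportedException();
    }
}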
I am using the following function to Query an EF6 model using a bunch of query expressions that I have built programmatically:
/// <summary>
/// Common application of generated queries to select the required resources
/// </summary>
/// <typeparam name="T">The EF model type</typeparam>
/// <param name="source">The source of models (a DbSet)</param>
/// <param name="queries">A collection of query pairs (database, local) to be applied as Where clauses</param>
/// <returns>The Id of each matching resource, as a string</returns>
protected static IEnumerable<string> ApplyCriteriaBase<T>(IQueryable<T> source, IEnumerable<(Expression, Expression)> queries) where T : DbResource
{
    // Apply the database queries sequentially as Where clauses
    var resources = queries
        .Select(p => p.Item1)
        .Aggregate(source, (ds, qry) => ds.Where(qry as Expression<Func<T, bool>>))
        .Distinct();

    // If there are no local queries then don't bother fetching the whole object, just select the id in the generated SQL
    if (queries.All(q => q.Item2 == null))
        return resources
            .Select(a => a.Id.ToString());

    // Apply local queries
    // AsEnumerable() escapes from the IQueryable monad so the subsequent queries are executed locally on IEnumerable
    // This requires fetching the whole object because we don't know which bit is to be tested by the local query
    // Note that IEnumerable doesn't understand query expressions so we have to compile them to code and then invoke them
    return queries
        .Select(p => p.Item2)
        .Where(q => q != null)
        .Aggregate(
            resources.ToList() as IEnumerable<T>, // .AsEnumerable() doesn't work
            (ds, qry) => ds.Where((qry as Expression<Func<T, bool>>).Compile().Invoke))
        .Select(a => a.Id.ToString());
}
The idea is that queries contains 2-tuples of query expressions where the first of each is to be run against the database (MySql) and the second, if present, is to be run against the EF models returned.
It is my understanding that AsEnumerable() should permit the second query to be executed locally and lazily without pulling all of the results into memory. Unfortunately this fails with the error:
"There is already an open DataReader associated with this Connection
which must be closed first."
The fix shown in the example, converting the result stream to a List, works correctly but is strict (it materializes the full result set up front).
Why doesn't AsEnumerable() work and is there another way to process the results lazily? Some instances of DbResource are very large and I only want to accumulate the Id of each resource.
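To illustrate, a minimal sketch of the difference (MyDbContext and its Resources set are hypothetical; the assumption is that something inside the loop, such as a lazy-loaded navigation property, issues a second command on the same connection):
using (var ctx = new MyDbContext())
{
    // deferred: no SQL has run yet
    IEnumerable<DbResource> streamed = ctx.Resources.AsEnumerable();
    foreach (var r in streamed)
    {
        // the DataReader stays open for the whole loop, so any further
        // query on this connection here throws "There is already an open
        // DataReader associated with this Connection..."
    }

    // strict: the reader is opened, drained and closed inside ToList()
    List<DbResource> buffered = ctx.Resources.ToList();
    foreach (var r in buffered)
    {
        // safe: the reader is closed, further queries are fine
    }
}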
Thanks,
Andy
I'm a bit puzzled by NHibernate's IsDirty() method.
Directly after getting a (very large) complex object from my database, NHibernate's ISession.IsDirty() gives 'true'.
IFacadeDAL fd = new FacadeDAL();
// Session's not dirty
IProject proj = fd.GetByID<IProject, string>("123611-3640");
// Session is dirty
However, if I call Commit() like so:
using (ITransaction trans = Facade.Session.Transaction)
{
    trans.Begin();
    Facade.Session.Save(entity);
    trans.Commit();
    return true;
}
this results in no SQL (except for "exec sp_reset_connection").
I have read that due to 'mapping choices' you can get "ghosts" in your session (causing the session to say it's dirty), but wouldn't it then also try to update something? Also, if this is caused e.g. by "converting" an SQL bit to a C# bool, I don't think I can change it... (no clue whether that could be a cause for ghosts, though).
Update 2:
There are several (sql server) views and tables involved here. This is the (very) simplified class:
public class Project : IProject
{
    private string id;
    private List<IPlantItem> plantItems;

    public Project() { }

    public virtual string ID
    {
        get { return id; }
    }

    public virtual IEnumerable<IPlantItem> PlantItems
    {
        get { return plantItems; }
    }
}
PlantItems are stored in a table, so I expect that when I change anything in a PlantItem, IsDirty should change to 'true'.
My question is: is there a way to check whether the session at that point, on Flush() (or in my case on Commit(), for that matter), would generate actual SQL statements? And if not: is there another way of (manually) storing some sort of snapshot of the session to compare the current session against?
Update 1: I should really also mention these aspects:
that my FlushMode is set to 'None'.
that the underlying data of 'IProject'-object itself is based on a sql-view and therefore has most properties in the mapping set to update="false"
that when I actually change something in an object and use the same method for saving, SQL update statements are sent (and thus all is committed just fine)
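A sketch of one way to log what a flush would actually emit, using NHibernate's interceptor hook (the session wiring is assumed):
public class DirtyLoggingInterceptor : NHibernate.EmptyInterceptor
{
    // called at flush time for every entity NHibernate considers dirty
    public override bool OnFlushDirty(object entity, object id,
        object[] currentState, object[] previousState,
        string[] propertyNames, NHibernate.Type.IType[] types)
    {
        for (int i = 0; i < propertyNames.Length; i++)
        {
            object current = currentState[i];
            object previous = previousState == null ? null : previousState[i];
            if (!Equals(current, previous))
                System.Console.WriteLine("{0}.{1}: {2} -> {3}",
                    entity.GetType().Name, propertyNames[i], previous, current);
        }
        return false; // we did not modify the state
    }
}

// usage: var session = sessionFactory.OpenSession(new DirtyLoggingInterceptor());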
In my experience, ghosts can be caused by a database column being a nullable int while the mapping uses an ordinary int.
When the entity gets hydrated, the nullable db int is converted to zero, and hence the entity is now dirty.
Another way to get dirty records is by specifying a wrong type in the XML mapping, e.g.
public enum Sex
{
    Unspecified,
    Male,
    Female
}
...
public virtual Sex Sex { get; set; }
and specify an int in the mapping.
<property name="Sex" type="int"/>
See this link, which explains in more detail how to test your mappings.
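If the mismatch is the enum/int one above, the likely fix (a sketch) is to drop the explicit type so NHibernate infers the enum type from the property:
<!-- no type attribute: NHibernate infers the enum type from the property -->
<property name="Sex" />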
If some of your entities are dirty - and therefore the ISession is dirty - then you have a mismatch between the properties and the database. For example, imagine you have a column in a table that is nullable, but in your code it is set as not nullable (an int, for example). NHibernate will consider it dirty, because its current value (0 in the case of an integer) is different from the value that came from the db (null). Look for "Ghost properties NHibernate" on Google.
I am developing an ASP.NET MVC3 application in C#.
I am trying to implement in my application a "narrow-down" functionality applied the result-set obtained from a search.
In short, after I perform a search and the results are displayed in the center of the page, I would like to have on the left/right side of the page a CheckBoxList helper for each property of the search results. The CheckBoxes of each CheckBoxList represent the distinct values of that property.
For instance, if I search Product and it has a Color property with values blue, red and yellow, I create a CheckBoxList with the text Color and three CheckBoxes, one for each color.
After a research on the Web I found this Dynamic LINQ library made available by Scott Guthrie. Since the most recent example/tutorial I found is from 2009, I was wondering whether this library is actually good (and maintained) or not.
In the latter case is jQuery the best way to implement such functionality?
You can solve it by building the needed predicate expressions dynamically, using purely .NET framework.
See the code sample below. Depending on the criteria, this will filter on multiple properties. I've used IQueryable because this enables both in-memory and remote scenarios such as Entity Framework. If you're going with Entity Framework, you could also just build an Entity SQL string dynamically; I expect that will perform better.
There is a small portion of reflection involved (GetProperty), but this could be improved by caching inside the BuildPredicate method.
public class Item
{
    public string Color { get; set; }
    public int Value { get; set; }
    public string Category { get; set; }
}

class Program
{
    static void Main(string[] args)
    {
        var list = new List<Item>()
        {
            new Item() { Category = "Big", Color = "Blue", Value = 5 },
            new Item() { Category = "Small", Color = "Red", Value = 5 },
            new Item() { Category = "Big", Color = "Green", Value = 6 },
        };

        var criteria = new Dictionary<string, object>();
        criteria["Category"] = "Big";
        criteria["Value"] = 5;

        var query = DoDynamicWhere(list.AsQueryable(), criteria);
        var result = query.ToList();
    }

    static IQueryable<T> DoDynamicWhere<T>(IQueryable<T> list, Dictionary<string, object> criteria)
    {
        var temp = list;
        //create a predicate for each supplied criterion and filter on it
        foreach (var key in criteria.Keys)
        {
            temp = temp.Where(BuildPredicate<T>(key, criteria[key]));
        }
        return temp;
    }

    //Create i.<prop> == <value> dynamically
    static Expression<Func<TType, bool>> BuildPredicate<TType>(string property, object value)
    {
        var itemParameter = Expression.Parameter(typeof(TType), "i");
        var expression = Expression.Lambda<Func<TType, bool>>(
            Expression.Equal(
                Expression.MakeMemberAccess(
                    itemParameter,
                    typeof(TType).GetProperty(property)),
                Expression.Constant(value)),
            itemParameter);
        return expression;
    }
}
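The caching mentioned earlier could look like this sketch (GetCachedProperty is a hypothetical helper replacing the GetProperty call inside BuildPredicate):
// requires System.Collections.Concurrent and System.Reflection
static readonly ConcurrentDictionary<string, PropertyInfo> PropertyCache =
    new ConcurrentDictionary<string, PropertyInfo>();

static PropertyInfo GetCachedProperty(Type type, string property)
{
    // avoids a reflection lookup for every predicate built on the same property
    return PropertyCache.GetOrAdd(
        type.FullName + "." + property,
        _ => type.GetProperty(property));
}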
I don't really get why you would need Dynamic LINQ here. Are the item properties not known at compile time? If you can access a given item's properties by name, e.g. var prop = myitem["Color"], you don't need Dynamic LINQ.
It depends on how you render the results. There is a lot of ways to achieve the desired behavior, in general:
Fully client-side. If you do everything client-side (fetching data, rendering, paging) - jQuery would be the best way to go.
Server-side + client-side. If you render results on the server, you may add HTML attributes (for each property) to each search result markup and filter those client-side. The only problem in this case can be paging (if you do paging server-side, you will be able to filter the current page only)
Fully server-side. Post the form with search parameters and narrow down the search results using LINQ - match the existing items' properties with form values.
EDIT
If I were you (and would need to filter results server-side), I'd do something like:
var filtered = myItems.Where(i => i.Properties.Match(formValues))
where Match is an extension method that checks if a given list of properties matches provided values. Simple as this - no Dynamic LINQ needed.
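A sketch of such a Match extension, assuming each item exposes its properties as a name/value dictionary (all names here are hypothetical):
public static class FilterExtensions
{
    // true when every posted form value equals the item's property of the same name
    public static bool Match(this IDictionary<string, string> properties,
                             IDictionary<string, string> formValues)
    {
        return formValues.All(kv =>
        {
            string actual;
            return properties.TryGetValue(kv.Key, out actual) && actual == kv.Value;
        });
    }
}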
EDIT 2
Do you need to map the LINQ query to the database query (LINQ to SQL)? That would complicate things a bit, but is still doable by chaining multiple .Where(...) clauses. Just loop over the filter properties and add .Where(...) to the query from previous iteration.
You may have a look at PredicateBuilder from the author of C# 4.0 in a Nutshell.
As already pointed out by #Piotr Szmyd probabbly you don't need dynamic Linq. Iterating over all properties of T doesn'require dynamic linq. Dynamic Linq is mainly usefull to build complete queries on the client side and send it in string format to the server.
However now, it become obsolete, since Mvc 4 supports client side queries through Api Controllers returning an IQueryable.
If you just need to iterate over all properties of T you can do it with reflection and by building the LambdaExpressions that will compose the filtering criterion. You can do it with the static methods of the Expression class.
By using such static methods you can build dynamically expressions like m => m.Name= "Nick" with a couple instructions...than you put in and them...done you get and expression you can apply to an exixting IQueryable
The LINQ implementation still has not changed, so there should be no problem using the Dynamic LINQ library. It simply creates LINQ expressions from strings.
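For example (a sketch using the library's string-based Where overload; products is hypothetical):
// using System.Linq.Dynamic;
var filtered = products.AsQueryable().Where("Color == @0", "blue");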
You can use AJAX to call action methods that run the LINQ query and return JSON data. JQuery would populate HTML from the returned data.
Here's my problem. A class which defines an order has a property called PaymentStatus, which is an enum defined like so:
public enum PaymentStatuses : int
{
    OnDelivery = 1,
    Paid = 2,
    Processed = 3,
    Cleared = 4
}
And later on, in the class itself, the property definition is very simple:
public PaymentStatuses? PaymentStatus { get; set; }
However, if I try to save an order to the Azure Table Storage, I get the following exception:
System.InvalidOperationException: The type 'Order+PaymentStatuses' has no settable properties.
At this point I thought using enum isn't possible, but a quick Google search returned this: http://social.msdn.microsoft.com/Forums/en-US/windowsazure/thread/7eb1a2ca-6c1b-4440-b40e-012db98ccb0a
This page lists two answers, one of which seems to ignore the problems and suggests that using an enum in Azure Storage is fine.
Now, I don't NEED to store the enum in the Azure Table Storage as such, I could just as well store a corresponding int, however, I do need this property to be exposed in the WCF service.
I've tried making the property use get and set to return the enum from a stored integer, and remove this property from Azure by using the WritingEntity event on my DataContext, but I get that exception before the event for this entity is fired.
At this point, I'm at a loss, I don't know what else I can do to have this property in WCF as an enum, but have Azure store just the int.
Enum is not supported. Even though it is defined like an int, it is really not an integral type supported by Table Storage. Here is the list of types supported. An enum is just a string expression of an integral number with an object-oriented flavor.
You can store int in table storage and then convert it using Enum.Parse.
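For example, with the enum from the question:
int stored = 2;                                   // the int read from table storage
PaymentStatuses status = (PaymentStatuses)stored; // cast back to the enum
// Enum.Parse works on the string form as well:
var parsed = (PaymentStatuses)Enum.Parse(typeof(PaymentStatuses), "Paid");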
Here's a simple workaround:
public int MyEnumValue { get; set; } // for use by the Azure client libraries only

[IgnoreProperty]
public MyEnum MyEnum
{
    get { return (MyEnum)MyEnumValue; }
    set { MyEnumValue = (int)value; }
}
It would have been nicer if a simple backing value could have been employed rather than an additional (public!) property - without the hassle of overriding ReadEntity/WriteEntity of course. I opened a user voice ticket that would facilitate that, so you might want to upvote it.
Yeah, I was having this same problem.
I changed my property, which was previously an enum, to an int. Now this int property converts the incoming int and saves it into a variable of the same enum type, so the code that was
public CompilerOutputTypes Type { get; set; }
is changed to
private CompilerOutputTypes type;

public int Type
{
    get { return (int)type; }
    set { type = (CompilerOutputTypes)value; }
}
Just suggestions...
I remember that in WCF you have to mark enums with special attributes: http://msdn.microsoft.com/en-us/library/aa347875.aspx
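Per that page, the marking looks like this (sketched on the enum from the question; the attributes live in System.Runtime.Serialization):
[DataContract]
public enum PaymentStatuses : int
{
    [EnumMember] OnDelivery = 1,
    [EnumMember] Paid = 2,
    [EnumMember] Processed = 3,
    [EnumMember] Cleared = 4
}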
Also, when you declare PaymentStatuses? PaymentStatus, you are declaring Nullable<PaymentStatuses> PaymentStatus. The ? syntax is just syntactic sugar. Try to remove the ? and see what happens (you could add a PaymentStatuses.NoSet = 0, because the default value for an Int32 is 0).
Good luck.
Parv's solution put me on the right track, but I made some minor adjustments.
private string _EnumType;
private EnumType _Type;

//*********************************************
//*********************************************

public string EnumType
{
    get { return _Type.ToString(); }
    set
    {
        _EnumType = value;
        try
        {
            _Type = (EnumType)Enum.Parse(typeof(EnumType), value);
        }
        catch (Exception)
        {
            _EnumType = "Undefined";
            _Type = [mynamespace].EnumType.Undefined;
        }
    }
}
I have come across a similar problem and have implemented a generic object flattener/recomposer API that will flatten your complex entities into flat EntityProperty dictionaries and make them writeable to Table Storage, in the form of DynamicTableEntity.
Same API will then recompose the entire complex object back from the EntityProperty dictionary of the DynamicTableEntity.
This is relevant to your question because the ObjectFlattenerRecomposer API supports flattening property types that are normally not writeable to Azure Table Storage like Enum, TimeSpan, all Nullable types, ulong and uint by converting them into writeable EntityProperties.
The API also handles the conversion back to the original complex object from the flattened EntityProperty Dictionary. All that the client needs to do is to tell the API, I have this EntityProperty Dictionary that I just read from Azure Table (in the form of DynamicTableEntity.Properties), can you convert it to an object of this specific type. The API will recompose the full complex object with all of its properties including 'Enum' properties with their original correct values.
All of this flattening and recomposing of the original object is done transparently to the client (user of the API). Client does not need to provide any schema or any knowledge to the ObjectFlattenerRecomposer API about the complex object that it wants to write, it just passes the object to the API as 'object' to flatten it. When converting it back, the client only needs to provide the actual type of object it wants the flattened EntityProperty Dictionary to be converted to. The generic ConvertBack method of the API will simply recompose the original object of Type T and return it to the client.
See the usage example below. The objects do not need to implement any interface like 'ITableEntity' or inherit from a particular base class either. They do not need to provide a special set of constructors.
Blog: https://doguarslan.wordpress.com/2016/02/03/writing-complex-objects-to-azure-table-storage/
Nuget Package: https://www.nuget.org/packages/ObjectFlattenerRecomposer/
Usage:
//Flatten object (ie. of type Order) and convert it to EntityProperty Dictionary
Dictionary<string, EntityProperty> flattenedProperties = EntityPropertyConverter.Flatten(order);
// Create a DynamicTableEntity and set its PK and RK
DynamicTableEntity dynamicTableEntity = new DynamicTableEntity(partitionKey, rowKey);
dynamicTableEntity.Properties = flattenedProperties;
// Write the DynamicTableEntity to Azure Table Storage using client SDK
//Read the entity back from AzureTableStorage as DynamicTableEntity using the same PK and RK
DynamicTableEntity entity = [Read from Azure using the PK and RK];
//Convert the DynamicTableEntity back to original complex object.
Order order = EntityPropertyConverter.ConvertBack<Order>(entity.Properties);