Azure Table Storage, WCF Service and Enum

Here's my problem. A class which defines an order has a property called PaymentStatus, which is an enum defined like so:
public enum PaymentStatuses : int
{
    OnDelivery = 1,
    Paid = 2,
    Processed = 3,
    Cleared = 4
}
And later on, in the class itself, the property definition is very simple:
public PaymentStatuses? PaymentStatus { get; set; }
However, if I try to save an order to the Azure Table Storage, I get the following exception:
System.InvalidOperationException: The type 'Order+PaymentStatuses' has no settable properties.
At this point I thought using an enum wasn't possible, but a quick Google search turned up this thread: http://social.msdn.microsoft.com/Forums/en-US/windowsazure/thread/7eb1a2ca-6c1b-4440-b40e-012db98ccb0a
The page lists two answers, one of which seems to ignore the problem and suggests that using an enum in Azure Storage is fine.
Now, I don't NEED to store the enum in Azure Table Storage as such; I could just as well store a corresponding int. However, I do need this property to be exposed in the WCF service.
I've tried making the property's get and set convert the enum to and from a stored integer, and removing the property from the Azure payload via the WritingEntity event on my DataContext, but the exception is thrown before the event for this entity fires.
At this point I'm at a loss; I don't know what else I can do to expose this property in WCF as an enum while having Azure store just the int.

Enum is not supported. Even though an enum is declared with an int underlying type, it is not one of the integral types supported by Table Storage (Table Storage supports only a fixed list of property types). An enum is essentially a named, object-oriented wrapper around an integral value.
You can store an int in table storage and then convert it back to the enum with a cast, or with Enum.Parse if you store the name as a string.
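For instance, a minimal sketch of both conversions, reusing the PaymentStatuses enum from the question:

using System;

class EnumConversionSketch
{
    // The enum from the question
    public enum PaymentStatuses : int { OnDelivery = 1, Paid = 2, Processed = 3, Cleared = 4 }

    static void Main()
    {
        int storedInt = 2;                                    // int read back from Table Storage
        PaymentStatuses fromInt = (PaymentStatuses)storedInt; // simple cast: Paid

        string storedName = "Paid";                           // or the name stored as a string
        PaymentStatuses fromName =
            (PaymentStatuses)Enum.Parse(typeof(PaymentStatuses), storedName);

        Console.WriteLine("{0} {1}", fromInt, fromName);      // prints "Paid Paid"
    }
}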

Here's a simple workaround:
public int MyEnumValue { get; set; } // for use by the Azure client libraries only

[IgnoreProperty]
public MyEnum MyEnum
{
    get { return (MyEnum)MyEnumValue; }
    set { MyEnumValue = (int)value; }
}
It would have been nicer if a simple backing field could have been used rather than an additional (public!) property - without the hassle of overriding ReadEntity/WriteEntity, of course. I opened a UserVoice ticket asking for that, so you might want to upvote it.

Yes, I was having the same problem. I changed the property, which was previously an enum, to an int. The int property now casts the incoming value and stores it in a private field of the same enum type, so the code that was:
public CompilerOutputTypes Type { get; set; }
is changed to:
private CompilerOutputTypes type;
public int Type
{
    get { return (int)type; }
    set { type = (CompilerOutputTypes)value; }
}

Just suggestions...
I remember that in WCF you have to mark enums with special attributes: http://msdn.microsoft.com/en-us/library/aa347875.aspx
Also, when you declare PaymentStatuses? PaymentStatus, you are declaring Nullable<PaymentStatuses> PaymentStatus; the ? syntax is just syntactic sugar. Try removing the ? and see what happens (you could add a PaymentStatuses.NotSet = 0 member, because the default value for an Int32 is 0).
Good luck.
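As an illustration of the attribute marking mentioned above, here is a minimal sketch using the DataContract/EnumMember attributes from System.Runtime.Serialization (the NotSet = 0 member is the hypothetical default suggested above):

using System.Runtime.Serialization;

[DataContract]
public enum PaymentStatuses : int
{
    [EnumMember] NotSet = 0,      // hypothetical default so a zero-initialized value maps cleanly
    [EnumMember] OnDelivery = 1,
    [EnumMember] Paid = 2,
    [EnumMember] Processed = 3,
    [EnumMember] Cleared = 4
}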

Parv's solution put me on the right track, but I had to make some minor adjustments.
private string _EnumType;
private EnumType _Type;

public string EnumType
{
    get { return _Type.ToString(); }
    set
    {
        _EnumType = value;
        try
        {
            _Type = (EnumType)Enum.Parse(typeof(EnumType), value);
        }
        catch (Exception)
        {
            _EnumType = "Undefined";
            _Type = [mynamespace].EnumType.Undefined;
        }
    }
}

I have come across a similar problem and have implemented a generic object flattener/recomposer API that will flatten your complex entities into flat EntityProperty dictionaries and make them writeable to Table Storage, in the form of DynamicTableEntity.
Same API will then recompose the entire complex object back from the EntityProperty dictionary of the DynamicTableEntity.
This is relevant to your question because the ObjectFlattenerRecomposer API supports flattening property types that are normally not writeable to Azure Table Storage like Enum, TimeSpan, all Nullable types, ulong and uint by converting them into writeable EntityProperties.
The API also handles the conversion back from the flattened EntityProperty dictionary to the original complex object. The client simply hands the API the EntityProperty dictionary it just read from the Azure Table (in the form of DynamicTableEntity.Properties) and asks for it to be converted to an object of a specific type. The API recomposes the full complex object with all of its properties, including 'Enum' properties with their original correct values.
All of this flattening and recomposing is done transparently to the client (the user of the API). The client does not need to provide any schema or any knowledge about the complex object to the ObjectFlattenerRecomposer API; it just passes the object to be flattened as 'object'. When converting back, the client only needs to provide the actual type it wants the flattened EntityProperty dictionary converted to; the generic ConvertBack method recomposes the original object of type T and returns it.
See the usage example below. The objects do not need to implement any interface like 'ITableEntity' or inherit from a particular base class either. They do not need to provide a special set of constructors.
Blog: https://doguarslan.wordpress.com/2016/02/03/writing-complex-objects-to-azure-table-storage/
Nuget Package: https://www.nuget.org/packages/ObjectFlattenerRecomposer/
Usage:
// Flatten the object (e.g. of type Order) and convert it to an EntityProperty dictionary
Dictionary<string, EntityProperty> flattenedProperties = EntityPropertyConverter.Flatten(order);

// Create a DynamicTableEntity and set its PK and RK
DynamicTableEntity dynamicTableEntity = new DynamicTableEntity(partitionKey, rowKey);
dynamicTableEntity.Properties = flattenedProperties;

// Write the DynamicTableEntity to Azure Table Storage using the client SDK

// Read the entity back from Azure Table Storage as a DynamicTableEntity using the same PK and RK
DynamicTableEntity entity = [Read from Azure using the PK and RK];

// Convert the DynamicTableEntity back to the original complex object
Order order = EntityPropertyConverter.ConvertBack<Order>(entity.Properties);

Related

Elastic NEST De-serializing the wrong field

Using ElasticSearch.Net v6.0.2
Given the Indexed item
{
    "PurchaseFrequency": 76,
    "purchaseFrequency": 80
}
and the POCO Object
public class Product
{
    public int PurchaseFrequency { get; set; }
}
and the setting
this.DefaultFieldNameInferrer(x => x);
NEST is returning PurchaseFrequency = 80 even though this is the wrong field.
How can I get NEST to pull the correct cased field from ElasticSearch?
I don't think that this is going to be easily possible because this behaviour is defined in Json.NET, which NEST uses internally (not a direct dependency in 6.x, it's IL-merged into the assembly).
For example,
JsonConvert.DeserializeAnonymousType("{\"a\":1, \"A\":2}", new { a = 0 })
deserializes the anonymous type property a to 2. But
JsonConvert.DeserializeAnonymousType("{\"A\":2, \"a\":1}", new { a = 0 })
deserializes the anonymous type property a to 1, i.e. the order in which properties appear in the returned JSON has a bearing on the final value assigned to a property on an instance of a type.
If you can, avoid JSON property names that differ only in case. If you can't, then you'd need to hook up the JsonNetSerializer in the NEST.JsonSerializer nuget package and write a custom JsonConverter for your type which only honours the exact casing expected.
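For the custom-converter route, here is a minimal sketch of a case-exact converter (the class name ExactCaseProductConverter is hypothetical; Product is the POCO from the question, and JObject.TryGetValue with StringComparison.Ordinal performs the exact-case lookup):

using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

public class ExactCaseProductConverter : JsonConverter
{
    public override bool CanConvert(Type objectType) => objectType == typeof(Product);

    public override object ReadJson(JsonReader reader, Type objectType,
        object existingValue, JsonSerializer serializer)
    {
        JObject jObject = JObject.Load(reader);
        var product = new Product();
        // Ordinal comparison matches only the exactly-cased property name
        if (jObject.TryGetValue("PurchaseFrequency", StringComparison.Ordinal, out JToken token))
        {
            product.PurchaseFrequency = token.Value<int>();
        }
        return product;
    }

    // Writing can fall back to the default behaviour
    public override bool CanWrite => false;

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
        => throw new NotSupportedException();
}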

GraphQL Java: Using @Batched DataFetcher

I know how to retrieve a bean from a service in a DataFetcher:
public class MyDataFetcher implements DataFetcher {
    ...
    @Override
    public Object get(DataFetchingEnvironment environment) {
        return myService.getData();
    }
}
But schemas with nested lists should use a BatchedExecutionStrategy and create batched DataFetchers with get() methods annotated @Batched (see the graphql-java doc).
But where do I put my getData() call then?
///// Where to put this code?
List list = myService.getData();
/////

public class MyDataFetcher implements DataFetcher {
    @Batched
    public Object get(DataFetchingEnvironment environment) {
        return list.get(environment.getIndex()); // where to get the index?
    }
}
WARNING: The original BatchedExecutionStrategy has been deprecated and will get removed. The current preferred solution is the Data Loader library. Also, the entire execution engine is getting replaced in the future, and the new one will again support batching "natively". You can already use the new engine and the new BatchedExecutionStrategy (both in nextgen packages) but they have limited support for instrumentations. The answer below applies equally to both the legacy and the nextgen execution engine.
Look at it like this. Normal DataFetchers receive a single object as the source (DataFetchingEnvironment#getSource) and return a single object as a result. For example, if you had a query like:
{
    user (name: "John") {
        company {
            revenue
        }
    }
}
Your company resolver (fetcher) would get a User object as source, and would be expected to somehow return a Company based on that e.g.
User owner = (User) environment.getSource();
Company company = companyService.findByOwner(owner);
return company;
Now, in the exact same scenario, if your DataFetcher was batched, and you used BatchedExecutionStrategy, instead of receiving a User and returning a Company, you'd receive a List<User> and would return a List<Company> instead.
E.g.
List<User> owners = (List<User>) environment.getSource();
List<Company> companies = companyService.findByOwners(owners);
return companies;
Notice that this means your underlying logic must have a way to fetch multiple things at once, otherwise it wouldn't be batched. So your myService.getData call would need to change, unless it can already fetch data for multiple source objects in one go.
Also notice that batched resolution makes sense in nested queries only, as the top-level resolver can already fetch a list of objects without the need for batching.

Handle null objects across two models in Web API

I have two models in my ASP.NET Web API. One is the database model, and the other is the model the end user passes in; we map the properties like this:
public static IndividualProc ToIndividualnternal(this AssignmentExternal item)
{
    return new IndividualProc()
    {
        IndividualID = (int)item.person.id,
        EventID = (int)item.event.id,
        EventScehduleID = item.schedule.id,
        EventGroupID = item.group.id
    };
}
The problem is that when the user passes a null, I get an exception: "Nullable object must have a value". How can I cast the nullable properties so that this exception is not thrown?
What's nullable? For the purpose of this answer I'm going to assume that this is a nullable int:
item.event.id
and this is a regular int:
EventID
In that case, you clearly can't cast the former directly to the latter, because there's no way for an int to represent a null value. The Nullable<T> structure has properties to help check for that:
EventID = item.event.id.HasValue ? item.event.id.Value : default(int)
This will check if the nullable int has a value and, if it does, use that value. If it doesn't, use the default value for int (which is 0).
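For completeness, a couple of equivalent, more concise forms of the same null check (a standalone sketch; nullableId stands in for item.event.id from the question):

using System;

class MappingExample
{
    static void Main()
    {
        int? nullableId = null; // stands in for item.event.id

        // Ternary with HasValue, as in the answer above
        int eventId1 = nullableId.HasValue ? nullableId.Value : default(int);

        // Equivalent, more concise forms
        int eventId2 = nullableId ?? 0;                // null-coalescing operator
        int eventId3 = nullableId.GetValueOrDefault(); // same effect

        Console.WriteLine("{0} {1} {2}", eventId1, eventId2, eventId3); // prints "0 0 0"
    }
}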

NHibernate: How to know if, on Flush(), SQL will be sent?

I'm a bit puzzled by NHibernate's IsDirty() method.
Directly after getting a (very large) complex object from my database, NHibernate's ISession.IsDirty() gives 'true'.
IFacadeDAL fd = new FacadeDAL();
// Session's not dirty
IProject proj = fd.GetByID<IProject, string>("123611-3640");
// Session is dirty
However, if I call Commit() like so:
using (ITransaction trans = Facade.Session.Transaction)
{
    trans.Begin();
    Facade.Session.Save(entity);
    trans.Commit();
    return true;
}
this results in no SQL being sent (except for "exec sp_reset_connection").
I have read that due to 'mapping choices' you can get "ghosts" in your session (causing the session to say it's dirty), but wouldn't it then also try to update something? Also, if this is caused e.g. by "converting" a SQL bit to a C# bool, I don't think I can change it... (no clue whether that could be a cause of ghosts, though).
Update 2:
There are several (SQL Server) views and tables involved here. This is the (very) simplified class:
public class Project : IProject
{
    private string id;
    private List<IPlantItem> plantItems;

    public Project() { }

    public virtual string ID
    {
        get { return id; }
    }

    public virtual IEnumerable<IPlantItem> PlantItems
    {
        get { return plantItems; }
    }
}
PlantItem is stored in a table, so I expect that when I change anything in a PlantItem, IsDirty should change to 'true'.
My question is: is there a way to check whether the session at that point, on Flush() (or, in my case, on Commit()), would generate actual SQL statements? And if not: is there another way of (manually) storing some sort of snapshot of the session to compare the current session against?
Update 1: I should also mention these aspects:
- my FlushMode is set to 'None';
- the underlying data of the 'IProject' object itself is based on a SQL view, and therefore most properties in the mapping are set to update="false";
- when I actually change something in an object and use the same method for saving, SQL update statements are sent (and thus everything is committed just fine).
In my experience, ghosts can be caused by the database column being a nullable int while the mapped property is an ordinary int.
When the entity gets hydrated, the nullable db int is converted to zero, and hence the entity is now dirty.
Another way to get dirty records is by specifying a wrong type in the XML mapping, e.g.
public enum Sex
{
    Unspecified,
    Male,
    Female
}
...
public virtual Sex Sex { get; set; }
and specifying an int in the mapping:
<property name="Sex" type="int"/>
See this link, which explains in more detail how to test your mappings.
If some of your entities are dirty - and therefore the ISession is dirty - then you have a mismatch between the properties and the database. For example, imagine you have a column in a table that is nullable, but in your code the property is a non-nullable int. NHibernate will consider it dirty, because its current value (0 in the case of an int) differs from the value that came from the db (null). Look for "ghost properties NHibernate" on Google.
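A minimal sketch of that kind of mismatch and its fix, using a hypothetical Age property (the point is to match the C# type's nullability to the column's):

// Ghost-producing mapping: the AGE column is nullable in the DB, but the
// property is a plain int. NULL hydrates as 0, so NHibernate compares
// 0 against null and marks the entity dirty right after loading.
public class PersonGhost
{
    public virtual int Age { get; set; }
}

// Fix: make the property nullable so NULL round-trips unchanged.
public class PersonFixed
{
    public virtual int? Age { get; set; }
}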

How do I store a comma-separated list in Orchard CMS?

Using Orchard CMS, I am dealing with a record and a part proxy, but cannot figure out how to save it into the DB. In fact, I confess I don't even know how to get the items I'm trying to save into this paradigm. I was originally using enums for choices:
MyEnum.cs:
public enum Choices { Choice1, Choice2, Choice3, Choice4 }
MyRecord.cs:
public virtual string MyProperty { get; set; }
MyPart.cs:
public IEnumerable<string> MyProperty
{
    get
    {
        if (String.IsNullOrWhiteSpace(Record.MyProperty)) return new string[] { };
        return Record
            .MyProperty
            .Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries) // split on the same comma the setter joins with
            .Select(r => r.Trim())
            .Where(r => !String.IsNullOrEmpty(r));
    }
    set { Record.MyProperty = value == null ? null : String.Join(",", value); }
}
Now, in my service class, I tried something like:
public MyPart Create(MyPartRecord record)
{
    MyPart part = Services.ContentManager.Create<MyPart>("My");
    ...
    part.MyProperty = record.MyProperty; // getting error here
    ...
    return part;
}
However, I am getting the following error: Cannot implicitly convert type 'string' to 'System.Collections.Generic.IEnumerable<string>'.
Essentially, I am trying to save choices from a checkboxlist (one or more selections) as a comma-separated list in the DB.
And this doesn't even get me past the problem of how to use the enum. Any thoughts?
For some background:
I understand that the appropriate way to handle this relationship would be to create a separate table and use IList<MyEnum>. However, this is a simple list that I do not intend to manipulate with edits (in fact, no driver is used in this scenario, as I handle this on the front end with a controller and routes). I am just capturing data and redisplaying it in the Admin view for statistical/historical purposes. I may even consider getting rid of the Part (considering the following post: Bertrand's Blog Post).
It should be:
part.MyProperty = new[] { "foo", "bar" };
for example. The part's setter will store the value on the record's property as a comma-separated string, which will get persisted to the DB.
If you want to use enum values, you should use the Parse and ToString APIs that .NET provides on Enum.
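As an illustrative sketch of that enum round-trip (Choices is the enum from the question; the comma joining and splitting mirrors the part's string-backed property):

using System;
using System.Collections.Generic;
using System.Linq;

public enum Choices { Choice1, Choice2, Choice3, Choice4 }

class EnumRoundTrip
{
    static void Main()
    {
        // Store selected enum values by name (what the part's setter joins with commas)
        IEnumerable<string> names = new[] { Choices.Choice1, Choices.Choice3 }
            .Select(c => c.ToString());
        string dbValue = String.Join(",", names); // "Choice1,Choice3"

        // Read them back as enum values (from the stored comma-separated string)
        IEnumerable<Choices> selected = dbValue
            .Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries)
            .Select(s => (Choices)Enum.Parse(typeof(Choices), s));

        Console.WriteLine(String.Join(" ", selected)); // "Choice1 Choice3"
    }
}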
