Microsoft CRM 4.0 and magic strings - dynamics-crm

Is there a method/tool/technique for developing with Microsoft CRM 4.0 that keeps the developer from having to use strings for entity names and attributes?

We've built our own model classes and store entity names, attribute names, and picklist values there. It's just a bunch of enums and constant strings, but at least everything goes through a centralized constant, so we know when something breaks.
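For illustration, a minimal sketch of that kind of centralized constants class (the entity, attribute, and option values below are just examples, not our real schema):

```csharp
// A centralized place for CRM names so no query or plugin carries its own string literals.
public static class AccountSchema
{
    public const string EntityName = "account";

    public static class Attributes
    {
        public const string Name = "name";
        public const string AccountNumber = "accountnumber";
        public const string CustomerType = "customertypecode";
    }

    // Picklist option values as an enum so they aren't magic numbers either.
    public enum CustomerTypeOption
    {
        Competitor = 1,
        Consultant = 2,
        Customer = 3
    }
}

// Usage: the string literal lives in exactly one place.
// query.EntityName = AccountSchema.EntityName;
// condition.AttributeName = AccountSchema.Attributes.AccountNumber;
```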

We use our own mapper, which translates our objects into dynamic entities. It is all configured via attributes on the classes and types. You can find a project that uses a similar approach here: http://xrm.codeplex.com
Alternatively, you can generate early-bound types; see Code Generation Using the CrmSvcUtil Tool.
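To illustrate the attribute-driven mapping idea (a hedged sketch, not the xrm.codeplex.com code; the [CrmEntity]/[CrmAttribute] attributes and the Account class are hypothetical), a reflection-based mapper can collect the mapped values like this; a real CRM 4.0 mapper would then copy them into a DynamicEntity before calling the CrmService:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Hypothetical mapping attributes; the names are illustrative, not from the SDK.
[AttributeUsage(AttributeTargets.Class)]
public class CrmEntityAttribute : Attribute
{
    public CrmEntityAttribute(string entityName) { EntityName = entityName; }
    public string EntityName { get; private set; }
}

[AttributeUsage(AttributeTargets.Property)]
public class CrmAttributeAttribute : Attribute
{
    public CrmAttributeAttribute(string attributeName) { AttributeName = attributeName; }
    public string AttributeName { get; private set; }
}

[CrmEntity("account")]
public class Account
{
    [CrmAttribute("name")] public string Name { get; set; }
    [CrmAttribute("accountnumber")] public string AccountNumber { get; set; }
}

public static class CrmMapper
{
    // Collects attribute-name/value pairs; a real mapper would copy these into
    // a DynamicEntity's property collection before sending it to the CrmService.
    public static IDictionary<string, object> ToAttributeValues(object source)
    {
        var values = new Dictionary<string, object>();
        foreach (PropertyInfo prop in source.GetType().GetProperties())
        {
            var map = (CrmAttributeAttribute)Attribute.GetCustomAttribute(prop, typeof(CrmAttributeAttribute));
            if (map != null)
                values[map.AttributeName] = prop.GetValue(source, null);
        }
        return values;
    }
}
```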

Related

External POCO classes to Aspnetboilerplate AbpEntities. i.e. no inheritance possible

We have a pretty common situation and I'd like to understand the best practice or trade-offs for handling it in Aspnetboilerplate/AspNetZero.com.
We import a package (NuGet) of pure C# classes (POCOs). These are shared across several systems. In our AspNetZero server, we want these to be first-class persistent objects. However, they can't inherit from Entity, since they come from the NuGet package. What is the best practice here?
My ideas to date (not being the expert here, of course):
If we were to use these classes as EF Navigation Properties in Abp Entities, i.e. always use them as complex-type properties of an Abp Entity class, it could do the trick (see the sketch below). In this scenario, one would not even need to define a DbSet, although one could (see: https://learn.microsoft.com/en-us/aspnet/mvc/overview/getting-started/getting-started-with-ef-using-mvc/creating-an-entity-framework-data-model-for-an-asp-net-mvc-application )
Alternatively, if we just reference these complex types from an Abp Entity, doesn't EF generate entity proxies for these and automatically make them into EF Navigation Properties (see: https://blogs.msdn.microsoft.com/adonet/2009/12/22/poco-proxies-part-1/ ), or is this not an option in the default Abp flow? We'd like to avoid too much custom code (risk).
Any other way via delegation?
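To make the first idea concrete, here is roughly what I have in mind (a hedged sketch assuming the EF Core flavor of Abp; ExternalCustomer and CustomerRecord are hypothetical names standing in for a NuGet POCO and the entity that wraps it):

```csharp
using Abp.Domain.Entities;
using Microsoft.EntityFrameworkCore;

// Hypothetical POCO as it might arrive from the shared NuGet package.
public class ExternalCustomer
{
    public string Name { get; set; }
    public string TaxNumber { get; set; }
}

// Abp entity that owns the external POCO as a complex-type/owned property.
public class CustomerRecord : Entity<int>
{
    public ExternalCustomer Data { get; set; }
}

public class MyDbContext : DbContext   // in Abp this would derive from the module's AbpDbContext
{
    public DbSet<CustomerRecord> CustomerRecords { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);
        // Map the external POCO as an owned type: its columns live in the
        // CustomerRecord table and no Entity inheritance is required on the POCO.
        modelBuilder.Entity<CustomerRecord>().OwnsOne(c => c.Data);
    }
}
```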
Thanks for any Tips/Example/Info!

Validate a GraphQL schema against another reference schema

I'm not quite sure what wording I should be searching for on this.
I have a GraphQL schema which wraps a group of services using graphql-link-schema to perform the data resolution on the client side. The schema is intended to be built against a separate reference schema. How can I programmatically validate that my implementation matches the reference?
For bonus points- is it possible to determine whether a schema is a superset of another?
Thanks in advance (:
It's an interesting use case, but it's a bit unclear how validation like that would work. What causes validation to fail? Any differences between the two schemas? Extra types? Extra fields on existing types? Differences in return types? Differences in arguments or argument types?
Depending on your answer to the above questions, though, you may be able to cobble together your own validation function using the utility functions available here. Besides the main findBreakingChanges function, the utility functions available in that module include:
findRemovedTypes
findTypesThatChangedKind
findFieldsThatChangedTypeOnObjectOrInterfaceTypes
findFieldsThatChangedTypeOnInputObjectTypes
findTypesRemovedFromUnions
findValuesRemovedFromEnums
findArgChanges
findInterfacesRemovedFromObjectTypes
If you have a reference or base schema available, though, rather than validating against it, you might also consider extending it when building the second schema. In doing so, you would effectively guarantee that the second schema matches the first except in whatever ways you intentionally deviate from it (by extending existing types, etc.). You could use extendSchema for relatively simple changes, or something like graphql-tools' mergeSchemas for more complicated changes.

Schema Name capitalization for custom entities/fields in Dynamics CRM 365

MS recommends Pascal case for Schema Names, but then they don't obey the rule themselves. Custom entities and their primary fields are created by default with all-lowercase schema names, while custom fields are Pascal case by default. What's more, the built-in statuscode and statecode fields on custom entities are all-lowercase.
Questions:
Are the schema names important down the road? There are quite a lot of external integrations coming for our CRM (C#, likely early-bound). For now I'm trying to keep things as clean as possible just to avoid potential future issues, but some colleagues think I'm overly worried and it's not worth the time.
Do you know of any good reason why MS doesn't obey its own rules in some cases?
I reject the Pascal-case advice. In my opinion, schema names should be all lowercase. That way they match the logical names, which prevents a lot of confusion and mistyped names in the future.
Since you have decided to use C# early-bound classes, you will be using CrmSvcUtil or an early-bound generator, which pulls the schema names as-is from the CRM metadata.
If a schema name changes (for example, on drop and recreate with a different data type), the next generated class file will pick it up and a build error will notify you.
Judging by the revisions, nothing is going to change in the near future, and MS is not even worried about breaking its own rule.
Also note that the next-generation Web API expects the Schema Name in certain places, such as navigation properties, whereas if you go with late binding, the flattened, system-converted lowercase name (the Logical Name) is used.
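To make the difference concrete, a hedged sketch with hypothetical custom names (new_approvallimit / new_ApprovalLimit / new_ParentProject are made up for illustration):

```csharp
using Microsoft.Xrm.Sdk;

class NamingExample
{
    static void Main()
    {
        // Late bound: you always type the Logical Name (all lowercase),
        // so Schema Name casing never shows up in this style.
        var account = new Entity("account");
        account["new_approvallimit"] = 50000m;   // hypothetical custom field

        // Early bound: CrmSvcUtil emits properties named after the Schema Name,
        // so a Pascal-case schema name gives a Pascal-case property, e.g.:
        //   typedAccount.new_ApprovalLimit = 50000m;

        // Web API: navigation properties are addressed by Schema Name, e.g. a
        // lookup bind in JSON (entity set and property names hypothetical):
        //   "new_ParentProject@odata.bind": "/new_projects(<guid>)"
    }
}
```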

User Settings: What are my choices?

I'm trying to find out what my choices are when I'm going to use user (persistent) settings.
In Visual Studio this is possible in the properties of your project, but I'm running into the limits there:
Only values that can be converted to a string are allowed.
Collections (e.g. items in a ListBox, with a name and value) cannot be saved.
What I would like to know is: how do you implement user settings with collections, and how do you create user settings in general?
Emerion
If I understand correctly I think you're probably looking for serialization, and since you mention values that can't be converted to string I assume that you'd probably want binary serialization.
The System.Runtime.Serialization namespace contains classes to help you with this and here's an article that might be useful: Serialization in the .NET Framework
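A minimal sketch of that approach, assuming a simple serializable name/value item and a file under the user's application-data folder (the type and file names are just examples):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public class ListItemSetting
{
    public string Name;
    public object Value;   // whatever is stored here must itself be serializable
}

public static class UserSettingsStore
{
    private static readonly string FilePath = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
        @"MyApp\usersettings.bin");

    public static void Save(List<ListItemSetting> items)
    {
        Directory.CreateDirectory(Path.GetDirectoryName(FilePath));
        using (var stream = File.Create(FilePath))
            new BinaryFormatter().Serialize(stream, items);
    }

    public static List<ListItemSetting> Load()
    {
        if (!File.Exists(FilePath)) return new List<ListItemSetting>();
        using (var stream = File.OpenRead(FilePath))
            return (List<ListItemSetting>)new BinaryFormatter().Deserialize(stream);
    }
}
```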

One Model to Rule Them All - VS2010 UML, ADO.NET Entity Data Model, and T4

I worked on a fairly large project a while back where we modeled the classes in Enterprise Architect and generated the (partial) POCO classes (complete with model-driven business rule validations), persistence (NHibernate mapping file) and DDL. Based on certain model attributes we could flag alternate generation strategies or indicate that a particular portion would be entirely hand-coded.
There was a good deal of initial investment, but it paid large dividends over the lifetime of a 15 developer, 3 year project.
I'm investigating doing something similar with the current Microsoft technology stack. The place I'm stuck is that class modeling is done with the VS 2010 UML tools, but logical data modeling is done with Entity Data Modeler.
Is it a reasonable path to use VS 2010 UML as the "single source of truth" and code generate the edmx files based on the class model? That's the inverse of the common path to create the entity model and use a POCO generator to generate classes. However, a good class model can be used to generate much more than just the properties so I tend to view it as a better choice than the entity model.
Entity Data Modeler is limited to a single diagram per model and becomes unusable in non-trivial scenarios. You can use UML profiles to extend class models for logical data modeling. It requires a significant investment of effort and time which may be justified on a 3-year 15-developer project.
It's always going to be a problem, as each modeling layer maps two disparate worlds. To have fully aware code, your generation system must have access to all mapping models. In other words, you can't simply declare one to be the "master", as each layer is a "real" perspective of the solution.
Yes, this is possible. No, there is nothing built in. To do this you'd need to write a VSIX which would consume the model and emit EDMX/code. This isn't necessarily hard, but you'd have to do it yourself. You'd also need a pattern or attributes for handling the modeling aspects which you might not have in your diagrams, just like you have to do for specifying key fields and the like when doing code-first modeling.
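As a hedged illustration of the kind of emit step such a VSIX or T4 template would perform (the input here is a hand-written class description standing in for the UML object model, and the output is a simplified CSDL-style fragment, not a complete EDMX):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

// Simplified stand-in for what would come out of the UML model.
public class ModelClass
{
    public string Name;
    public string KeyProperty;                      // e.g. flagged via a UML stereotype or tagged value
    public Dictionary<string, string> Properties;   // property name -> EDM type
}

public static class EdmxEmitter
{
    // Emits a simplified CSDL-style EntityType element; a real generator would
    // wrap this in a full edmx:Edmx document with the proper namespaces and
    // also emit the storage (SSDL) and mapping (MSL) sections.
    public static XElement EmitEntityType(ModelClass cls)
    {
        return new XElement("EntityType",
            new XAttribute("Name", cls.Name),
            new XElement("Key",
                new XElement("PropertyRef", new XAttribute("Name", cls.KeyProperty))),
            cls.Properties.Select(p =>
                new XElement("Property",
                    new XAttribute("Name", p.Key),
                    new XAttribute("Type", p.Value),
                    new XAttribute("Nullable", p.Key == cls.KeyProperty ? "false" : "true"))));
    }
}
```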
