CRM: What is the difference between update Plugin and Update in XrmServiceToolkit - dynamics-crm

Is there any difference between updating an entity using a plugin vs. updating an entity using XrmServiceToolkit?
// Build the entity and set typed attribute values
var entityA = new XrmServiceToolkit.Soap.BusinessEntity("entA", id);
entityA.attributes["attrA"] = { value: attrValue1, type: "OptionSetValue" };
entityA.attributes["attrB"] = { value: attrValue2, type: "Money" };
// Issue the update through the toolkit's SOAP endpoint
XrmServiceToolkit.Soap.Update(entityA);
I know a plugin can be used to connect to external databases, but for a very basic update, is there any difference?
Thank you!

Operations in plugins are seamlessly integrated with the business logic of your CRM platform. Plugins are invoked in every scenario, regardless of whether the operation is triggered by a web page (JavaScript calls, e.g. using XrmServiceToolkit), a workflow, external systems, integration tools, or even other plugins.
An update done by JavaScript on your web page only works on that form. If you only need it there, that's fine. If you need to cover other scenarios as well, you may have to look for another solution.

Related

Artefact download from Nexus failing because the data is cached

I am using curl to publish to Maven:
const { exec } = require('child_process');
const chalk = require('chalk');

const curlOptions = [
    '--silent',
    '--output', '/dev/stderr',
    '--write-out', '"%{http_code}"',
    '--upload-file', fileLocation,
    '--noproxy', options.noproxy ? options.noproxy : '127.0.0.1',
    '--fail'
];
const curlCmd = ['curl', curlOptions.join(' '), targetUri].join(' ');
const childProcess = exec(curlCmd, execOptions, function (error) {
    if (error) {
        console.log(chalk.red(error));
    }
});
This works for the upload, but the artefact gets cached, and I cannot fetch the artefact with curl without going into Nexus and running Rebuild Metadata on the affected artefact.
Can I programmatically invalidate the cache?
To answer your question directly, it should be possible to invalidate the cache in later versions of NXRM3 using the REST API and the "/beta/repositories/{repositoryName}/invalidate-cache" endpoint.
It should also be possible to achieve the same by running the scheduled task (rebuild metadata; the "/v1/tasks/{id}/run" endpoint), though that seems a less desirable route, as that task is generally used for repair.
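For example, here is a minimal sketch of invoking the invalidate-cache endpoint from the same kind of Node script; the host, repository name, credentials, and the standard /service/rest prefix are all assumptions you would adjust for your instance:
const { exec } = require('child_process');

// Hypothetical values: replace with your Nexus host, repository and credentials
const nexusBase = 'http://nexus.example.com:8081';
const repository = 'maven-releases';

const invalidateCmd = [
    'curl',
    '--silent', '--fail',
    '-u', 'admin:admin123',
    '-X', 'POST',
    nexusBase + '/service/rest/beta/repositories/' + repository + '/invalidate-cache'
].join(' ');

exec(invalidateCmd, function (error) {
    if (error) {
        console.log('Cache invalidation failed: ' + error);
    }
});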
You can see more on the REST API in the NXRM3 documentation, though much of it is intended to be self-documenting via the Swagger UI in the application. Note that at the time of this answer, only those with the nx-admin privilege can access the Swagger UI (though people with the proper permissions can still use the endpoints). You can find the Swagger UI in the Admin section under System -> API.
That being said, I think there's likely something else going on; it should not be necessary to invalidate your cache like that each time. I didn't want to stray too far from the question, however. I encourage you to look at community.sonatype.com for answers to what else might be going on, and to ask there if you don't see one.

GraphQL requesting fields that don't exist without error

I'm trying to handle backwards compatibility in my GraphQL API.
We have on-premise servers that get updated periodically, whenever they connect to the internet. We have a mobile app that talks to the on-premise server.
Problem
We get into an issue where the mobile app is up to date but the on-premise server isn't. When the schema changes, this causes issues.
Example
Product version 1
type Product {
  name: String
}
Product version 2
type Product {
  name: String
  account: String
}
New version of mobile app asks for:
product(id: "12345") {
  name
  account
}
Because account is not valid in version 1, I get the error:
"Cannot query field \"account\" on type \"Product\"."
Does anyone know how I can avoid this issue so I don't receive this particular error? I'm totally fine with account coming back as null, or with some other plan of attack for updating schemas. But having it completely blow up with no response is not good.
Your question did not specify what you're actually using on the backend, but it should be possible to customize the validation rules a GraphQL service uses in any implementation based on the JavaScript reference implementation. Here's how you do it in GraphQL.js:
const { execute, parse, specifiedRules, validate } = require('graphql')

// Drop the rule that rejects fields not defined in the schema
// (the rule is named 'FieldsOnCorrectTypeRule' in newer graphql-js releases)
const validationRules = specifiedRules.filter(rule => rule.name !== 'FieldsOnCorrectType')

const document = parse(someQuery)
const errors = validate(schema, document, validationRules)
const data = await execute({ schema, document }) // inside an async function
By omitting the FieldsOnCorrectType rule, you won't get any errors and unrecognized fields will simply be left off the response.
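Tied together, a minimal sketch might look like this (the schema, query string, and variables are assumed to come from your surrounding server code; every other validation rule still applies):
async function run(schema, someQuery, variableValues) {
    const document = parse(someQuery)
    // Validate with the reduced rule set; surface any remaining errors as usual
    const errors = validate(schema, document, validationRules)
    if (errors.length > 0) {
        return { errors }
    }
    return execute({ schema, document, variableValues })
}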
But you really shouldn't do that.
Modifying the validation rules will result in spec-breaking changes to your server, which can cause issues with client libraries and other tools you use.
This issue really boils down to your deployment process. You should not push new versions of the client that depend on a newer version of the server API until that version is deployed to the server. Period. That would hold true regardless of whether you're using GraphQL, REST, SOAP, etc.

How to use .NetCore 1.0 and IdentityServer3 together?

I am trying to make my application work with our IdentityServer3. The seemingly unsolvable part is that there is no IdentityServer3.AccessTokenValidation package for .NET Core. I have to validate the token (other software works in that manner, but using the .NET Framework, with no trouble). What are my options, or perhaps I didn't research properly?
I would really love to see
app.UseIdentityServerBearerTokenAuthentication(new IdentityServerBearerTokenAuthenticationOptions
{
    Authority = "https://identity.identityserver.io",
    RequiredScopes = new[] { "api1", "api2" }
});
For ASP.NET Core, use IdentityServer4.AccessTokenValidation; just grab it via your NuGet package manager. Remember that IDS3 and IDS4 are just implementations of a common set of protocols: your OP can be written in ASP.NET Core (e.g. with IDS4) while your Web API still uses MVC5 (with the IDS3 AccessTokenValidation package), and the vice-versa scenario applies as well. In the IDS4 version of this middleware, you will need to use ScopeName and AdditionalScopes to achieve your goals.

Fine Uploader Basic To S3

Does anyone know if Fine Uploader supports its uploaderType: 'basic' mode in conjunction with an S3 endpoint?
Their documentation is a box of Christmas lights, and I can't make heads or tails of which options work with which versions of the uploader.
Using this code, and not including the #qq-template they provide, I get the error below:
var uploader = new qq.s3.FineUploader({
    uploaderType: 'basic',
    element: document.getElementById("fineUploader"),
    request: {
        endpoint: "mybucket.s3.amazonaws.com",
        accessKey: "MY_AWS_PUBLIC_ACCESS_KEY"
    },
    signature: {
        endpoint: "/s3/signtureHandler"
    },
    uploadSuccess: {
        endpoint: "success.html"
    }
});
Error: Cannot find template script at ID 'qq-template'!
However, according to their docs (Fine Uploader Getting Started), this seems to be the correct way to get rid of the UI and handle that myself. Except it doesn't work.
Thanks for any help.
You are confusing the jQuery plug-in workflow with the no-dependency workflow. Just like the traditional endpoint handler, you simply need to make use of the FineUploaderBasic constructor. As the documentation illustrates, all S3 endpoint handler modules are appropriately namespaced:
var uploader = new qq.s3.FineUploaderBasic({...
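For instance, reusing the options from the question, a minimal core-mode sketch might look like this (the file-input element is a hypothetical addition, since core mode renders no UI of its own, and note that element and uploaderType are dropped because they belong to the UI workflow):
var uploader = new qq.s3.FineUploaderBasic({
    request: {
        endpoint: "mybucket.s3.amazonaws.com",
        accessKey: "MY_AWS_PUBLIC_ACCESS_KEY"
    },
    signature: {
        endpoint: "/s3/signtureHandler"
    },
    uploadSuccess: {
        endpoint: "success.html"
    }
});

// Core mode renders nothing, so feed it files from your own input element
document.getElementById("fileInput").addEventListener("change", function () {
    uploader.addFiles(this.files);
});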
Fine Uploader supports a wide variety of workflows, endpoints, and features. It's tough to fit all of that into the documentation in a way that is intuitive for 100% of our users. However, for the most part, the current setup has been well received. If you have a specific suggestion for improvement, please open up an issue in the GitHub project's issue tracker. We will soon make it easier for users to edit the documentation as well.

Frameworks using Redis

I would like to know if there are any MVC frameworks compatible with Redis as a database (not just as a caching datastore).
Thanks
I would not expect any MVC framework to be tied to a database. Your implementation of the model would provide access to whatever backing store (either directly or via one or more layers) is appropriate. You should be looking at the clients that Redis supports; with those, you should be able to use MVC frameworks on any of the supported client platforms.
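To illustrate, here is a minimal sketch of such a model layer in Node (the node-redis client, key scheme, and field names are all arbitrary assumptions; the same idea applies in any language with a Redis client):
// user-model.js: controllers call these functions and never touch Redis directly
const { createClient } = require('redis');

const client = createClient(); // assumes a local Redis on the default port

async function saveUser(user) {
    if (!client.isOpen) await client.connect();
    await client.hSet('user:' + user.id, { name: user.name, email: user.email });
}

async function getUser(id) {
    if (!client.isOpen) await client.connect();
    return client.hGetAll('user:' + id);
}

module.exports = { saveUser, getUser };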
+1 for Padrino.
Another great option is Monk. It includes Ohm (it's actually written by some of the same guys) and is based on Sinatra. It's really easy to get started with and very flexible.
In Ruby you can use Ohm as the ORM. If you want an MVC framework, it can be plugged into Padrino.
Try investigating the CQRS architecture with event sourcing.
You can download an example of this from GitHub; it is a Ruby on Rails application with a Redis DB.
You should definitely check out my C# ServiceStack.Redis client. The client provides a typed API that can store any type, plus other high-level functionality: a strongly-typed messaging API, transactional support, pipelining, etc.
Here's a mini clone of Stack Overflow built with it, using only one page of C#.
Sample code from Redis StackOverflow:
public User GetOrCreateUser(User user)
{
    if (user.DisplayName.IsNullOrEmpty())
        throw new ArgumentNullException("DisplayName");

    var userIdAliasKey = "id:User:DisplayName:" + user.DisplayName.ToLower();

    using (var redis = RedisManager.GetClient())
    {
        //Get a typed version of redis client that works with <User>
        var redisUsers = redis.As<User>();

        //Find user by DisplayName if exists
        var userKey = redis.GetValue(userIdAliasKey);
        if (userKey != null)
            return redisUsers.GetValue(userKey);

        //Generate Id for New User
        if (user.Id == default(long))
            user.Id = redisUsers.GetNextSequence();

        redisUsers.Store(user);

        //Save reference to User key using the DisplayName alias
        redis.SetEntry(userIdAliasKey, user.CreateUrn());

        return redisUsers.GetById(user.Id);
    }
}
Grails has Redis support in GORM through the Redis plugin. Any domain class can be stored in Redis (or any one of the other supported NoSQL stores) instead of a relational database.
