Caching an aggregate of data with ServiceStack's ToOptimizedResultUsingCache

I am currently using ServiceStack's ICacheClient to cache in memory.
Note: the code below is somewhat pseudocode, as I needed to remove customer-specific names.
Let's say I have the following aggregate:
BlogPost
=> Comments
I want to do the following:
// First I need to go and get the blog post and cache it:
var blogPostExpiration = new TimeSpan(0, 0, 30);
var blogPostCacheKey = GenerateUniqueCacheKey<BlogPostRequest>(request);
blogPostResponse = base.RequestContext.ToOptimizedResultUsingCache<BlogPostResponse>(
    base.CacheClient, blogPostCacheKey, blogPostExpiration,
    () => _client.Execute(request));

// Then, annoyingly, I need to decompress it to JSON to get the response back
// into my domain entity structure, BlogPostResponse:
string blogJson = StreamExtensions.Decompress(((CompressedResult)blogPostResponse).Contents, CompressionTypes.Default);
response = ServiceStack.Text.StringExtensions.FromJson<BlogPostResponse>(blogJson);
// Then I do the same to get the comments:
var commentsExpiration = new TimeSpan(0, 0, 30);
var commentsCacheKey = GenerateUniqueCacheKey<CommentsRequest>(request);
var compressedComments = base.RequestContext.ToOptimizedResultUsingCache<CommentsResponse>(
    base.CacheClient, commentsCacheKey, commentsExpiration,
    () => _client.Execute(request));

// And decompress again, as above:
string commentsJson = StreamExtensions.Decompress(((CompressedResult)compressedComments).Contents, CompressionTypes.Default);
var commentsResponse = ServiceStack.Text.StringExtensions.FromJson<CommentsResponse>(commentsJson);
// The reason for the decompression becomes clear here, as I need to attach
// the Comments to my domain entity:
if (commentsResponse != null && commentsResponse.Comments != null)
{
    response.Comments = commentsResponse.Comments;
}
What I want to know is: is there a shorter way to do the following?
Get my data and cache it, then get it back into my domain entity format, without having to write all of the above lines of code. I don't want to go through the following pain:
Domain entity => JSON => decompress => domain entity.
It seems like a lot of wasted energy.
Any sample code or pointers to a better explanation of ToOptimizedResultUsingCache would be much appreciated.

OK, so I'm going to answer my own question. It seems that extension methods like ToOptimizedResult and ToOptimizedResultUsingCache are there to give you things like compression and caching for free.
But if you want more control, you just use the cache as you would normally:
// Generate cache key
var applesCacheKey = GenerateUniqueCacheKey<ApplesRequest>(request);
var applesExpiration = new TimeSpan(0, 0, 30);

// Attempt to get the response from cache
applesResponse = CacheClient.Get<ApplesDetailResponse>(applesCacheKey);

// If there was nothing in cache
if (applesResponse == null)
{
    // Get data from storage
    applesResponse = _client.Execute(request);

    // Add the data to cache
    CacheClient.Add(applesCacheKey, applesResponse, applesExpiration);
}
After you build up your aggregate and put it into the cache, you can compress the whole thing:
return base.RequestContext.ToOptimizedResult(applesResponse);
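To trim the boilerplate further, that get-or-add pattern can be folded into a small helper. A minimal sketch; GetOrAddToCache is a hypothetical name, not part of ServiceStack:
// Hypothetical helper wrapping the get-or-add pattern above.
private T GetOrAddToCache<T>(string cacheKey, TimeSpan expiresIn, Func<T> fetch) where T : class
{
    var cached = CacheClient.Get<T>(cacheKey);
    if (cached != null)
        return cached; // cache hit: no call to storage

    var fresh = fetch();
    CacheClient.Add(cacheKey, fresh, expiresIn);
    return fresh;
}

// Usage: each part of the aggregate becomes a single call, e.g.
// var apples = GetOrAddToCache(applesCacheKey, applesExpiration, () => _client.Execute(request));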
If you want to compress globally, you can follow this post:
Enable gzip/deflate compression
Hope this makes sense.
RuSs

Related

How to get query sys_id of current.sys_id Service Portal (ServiceNow)

I have a question regarding a small issue that I'm having. I've created a widget that will live on the Service Portal to allow an admin to Accept or Reject requests.
The data for the widget is pulling from the Approvals (approval_approver) table. Under my GlideRecord, I have a query that checks for the state as requested. (Ex. addQuery('state', 'requested'))
To narrow down the search, I tried entering addQuery('sys_id', current.sys_id). When I use this query, my script breaks and I get an error on the Service Portal end.
Here's a sample of the GlideRecord script I've written to Accept.
// Accept Request
if (input && input.action == "acceptApproval") {
    var inRec1 = new GlideRecord('sysapproval_approver');
    inRec1.addQuery('state', 'requested');
    //inRec1.get('sys_id', current.sys_id);
    inRec1.query();
    if (inRec1.next()) {
        inRec1.setValue('state', 'Approved');
        inRec1.setValue('approver', gs.getUserID());
        gs.addInfoMessage("Accept Approval Processed");
        inRec1.update();
    }
}
I've researched the web and tried using $sp.getParameter() as a workaround, with no change.
I would really appreciate any help or insight on what I can do differently to get the script to work and filter the right records.
If I understand your question correctly, you are asking how to get the sysId of the sysapproval_approver record from the client-side in a widget.
Unless you have defined current elsewhere in your server script, current is undefined. Secondly, $sp.getParameter() is used to retrieve URL parameters. So unless you've included the sysId as a URL parameter, that will not get you what you are looking for.
One pattern that I've used is to pass an object to the client after the initial query that gets the list of requests.
When you're ready to send input to the server from the client, you can add relevant information to the input object. See the simplified example below. For the sake of brevity, the code below does not include error handling.
// Client-side function
approveRequest = function(sysId) {
    $scope.server.get({
        action: "acceptApproval", // must match the action checked on the server
        sysId: sysId
    })
    .then(function(response) {
        console.log("Request approved");
    });
};

// Server-side
var requestsGr = new GlideRecord('sysapproval_approver');
requestsGr.addQuery("SOME_QUERY");
requestsGr.query(); // Retrieve initial list of requests to display in the template

data.requests = []; // Array of requests passed to the client via the data object
while (requestsGr.next()) {
    data.requests.push({
        "number": requestsGr.getValue("number"),
        "state": requestsGr.getValue("state"),
        "sysId": requestsGr.getValue("sys_id")
    });
}
if (input && input.action == "acceptApproval") {
    var sysapprovalGr = new GlideRecord('sysapproval_approver');
    if (sysapprovalGr.get(input.sysId)) {
        sysapprovalGr.setValue('state', 'Approved');
        sysapprovalGr.setValue('approver', gs.getUserID());
        sysapprovalGr.update();
        gs.addInfoMessage("Accept Approval Processed");
    }
}
...

Is it possible to track changes to Entity Metadata in Dynamics CRM?

Is there any way to track changes to Metadata, like new fields, new entities and so on?
It is difficult to control a very large project in the same environment, so sometimes there are customizations that should not be deployed to production (mostly mistakes or tests made in a development environment).
Is there also a way to know who made a given customization?
I am looking to know every possible change, not any one in particular.
You have to use the RetrieveMetadataChangesRequest, and it is not possible to know who made the change.
This is available only from Microsoft Dynamics CRM 2011 Update Rollup 12 onwards.
This request is intended to be used to cache metadata and to be able to work offline, but we can use it to track changes to metadata in complex projects and complex teams.
Examples on the internet are not very friendly, so this is how you can use the request.
The request can be completed by filling in only one parameter:
RetrieveMetadataChangesRequest req = new RetrieveMetadataChangesRequest()
{
    ClientVersionStamp = null
};
var response = (RetrieveMetadataChangesResponse)service.Execute(req);
The first time you execute this request, ClientVersionStamp needs to be null, because no request for metadata has been made before and there is no ClientVersionStamp yet. This parameter represents the last time you queried for metadata changes; if it is null, the request will bring back every customization from all time, so it probably won't complete in time and we need to tune it up.
var EntityFilter = new MetadataFilterExpression(LogicalOperator.And);
EntityFilter.Conditions.Add(new MetadataConditionExpression("SchemaName", MetadataConditionOperator.Equals, "ServiceAppointment"));
var entityQueryExpression = new EntityQueryExpression()
{
    Criteria = EntityFilter
};
RetrieveMetadataChangesRequest req = new RetrieveMetadataChangesRequest()
{
    Query = entityQueryExpression,
    ClientVersionStamp = null
};
var response = (RetrieveMetadataChangesResponse)service.Execute(req);
This will query all metadata changes for "ServiceAppointment" (feel free to use whichever entity you want), but what we need is the ServerVersionStamp from the response. It will look like "22319800!09/13/2017 16:17:46". If you try to send this time stamp without having queried first, it will throw an exception, so it is necessary to query first to get a server time stamp.
Now you can use the request and the time stamp to retrieve all new changes since "22319800!09/13/2017 16:17:46":
RetrieveMetadataChangesRequest req = new RetrieveMetadataChangesRequest()
{
    Query = entityQueryExpression,
    ClientVersionStamp = @"22319800!09/13/2017 16:17:46"
};
var response = (RetrieveMetadataChangesResponse)service.Execute(req);
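For reference, the delta itself comes back on the response object. A minimal sketch of reading it, assuming the SDK's documented EntityMetadata and ServerVersionStamp response properties:
// Persist ServerVersionStamp for the next delta query, then walk the
// entities whose metadata changed since the supplied stamp.
string nextStamp = response.ServerVersionStamp;
foreach (EntityMetadata changed in response.EntityMetadata)
{
    Console.WriteLine(changed.SchemaName);
}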
You can filter the query to match your needs: search only for specific entities, labels, relationships, keys and attributes, or for specific properties.
EntityQueryExpression entityQueryExpression = new EntityQueryExpression()
{
    Criteria = EntityFilter,
    Properties = EntityProperties,
    RelationshipQuery = new RelationshipQueryExpression()
    {
        Properties = RelationshipProperties,
        Criteria = RelationshipFilter
    },
    AttributeQuery = new AttributeQueryExpression()
    {
        Properties = AttributeProperties,
        Criteria = AttributeFilter
    }
};
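The EntityProperties, RelationshipProperties and AttributeProperties placeholders above are MetadataPropertiesExpression instances. As a hedged example (the property names listed are just illustrative):
// Return only the listed entity properties instead of all of them.
var EntityProperties = new MetadataPropertiesExpression
{
    AllProperties = false,
    PropertyNames = { "DisplayName", "SchemaName", "Attributes" }
};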
Use this request and implement it the way you need.
A couple more options:
Register a plugin on Publish and Publish All, and track who did the publish and when. That may help you narrow down who was making changes, although someone could technically make a change without publishing it, so it's not perfect information.
If you're using Dynamics OnPremise, the Metadata tables sometimes store information about who made a change that is not visible through a Metadata retrieve. I've found this to be very spotty though; not all Metadata has a Modified By user stored.

Getting file contents when using DropzoneJS

I really love the DropZoneJS component and am currently wrapping it in an EmberJS component (you can see demo here). In any event, the wrapper works just fine but I wanted to listen in on one of Dropzone's events and introspect the file contents (not the meta info like size, lastModified, etc.). The file type I'm dealing with is an XML file and I'd like to look "into" it to validate before sending it.
How can one do that? I would have thought the contents would hang off of the file object that you can pick up on many of the events but unless I'm just missing something obvious, it isn't there. :(
This worked for me:
Dropzone.options.PDFDrop = {
    maxFilesize: 10, // MB
    accept: function(file, done) {
        var reader = new FileReader();
        reader.addEventListener("loadend", function(event) {
            console.log(event.target.result);
            done(); // accept the file once its contents have been read
        });
        reader.readAsText(file);
    }
};
You could also use reader.readAsBinaryString() if it's binary data!
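Since the original goal was to validate XML before sending, the same accept hook can parse the text with the browser's standard DOMParser. A minimal sketch (the element name xmlDrop is illustrative, not from the question):
Dropzone.options.xmlDrop = {
    accept: function(file, done) {
        var reader = new FileReader();
        reader.addEventListener("loadend", function(event) {
            // An embedded <parsererror> element signals invalid XML.
            var doc = new DOMParser().parseFromString(event.target.result, "application/xml");
            if (doc.getElementsByTagName("parsererror").length > 0) {
                done("Not a valid XML file."); // reject; Dropzone shows the message
            } else {
                done(); // calling done() with no argument accepts the file
            }
        });
        reader.readAsText(file);
    }
};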
OK, I've answered my own question, and since others appear interested I'll post my answer here. You can find a working demo of this here:
https://ui-dropzone.firebaseapp.com/demo-local-data
In the demo I've wrapped the Dropzone component in the EmberJS framework but if you look at the code you'll find it's just Javascript code, nothing much to be afraid of. :)
The things we'll do are:
Get the file before the network request
The key thing we need to become familiar with is the HTML5 File API. The good news is that it's quite simple. Take a look at this code and maybe that's all you need:
/**
 * Replaces the XHR's send operation so that the stream can be
 * retrieved on the client side instead of being sent to the server.
 * The function name is a little confusing (other than it replaces the "send"
 * from Dropzonejs) because really what it's doing is reading the file and
 * NOT sending it to the server.
 */
_sendIntercept(file, options = {}) {
    return new RSVP.Promise((resolve, reject) => {
        if (!options.readType) {
            const mime = file.type;
            const textType = a(_textTypes).any(type => {
                const re = new RegExp(type);
                return re.test(mime);
            });
            options.readType = textType ? 'readAsText' : 'readAsDataURL';
        }
        let reader = new window.FileReader();
        reader.onload = () => {
            resolve(reader.result);
        };
        reader.onerror = () => {
            reject(reader.result);
        };
        // run the reader
        reader[options.readType](file);
    });
},
https://github.com/lifegadget/ui-dropzone/blob/0.7.2/addon/mixins/xhr-intercept.js#L10-L38
The code above returns a Promise which resolves once the file that's been dropped into the browser has been "read" into Javascript. This should be very quick, as it's all local (do be aware that if you're reading really large files you might want to "chunk" them; that's a more advanced topic).
Hook into Dropzone
Now we need to find somewhere to hook into Dropzone to read the file contents and stop the network request that we no longer need. Since the HTML5 File API just needs a File object, you'll notice that Dropzone provides all sorts of hooks for that.
I decided on the "accept" hook because it gives me the opportunity to read the file and validate it all in one go (for me it's mainly about drag-and-dropping XMLs, so the content of the file is part of the validation process) and, crucially, it happens before the network request.
Now it's important you realise that we're "replacing" the accept function, not listening to the event it fires. If we just listened, we would still incur a network request. So to *overload* accept we do something like this:
this.accept = this.localAcceptHandler; // replace "accept" on Dropzone
This will only work if this is the Dropzone object. You can achieve that by:
including it in your init hook function
including it as part of your instantiation (e.g., new Dropzone({ accept: ... }))
Now that we've referred to the "localAcceptHandler", let me introduce it to you:
localAcceptHandler(file, done) {
    this._sendIntercept(file).then(result => {
        file.contents = result;
        if (typeOf(this.localSuccess) === 'function') {
            this.localSuccess(file, done);
        } else {
            done(); // empty done signals success
        }
    }).catch(result => {
        if (typeOf(this.localFailure) === 'function') {
            file.contents = result;
            this.localFailure(file, done);
        } else {
            done(`Failed to download file ${file.name}`);
            console.warn(file);
        }
    });
}
https://github.com/lifegadget/ui-dropzone/blob/0.7.2/addon/mixins/xhr-intercept.js#L40-L64
In quick summary it does the following:
read the contents of the file (aka, _sendIntercept)
based on mime type read the file either via readAsText or readAsDataURL
save the file contents to the .contents property of the file
Stop the send
To intercept the sending of the request on the network, but still maintain the rest of the workflow, we replace a function called submitRequest. In the Dropzone code this function is a one-liner, and what I did was replace it with my own one-liner:
this._finished(files,'locally resolved, refer to "contents" property');
https://github.com/lifegadget/ui-dropzone/blob/0.7.2/addon/mixins/xhr-intercept.js#L66-L70
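In other words, the override looks roughly like this (a sketch: in Dropzone's source, submitRequest(xhr, formData, files) is itself a one-line xhr.send(formData), and myDropzone stands for your instance):
// Replace the network send with a local "finished" notification.
myDropzone.submitRequest = function(xhr, formData, files) {
    this._finished(files, 'locally resolved, refer to "contents" property');
};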
Provide access to retrieved document
The last step is just to ensure that our localAcceptHandler is put in place of the accept routine that Dropzone supplies:
https://github.com/lifegadget/ui-dropzone/blob/0.7.2/addon/components/drop-zone.js#L88-L95
Using the FileReader() solution works amazingly well for me:
Dropzone.autoDiscover = false;
var dz = new Dropzone("#demo-upload", {
    autoProcessQueue: false,
    url: 'upload.php'
});
dz.on("drop", function drop(e) {
    var files = [];
    for (var i = 0; i < e.dataTransfer.files.length; i++) {
        files[i] = e.dataTransfer.files[i];
    }
    var reader = new FileReader();
    reader.onload = function(event) {
        var line = event.target.result.split('\n');
        for (var i = 0; i < line.length; i++) {
            console.log(line[i]);
        }
    };
    reader.readAsText(files[files.length - 1]);
});

Breeze Fetch Strategy always goes remote

I'm having a problem where Breeze always goes to the server even though I've specified FetchStrategy.FromLocalCache. I created a test script below. The initial query goes remote as expected. The second query also goes remote (FetchStrategy.FromLocalCache). The third query (executeQueryLocally) goes to the local cache. From developer tools I can see there are 2 network requests (not including metadata). What am I doing wrong?
getCategories = function (observable) {
    var query = breeze.EntityQuery
        .from("Categories")
        .orderBy('Order');
    manager.executeQuery(query) // goes remote
        .then(fetchSucceeded)
        .fail(queryFailed);
    function fetchSucceeded(data) {
        // observable(data.results);
        getCategoriesLocal(observable);
    }
},
getCategoriesLocal = function (observable) {
    var query = breeze.EntityQuery
        .from("Categories")
        .orderBy('Order');
    query.using(breeze.FetchStrategy.FromLocalCache);
    manager.executeQuery(query) // also goes remote
        .then(fetchSucceeded)
        .fail(queryFailed);
    function fetchSucceeded(data) {
        d = manager.executeQueryLocally(query); // goes local
        observable(d);
        return;
    }
},
Instead of
query.using(breeze.FetchStrategy.FromLocalCache);
you need to reassign it, i.e.
query = query.using(breeze.FetchStrategy.FromLocalCache);
In Breeze all EntityQueries are immutable, which means that any time you apply a change to an EntityQuery you get a new one. This is by design, so that no query can get changed under you by a later modification.
Alternatively you can simply use
manager.executeQuery(query.using(breeze.FetchStrategy.FromLocalCache));
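Applied to the code in the question, getCategoriesLocal would then look something like this (a sketch, reusing the data.results shape from the commented-out line in the question):
getCategoriesLocal = function (observable) {
    // Reassignment keeps the FromLocalCache strategy on the query we execute.
    var query = breeze.EntityQuery
        .from("Categories")
        .orderBy('Order')
        .using(breeze.FetchStrategy.FromLocalCache);
    manager.executeQuery(query) // now served from cache; no network request
        .then(function (data) { observable(data.results); })
        .fail(queryFailed);
},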

Dynamic site, access session in global

Hello, I am currently developing a kind of wiki system for my school. This system uses subdomains to find which course a wiki belongs to; for example, math1.wiki.com would be the course Math 1.
All these wikis use the same database and are given a wiki id, to find what data to load.
Here is the code I use to find the wiki id.
Global.asax
protected void Session_Start()
{
    var database = new DataContext();
    IWikiRepository rep = new WikiRepository(database);
    IWikiService service = new WikiService(rep);

    var domain = HttpContext.Current.Request.Url.Authority;
    var port = "";
    if (domain.Contains(':'))
    {
        var tmp = domain.Split(':');
        domain = tmp[0];
        port = tmp[1];
    }

    var split = domain.Split('.');
    var subdomain = split[0];
    // if (subdomain == "localhost")
    //     subdomain = "wiki1";

    var wiki = service.GetSite(subdomain);
    if (wiki == null)
    {
        Response.StatusCode = 404;
        return;
    }
    Session["CurrentWiki"] = wiki;
}
This is all fine, but I want to make the MVC system return a 404 when no wiki is found for the subdomain. This cannot be done in Session_Start() alone, as it only runs once per session. I have therefore tried using Application_BeginRequest, but sadly I do not have access to the session in that method.
Does anyone know how I can implement this?
Why can't you just make a custom function that you call at the start of the main page to determine whether the wiki exists and, if not, redirect/show an error page/whatever? If it's a custom function, you can just re-call it when necessary.
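A minimal sketch of that idea in MVC, assuming your controllers can share a base class (WikiControllerBase is a hypothetical name); OnActionExecuting runs on every action and, unlike Application_BeginRequest, does have access to the session:
public abstract class WikiControllerBase : Controller
{
    protected override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Session_Start only stored CurrentWiki when the subdomain resolved to a wiki.
        if (Session["CurrentWiki"] == null)
        {
            filterContext.Result = new HttpStatusCodeResult(404);
            return;
        }
        base.OnActionExecuting(filterContext);
    }
}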
