I'm having a problem where Breeze always goes to the server even though I've specified FetchStrategy.FromLocalCache. I created a test script below. The initial query goes remote as expected. The second query also goes remote (FetchStrategy.FromLocalCache). The third query (executeQueryLocally) goes to the local cache. From developer tools I can see there are 2 network requests (not including metadata). What am I doing wrong?
getCategories = function (observable) {
    var query = breeze.EntityQuery
        .from("Categories")
        .orderBy('Order');
    manager.executeQuery(query) // goes remote
        .then(fetchSucceeded)
        .fail(queryFailed);
    function fetchSucceeded(data) {
        // observable(data.results);
        getCategoriesLocal(observable);
    }
},
getCategoriesLocal = function (observable) {
    var query = breeze.EntityQuery
        .from("Categories")
        .orderBy('Order');
    query.using(breeze.FetchStrategy.FromLocalCache);
    manager.executeQuery(query) // also goes remote
        .then(fetchSucceeded)
        .fail(queryFailed);
    function fetchSucceeded(data) {
        var d = manager.executeQueryLocally(query); // goes local
        observable(d);
        return;
    }
},
Instead of
query.using(breeze.FetchStrategy.FromLocalCache);
you need to reassign it, i.e.
query = query.using(breeze.FetchStrategy.FromLocalCache);
In Breeze all EntityQueries are immutable, which means that any time you apply a change to an EntityQuery you get a new one. This is by design, so that no query can be changed out from under you by a later modification.
Alternatively you can simply use
manager.executeQuery(query.using(breeze.FetchStrategy.FromLocalCache));
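Applied to the code in the question, a minimal corrected getCategoriesLocal might look like this (a sketch that keeps the observable and queryFailed names from above):
getCategoriesLocal = function (observable) {
    // .using() returns a *new* query; chain it (or reassign) so the
    // local fetch strategy is actually applied
    var query = breeze.EntityQuery
        .from("Categories")
        .orderBy('Order')
        .using(breeze.FetchStrategy.FromLocalCache);
    manager.executeQuery(query) // now resolves against the local cache
        .then(function (data) {
            observable(data.results);
        })
        .fail(queryFailed);
},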
I have a question regarding a small issue that I'm having. I've created a widget that will live on the Service Portal to allow an admin to Accept or Reject requests.
The data for the widget is pulled from the Approvals (sysapproval_approver) table. In my GlideRecord query, I check for records whose state is requested (e.g. addQuery('state', 'requested')).
To narrow down the search, I tried adding addQuery('sys_id', current.sys_id). When I use this query, my script breaks and I get an error on the Service Portal end.
Here's a sample of the GlideRecord script I've written to Accept.
// Accept Request
if (input && input.action == "acceptApproval") {
    var inRec1 = new GlideRecord('sysapproval_approver');
    inRec1.addQuery('state', 'requested');
    //inRec1.get('sys_id', current.sys_id);
    inRec1.query();
    if (inRec1.next()) {
        inRec1.setValue('state', 'Approved');
        inRec1.setValue('approver', gs.getUserID());
        gs.addInfoMessage("Accept Approval Processed");
        inRec1.update();
    }
}
I've researched the web and tried using $sp.getParameter() as a workaround, with no change.
I would really appreciate any help or insight on what I can do differently to get the script to work and filter the right records.
If I understand your question correctly, you are asking how to get the sysId of the sysapproval_approver record from the client-side in a widget.
First, unless you have defined current elsewhere in your server script, current is undefined there. Second, $sp.getParameter() is used to retrieve URL parameters, so unless you've included the sysId as a URL parameter, it will not get you what you are looking for.
One pattern that I've used is to pass an object to the client after the initial query that gets the list of requests.
When you're ready to send input to the server from the client, you can add relevant information to the input object. See the simplified example below. For the sake of brevity, the code below does not include error handling.
// Client-side function
approveRequest = function (sysId) {
    $scope.server.get({
        action: "acceptApproval",
        sysId: sysId
    })
    .then(function (response) {
        console.log("Request approved");
    });
};
// Server-side
var requestsGr = new GlideRecord('sysapproval_approver');
requestsGr.addQuery("SOME_QUERY");
requestsGr.query(); // Retrieve initial list of requests to display in the template
data.requests = []; // Array of requests passed to the client via the controller
while (requestsGr.next()) {
    data.requests.push({
        "number": requestsGr.getValue("number"),
        "state": requestsGr.getValue("state"),
        "sysId": requestsGr.getValue("sys_id")
    });
}
if (input && input.action == "acceptApproval") {
    var sysapprovalGr = new GlideRecord('sysapproval_approver');
    if (sysapprovalGr.get(input.sysId)) {
        sysapprovalGr.setValue('state', 'Approved');
        sysapprovalGr.setValue('approver', gs.getUserID());
        sysapprovalGr.update();
        gs.addInfoMessage("Accept Approval Processed");
    }
    ...
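For completeness, here is a sketch of how the widget's client controller might expose those records and wire up the call; the names are illustrative, and c.server.get is Service Portal's equivalent of the $scope.server.get used above:
// Client controller (a sketch)
function ($scope) {
    var c = this;
    // c.data.requests was populated by the server script above
    c.approveRequest = function (sysId) {
        c.server.get({
            action: "acceptApproval",
            sysId: sysId
        }).then(function (response) {
            console.log("Request approved");
        });
    };
}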
I am trying to get a Reservation object which contains a pointer to Restaurant.
In Parse Cloud Code, I am able to get the Restaurant objects associated with Reservations via query.include('Restaurant'); they appear in the log just before response.success. However, the Restaurants revert back to pointers when I receive the response in the client app.
I tried reverting the JS SDK version to 1.4.2 and 1.6.7 as suggested in some answers, but it doesn't work for me.
Parse.Cloud.define('getreservationsforuser', function (request, response) {
    var user = request.user;
    console.log(user);
    var query = new Parse.Query('Reservations');
    query.equalTo('User', user);
    query.include('Restaurant');
    query.find({
        success: function (results) {
            console.log(JSON.stringify(results));
            response.success(results);
        },
        error: function (error) {
            response.error(error);
        }
    });
});
Response:
..."restaurant":{"__type":"Pointer",
"className":"Restaurants",
"objectId":"kIIYe7Z0tD"},...
You can't directly send the pointer objects back from Cloud Code even though you have included them. You need to manually copy the content of the pointer object into a plain JavaScript object, like below:
var restaurant = {};
restaurant["id"] = YOUR_POINTER_OBJECT.id;
restaurant["createdAt"] = YOUR_POINTER_OBJECT.createdAt;
restaurant["custom_field"] = YOUR_POINTER_OBJECT.get("custom_field");
P.S. In your code you seem to do nothing other than send the response straight back; the Parse REST API might be a better choice in that case.
It turned out that my code implementation was correct.
Parse.Cloud.afterSave(function (request) {
    var type = request.object.get("type");
    var query;
    switch (type) {
        case 'inspiration':
            query = new Parse.Query("Inspiration");
            break;
        case 'event':
            query = new Parse.Query("Event");
            break;
        case 'idea':
            query = new Parse.Query("Idea");
            break;
        case 'comment':
            break;
        default:
            return;
    }
    if (query) {
        query.equalTo("shares", request.object.id);
        query.first({
            success: function (result) {
                result.increment("sharesCount");
                result.save();
            },
            error: function (error) {
                throw "Could not save share count: " + error.message;
            }
        });
    }
});
For some reason request.object.id is not returning the object ID of the newly created record. I've tested this code thoroughly and have isolated the problem to the request.object.id variable. I've even run it successfully with a pre-existing object ID, and it worked fine. Am I using the wrong variable for the object ID?
Thanks in advance for any help!
Had this exact problem a few weeks ago.
It turned out to be a bug in Parse's newest JavaScript SDK. Have a look at your CloudCode folder - it should contain a global.json file where you can specify the JavaScript SDK version. By default it states "latest"; change it to "1.4.2" and upload your CloudCode folder again.
In case the global.json file is missing in your cloud code folder, please have a look at this thread, where I described how to create it manually.
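For reference, the version pin lives under the global key; the sketch below shows just that part (the surrounding keys vary by project, so verify against your own config folder):
{
    "global": {
        "parseVersion": "1.4.2"
    }
}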
Thanks for the reply. I found another workaround for this on version 1.6.5. I should probably also mention that my use case for this code is to increment a count column (a comments count) when a new relation is added to a particular record (a post).
Instead of implementing an afterSave method on my relation class (Comment), I implemented a beforeSave method on my class (Post) and used request.object.dirtyKeys() to get the modified columns. From there I check whether my dirty key is comments, and if it is, I increment my count column. It works pretty well, actually.
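A minimal sketch of that beforeSave approach, assuming a Post class with a comments relation and a commentsCount column as described:
Parse.Cloud.beforeSave("Post", function (request, response) {
    // dirtyKeys() lists the columns modified by this save
    var dirtyKeys = request.object.dirtyKeys();
    if (dirtyKeys.indexOf("comments") !== -1) {
        // The comments relation changed, so bump the counter column
        request.object.increment("commentsCount");
    }
    response.success();
});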
I am currently using the ServiceStack ICacheClient to cache in memory.
Note: the code below is somewhat pseudocode, as I needed to remove customer-specific names.
Let's say I have the following aggregate:
BlogPost
=> Comments
I want to do the following:
// So I need to go get the blog post and cache it:
var blogPostExpiration = new TimeSpan(0, 0, 30);
var blogPostCacheKey = GenerateUniqueCacheKey<BlogPostRequest>(request);
blogPostResponse = base.RequestContext.ToOptimizedResultUsingCache<BlogPostResponse>(base.CacheClient, blogPostCacheKey, blogPostExpiration, () =>
    _client.Execute(request));
// Then, annoyingly, I need to decompress it to JSON to get the response back into my domain entity structure: BlogPostResponse
string blogJson = StreamExtensions.Decompress(((CompressedResult)blogPostResponse).Contents, CompressionTypes.Default);
response = ServiceStack.Text.StringExtensions.FromJson<BlogPostResponse>(blogJson);
// Then I do the same to get the comments:
var commentsExpiration = new TimeSpan(0, 0, 30);
var commentsCacheKey = GenerateUniqueCacheKey<CommentsRequest>(request);
var compressedCommentsResponse = base.RequestContext.ToOptimizedResultUsingCache<CommentsResponse>(base.CacheClient, commentsCacheKey, commentsExpiration, () =>
    _client.Execute(request));
// And decompress again, as above
string commentsJson = StreamExtensions.Decompress(((CompressedResult)compressedCommentsResponse).Contents, CompressionTypes.Default);
var commentsResponse = ServiceStack.Text.StringExtensions.FromJson<CommentsResponse>(commentsJson);
// The reason for the decompression becomes clear here, as I need to attach the Comments to my domain entity.
if (commentsResponse != null && commentsResponse.Comments != null)
{
    response.Comments = commentsResponse.Comments;
}
What I want to know: is there a shorter way to do the following? Get my data and cache it, then get it back into my domain entity format, without having to write all of the above lines of code. I don't want to go through this pain:
Domain entity => JSON => decompress => domain entity.
That seems like a lot of wasted energy.
Any sample code, or pointers to a better explanation of ToOptimizedResultUsingCache, would be much appreciated.
OK, so I'm going to answer my own question. It seems that methods (extension methods) like ToOptimizedResult and ToOptimizedResultUsingCache are there to give you things like compression and caching for free.
But, if you want more control you just use the cache as you would normally:
// Generate cache key and expiration
var applesCacheKey = GenerateUniqueCacheKey<ApplesRequest>(request);
var applesExpiration = new TimeSpan(0, 0, 30);
// Attempt to get the response from cache
applesResponse = CacheClient.Get<ApplesDetailResponse>(applesCacheKey);
// If there was nothing in cache
if (applesResponse == null)
{
    // Get data from storage
    applesResponse = _client.Execute(request);
    // Add the data to cache
    CacheClient.Add(applesCacheKey, applesResponse, applesExpiration);
}
After you build up your aggregate and put it into the cache, you can compress the whole thing:
return base.RequestContext.ToOptimizedResult(applesResponse);
If you want to enable compression globally, you can follow this post:
Enable gzip/deflate compression
Hope this makes sense.
RuSs
With Kendo grid batch editing turned on, I know that you can hook into the create, update and destroy commands, where Kendo will send three separate requests to the server when you click Save Changes.
I was wondering if there is any way to send all three sets of updates in a single call to the server, like a transaction. Or even to send each in a specified order, with a check for success before sending the next.
The only way I could come up with is a custom Save Changes implementation which, when invoked, looks up the grid's data source to find all rows that have been added (isNew() for added rows), deleted (_destroyed for deleted rows), or updated (dirty for updated rows), and then crafts my own ajax call to a server endpoint with the identified data sets.
Telerik posted a work-around in their code library recently: http://www.kendoui.com/code-library/mvc/grid/save-all-changes-with-one-request.aspx. Unfortunately the work-around is rather bare-bones. It gives a good example of how to capture destroyed, dirty, and new records, but finishes with some hand-waving about handling any errors in the response and synchronizing the data source on success. Also note that there is no check to ensure there are destroyed, dirty, or new records before making the ajax request.
Here is the relevant code. Download the full example from the link above to see how the grid is set up and to ensure you have the latest version.
function sendData() {
    var grid = $("#Grid").data("kendoGrid"),
        parameterMap = grid.dataSource.transport.parameterMap;
    // Get the new and the updated records
    var currentData = grid.dataSource.data();
    var updatedRecords = [];
    var newRecords = [];
    for (var i = 0; i < currentData.length; i++) {
        if (currentData[i].isNew()) {
            // This record is new
            newRecords.push(currentData[i].toJSON());
        } else if (currentData[i].dirty) {
            updatedRecords.push(currentData[i].toJSON());
        }
    }
    // These records are deleted
    var deletedRecords = [];
    for (var i = 0; i < grid.dataSource._destroyed.length; i++) {
        deletedRecords.push(grid.dataSource._destroyed[i].toJSON());
    }
    var data = {};
    $.extend(data, parameterMap({ updated: updatedRecords }), parameterMap({ deleted: deletedRecords }), parameterMap({ "new": newRecords }));
    $.ajax({
        url: "/Home/UpdateCreateDelete",
        data: data,
        type: "POST",
        error: function () {
            // Handle the server errors using the approach from the previous example
        },
        success: function () {
            alert("update on server is completed");
            grid.dataSource._destroyed = [];
            // Refresh the grid - optional
            grid.dataSource.read();
        }
    });
}
Maybe you can enable the batch property of the DataSource:
batch Boolean (default: false)
If set to true, the data source will batch CRUD operation requests. For example, updating two data items would cause one HTTP request instead of two. By default, the data source makes an HTTP request for every CRUD operation.
Source: DataSource API
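For instance, a minimal sketch of a batched data source (the endpoint URLs and the Id field are placeholders):
var dataSource = new kendo.data.DataSource({
    batch: true, // group CRUD requests per operation type
    transport: {
        read:    { url: "/Home/Read" },
        create:  { url: "/Home/Create",  type: "POST" },
        update:  { url: "/Home/Update",  type: "POST" },
        destroy: { url: "/Home/Destroy", type: "POST" }
    },
    schema: {
        model: { id: "Id" }
    }
});
Note that with batch: true the requests are grouped per operation type, so a save can still produce up to three requests (one each for creates, updates and destroys).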
After six years we have an answer: check the transport.submit function, which executes a single request to save all changes: https://docs.telerik.com/kendo-ui/api/javascript/data/datasource/configuration/transport.submit
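Based on those docs, a minimal sketch of transport.submit; the endpoint URL and the shape of the server response are assumptions for illustration:
var dataSource = new kendo.data.DataSource({
    batch: true,
    transport: {
        read: { url: "/Home/Read" },
        // submit receives the created, updated and destroyed items in one call
        submit: function (e) {
            $.ajax({
                url: "/Home/UpdateCreateDelete", // placeholder endpoint
                type: "POST",
                data: {
                    created: e.data.created,
                    updated: e.data.updated,
                    destroyed: e.data.destroyed
                },
                success: function (result) {
                    // Tell the data source which items were processed, per operation
                    e.success(result.created, "create");
                    e.success(result.updated, "update");
                    e.success(result.destroyed, "destroy");
                },
                error: function (xhr) {
                    e.error(xhr);
                }
            });
        }
    }
});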