Xamarin.Android Couchbase.Lite Map Reduce - couchbase-lite

I simply want to create a View that uses Map-Reduce to do this: say I have Documents for the automobile industry. I would like the user to query for a particular Make - say Ford, for example. The user provides the Ford value via an EditText, taps a Button, and the count is shown in a TextView. So, to clarify, I want to count Documents of a certain type using Map-Reduce. I have searched for over 100 hours on this and have not found a single real example. (I have read all the docs; they contain only generic examples, no actual ones.)
I am an experienced programmer (15+ years); all I need is one example, and I am good to go.
Can someone please assist me with this?
Thanks,
Don

Here is my actual code:
string lMS = "MS:5"; // just to show what type of value I am using
var msCount = dbase.GetView ("count_ms");
msCount.SetMapReduce ((doc, emit) => {
if (doc.ContainsKey ("DT") && doc["DT"].Equals ("P")) {
if (doc.ContainsKey ("MS") && doc["MS"].Equals (_ms))
{
emit (doc ["id"], 1);
}
}
},
(keys, values, rereduce) => values.ToList().Count, "1");
var mscView = dbase.GetView ("count_ms");
var query = mscView.CreateQuery ();
query.StartKey = "MS:1";
query.EndKey = "MS:9999";
var queryResults = query.Run ();
var nr = queryResults.Count; // shows a value of 1 - wrong - should be 40
// the line below is to allow me to put a stop statement to read line above
var dummyForStop = nr;

Try setting something like
var docsByMakeCount = _database.GetView("docs_by_make_count");
docsByMakeCount.SetMapReduce((doc, emit) =>
{
    if (doc.ContainsKey("Make"))
    {
        emit(doc["Make"], doc);
    }
},
(keys, values, rereduce) => values.ToList().Count, "1");
when you create your view, and when you use it:
var docsByMake = _database.GetView("docs_by_make_count");
var query = docsByMake.CreateQuery();
query.StartKey = Make;
query.EndKey = Make;
var queryResults = query.Run();
MessageBox.Show(string.Format("{0} documents have been retrieved for that query", queryResults.Count));
if (queryResults.Count == 0) return;
var documents = queryResults.Select(result => JsonConvert.SerializeObject(result.Value, Formatting.Indented)).ToArray();
var commaSeparatedDocs = "[" + string.Join(",", documents) + "]";
DocumentText = commaSeparatedDocs;
In my case Make and DocumentText are properties.
There are some optimizations to be made here, like rereducing, but this is the straightforward way.

Related

UWP: calculate expected size of screenshot using multiple monitor setup

While parsing the clipboard, I am trying to detect whether a bitmap stored there might be the result of a screenshot the user took.
Everything is working fine as long as the user only has one monitor. Things become a bit more involved with two or more.
I am using the routine below to grab all the displays in use. Now, since I have no idea how they are configured to hang together, I do not know how to calculate the size of the screenshot (that Windows would produce) from that information.
I explicitly do not want to take a screenshot myself to compare. It's a privacy promise by my app.
Any ideas?
Here is the code for the size extractor, run in the UI thread.
public static async Task<Windows.Graphics.SizeInt32[]> GetMonitorSizesAsync()
{
    Windows.Graphics.SizeInt32[] result = null;
    var selector = DisplayMonitor.GetDeviceSelector();
    var devices = await DeviceInformation.FindAllAsync(selector);
    if (devices?.Count > 0)
    {
        result = new Windows.Graphics.SizeInt32[devices.Count];
        int i = 0;
        foreach (var device in devices)
        {
            var monitor = await DisplayMonitor.FromInterfaceIdAsync(device.Id);
            result[i++] = monitor.NativeResolutionInRawPixels;
        }
    }
    return result;
}
Based on Raymond's comment, here is my current solution in release code. IN_UI_THREAD is a convenience property that indicates whether the code is executing in the UI thread.
public static Windows.Foundation.Size GetDesktopSize()
{
    Int32 desktopWidth = 0;
    Int32 desktopHeight = 0;
    if (IN_UI_THREAD)
    {
        var regions = Windows.UI.ViewManagement.ApplicationView.GetForCurrentView()?.WindowingEnvironment?.GetDisplayRegions()?.Where(r => r.IsVisible);
        if (regions?.Count() > 0)
        {
            // Grab the left-most and top-most points.
            var MostLeft = regions.Min(r => r.WorkAreaOffset.X);
            var MostTop = regions.Min(r => r.WorkAreaOffset.Y);
            // The width is the distance between the left-most and the right-most point.
            desktopWidth = (int)regions.Max(r => r.WorkAreaOffset.X + r.WorkAreaSize.Width - MostLeft);
            // Same for the height.
            desktopHeight = (int)regions.Max(r => r.WorkAreaOffset.Y + r.WorkAreaSize.Height - MostTop);
        }
    }
    return new Windows.Foundation.Size(desktopWidth, desktopHeight);
}

Suitescript Saved Search Filter using other saved search results

I am trying to use the results of a specific saved search to filter another saved search in SuiteScript.
Basically, there is a button created on a project. Once the button is clicked, I need to go get all the tasks for that specific project and use each task to filter on a transaction saved search using a custom field and get whatever information is on that saved search.
This is what I have so far:
function runScript(context) {
    var record = currentRecord.get();
    var id = record.id;
    var type = record.type;
    var i = 0;
    console.log(id);
    var projectSearch = search.load({ id: 'customsearch1532' });
    var billableExpenseSearch = search.load({ id: 'customsearch1533' });
    var projectFilter = search.createFilter({
        name: 'internalId',
        operator: search.Operator.IS,
        values: id
    });
    projectSearch.filters.push(projectFilter);
    var projectResults = projectSearch.run().getRange(0, 1000);
    while (i < projectResults.length) {
        var task = projectResults[i].getValue(projectSearch.columns[1]);
        console.log(task);
        var billableExpenseFilter = search.createFilter({
            name: 'custcol4',
            operator: search.Operator.ANYOF,
            values: task
        });
        billableExpenseSearch.filters.push(billableExpenseFilter);
        var billableExpenseResults = billableExpenseSearch.run().getRange(0, 1000);
        console.log(billableExpenseResults.length);
        for (var j = 0; j < billableExpenseResults.length; j++) {
            var testAmount = billableExpenseResults[j].getValue(billableExpenseSearch.columns[3]);
            console.log(testAmount);
        }
        i++;
    }
}
The logged Task is correct. I have 2 tasks on the project I am trying this on, but once we get to the second iteration, the billableExpenseSearch length shows 0 when it should be 1.
I am guessing that either my logic is incorrect or the createFilter function doesn't accept changes once the filter is created.
Any help is appreciated!
EDIT:
var billableExpenseSearch = search.load({ id: 'customsearch1533' });
var billableExpenseFilter = search.createFilter({
    name: 'custcol4',
    operator: search.Operator.ANYOF,
    values: task
});
billableExpenseSearch.filters.push(billableExpenseFilter);
var billableExpenseResults = billableExpenseSearch.run().getRange(0, 1000);
console.log(billableExpenseResults.length);
for (var j = 0; j < billableExpenseResults.length; j++) {
    var taskid = billableExpenseResults[j].getValue(billableExpenseSearch.columns[0]);
    console.log(taskid);
}
Thank you
I think your guess is correct: you keep pushing filters onto the same search:
billableExpenseSearch.filters.push(billableExpenseFilter);
After pushing the filter and extracting the values, you need to remove it before adding a new one. You can do this by pop()-ing the last filter at the end of each iteration (see the sketch below):
billableExpenseSearch.filters.pop();
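A minimal sketch of the loop from the question with the filter popped at the end of each iteration:
// Same loop as in the question; the only change is the pop() before moving on.
while (i < projectResults.length) {
    var task = projectResults[i].getValue(projectSearch.columns[1]);
    var billableExpenseFilter = search.createFilter({
        name: 'custcol4',
        operator: search.Operator.ANYOF,
        values: task
    });
    billableExpenseSearch.filters.push(billableExpenseFilter);
    var billableExpenseResults = billableExpenseSearch.run().getRange(0, 1000);
    // ... read the results here ...
    // Remove the filter so the next iteration starts from the saved search's original filters.
    billableExpenseSearch.filters.pop();
    i++;
}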
Note: You could also fix this by re-loading the search every time before pushing the filter. That resets your filters, but I do NOT recommend it, since loading a search consumes more usage units and you might get a USAGE_LIMIT_EXCEEDED error.
I also recommend the following:
1- Get all the task ids before doing the second search; that way you only need to run it once. If you have many records you might otherwise encounter a USAGE_LIMIT_EXCEEDED error, since a client or Suitelet script only has 1000 usage units.
Edit: here is a sample that might help you.
var ids = [];
var pagedData = projectSearch.runPaged({ pageSize: 1000 });
// iterate the pages
for (var i = 0; i < pagedData.pageRanges.length; i++) {
    // fetch the current page data
    var currentPage = pagedData.fetch({ index: i });
    // and forEach() thru all results
    currentPage.data.forEach(function (result) {
        // you have the result row. use it like this....
        var id = result.getValue(projectSearch.columns[1]);
        ids.push(id);
    });
}
Note: this search will extract all records, not only the first 1000.
After that, add the array to the filter:
var billableExpenseFilter = search.createFilter({
    name: 'custcol4',
    operator: search.Operator.ANYOF,
    values: ids // ids is already an array
});
2- Don't use search.load; use search.create instead. It will make your script more readable and easier to maintain in the future.
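For example, a hypothetical sketch of building the second search in code; the type, filters, and columns below are placeholders, not necessarily what customsearch1533 contains:
// Hypothetical replacement for search.load({id: 'customsearch1533'});
// adjust the type, filters and columns to match your saved search.
var billableExpenseSearch = search.create({
    type: search.Type.TRANSACTION,
    filters: [
        ['custcol4', 'anyof', ids]
    ],
    columns: ['internalid', 'amount']
});
var billableExpenseResults = billableExpenseSearch.run().getRange(0, 1000);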

How can I replace an image in Google Documents?

I'm trying to insert images into Google Docs (other GSuite apps later) from an Add-On. I've succeeded in fetching the image and inserting it when getCursor() returns a valid Position. When there is a selection (instead of a Cursor), I can succeed if it's text that's selected by walking up to the Parent of the selected text and inserting the image at the start of the paragraph (not perfect, but OK).
UPDATE: It turns out I was using a deprecated method (getSelectedElements()), but switching away from it didn't fix the issue. It seems the issue only occurs with wrapped images (I didn't realize that the type of the object changes when you change it to wrapped text).
However, when a wrapped-text Image (presumably a PositionedImage) is highlighted (with the rotate and resize handles visible in blue), both getSelection() and getCursor() return null. This is a problem, as I would like to be able to get that image and replace it with the one I'm inserting.
Here's my code... any help would be great.
var response = UrlFetchApp.fetch(imageTokenURL);
var selection = DocumentApp.getActiveDocument().getSelection();
if (selection)
{
    Logger.log("Got Selection");
    var replaced = false;
    var elements = selection.getRangeElements();
    if (elements.length === 1
        && elements[0].getElement().getType() === DocumentApp.ElementType.INLINE_IMAGE)
    {
        // replace the URL -- this never happens
    }
    // otherwise, we take the first element and work from there:
    var firstElem = elements[0].getElement();
    Logger.log("First Element Type = " + firstElem.getType());
    if (firstElem.getType() == DocumentApp.ElementType.PARAGRAPH)
    {
        var newImage = firstElem.asParagraph().insertInlineImage(0, response);
        newImage.setHeight(200);
        newImage.setWidth(200);
    }
    else if (firstElem.getType() == DocumentApp.ElementType.TEXT)
    {
        var p = firstElem.getParent();
        if (p.getType() == DocumentApp.ElementType.PARAGRAPH)
        {
            var index = p.asParagraph().getChildIndex(firstElem);
            var newImage = p.asParagraph().insertInlineImage(index, response);
            newImage.setHeight(200);
            newImage.setWidth(200);
        }
    }
} else {
    Logger.log("Checking Cursor");
    var cursor = DocumentApp.getActiveDocument().getCursor();
    if (cursor)
    {
        Logger.log("Got Cursor: " + cursor);
        var newImage = cursor.insertInlineImage(response);
        var p = cursor.getElement();
        var size = 200;
        newImage.setHeight(size);
        newImage.setWidth(size);
    }
}
You are using the deprecated 'getSelectedElements()' method of the Range class. You may notice it's crossed out in the autocomplete selection box.
Instead, use the 'getRangeElements()' method. After selecting the image in the doc, the code below worked for me:
var range = doc.getSelection();
var element = range.getRangeElements()[0].getElement();
Logger.log(element.getType() == DocumentApp.ElementType.INLINE_IMAGE); //logs 'true'

Appcelerator Store Local Searches

When a button is pressed, I would like the id and the name of the button saved locally.
I am not quite sure of the best way to approach this problem. Should I use Appcelerator properties (http://docs.appcelerator.com/titanium/3.0/#!/api/Titanium.App.Properties) or write to a file in storage? At the moment I am using Ti.App.Properties.setList.
Example code:
searchStorageName = "searchHistory";
searchResultsArray = [];
var currentEntries = (Ti.App.Properties.getList(searchStorageName));
// Create search entry object.
var localSearchObject = {
company_name: resultNodeCompany,
company_id: resultNodeCompanyID,
variation_id: resultNodeCompanyVariationID
};
// Check if existing entries, if so push current search
// and previous searches to array.
if(currentEntries === null || currentEntries === undefined){
searchResultsArray.push(localSearchObject);
Ti.App.Properties.setList(searchStorageName, searchResultsArray);
// searchResultsArray.push(localSearchObject, currentEntries);
}
else {
searchResultsArray.push(localSearchObject, currentEntries);
Ti.App.Properties.setList(searchStorageName, searchResultsArray);
}
I am stuck at the moment as it is inserting duplicate searches into the array. When I loop over the values to create a list in the UI it shows duplicates.
var currentEntries = Ti.App.Properties.getList(searchStorageName);
var currentEntriesLength = currentEntries.length;
var getPreviousHistorySearchesArray = [];
currentEntries.forEach(function (entry, index) {
    var company_name = entry.company_name;
    var company_id = entry.company_id;
    var variation_id = entry.variation_id;
    // Create View Entry.
    createSearchHistoryViewEntry(index, company_name, company_id, variation_id);
});
Use a SQLite database; it is better suited than local properties for this: http://docs.appcelerator.com/titanium/3.0/#!/guide/Working_with_a_SQLite_Database
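A minimal sketch of what that could look like; the database, table, and column names below are made up for illustration:
// Minimal sketch: persisting each button's search in SQLite instead of Ti.App.Properties.
// The database, table and column names are made up for illustration.
var db = Ti.Database.open('searchHistory');
db.execute('CREATE TABLE IF NOT EXISTS searches (company_id TEXT, company_name TEXT, variation_id TEXT)');

function saveSearch(companyId, companyName, variationId) {
    // Parameterised insert; Titanium binds the ? placeholders for us.
    db.execute('INSERT INTO searches (company_id, company_name, variation_id) VALUES (?, ?, ?)',
        companyId, companyName, variationId);
}

function loadSearches() {
    var rows = db.execute('SELECT company_id, company_name, variation_id FROM searches');
    var results = [];
    while (rows.isValidRow()) {
        results.push({
            company_id: rows.fieldByName('company_id'),
            company_name: rows.fieldByName('company_name'),
            variation_id: rows.fieldByName('variation_id')
        });
        rows.next();
    }
    rows.close();
    return results;
}
Adding a UNIQUE constraint on those columns (or doing a SELECT before the INSERT) would also avoid the duplicate entries described in the question.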

Facing performance issues with knockout mapping plugin

I have a decently large data set of around 1100 records. This data set is mapped to an observable array which is then bound to a view. Since these records are updated frequently, the observable array is refreshed every time using the ko.mapping.fromJS helper.
This particular call takes around 40s to process all the rows, and the user interface just locks up for that period of time.
Here is the code -
var transactionList = ko.mapping.fromJS([]);
//Getting the latest transactions which are around 1100 in number;
var data = storage.transactions();
//Mapping the data to the observable array, which takes around 40s
ko.mapping.fromJS(data, transactionList);
Is there a workaround for this? Or should I just opt for web workers to improve performance?
Knockout.viewmodel is a replacement for knockout.mapping that is significantly faster at creating viewmodels for large object arrays like this. You should notice a significant performance increase.
http://coderenaissance.github.com/knockout.viewmodel/
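A rough sketch of what the switch could look like, assuming knockout.viewmodel's fromModel/updateFromModel API (check the library docs for the exact calls):
// Rough sketch assuming knockout.viewmodel's fromModel/updateFromModel API.
var data = storage.transactions();
// Build the view model once from the raw data...
var transactionList = ko.viewmodel.fromModel(data);
// ...then on later refreshes update it in place instead of rebuilding it.
ko.viewmodel.updateFromModel(transactionList, storage.transactions());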
I have also thought of a workaround as follows; it uses a smaller amount of code:
var transactionList = ko.mapping.fromJS([]);
// Getting the latest transactions, which are around 1100 in number.
var data = storage.transactions();
// Mapping the data to the observable array, which takes around 40s:
// Instead of - ko.mapping.fromJS(data, transactionList)
var i = 0;
// Clear the list completely first.
transactionList.destroyAll();
// Set an interval of 0 and keep pushing the content to the list one by one.
var interval = setInterval(function () {
    if (i == data.length - 1) {
        clearInterval(interval);
    }
    transactionList.push(ko.mapping.fromJS(data[i++]));
}, 0);
I had the same problem with the mapping plugin. The Knockout team says the mapping plugin is not intended for large arrays; if you have to load this much data at page load, the design of the system probably needs rethinking.
The best way to fix this is to use server-side pagination instead of loading all the data on page load. If you don't want to change the design of your application, there are some workarounds that may help you:
Map your array manually:
var data = storage.transactions();
var mappedData = ko.utils.arrayMap(data, function (item) {
    return ko.mapping.fromJS(item);
});
var transactionList = ko.observableArray(mappedData);
Map the array asynchronously. I have written a function that processes the array in portions (scheduled with setTimeout so the UI stays responsive) and reports progress to the user:
function processArrayAsync(array, itemFunc, afterStepFunc, finishFunc) {
    var itemsPerStep = 20;
    var processor = new function () {
        var self = this;
        self.array = array;
        self.processedCount = 0;
        self.itemFunc = itemFunc;
        self.afterStepFunc = afterStepFunc;
        self.finishFunc = finishFunc;
        self.step = function () {
            var tillCount = Math.min(self.processedCount + itemsPerStep, self.array.length);
            for (; self.processedCount < tillCount; self.processedCount++) {
                self.itemFunc(self.array[self.processedCount], self.processedCount);
            }
            self.afterStepFunc(self.processedCount);
            if (self.processedCount < self.array.length - 1)
                setTimeout(self.step, 1);
            else
                self.finishFunc();
        };
    };
    processor.step();
};
Your code:
var data = storage.transactions();
var transactionList = ko.observableArray([]);
processArrayAsync(data,
    function (item) { // Step function
        var transaction = ko.mapping.fromJS(item);
        transactionList().push(transaction);
    },
    function (processedCount) {
        var percent = Math.ceil(processedCount * 100 / data.length);
        // Show progress to the user.
        ShowMessage(percent);
    },
    function () { // Final function
        // This function will fire when all data are mapped. Do some work (i.e. apply bindings).
    });
You can also try an alternative mapping library, knockout.wrap. It should be faster than the mapping plugin.
I have chosen the second option.
Mapping is not magic. In most cases, this simple recursive function can be sufficient:
function MyMapJS(a_what, a_path)
{
    a_path = a_path || [];

    if (a_what != null && a_what.constructor == Object)
    {
        var result = {};
        for (var key in a_what)
            result[key] = MyMapJS(a_what[key], a_path.concat(key));
        return result;
    }

    if (a_what != null && a_what.constructor == Array)
    {
        var result = ko.observableArray();
        for (var index in a_what)
            result.push(MyMapJS(a_what[index], a_path.concat(index)));
        return result;
    }

    // Write your condition here:
    switch (a_path[a_path.length - 1])
    {
        case 'mapThisProperty':
        case 'andAlsoThisOne':
            result = ko.observable(a_what);
            break;
        default:
            result = a_what;
            break;
    }
    return result;
}
The code above makes observables of the mapThisProperty and andAlsoThisOne properties at any level of the object hierarchy; other properties are left as plain values. You can express more complex conditions using a_path.length for the level (depth) the value is at, or by using more elements of a_path. For example:
if (a_path.length >= 2
    && a_path[a_path.length - 1] == 'mapThisProperty'
    && a_path[a_path.length - 2] == 'insideThisProperty')
    result = ko.observable(a_what);
You can use typeof a_what in the condition, e.g. to make all strings observable (see the sketch after this list).
You can ignore some properties, and insert new ones at certain levels.
Or, you can even omit a_path. Etc.
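For instance, a condition that wraps every string value in an observable could look like this:
// Example condition: make every string value observable, leave everything else as-is.
if (typeof a_what == 'string')
    result = ko.observable(a_what);
else
    result = a_what;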
The advantages are:
More easily customizable than knockout.mapping.
Short enough to copy-paste and write individual mappings for different objects if needed.
Smaller code; knockout.mapping-latest.js is not included in your page.
Should be faster, as it does only what is absolutely necessary.
