JSON parsing is not that fast in Titanium for large data - Appcelerator

I'm working with JSON, and the issue I am facing is that it takes about 5 to 6 seconds to parse the whole data set. I googled a lot but didn't find any way to improve the performance.
Sample code (remove_tags is my function to strip some extra data):
var XMLTools = require("XMLTools");
var my_json = new XMLTools(remove_tags(this.responseText, "<odds", "</odds>")).toJSON();
var jason_data = JSON.parse(my_json);

Related

Web.Content calling API service and merging pages with List.Transform started to fail

I created a Power BI report which connects to a data source via an API service. The returned JSON contains thousands of entities. The API service is called via the Web.Contents function. The API service always returns the total record count, so we are able to calculate the number of pages that have to be called to obtain the whole dataset. This report displays data from our service desk app, which is deployed on many servers for many customers, and uses query parameters to connect to any of these servers.
The detail of the Power Query is below.
Why am I writing here: this report had been working without any issue for more than 1.5 years, but on August 17th one of the servers started causing errors in the Pages step, where some random lines (pages) contain errors - see the attached picture labeled "Errors in step Pages". As a result, the next step, Entities (List.Union), stops the refresh and generates errors with the message:
Expression.Error: We cannot apply field access to the type List. Details: Value=[List] Key=requests
What is notable:
The API service returns records in the same order, but the faulty lists are random when calling with the same parameters.
Sometimes the refresh completes without any error.
The same Power Query called against another server works correctly; the problem is only with one specific server.
This problem started without notice, on the most important server, after 1.5 years without any problem.
Here is the full text of the Power Query for this main source, which is used later in other queries to extract all necessary data. The JSON is really complicated and I extract from it a list of requests, a list of solvers, a list of solver groups, ...; this base query and its output are the input for many referenced queries.
Errors in step Pages
let
    BaseAPIUrl = apiurl & "apiservice?", /*apiurl is a parameter - the name of the server, e.g. https://xxxx.xxxxxx.sk/ */
    EntitiesPerPage = RecordsPerPage, /*RecordsPerPage is a parameter and defines the nr. of records per page - we used 200-400 records per page as the optimum, but it also works with 4000 records per page*/
    ApiToken = FnApiToken(), /*this function returns the apitoken value, which is the return value of another API service apiurl&"api/auth/login" that uses a username and password in the body of the call to get the apitoken*/
    GetJson = (QParm) => /*definition of the general function that gets data from the data source*/
        let
            Options =
                [
                    Query = QParm,
                    Headers =
                    [
                        Accept = "application/json",
                        ApiKeyName = "apitoken",
                        Authorization = ApiToken
                    ]
                ],
            RawData = Web.Contents(BaseAPIUrl, Options),
            Json = Json.Document(RawData)
        in Json,
    GetEntityCount = () => /*function called once to get the nr. of records via GetJson; the count is returned as part of each call*/
        let
            QParm = [pp = "1", pg = "1"],
            Json = GetJson(QParm),
            Count = Json[totalRecord]
        in
            Count,
    GetPage = (Index) => /*function called repeatedly to get each page of the JSON via GetJson*/
        let
            PageNr = Text.From(Index + 1),
            PerPage = Text.From(EntitiesPerPage),
            QParm = [pg = PageNr, pp = PerPage],
            Json = GetJson(QParm),
            Value = Json[data][requests]
        in Value,
    EntityCount = List.Max({ EntitiesPerPage, GetEntityCount() }), /*store the nr. of records in a variable*/
    PageCount = Number.RoundUp(EntityCount / EntitiesPerPage), /*nr. of pages*/
    PageIndices = { 0 .. PageCount - 1 },
    Pages = List.Transform(PageIndices, each GetPage(_) /*Function.InvokeAfter(()=>GetPage(_),#duration(0,0,0,1))*/), /*here GetPage is called for each page to get the whole dataset - the commented-out variant adds a delay between calls, but that was not necessary*/
    Entities = List.Union(Pages),
    Table = Table.FromList(Entities, Splitter.SplitByNothing(), null, null, ExtraValues.Error)
in
    Table
I also tried another way of appending the pages to a list, using List.Generate. This also brings random errors into the list, but
it does make it possible to transform the result into a table, in contrast with the original List.Transform approach; however, the other referenced queries still fail and contain errors on the last row.
When I explore the content of a faulty page/list by extracting it via Add as New Query, all records are always there without any failure...
Source = List.Generate( /*another way to generate the list of all pages*/
    () => [Page = 0, ReqPageData = GetPage(0)],
    each [Page] < PageCount,
    each [ReqPageData = GetPage([Page]),
          Page = [Page] + 1],
    each [ReqPageData]
),
#"Converted to Table" = Table.FromList(Source, Splitter.SplitByNothing(), null, null, ExtraValues.Error), /*here I am able to generate a table from the list, in contrast to the List.Transform variant*/
#"Expanded Column1" = Table.ExpandListColumn(#"Converted to Table", "Column1"), /*here I can expand the list into a column*/
#"Removed Errors" = Table.RemoveRowsWithErrors(#"Expanded Column1", {"Column1"}) /*here I try to exclude the errors, but I don't know what happened and which records (if any) were excluded*/
Extracting errored page
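For reference, one way to see which pages actually fail inside Power Query (a rough sketch that reuses the GetPage function and PageIndices list from the query above; the step names are invented and it has not been tested against this server) is to wrap each call in try and keep the page index next to the result:
PagesTried = List.Transform(PageIndices, (i) => [Page = i, Result = try GetPage(i)]), /*try keeps the error record instead of failing the whole list*/
FailedPages = List.Select(PagesTried, each [Result][HasError]), /*pages whose call raised an error, with their page index*/
GoodPages = List.Transform(List.Select(PagesTried, each not [Result][HasError]), each [Result][Value]),
Entities = List.Union(GoodPages)
FailedPages can then be inspected directly to see which page numbers error and what the error records contain, instead of only removing the faulty rows afterwards.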
And finally: I am totally clueless and not able to find the cause of this behavior on this specific server. I tested calling the pages that error via POSTMAN, and I discussed this issue with the author of the API service; he also tried to call this API service with all the parameters, and the server returns every page OK - only Power Query is not able to List.Transform them...
I will be grateful for, and appreciate, any tips or advice, or hearing from somebody who has solved the same issue in the past...
Kuby
No, each errored line of the list in the List.Transform step can be extracted as a new query, and then all records from that one page are OK. Hmmm.
Finally: the problem described in this issue was caused by "corrupted" content of the returned JSON. The provider of the core system informed me that they found a bug, and after the fix on the service desk side everything is OK again. I tried to find the problem in Power Query, but the problem was in the service desk. :(

Google Sheets script: Exceeded maximum execution time

I wrote a script to import stock data from a CSV file stored in Google Drive into an existing Google Sheet.
In one function I'm doing this for multiple CSV files. Unfortunately I sometimes get "Exceeded maximum execution time", but not all the time.
Do you have an idea how I can boost performance on this:
//++++++++++++++ SPY +++++++++++++++++++
var file = DriveApp.getFilesByName("SPY.csv").next();
var csvData = Utilities.parseCsv(file.getBlob().getDataAsString());
//Create new temporary sheet
var activeSpreadsheet = SpreadsheetApp.getActiveSpreadsheet();
var yourNewSheet = activeSpreadsheet.getSheetByName("SPY-Import");
if (yourNewSheet != null) {
  activeSpreadsheet.deleteSheet(yourNewSheet);
}
yourNewSheet = activeSpreadsheet.insertSheet();
yourNewSheet.setName("SPY-Import");
//Import
var sheet = SpreadsheetApp.getActiveSheet();
sheet.getRange(1, 1, csvData.length, csvData[0].length).setValues(csvData);
//Copy from temporary sheet to destination
var spreadsheet = SpreadsheetApp.getActive();
spreadsheet.getRange('A:B').activate();
spreadsheet.setActiveSheet(spreadsheet.getSheetByName('SPY'), true);
spreadsheet.getRange('A2').activate();
spreadsheet.getRange('\'SPY-Import\'!A:B').copyTo(spreadsheet.getActiveRange(),
SpreadsheetApp.CopyPasteType.PASTE_NORMAL, false);
//Delete temporary sheet
// Get Spreadsheet Object
var spreadsheet = SpreadsheetApp.getActiveSpreadsheet();
// Get target sheet object
var sheet = spreadsheet.getSheetByName("SPY-Import");
// Delete
spreadsheet.deleteSheet(sheet);
Thanks in advance!
I believe your situation and goal are as follows.
You have several CSV files like SPY.csv.
Your Spreadsheet has several sheets, each corresponding to a CSV file, like SPY.
You want to put the values from the CSV data into the Spreadsheet.
You want to put the values of columns "A" and "B" of the CSV data.
In your current situation, you copied the script in your question several times and run the copies, changing the CSV filename and sheet name each time.
You want to reduce the processing cost of your script. I understood your goal like this.
Modification points:
SpreadsheetApp.getActiveSpreadsheet() is used several times, and activate() is used several times.
I think that in your case, SpreadsheetApp.getActiveSpreadsheet() can be declared once, and activate() is not required.
In order to copy the CSV data to the Spreadsheet, your script puts the CSV data into a temporary sheet and then copies the required values to the destination sheet.
In this case, I think the CSV data can be put directly into the destination sheet by processing the array.
I think that the above points lead to a reduction of the processing cost. When the above points are reflected in your script, it becomes as follows.
Modified script:
Please copy and paste the following script and prepare the obj variable. When you run the script, the CSV data is retrieved and processed, and then the values are put into the Spreadsheet.
function myFunction() {
  var obj = [
    {filename: "SPY.csv", sheetname: "SPY"},
    {filename: "###.csv", sheetname: "###"},
    ,
    ,
    ,
  ];
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  obj.forEach(({filename, sheetname}) => {
    var file = DriveApp.getFilesByName(filename);
    if (file.hasNext()) {
      var sheet = ss.getSheetByName(sheetname);
      if (sheet) {
        // sheet.clearContents(); // Is this required in your situation?
        var csv = DriveApp.getFileById(file.next().getId()).getBlob().getDataAsString();
        var values = Utilities.parseCsv(csv).map(([a, b]) => [a, b]);
        sheet.getRange(2, 1, values.length, 2).setValues(values);
      }
    }
  });
}
Note:
Please use this script with V8 enabled.
I'm not sure about your CSV data, so if Utilities.parseCsv(csv) cannot be used as-is, please pass the appropriate delimiter as its second argument (see the one-line example after these notes).
In this modification, the Spreadsheet service is used. If the above modified script still gives the same "Exceeded maximum execution time" error, please tell me. In that case, I would like to propose a sample script that uses the Sheets API.
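For example, if the files were tab-separated rather than comma-separated (a hypothetical case, since the actual delimiter is not known here), the parse line in the script above would become:
var values = Utilities.parseCsv(csv, "\t").map(([a, b]) => [a, b]);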
References:
Spreadsheet Service
parseCsv(csv)

Caching is working for one hour while it should be for days

I have created an API using .NET Core 2.0. This API is connected to an Oracle database to retrieve the needed data. One of the functions takes too much time, so I decided to use caching in order to retrieve the data faster.
Function description: Get ranking
Caching period: Data should be renewed in cache memory each Monday
I am using IMemoryCache, but the problem is that the data is not being cached for multiple days; it lasts only one hour, after which the data is retrieved from the database again and takes too much time (10 s). Below is my code:
var dateNow = DateTime.Now;
int diff = 7; // if today is Monday then should add 7 days to get next Monday date
if (dateNow.DayOfWeek != DayOfWeek.Monday) {
    var daysToStartWeek = dateNow.DayOfWeek - DayOfWeek.Monday;
    diff = (7 - (daysToStartWeek)) % 7;
}
var nextMonday = dateNow.AddDays(diff).Date;
var totalDays = (nextMonday - dateNow).TotalDays;
if (_cache.TryGetValue("GetRanking", out IEnumerable<GetRankingStruct> objRanking))
{
    return Ok(objRanking);
}
var dp = new DataProvider(Configuration);
var response = dp.GetRanking(userName, asAtDate);
_cache.Set("GetRanking", response, TimeSpan.FromDays(diff));
return Ok(response);
Could it be related to the token lifetime, since it's only 1 hour?
Firstly - have you tried checking to see if your worker process is being restarted? You don't specify how you are hosting your application but, obviously, if the application (worker process) is restarted your memory cache will be empty.
If your worker process is restarting, then you could load the cache on start-up.
Secondly - I believe that the implementation may choose to empty the cache due to inactivity or memory constraints. You can set the priority to never remove - https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.caching.memory.cacheitempriority?view=dotnet-plat-ext-3.1
I believe you can set this by passing a MemoryCacheOptions object to the constructor of the memory cache https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.caching.memory.memorycache.-ctor?view=dotnet-plat-ext-3.1#Microsoft_Extensions_Caching_Memory_MemoryCache__ctor_Microsoft_Extensions_Options_IOptions_Microsoft_Extensions_Caching_Memory_MemoryCacheOptions__.
Finally - I assume you've made your _cache object static so it is shared by all instances of your class. (Or made the controller, if that's what it is, a singleton).
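For illustration, here is a minimal sketch of the per-entry options (it reuses the "GetRanking" key and the nextMonday value from the question's code; this is an assumption about the setup, not the poster's actual code):
// Register the cache once via DI so all controller instances share it:
// services.AddMemoryCache();
var entryOptions = new MemoryCacheEntryOptions()
    .SetPriority(CacheItemPriority.NeverRemove)  // never evict under memory pressure
    .SetAbsoluteExpiration(nextMonday);          // entry expires on the next Monday
_cache.Set("GetRanking", response, entryOptions);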
These are my suggestions.
Good luck.

How much data can be saved in one Parse.Object.saveAll request? And how many requests will be used for one Parse.Object.saveAll?

Recently, I have been doing some tests on parse.com. I am now facing a problem using Parse.Object.saveAll in a background job.
The parse.com documentation says that a background job can run for 15 minutes. I set up a background job to pour data into the database using the following code:
Parse.Cloud.job("createData", function(request, status) {
  var Dummy = Parse.Object.extend("dummy");
  var batchSaveArr = [];
  for (var i = 0; i < 50000; i++) {
    var obj = new Dummy();
    // genMessage() is a function that generates a random string 5 characters long
    obj.set("message", genMessage());
    obj.set("numValue", Math.floor(Math.random() * 1000));
    batchSaveArr.push(obj);
  }
  Parse.Object.saveAll(batchSaveArr, {
    success: function(list) {
      status.success("success");
    },
    error: function(error) {
      status.error(error.message);
    }
  });
});
Although this job is used to pour data into the database, the main purpose is to test the Parse.Object.saveAll function. When I run this job, the error "This application has exceeded its request limit." appears in the log. However, when I look at the analytics page, it shows me that the request count is less than or equal to 1. I only run this job on Parse, and no other requests are made while the background job is running.
It seems that there is some problem with Parse.Object.saveAll, or maybe I have some misunderstanding of this function.
Is anyone else facing the same problem?
How much data can be saved in one Parse.Object.saveAll request?
How many requests will be used for one Parse.Object.saveAll?
I have asked the question on Facebook and the reply is quite disappointing.
Please follow the link:
https://developers.facebook.com/bugs/439821726158766/

WP7 Trouble populating pie chart

I'm having some trouble populating a pie chart in my WP7 project. At the moment, my code is as follows. I've tried a few different ways to bring the data back from an XML web service, but no luck. Can anyone see what I have done wrong?
The error I'm getting right now is, "Cannot implicitly convert type 'System.Collections.Generic.IEnumerable' to 'System.Xml.Linq.XElement'. An explicit conversion exists (are you missing a cast?)"
XDocument XDocument = XDocument.Load(new StringReader(e.Result));
XElement Traffic = XDocument.Descendants("traffic").First();
XElement Quota = XDocument.Descendants("traffic").Attributes("quota");
ObservableCollection<PieChartItem> Data = new ObservableCollection<PieChartItem>()
{
new PieChartItem {Title = "Traffic", Value = (double)Traffic},
new PieChartItem {Title = "Quota", Value = (double)Quota},
};
pieChart1.DataSource = Data;
My guess is this line has the compile error:
XElement Quota = XDocument.Descendants("traffic").Attributes("quota");
The result of Descendants("traffic") is an IEnumerable, not an XElement. In the line above that, you're already getting First() of that enumerable, which is the item you want, isn't it?
The quota line should be:
XElement Quota = Traffic.Attributes("quota");
Style-wise, most people make local variables lower-case, like traffic, quota, and data, to distinguish them from class-level properties and members.
Update: it looks like Attributes("quota") returns IEnumerable<XAttribute>, so that quota line should be:
XAttribute Quota = Traffic.Attributes("quota").FirstOrDefault();
or to simplify:
var traffic = XDocument.Descendants("traffic").First();
var quota = traffic.Attributes("quota").FirstOrDefault();
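Putting the pieces together (assuming the traffic element and the quota attribute both contain numeric values, which isn't shown in the question), the collection would then be built roughly like this:
var data = new ObservableCollection<PieChartItem>()
{
    new PieChartItem { Title = "Traffic", Value = (double)traffic },
    new PieChartItem { Title = "Quota", Value = (double)quota },
};
pieChart1.DataSource = data;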
I don't want to be mean, but fixing compiler errors like this shouldn't be something you have to come to Stack Overflow for. The compiler error itself is telling you what the problem is: the method returns a type other than the one you said it does. Using var can simplify some of that.
