Google Sheets script "Exceeded maximum execution time" - performance

I wrote a script to import stock data from a CSV file stored in Google Drive into an existing Google Sheet.
In one function I'm doing this for multiple CSV files. Unfortunately I sometimes get "Exceeded maximum execution time", but not every time.
Do you have an idea how I can boost performance on this:
//++++++++++++++ SPY +++++++++++++++++++
var file = DriveApp.getFilesByName("SPY.csv").next();
var csvData = Utilities.parseCsv(file.getBlob().getDataAsString());
// Create new temporary sheet
var activeSpreadsheet = SpreadsheetApp.getActiveSpreadsheet();
var yourNewSheet = activeSpreadsheet.getSheetByName("SPY-Import");
if (yourNewSheet != null) {
  activeSpreadsheet.deleteSheet(yourNewSheet);
}
yourNewSheet = activeSpreadsheet.insertSheet();
yourNewSheet.setName("SPY-Import");
// Import
var sheet = SpreadsheetApp.getActiveSheet();
sheet.getRange(1, 1, csvData.length, csvData[0].length).setValues(csvData);
// Copy from temporary sheet to destination
var spreadsheet = SpreadsheetApp.getActive();
spreadsheet.getRange('A:B').activate();
spreadsheet.setActiveSheet(spreadsheet.getSheetByName('SPY'), true);
spreadsheet.getRange('A2').activate();
spreadsheet.getRange('\'SPY-Import\'!A:B').copyTo(spreadsheet.getActiveRange(),
    SpreadsheetApp.CopyPasteType.PASTE_NORMAL, false);
// Delete temporary sheet
// Get Spreadsheet Object
var spreadsheet = SpreadsheetApp.getActiveSpreadsheet();
// Get target sheet object
var sheet = spreadsheet.getSheetByName("SPY-Import");
// Delete
spreadsheet.deleteSheet(sheet);
Thanks in advance!

I believe your situation and goal are as follows.
You have several CSV files like SPY.csv.
Your Spreadsheet has several sheets, one corresponding to each CSV file, like SPY.
You want to put the values from the CSV data into the Spreadsheet.
You want to put only the values of columns "A" and "B" of the CSV data.
In your current situation, you copied the script in your question several times and run the copies, changing the CSV filename and sheet name each time.
You want to reduce the process cost of your script. I understood your goal like this.
Modification points:
SpreadsheetApp.getActiveSpreadsheet() is used several times, and activate() is also used several times. In your case, SpreadsheetApp.getActiveSpreadsheet() can be declared once, and activate() is not required at all.
In order to copy the CSV data to the Spreadsheet, the CSV data is first put into a temporary sheet and the required values are then copied to the destination sheet. Instead, the CSV data can be put directly into the destination sheet by processing it as an array.
I think that the above points lead to a reduction of the process cost. When they are reflected in your script, it becomes as follows.
Modified script:
Please copy and paste the following script and set the variable obj. When you run the script, the CSV data is retrieved and processed, and then the values are put into the Spreadsheet.
function myFunction() {
  var obj = [
    {filename: "SPY.csv", sheetname: "SPY"},
    {filename: "###.csv", sheetname: "###"},
    // ... add one {filename, sheetname} entry per CSV file.
  ];
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  obj.forEach(({filename, sheetname}) => {
    var files = DriveApp.getFilesByName(filename);
    if (files.hasNext()) {
      var sheet = ss.getSheetByName(sheetname);
      if (sheet) {
        // sheet.clearContents(); // Is this required in your situation?
        var csv = files.next().getBlob().getDataAsString();
        var values = Utilities.parseCsv(csv).map(([a, b]) => [a, b]);
        sheet.getRange(2, 1, values.length, 2).setValues(values);
      }
    }
  });
}
Note:
Please use this script with the V8 runtime enabled.
I'm not sure about your CSV data, so if Utilities.parseCsv(csv) cannot be used as-is, please specify the delimiter.
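For example, if your files were actually semicolon-separated (an assumption; adjust to your real data), the delimiter can be passed as the second argument:
var values = Utilities.parseCsv(csv, ";"); // hypothetical semicolon-delimited data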
In this modification, the Spreadsheet service is used. If the modified script still throws the same "Exceeded maximum execution time" error, please tell me. In that case, I would like to propose a sample script using the Sheets API.
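For reference, here is a minimal sketch of that Sheets API approach (an assumption: the Sheets API advanced service is enabled for the project, and obj has the same structure as above). All sheets are written in a single batch request instead of one setValues() call per sheet:
function myFunctionUsingSheetsAPI() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var obj = [{filename: "SPY.csv", sheetname: "SPY"}]; // same structure as above
  var data = [];
  obj.forEach(({filename, sheetname}) => {
    var files = DriveApp.getFilesByName(filename);
    if (files.hasNext()) {
      var values = Utilities.parseCsv(files.next().getBlob().getDataAsString())
        .map(([a, b]) => [a, b]);
      data.push({range: "'" + sheetname + "'!A2", values: values});
    }
  });
  // One batch request writes every range at once.
  Sheets.Spreadsheets.Values.batchUpdate(
    {valueInputOption: "USER_ENTERED", data: data}, ss.getId());
}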
References:
Spreadsheet Service
parseCsv(csv)

Related

onEdit(e) trigger doesn't work for me - Google Sheets, Google Apps Script

I just tried to sort a column with names alphabetically once a new name is added or an existing name is edited (I started the sorting process from row two because row one is my header). I used the onEdit(e) trigger as shown. It should be a simple thing to do, but it does not work. When I add a name to the sheet or edit an existing one, nothing happens.
ss = SpreadsheetApp.openById("1nQZT16wiN9AX6FqZhioKyeLWoL7_BKfi09zRg");
sheet = ss.getSheetByName("Sheet1");

function onEdit(e) {
  range = sheet.getRange(2, 1, sheet.getLastRow() - 1);
  range.sort(1);
}
The spreadsheet looks as shown in the attached picture.
I hope that someone has a solution to this problem.
You're getting the spreadsheet in global scope:
ss = SpreadsheetApp.openById("1nQZT16wiN9AX6FqZhioKyeLWoL7_BKfi09zRg");
This requires any of the following scopes:
https://www.googleapis.com/auth/spreadsheets.currentonly
https://www.googleapis.com/auth/spreadsheets
But simple triggers run without authorization. Therefore, you cannot use SpreadsheetApp.openById(). You could, however, get the spreadsheet from the event object inside onEdit:
const onEdit = e => e.source
.getSheetByName("Sheet1")
.getRange("A2:A")
.sort(1)
If you want to open any spreadsheet other than the currently bound one, you'd need to use installable triggers.
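A minimal sketch of that route (the handler name sortNames is an assumption; run installTrigger() once so the trigger is created under your authorization):
function installTrigger() {
  ScriptApp.newTrigger('sortNames')
      .forSpreadsheet(SpreadsheetApp.getActive()) // or .forSpreadsheet('some-other-spreadsheet-id')
      .onEdit()
      .create();
}

function sortNames(e) {
  // Installable triggers run with your authorization, so openById()
  // would also work here if you needed a different spreadsheet.
  e.source.getSheetByName('Sheet1').getRange('A2:A').sort(1);
}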

How to convert JSON to a proper table in Power Query

I created a Power BI Custom Data Connector; the idea is to be able to connect to an SSRS Dataset using this Custom Data Connector. I was able to do it, but the resulting JSON is different from what I expect.
Here's the result when I open the Custom Connector in Power BI: I expected a properly formatted table, but the result is not one.
Columns is a List of Records containing the column names and types,
while Rows is a List of Lists containing the values for CustomerID and CustomerName.
Here's my code.
section Test.PQ.SSRS_Connector;
[DataSource.Kind="Test.PQ.SSRS_Connector", Publish="Test.PQ.SSRS_Connector.Publish"]
shared Test.PQ.SSRS_Connector.Feed = Value.ReplaceType(SSRSConImpl, type function (url as Uri.Type) as any);
DefaultRequestHeaders = [
    #"Accept" = "application/json;odata.metadata=minimal",
    #"OData-MaxVersion" = "4.0"
];
SSRSConImpl = (url as text) =>
    let
        body = "",
        source = Web.Contents(url, [Headers = DefaultRequestHeaders, Content = Text.ToBinary(body)]),
        json = Json.Document(source)
    in
        json;
Posting some sample JSON would be helpful, but based on the screenshots it seems like continuing your function as below may work:
// ... Your function code
json = Json.Document(source),
toTable = Table.FromRows(json[Rows], {"CustomerID", "CustomerName"}) // If there are more columns, consider extracting names dynamically from json[Columns]
// .... Any remaining code
Code is untested.

Update content of Google Doc from another Google Doc without creating a new file

I have two Google Docs (D1 and D2). I would like to copy the content of D1 into D2. The fileIds of these documents are stored in a database which cannot be modified due to a limitation of the architecture. Hence, I don't have the option of deleting D2 and creating a copy of D1, because this would result in a new fileId. I am using the Google Drive v3 API (Java) to interact with Google Drive.
How can I update the content of a Google Doc from another Google Doc without creating a new file?
Note: These are Google Docs, not any other format like MS Word, PDF, etc.
The Google Drive API is a file/directory API only. You will need to download the file to your system, edit it there, and then upload it again.
There is no API that allows for editing a Google Doc on Google Drive itself.
When you download the file, you can choose which format to export it as (see Downloading Google Documents), and you can then make your changes before uploading again. Files are in two parts: the metadata of the file (name, description, the file resource) and the actual data of the file. You only need to insert the metadata once; after that you can upload the file data whenever it changes. This way you keep the same file ID, so you end up with one file rather than duplicates.
Note: Apps Script
Google Apps Script does allow for some editing of Google Doc files. I know this is not what you asked for, but it may help you automate your process; please see Google Apps Script.
As many have mentioned in the other answers, the Google Drive API does not provide the capability to copy the content of a Google Doc. The only way to achieve this is by using Google Apps Script. Following is the code to do the same:
function test() {
  var sourceFileId = 'fileId';
  var targetFileId = 'fileId';
  var sourceDocument = DocumentApp.openById(sourceFileId);
  var targetDocument = DocumentApp.openById(targetFileId);
  var sourceBody = sourceDocument.getBody();
  var targetBody = targetDocument.getBody();
  var sourceHeader = sourceDocument.getHeader();
  var targetHeader = targetDocument.getHeader();
  var sourceFooter = sourceDocument.getFooter();
  var targetFooter = targetDocument.getFooter();
  copyContent(sourceHeader, targetHeader);
  copyContent(sourceFooter, targetFooter);
  copyContent(sourceBody, targetBody);
}

var copyContent = function(source, target) {
  // getHeader()/getFooter() return null when the document has none.
  if (!source || !target) return;
  target.clear();
  for (var i = 0; i < source.getNumChildren(); i++) {
    var child = source.getChild(i).copy();
    if (child.getType() === DocumentApp.ElementType.TABLE) {
      target.appendTable(child);
    } else if (child.getType() === DocumentApp.ElementType.LIST_ITEM) {
      target.appendListItem(child);
    } else if (child.getType() === DocumentApp.ElementType.PARAGRAPH) {
      target.appendParagraph(child);
    }
  }
};
Note: This script does not cover all the sections and element types.
In Drive, the content (aka media) of a file is separate from the metadata of the file.
So, in pseudo code, what you need to do is:
1. Download the content of D1.
2. Use the downloaded content in a content upload to D2.
The file ID of D2 remains unchanged.
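A minimal Apps Script sketch of this flow (assumptions: the Drive advanced service is enabled, the file IDs are placeholders, and DOCX is used as the intermediate export format; the equivalent export/update calls exist in the Java client):
function overwriteTargetDoc() {
  var sourceFileId = 'D1_FILE_ID'; // placeholder
  var targetFileId = 'D2_FILE_ID'; // placeholder
  // Step 1: download (export) the content of D1 as DOCX.
  var url = 'https://www.googleapis.com/drive/v3/files/' + sourceFileId +
      '/export?mimeType=' + encodeURIComponent(MimeType.MICROSOFT_WORD);
  var blob = UrlFetchApp.fetch(url, {
    headers: {Authorization: 'Bearer ' + ScriptApp.getOAuthToken()}
  }).getBlob();
  // Step 2: upload the content to D2. The media upload replaces only the
  // content; convert keeps D2 a Google Doc, and its file ID is unchanged.
  Drive.Files.update({}, targetFileId, blob, {convert: true});
}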

Sorting a Google SpreadSheet or Google Doc table by column

I've made a script that takes data from a spreadsheet, creates a Google Doc and presents selected rows in a table format in the Doc. Now I'd like to be able to sort that table alphabetically before the table is created. I've tried a couple of different approaches, like this:
function onEdit() {
  var sh = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
  var editedCell = sh.getActiveRange().getColumnIndex();
  if (editedCell == 2) {
    var range = sh.getRange("A2:B10");
    range.sort({column: 2});
  }
}
This kind of sort works, but only onEdit(). I'd like to sort before the edit, or find a way to sort the table as the Google Doc is being generated.
Does anyone have any suggestions?
Many thanks in advance.
Use the onOpen() function instead of the onEdit() function.
Note that you will have to open the document, not just refresh the page, to get the function to run.
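Alternatively, a sketch of sorting at generation time (generateDoc and the range are assumptions standing in for your own script): sort the source range first, then read the already-sorted values and build the table from them:
function generateDoc() {
  var sh = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
  var range = sh.getRange("A2:B10"); // same range as in the question
  range.sort({column: 2});           // sort first...
  SpreadsheetApp.flush();            // make sure the sort is committed
  var values = range.getValues();    // ...then read the now-sorted rows
  // ... build the Google Doc table from `values` as your existing script does
}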

How can I extract values from a custom flat file header into variables?

I have been stuck with this problem for a while and I have no clue. I am trying to upload multiple CSV files which have dates, but I want the dates stored as date variables so I can use them to form part of a column in a table using a Script Component. I have no idea how to store the dates as date variables in SSIS.
CSV files look as shown below when opened in Excel.
CSV data 1:
Relative Date: 02/01/2013
Run Date: 15/01/2013
Organisation,AreaCode,ACount
Chadwell,RM6,50
Primrose,RM6,60
CSV data 2:
Relative Date: 14/02/2013
Run Date: 17/02/2013
Organisation,AreaCode,ACount
Second Ave,E12,110
Fourth Avenue, E12,130
I want the Relative Date and Run Date stored as date variables. I hope I made sense.
Your best solution would be to use a Script Task in your control flow. With this you would pre-process your CSV files: you can easily parse the first two rows, retrieving the dates you want and storing them in two variables created beforehand (http://msdn.microsoft.com/en-us/library/ms135941.aspx).
It is important to make sure that, when passing the variables into the Script Task, you set them as ReadWriteVariables. You can then use these variables in any way you desire afterwards.
Updated Quick Walkthrough:
I presume that the CSV files you will want to import will be located in the same directory:
Add a Foreach Loop Container which will loop through the files in your specified directory. Inside it, add a Script Task which will be responsible for parsing the two dates in each of your files, and a Data Flow Task which you will use for the file import itself.
Create the variables you will be using: one for the file name/path, and two for the two dates you want to retrieve. You won't fill these in, as that will be done automatically in your process.
Set up your Foreach Loop Container:
Select a Foreach File Enumerator.
Select the directory folder that will contain your files. (Even better, add a variable that will take a path you specify; this can then be read into the enumerator using its expression builder.)
Set the wildcard for the files to be matched in that directory.
You also need to map each filename the enumerator generates to the variable you created earlier.
Open up your Script Task and add the three variables to the ReadWriteVariables section. This is important; otherwise you won't be able to write to your variables.
This is the script I used for the purpose. It is not necessarily the best, but it works for this example.
public void Main()
{
    string filePath = this.Dts.Variables["User::FileName"].Value.ToString();
    using (StreamReader reader = new System.IO.StreamReader(filePath))
    {
        string line = "";
        bool getNext = true;
        while (getNext && (line = reader.ReadLine()) != null)
        {
            if (line.Contains("Relative Date"))
            {
                string date = getDate(line);
                this.Dts.Variables["User::RelativeDate"].Value = date;
                // Test Event Information
                bool fireAgain = false;
                this.Dts.Events.FireInformation(1, "Rel Date", date,
                    "", 0, ref fireAgain);
            }
            else if (line.Contains("Run Date"))
            {
                string date = getDate(line);
                this.Dts.Variables["User::RunDate"].Value = date;
                // Test Event Information
                bool fireAgain = false;
                this.Dts.Events.FireInformation(1, "Run Date", date,
                    "", 0, ref fireAgain);
                break;
            }
        }
    }
    Dts.TaskResult = (int)ScriptResults.Success;
}

// Requires "using System.Text.RegularExpressions;" at the top of the script.
private string getDate(string line)
{
    // Grab the last dd/MM/yyyy date that appears on the line.
    Regex r = new Regex(@"\d{2}/\d{2}/\d{4}");
    MatchCollection matches = r.Matches(line);
    return matches[matches.Count - 1].Value;
}
The Information events fired during execution of the Script Task show the dates extracted from the two CSV files. The dates can now be used in any way you fancy in your Data Flow Task. Make sure you skip the first rows you don't need to import in your Source configuration.
