badRequest: Invalid requests[0]: No request set. (Google::Apis::ClientError) - ruby

I can't solve this problem. Can someone help me?
Issue: badRequest: Invalid requests[0]: No request set. (Google::Apis::ClientError)
I just need to put a formula into a range of cells.
Here is the code:
requests: [
{
repeatCell: {
range: {
sheetId: sheet_id,
startRowIndex: 0,
endRowIndex: 10,
startColumnIndex: 1,
endColumnIndex: 6
},
cell: {
userEnteredValue: {
formulaValue: "=FLOOR(A1*PI())"
}
},
fields: "*"
}
}
]
}
response = service.batch_update_spreadsheet(spreadsheet_id, request)

Although I cannot see your whole script in your question, I can see that your request body uses camelCase keys. I think that is the reason for your issue. With the Ruby client, please use snake_case keys as follows.
Modified script:
request = {
requests: [{repeat_cell: {
range: {
sheet_id: 0,
start_row_index: 0,
end_row_index: 10,
start_column_index: 1,
end_column_index: 6
},
cell: {
user_entered_value: {
formula_value: "=FLOOR(A1*PI())"
}
},
fields: "*"
}}]
}
response = service.batch_update_spreadsheet(spreadsheet_id, request, {})
Note:
This modification assumes that your service is already authorized to call the batchUpdate method of the Sheets API on the spreadsheet identified by spreadsheet_id. Please be careful about this.

Related

GraphQL Apollo pagination and type policies

I am really struggling with this concept. I hope someone can help me understand it better.
The documentation uses a simple example, and it's not 100% clear to me how it works.
I have tried using keyArgs, but they didn't work for me, so I opted to use the args parameter in the read and merge functions instead. First, let me explain my scenario.
I have a couple of search endpoints that use the same parameters:
{
search:
{
searchTerm: "*",
includePartialMatch: true,
page: 1,
itemsToShow: 2,
filters: {},
facets: [],
orderBy: {}
}
}
So I have setup my type policies like this:
const cache = new InMemoryCache({
typePolicies: {
Query: {
fields: {
searchCategories: typePolicy,
searchBrands: typePolicy,
searchPages: typePolicy,
searchProducts: typePolicy,
},
},
},
});
And I was using a generic typePolicy for them all.
At first, I tried this:
const typePolicy = {
keyArgs: [
"search",
[
"identifier",
"searchTerm",
"includePartialMatches",
"filters",
"orderBy",
"facets",
],
],
// Concatenate the incoming list items with
// the existing list items.
merge(existing: any, incoming: any) {
console.log("existing", existing);
console.log("incoming", incoming);
if (!existing?.items) console.log("--------------");
if (!existing?.items) return { ...incoming }; // First request
const items = existing.items.concat(incoming.items);
const item = { ...existing, ...incoming };
item.items = items;
console.log("merged", item);
console.log("--------------");
return item;
},
};
But this does not do what I want.
What I would like is for Apollo to work as it normally does, but when only the page changes for any of these fields, append the incoming items to the cached list instead of caching a separate result.
Does anyone know what I am doing wrong, or can anyone provide a better example than what is in the documentation?
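For reference, here is a minimal sketch of the offset-style field policy described in the Apollo Client pagination documentation, adapted to the argument names used above (search, page, itemsToShow). The keyArgs subfield list is an assumption based on the example query, and it spells the flag includePartialMatch as in the query rather than includePartialMatches as in the keyArgs attempt:
const typePolicy = {
  // One cache entry per distinct search (everything except the paging args),
  // so requests that differ only by page/itemsToShow share that entry.
  keyArgs: [
    "search",
    ["searchTerm", "includePartialMatch", "filters", "orderBy", "facets"],
  ],
  merge(existing, incoming, { args }) {
    const page = args?.search?.page ?? 1;
    const pageSize = args?.search?.itemsToShow ?? incoming.items.length;
    const offset = (page - 1) * pageSize;
    const items = existing?.items ? existing.items.slice(0) : [];
    // Write the incoming page into the combined list at its offset,
    // so fetching page 2 appends after page 1 instead of replacing it.
    incoming.items.forEach((item, index) => {
      items[offset + index] = item;
    });
    return { ...existing, ...incoming, items };
  },
};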

Request : duplicateSheet with Google Spreadsheet API : badRequest: Must specify at least one request

I'm trying to duplicate a sheet using Google Spreadsheet API.
But I keep getting this error: badRequest: Must specify at least one request
I've tried a lot of things, but nothing seems to work so far.
Here is what I have (Ruby):
request_body = Google::Apis::SheetsV4::BatchUpdateSpreadsheetRequest.new {
{
"includeSpreadsheetInResponse": false,
"requests": [
{
"duplicateSheet": {
"sourceSheetId": 1*********,
"insertSheetIndex": 2,
"newSheetId": 10,
"newSheetName": "*********"
}
}
],
"responseIncludeGridData": false,
"responseRanges": [
""
]}
}
response = service.batch_update_spreadsheet(spreadsheet_id, request_body)
I know the code isn't finished, but I really can't figure out what is missing.
Does anyone know what I need? Many thanks in advance!
The object passed to .new should be enclosed in parentheses rather than braces: with braces, Ruby parses them as a block, so nothing is passed to the constructor, the request list stays empty, and you get "Must specify at least one request".
Your code should look like this:
request_body = Google::Apis::SheetsV4::BatchUpdateSpreadsheetRequest.new(
{
"includeSpreadsheetInResponse": false,
"requests": [
{
"duplicateSheet": {
"sourceSheetId": 1*********,
"insertSheetIndex": 2,
"newSheetId": 10,
"newSheetName": "*********"
}
}
],
"responseIncludeGridData": false,
"responseRanges": [
""
]}
)
Reference:
Ruby Object and Classes
Also, in your script you use camelCase keys. With the Ruby client, please use snake_case keys as follows.
Modified script:
request_body = Google::Apis::SheetsV4::BatchUpdateSpreadsheetRequest.new(
{
include_spreadsheet_in_response: false,
requests: [
{
duplicate_sheet: {
source_sheet_id: 1*********,
insert_sheet_index: 2,
new_sheet_id: 10,
new_sheet_name: "*********",
}
}
],
response_include_grid_data: false,
response_ranges: [""]
})
response = service.batch_update_spreadsheet(spreadsheet_id, request_body)
Note:
As alternative patterns, you can also use the following scripts.
Pattern 2
request = Google::Apis::SheetsV4::Request.new
request.duplicate_sheet = {
source_sheet_id: 1*********,
insert_sheet_index: 2,
new_sheet_id: 10,
new_sheet_name: "*********",
}
request_body = Google::Apis::SheetsV4::BatchUpdateSpreadsheetRequest.new
request_body.include_spreadsheet_in_response = false
request_body.response_include_grid_data = false
request_body.response_ranges = [""]
request_body.requests = [request]
response = service.batch_update_spreadsheet(spreadsheet_id, request_body)
Pattern 3
request_body = {
include_spreadsheet_in_response: false,
requests: [{duplicate_sheet: {
source_sheet_id: 1*********,
insert_sheet_index: 2,
new_sheet_id: 10,
new_sheet_name: "*********",
}}],
response_include_grid_data: false,
response_ranges: [""],
}
response = service.batch_update_spreadsheet(spreadsheet_id, request_body, {})
Note:
This answer assumes that your service is already authorized to call the batchUpdate method. Please be careful about this.
Reference:
Method: spreadsheets.batchUpdate

Forming an ExecuteSearch Soap action for Filenet

I wrote a Node.js service and tried to use the strong-soap library to call ExecuteSearch on IBM FileNet. This is the function that I wrote, with the arguments, to try to execute the search:
filenetResponse = await client.FNCEWS35Service.FNCEWS35InlinePort.ExecuteSearch({
ExecuteSearchRequest: {
SelectionFilter: {
IncludeProperties: { $attributes: { maxRecursion: 1, maxSize: 1, maxElements: 1 }, $value: 'test' },
IncludeTypes: { $attributes: { maxRecursion: 1, maxSize: 1, maxElements: 1 }, $value: 'test' },
ExcludeProperties: 'test',
$attributes: {
maxRecursion: 1,
maxSize: 1,
maxElements: 1
}
},
$attributes: {
maxElements: 1,
continueFrom: 'test',
continuable: true
}
}
});
I'm getting the following error when I send it to Filenet:
"root": {
"Envelope": {
"Body": {
"Fault": {
"faultcode": "e:Server",
"faultstring": "class com.filenet.api.exception.EngineRuntimeException:Web services value for {http://www.w3.org/2001/XMLSchema-instance}type expected. Path when error was detected ExecuteSearchRequest.",
"detail": {
"ErrorStack": {
"ErrorName": "REQUIRED_VALUE_ABSENT",
"ErrorRecord": {
"Source": "com.filenet.api.exception.EngineRuntimeException",
"Description": "Web services value for {http://www.w3.org/2001/XMLSchema-instance}type expected. Path when error was detected ExecuteSearchRequest.",
"Diagnostic": [
{
"$attributes": {
"diagnosticType": "exceptionCode"
},
"$value": "TRANSPORT_WSI_VALUE_EXPECTED"
},
I've compared the arguments I am sending with the output of describe() for ExecuteSearch, but I could not find which type I am missing that is causing the error.
I thought it might be the headers, so I tried adding these too, but it still gives me the same error:
client.addSoapHeader({ id: '', NoTxResume: '' }, 'TxId');
client.addSoapHeader({ Locale: 'en-us', Timezone: '' }, 'Localization');
I'm starting to wonder if I should keep trying or just discard this idea and move on to using Java as there are working examples that I can use.

GraphQL: Filtering, sorting and paging on nested entities from separate data sources?

I'm attempting to use graphql to tie together a number of rest endpoints, and I'm stuck on how to filter, sort and page the resulting data. Specifically, I need to filter and/or sort by nested values.
I cannot do the filtering on the rest endpoints in all cases because they are separate microservices with separate databases. (i.e. I could filter on title in the rest endpoint for articles, but not on author.name). Likewise with sorting. And without filtering and sorting, pagination cannot be done on the rest endpoints either.
To illustrate the problem, and as an attempt at a solution, I've come up with the following using formatResponse in apollo-server, but am wondering if there is a better way.
I've boiled down the solution to the most minimal set of files that I could think of:
data.js represents what would be returned by 2 fictional rest endpoints:
export const Authors = [{ id: 1, name: 'Sam' }, { id: 2, name: 'Pat' }];
export const Articles = [
{ id: 1, title: 'Aardvarks', author: 1 },
{ id: 2, title: 'Emus', author: 2 },
{ id: 3, title: 'Tapir', author: 1 },
]
the schema is defined as:
import _ from 'lodash';
import {
GraphQLSchema,
GraphQLObjectType,
GraphQLList,
GraphQLString,
GraphQLInt,
} from 'graphql';
import {
Articles,
Authors,
} from './data';
const AuthorType = new GraphQLObjectType({
name: 'Author',
fields: {
id: {
type: GraphQLInt,
},
name: {
type: GraphQLString,
}
}
});
const ArticleType = new GraphQLObjectType({
name: 'Article',
fields: {
id: {
type: GraphQLInt,
},
title: {
type: GraphQLString,
},
author: {
type: AuthorType,
resolve(article) {
return _.find(Authors, { id: article.author })
},
}
}
});
const RootType = new GraphQLObjectType({
name: 'Root',
fields: {
articles: {
type: new GraphQLList(ArticleType),
resolve() {
return Articles;
},
}
}
});
export default new GraphQLSchema({
query: RootType,
});
And the main index.js is:
import express from 'express';
import { apolloExpress, graphiqlExpress } from 'apollo-server';
var bodyParser = require('body-parser');
import _ from 'lodash';
import rql from 'rql/query';
import rqlJS from 'rql/js-array';
import schema from './schema';
const PORT = 8888;
var app = express();
function formatResponse(response, { variables }) {
let data = response.data.articles;
// Filter
if ({}.hasOwnProperty.call(variables, 'q')) {
// As an example, use a resource query lib like https://github.com/persvr/rql to do easy filtering
// in production this would have to be tightened up a lot
data = rqlJS.query(rql.Query(variables.q), {}, data);
}
// Sort
if ({}.hasOwnProperty.call(variables, 'sort')) {
const sortKey = _.trimStart(variables.sort, '-');
data = _.sortBy(data, (element) => _.at(element, sortKey));
if (variables.sort.charAt(0) === '-') _.reverse(data);
}
// Pagination
if ({}.hasOwnProperty.call(variables, 'offset') && variables.offset > 0) {
data = _.slice(data, variables.offset);
}
if ({}.hasOwnProperty.call(variables, 'limit') && variables.limit > 0) {
data = _.slice(data, 0, variables.limit);
}
return _.assign({}, response, { data: { articles: data }});
}
app.use('/graphql', bodyParser.json(), apolloExpress((req) => {
return {
schema,
formatResponse,
};
}));
app.use('/graphiql', graphiqlExpress({
endpointURL: '/graphql',
}));
app.listen(
PORT,
() => console.log(`GraphQL Server running at http://localhost:${PORT}`)
);
For ease of reference, these files are available at this gist.
With this setup, I can send this query:
{
articles {
id
title
author {
id
name
}
}
}
Along with these variables (It seems like this is not the intended use for the variables, but it was the only way I could get the post processing parameters into the formatResponse function.):
{ "q": "author/name=Sam", "sort": "-id", "offset": 1, "limit": 1 }
and get this response, filtered to articles where Sam is the author, sorted by id descending, showing the second page with a page size of 1:
{
"data": {
"articles": [
{
"id": 1,
"title": "Aardvarks",
"author": {
"id": 1,
"name": "Sam"
}
}
]
}
}
Or these variables:
{ "sort": "-author.name", "offset": 1 }
I get this response, sorted by author name descending, with all articles except the first:
{
"data": {
"articles": [
{
"id": 1,
"title": "Aardvarks",
"author": {
"id": 1,
"name": "Sam"
}
},
{
"id": 2,
"title": "Emus",
"author": {
"id": 2,
"name": "Pat"
}
}
]
}
}
So, as you can see, I am using the formatResponse function for post-processing to do the filtering/paging/sorting.
So, my questions are:
Is this a valid use case?
Is there a more canonical way to do filtering on deeply nested properties, along with sorting and paging?
Is this a valid use case? Is there a more canonical way to do filtering on deeply nested properties, along with sorting and paging?
The major part of the original question lies in the collections being segregated in different databases on separate microservices. You need to join the collections and then filter on some key, but that is impossible to do directly, since the original collection has no field to filter, sort or paginate on.
The straightforward solution is to run full or filtered queries against the original collections and then join and filter the resulting dataset on the application server, e.g. with lodash, as in your solution (see the sketch below). This is feasible for small collections, but in the general case it causes large data transfers and inefficient sorting, since there is no index structure (no real red-black tree or skip list), so with roughly quadratic complexity it does not scale well.
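As a concrete illustration of this approach using the data.js arrays from the question, a minimal sketch of the in-memory join plus a filter on the nested author name (the articlesByAuthorName helper is hypothetical):
import _ from 'lodash';
import { Articles, Authors } from './data';

// Join the two REST results in memory, then filter on the nested author name.
function articlesByAuthorName(name) {
  const authorsById = _.keyBy(Authors, 'id'); // index authors by id for the join
  return Articles
    .map((article) => ({ ...article, author: authorsById[article.author] }))
    .filter((article) => article.author && article.author.name === name);
}

// e.g. articlesByAuthorName('Sam') returns the two articles written by Sam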
Depending on the resources available on the application server, special cache and index tables can be built there. If the collection structure is fixed, the relations between collection entries and their fields can be reflected in a dedicated search table and updated on demand; it is like building a search index, but on the application server rather than in the database (see the sketch below). Of course, this consumes resources, but it will be much faster than direct lodash-style sorting.
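A very small sketch of that idea, reusing the Articles/Authors shape from above (the index layout itself is an assumption):
// Application-server "search table": author name -> article ids,
// rebuilt when the source collections change instead of re-joined per request.
const articleIdsByAuthorName = new Map();

function rebuildSearchIndex(articles, authors) {
  articleIdsByAuthorName.clear();
  const nameById = new Map(authors.map((author) => [author.id, author.name]));
  for (const article of articles) {
    const name = nameById.get(article.author);
    if (!articleIdsByAuthorName.has(name)) {
      articleIdsByAuthorName.set(name, []);
    }
    articleIdsByAuthorName.get(name).push(article.id);
  }
}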
The task can also be approached from the other side, if you have access to the structure of the original databases. The key is denormalization: in contrast to the classical relational approach, collections can hold duplicate information to avoid join operations later. For example, the Articles collection can carry some information from the Authors collection that is needed for filtering, sorting and pagination in subsequent operations (see the sketch below).
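For example, a denormalized Articles collection might carry a copy of the author's name alongside the author id (the authorName field here is illustrative, not part of the question's data):
// authorName is duplicated from the Authors collection so the articles service
// can filter, sort and paginate on it without a cross-service join.
export const Articles = [
  { id: 1, title: 'Aardvarks', author: 1, authorName: 'Sam' },
  { id: 2, title: 'Emus', author: 2, authorName: 'Pat' },
  { id: 3, title: 'Tapir', author: 1, authorName: 'Sam' },
];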

Parse.com manipulate Response Object

I am trying to get Ember working with Parse.com using the ember-model-parse-adapter by samharnack.
I added a function to do multi-word search (like a search engine), which I defined in Cloud Code using Parse.Cloud.define and run from the client.
The problem is that the array my cloud function returns is not compatible with Ember Model because of two extra attributes, __type and className. How can I modify the response so that it looks like the response I get when I run a find query from the client, i.e. without __type and className?
Example responses
for App.List.find() = {
"results":[
{
"text":"zzz",
"words":[
"zzz"
],
"createdAt":"2013-06-25T16:19:04.120Z",
"updatedAt":"2013-06-25T16:19:04.120Z",
"objectId":"L1X55krC8x"
}
]
}
for App.List.cloudFunction("sliptSearch",{"text" : this.get("searchText")})
{
"results":[
{
"text":"zzz",
"words":[
"zzz"
],
"createdAt":"2013-06-25T16:19:04.120Z",
"updatedAt":"2013-06-25T16:19:04.120Z",
"objectId":"L1X55krC8x",
"__type" : Object, //undesired
"className" : "Lists" //undesired
}
]
}
Thanks Vlad, something like this worked for me for an array:
var resultobj = [];
searchListQuery.find({
  success: function(results) {
    for (var i = 0, l = results.length; i < l; i++) {
      var temp = results[i]; // read in order instead of pop(), which reverses the results
      resultobj.push({
        text: temp.get("text"),
        createdAt: temp.createdAt,
        updatedAt: temp.updatedAt,
        objectId: temp.id,
        words: "",
        hashtags: ""
      });
    }
    // respond with the plain array so Ember never sees __type/className
    response.success(resultobj);
  }
});
In your Cloud Code, before you send any response, create a plain object, copy into it only the attributes/members you need from the Parse object, and then respond with that object, like so:
// let's say result is some Parse.User or any other Parse.Object
function(result)
{
  var responseObj = {};
  responseObj.name = result.get("name");
  responseObj.age = result.get("age");
  responseObj.id = result.id;
  response.success(responseObj);
}
On the client side you will get {"result": {"name": "jhon", "age": "26", "id": "zxc123s21"}}.
Hope this helps you.
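Putting the two snippets together, here is a sketch of how this could look inside Parse.Cloud.define (the function name sliptSearch and the Lists class come from the question; the containsAll query is only an assumed stand-in for the real search logic):
// Hypothetical Cloud Code sketch using the legacy response-callback API shown above.
Parse.Cloud.define("sliptSearch", function(request, response) {
  var query = new Parse.Query("Lists");
  // Assumed search logic: match every word of the submitted text.
  query.containsAll("words", request.params.text.split(" "));
  query.find({
    success: function(results) {
      // Copy only plain attributes so the client never sees __type/className.
      var plain = results.map(function(item) {
        return {
          text: item.get("text"),
          words: item.get("words"),
          createdAt: item.createdAt,
          updatedAt: item.updatedAt,
          objectId: item.id
        };
      });
      response.success(plain);
    },
    error: function(error) {
      response.error(error);
    }
  });
});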
