SP 365 REST search API returns '500 Internal Server Error' when using HTTP POST

Our product uses the SP 365 search API. Several of our customers report that calling the search endpoint '/_api/search/postquery' with the following body:
{"request": {
  "Querytext": "test",
  "SourceId": "8413cd39-2156-4e00-b54d-11efd9abdb89",
  "RowLimit": 400,
  "SelectProperties": {
    "results": ["Title", "Path", "Description", "Write", "Rank", "Size"]
  },
  "TrimDuplicates": true,
  "ClientType": "Custom",
  "Culture": 1030,
  "SortList": {
    "results": [{"Property": "Rank", "Direction": "1"}]
  }
}}
returns HTTP status code 500 with this error text inside the JSON response: 'An unknown error occurred.'
However, if one adds to the query above a condition to limit it to a specific SP list, such as:
{"request": {
  "Querytext": "test AND \"ListId\":{A7B96B28-6062-435B-A2EE-4792512A95A1}",
  "SourceId": "8413cd39-2156-4e00-b54d-11efd9abdb89",
  "RowLimit": 400,
  "SelectProperties": {
    "results": ["Title", "Path", "Description", "Write", "Rank", "Size"]
  },
  "TrimDuplicates": true,
  "ClientType": "Custom",
  "Culture": 1030,
  "SortList": {
    "results": [{"Property": "Rank", "Direction": "1"}]
  }
}}
then the query works well.
This happens for specific tenants like 'pfgroupas.sharepoint.com'.
These API calls have been working for years (starting with SP 2013) and broke only in the last few days.

Turns out that, for certain tenants, the internal server error manifests itself if you specify the 'SortList' parameter. If you create a query without it, like:
{"request": {
  "Querytext": "test",
  "SourceId": "8413cd39-2156-4e00-b54d-11efd9abdb89",
  "RowLimit": 400,
  "SelectProperties": {
    "results": ["Title", "Path", "Description", "Write", "Rank", "Size"]
  },
  "TrimDuplicates": true,
  "ClientType": "Custom",
  "Culture": 1030
}}
then everything works fine. This looks like a server-side bug to me.
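For reference, the practical workaround is to drop 'SortList' from the request and order the rows on the client. Below is a minimal Ruby sketch of that idea; the site URL, the token handling and the exact shape of the search response (which varies with the OData mode you request) are assumptions for illustration, not details taken from the reports above:
require "net/http"
require "json"
require "uri"

site_url     = "https://contoso.sharepoint.com/sites/somesite" # hypothetical site
access_token = ENV["SP_ACCESS_TOKEN"]                          # hypothetical token source

body = {
  "request" => {
    "Querytext"        => "test",
    "SourceId"         => "8413cd39-2156-4e00-b54d-11efd9abdb89",
    "RowLimit"         => 400,
    "SelectProperties" => { "results" => %w[Title Path Description Write Rank Size] },
    "TrimDuplicates"   => true,
    "ClientType"       => "Custom",
    "Culture"          => 1030
    # "SortList" deliberately omitted - this is the workaround
  }
}

uri = URI("#{site_url}/_api/search/postquery")
req = Net::HTTP::Post.new(uri)
req["Accept"]        = "application/json;odata=nometadata"
req["Content-Type"]  = "application/json;odata=nometadata"
req["Authorization"] = "Bearer #{access_token}"
req.body = body.to_json

res  = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |http| http.request(req) }
data = JSON.parse(res.body)

# Order by Rank ourselves, since the server was not asked to sort.
# With odata=nometadata the rows live roughly under
# PrimaryQueryResult -> RelevantResults -> Table -> Rows; adjust for your OData mode.
rows   = data.dig("PrimaryQueryResult", "RelevantResults", "Table", "Rows") || []
sorted = rows.sort_by { |row| -(row["Cells"].find { |c| c["Key"] == "Rank" }["Value"].to_f) }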

Related

How to solve "Unable to find item ID for item in application" in Oracle Apex?

I have a public website made using Apex 21.1.3
When a user shares a page of my website, let's say on Facebook, Facebook adds "?fbclickId=something" to the URL.
My app then crashes saying: Unable to find item ID for item fbclickId in application.
Apex thinks the user is trying to set an item that does not exist in the application.
I have a URL-processing layer using htaccess that formats the URLs before sending them to Apex through a reverse proxy. I could simply ignore everything that comes after a question mark "?", but then I wouldn't be able to set any application item values either, so that's not an option.
Does anyone have an idea how to make Apex ignore a parameter if the corresponding item doesn't exist?
Using Google as an example: the URL https://www.google.com/?anyparameter=anyvalue will always resolve to https://www.google.com
Thanks
Cheers
I'm not aware of any way to ignore invalid parameters, but you can make the error a bit nicer for the end user.
Create a custom Apex error handling function. The only difference from standard error handling is that the error with code WWV_FLOW.FIND_ITEM_ID_ERR gets a custom message and no additional info. Change the string "Invalid url arguments" to something more relevant for your business case.
create or replace function apex_error_custom
(
  p_error IN apex_error.t_error
)
RETURN apex_error.t_error_result
IS
  l_result apex_error.t_error_result := apex_error.t_error_result();
BEGIN
  l_result := apex_error.init_error_result ( p_error => p_error );
  IF p_error.apex_error_code = 'WWV_FLOW.FIND_ITEM_ID_ERR' THEN
    l_result.message := 'Invalid url arguments';
    l_result.additional_info := NULL;
  END IF;
  RETURN l_result;
END apex_error_custom;
Change the application definition to use the new error function:
Application Definitions > Error Handling > Custom Error Function. Note this affects all errors in the application.
An additional way to make the error nicer is to change the default error page to use a defined template (Shared Components > Themes > your theme > Component Defaults > Error page). Note this affects all errors in the application.
Here is the solution I came up with.
I couldn't find any way to make Apex ignore unknown parameters, but I found a trick to avoid sending them to Apex in the first place, which sidesteps the error.
In the middle tier (nginx, Apache, IIS), add the following logic:
* Whenever there are two question marks, drop everything from the second one onwards. For example, someApexAppUrl?Parameter=value?fbclickid=something should become someApexAppUrl?Parameter=value.
* Whenever a parameter is added to the URL, for example someApexAppUrl?Parameter=value, check the parameter name against:
  * Application Items with a protection level of Unrestricted, Checksum Required - Application Level, Checksum Required - User Level, or Checksum Required - Session Level;
  * the hard-coded list of default Apex URL parameters, which are: session, request, clear, debug, printerFriendly, trace, timezone, lang, territory, cs, dialogCs, x01 (according to this article);
  * application page items with a name pattern like P99_SOMETHING.
* Whenever a parameter does not fall into one of these three categories, ignore it and don't send it to Apex. This way, even if Facebook appends something like ?fbclickid=xxx, the Apex app still works nicely. A rough sketch of this filtering logic is below.
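For illustration only, here is what that filtering could look like expressed in plain Ruby (the real implementation would live in the web-server/reverse-proxy layer, and the whitelists below are hypothetical placeholders, not values from this application):
require "uri"

# Reserved Apex URL parameters plus an illustrative application-item whitelist.
APEX_RESERVED = %w[session request clear debug printerFriendly trace
                   timezone lang territory cs dialogCs x01].freeze
APP_ITEMS = %w[PARAMETER].freeze # your unrestricted/checksum-protected application items

def clean_apex_url(raw_url)
  # Rule 1: if there are two question marks, drop everything from the second one.
  path, _, query = raw_url.partition("?")
  query = query.split("?").first.to_s

  # Rule 2: keep only parameters Apex knows about.
  kept = URI.decode_www_form(query).select do |name, _value|
    APEX_RESERVED.include?(name) ||
      APP_ITEMS.include?(name.upcase) ||
      name.upcase.match?(/\AP\d+_\w+\z/) # page items such as P99_SOMETHING
  end

  kept.empty? ? path : "#{path}?#{URI.encode_www_form(kept)}"
end

# clean_apex_url("someApexAppUrl?Parameter=value?fbclickid=xxx")
#   # => "someApexAppUrl?Parameter=value"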
You can add the item to your application to avoid getting this error message.
Create an Application Item (under Shared Components) called FBCLICKID. Set its Session State Protection to Unrestricted.

How can I get ALL records from route53?

I'm referring to the code snippet here, which seemed to work for someone but isn't clear to me: https://github.com/aws/aws-sdk-ruby/issues/620
I'm trying to get all of my records (about 7,000 of them) via resource record sets, but I can't get the pagination to work with list_resource_record_sets. Here's what I have:
route53 = Aws::Route53::Client.new
response = route53.list_resource_record_sets({
  start_record_name: fqdn(name),
  start_record_type: type,
  max_items: 100, # fyi - aws api maximum is 100 so we'll need to page
})
response.last_page?
response = response.next_page until response.last_page?
I verified I'm hooked into the right region, and I can see the record I'm trying to get (so I can delete it later) in the AWS console, but I can't seem to get it through the API. I used this as a starting point: https://github.com/aws/aws-sdk-ruby/issues/620
Any ideas on what I'm doing wrong? Or is there an easier way, perhaps another method in the API I'm not finding, to get just the record I need given the hosted_zone_id, type and name?
The issue you linked is for the Ruby AWS SDK v2, but the latest is v3. It also looks like things may have changed around a bit since 2014, as I'm not seeing the #next_page or #last_page? methods in the v2 API or the v3 API.
Consider using the #next_record_name and #next_record_type from the response when #is_truncated is true. That's more consistent with how other paginations work in the Ruby AWS SDK, such as with DynamoDB scans for example.
Something like the following should work (though I don't have an AWS account with records to test it out):
route53 = Aws::Route53::Client.new
hosted_zone = ? # Required field according to the API docs
next_name = fqdn(name)
next_type = type

loop do
  response = route53.list_resource_record_sets(
    hosted_zone_id: hosted_zone,
    start_record_name: next_name,
    start_record_type: next_type,
    max_items: 100, # fyi - aws api maximum is 100 so we'll need to page
  )
  records = response.resource_record_sets

  # Break here if you find the record you want

  # Also break if we've run out of pages
  break unless response.is_truncated

  next_name = response.next_record_name
  next_type = response.next_record_type
end

Ruby neo4j-core mass processing data

Has anyone used Ruby neo4j-core to mass process data? Specifically, I am looking at taking in about 500k lines from a relational database and insert them via something like:
Neo4j::Session.current.transaction.query
  .merge(m: { Person: { token: person_token } })
  .merge(i: { IpAddress: { address: ip, country: country,
                           city: city, state: state } })
  .merge(a: { UserToken: { token: token } })
  .merge(r: { Referrer: { url: referrer } })
  .merge(c: { Country: { name: country } })
  .break # This will make sure the query is not reordered
  .create_unique("m-[:ACCESSED_FROM]->i")
  .create_unique("m-[:ACCESSED_FROM]->a")
  .create_unique("m-[:ACCESSED_FROM]->r")
  .create_unique("a-[:ACCESSED_FROM]->i")
  .create_unique("a-[:ACCESSED_FROM]->r")
  .create_unique("i-[:IN]->c")
  .exec
However, doing this locally takes hours for hundreds of thousands of events. So far, I have attempted the following:
* Wrapping Neo4j::Connection in a ConnectionPool and multi-threading it - I did not see much speed improvement here.
* Doing tx = Neo4j::Transaction.new and tx.close every 1000 events processed - looking at a TCP dump, I am not sure this actually does what I expected. It makes the exact same requests, with the same frequency, but gets a different response.
With Neo4j::Transaction I see a POST every time the .query(...).exec is called:
Request: {"statements":[{"statement":"MERGE (m:Person{token: {m_Person_token}}) ...{"m_Person_token":"AAA"...,"resultDataContents":["row","REST"]}]}
Response: {"commit":"http://localhost:7474/db/data/transaction/868/commit","results":[{"columns":[],"data":[]}],"transaction":{"expires":"Tue, 10 May 2016 23:19:25 +0000"},"errors":[]}
With Non-Neo4j::Transactions I see the same POST frequency, but this data:
Request: {"query":"MERGE (m:Person{token: {m_Person_token}}) ... {"m_Person_token":"AAA"..."c_Country_name":"United States"}}
Response: {"columns" : [ ], "data" : [ ]}
(Not sure if that is intended behavior, but it looks like less data is transmitted with the non-Neo4j::Transaction technique - it's highly possible I am doing something incorrectly.)
Some other ideas I had:
* Post process into a CSV, SCP up and then use the neo4j-import command line utility (although, that seems kinda hacky).
* Combine both of the techniques I tried above.
Has anyone else run into this / have other suggestions?
Ok!
So you're absolutely right. With neo4j-core you can only send one query at a time. With transactions all you're really getting is the ability to rollback. Neo4j does have a nice HTTP JSON API for transactions which allows you to send multiple Cypher requests in the same HTTP request, but neo4j-core doesn't currently support that (I'm working on a refactor for the next major version which will allow this). So there are a number of options:
You can submit your requests via raw HTTP JSON to the APIs. If you still want to use the Query API, you can use the to_cypher and merge_params methods to get the Cypher and params for that (merge_params is a private method currently, so you'd need send(:merge_params)). There's a rough sketch of this approach right after this list.
You can load via CSV as you said. You can either
use the neo4j-import command which allows you to import very fast but requires you to put your CSV in a specific format, requires that you be creating a DB from scratch, and requires that you create indexes/constraints after the fact
use the LOAD CSV command which isn't as fast, but is still pretty fast.
You can use the neo4apis gem to build a DSL to import your data. The gem will create Cypher queries under the covers and will batch them for performance. See examples of the gem in use via neo4apis-twitter and neo4apis-github
If you are a bit more adventurous, you can use the new Cypher API in neo4j-core via the new_cypher_api branch on the GitHub repo. The README in that branch has some documentation on the API, but also feel free to drop by our Gitter chat room if you have questions on this or anything else.
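To make the first option concrete, here's a rough, untested sketch of batching statements through Neo4j's transactional HTTP endpoint. The endpoint URL (matching the request dump above), the omitted authentication, the batch size and the rows variable are all assumptions for illustration; this is not built-in neo4j-core behavior:
require "net/http"
require "json"
require "uri"

# POST a batch of Cypher statements to the transactional endpoint in one request.
def flush_statements(statements)
  uri = URI("http://localhost:7474/db/data/transaction/commit")
  req = Net::HTTP::Post.new(uri, "Content-Type" => "application/json")
  req.body = { statements: statements }.to_json
  Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(req) }
end

session    = Neo4j::Session.current
statements = []

rows.each do |row|                        # `rows` = your 500k relational rows
  query = session.query
                 .merge(m: { Person: { token: row[:person_token] } })
                 .merge(i: { IpAddress: { address: row[:ip] } })
                 .create_unique("m-[:ACCESSED_FROM]->i")

  statements << {
    statement:  query.to_cypher,
    parameters: query.send(:merge_params) # private method, hence #send
  }

  next unless statements.size >= 1000     # batch ~1000 statements per request

  flush_statements(statements)
  statements = []
end

flush_statements(statements) unless statements.empty?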
If you're implementing a solution which is going to make queries like the one above, where you have multiple MERGE clauses, you'll probably want to profile your queries to make sure that you are avoiding the Eager operator (that post is a bit old and newer versions of Neo4j have alleviated some of the need for care, but you can still look for Eager in your PROFILE output).
Also worth a look: Max De Marzi's post on Scaling Cypher Writes

Error: "include is invalid for non-ParseObjects" (using parse-osx-library-1.7.5)

I have a Meal object that stores pointers to n created objects "FoodInfo" using the key "MealItems".
When I query for the meal, I take advantage of [query includeKey:@"MealItems"] to fetch the items pointed to while fetching the "Meal".
This works swimmingly if the objects are created while online (ie. all are stored in the cloud db).
However, since I cannot assume access to the cloud at all times for this app, I am now trying to enable the local datastore, so I've changed my queries to use:
[query fromLocalDatastore];
and I've changed all of my objects' save methods to pinInBackgroundWithBlock, followed by saveInBackgroundWithBlock (assuming the local pin succeeds), followed by saveEventually (if that save fails).
To test this, I:
* turned off wifi
* ran the code to create a meal and then add newly created foods to it - this works with no error codes
* ran a report that then queries for the meal just created - this fails with the following:
Error: Error Domain=Parse Code=121
"include is invalid for non-ParseObjects" UserInfo=0x60800007f400 {
error=include is invalid for non-ParseObjects,
NSLocalizedDescription=include is invalid for non-ParseObjects,
code=121
} {
NSLocalizedDescription = "include is invalid for non-ParseObjects";
code = 121;
error = "include is invalid for non-ParseObjects";
}
Is this scenario not supported?
When I re-enable wifi, the meal is successfully added to the online db, but the query failure still happens when I run the query with the includeKey locally.
Am I missing something here? I'm quite surprised to see this failing. It seems like a really basic feature that should work whether local or cloud based.
Parse objects are not created until you save them. Try using saveEventually first before using pinInBackgroundWithBlock.

Google Spreadsheet API - returns remote 500 error

Has anyone battled 500 errors with the Google spreadsheet API for google domains?
I have copied the code in this post (2-legged OAuth): http://code.google.com/p/google-gdata/source/browse/trunk/clients/cs/samples/OAuth/Program.cs, substituted in my domain's API id and secret and my own credentials, and it works.
So it appears my domain setup is fine (at least for the contacts/calendar apis).
However, when I swap the code out for a new Spreadsheet service/query instead, it fails: the remote server returned an internal server error (500).
var ssq = new SpreadsheetQuery();
ssq.Uri = new OAuthUri("https://spreadsheets.google.com/feeds/spreadsheets/private/full", "me", "mydomain.com");
ssq.OAuthRequestorId = "me@mydomain.com"; // can do this instead of using OAuthUri for queries
var feed = ssservice.Query(ssq); //boom 500
Console.WriteLine("ss:" + feed.Entries.Count);
I am befuddled.
I had to make sure to use the "correct" class:
not
//using SpreadsheetQuery = Google.GData.Spreadsheets.SpreadsheetQuery;
but
using SpreadsheetQuery = Google.GData.Documents.SpreadsheetQuery;
stinky-malinky
It seems you need the GDocs API to query for spreadsheets, but the Spreadsheet API to query inside a spreadsheet - and nowhere on the internet until now will you find this undeniably important titbit. Google sucks hard on that one.
