I'm trying to use the YQL Console to get currency rates. The YQL statement is:
select * from yahoo.finance.xchange where pair in ("EURUSD","GBPUSD")
The console results give me:
{
"query": {
"count": 2,
"created": "2017-10-26T02:42:44Z",
"lang": "en-US",
"results": {
"rate": [
{
"id": "EURUSD",
"Name": "EUR/USD",
"Rate": "1.1829",
"Date": "10/26/2017",
"Time": "3:42am",
"Ask": "1.1829",
"Bid": "1.1829"
},
{
"id": "GBPUSD",
"Name": "GBP/USD",
"Rate": "1.3269",
"Date": "10/26/2017",
"Time": "3:42am",
"Ask": "1.3269",
"Bid": "1.3269"
}
]
}
}
}
but the REST query gives me this error:
{"error":{"lang":"en-US","diagnostics":{"cache":{"execution-start-time":"0","execution-stop-time":"0","execution-time":"0","method":"GET","type":"MEMCACHED","content":"ENV.queryyahooapiscomproductionsg3.store://datatables.org/alltableswithkeys.15a841ff462a38eb6175e73b4dc747ef"},"env":"Failed to read from storage: store://datatables.org/alltableswithkeys: Invalid store url: store://datatables.org/alltableswithkeys","warning":"Invalid environment specified: store://datatables.org/alltableswithkeys"},"description":"No definition found for Table yahoo.finance.xchange"}}
yahoo.finance.xchange is a community table. In the YQL console there should be a checkbox labeled Show Community Tables; select that and you should have access to it. The REST call here works (a sketch is below). Let me know if you have any questions.
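For reference, a minimal sketch of that REST call in Python, assuming the public YQL endpoint; the env parameter pointing at the community-tables store is the piece that was missing from the failing request:

import requests

params = {
    "q": 'select * from yahoo.finance.xchange where pair in ("EURUSD","GBPUSD")',
    "format": "json",
    # the env parameter is what the "Show Community Tables" checkbox adds for you
    "env": "store://datatables.org/alltableswithkeys",
}
response = requests.get("https://query.yahooapis.com/v1/public/yql", params=params)
print(response.json()["query"]["results"]["rate"])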
I'm trying to create recurring events in my Outlook calendar, and for that I'm following the instructions from here.
My payload looks like this:
{
"Subject": "Blocked By DEV",
"Body": {},
"Start": {
"DateTime": "2022-08-30T23:30:00",
"TimeZone": "Asia/Calcutta"
},
"End": {
"DateTime": "2022-08-30T23:45:00",
"TimeZone": "Asia/Calcutta"
},
"Recurrence": {
"pattern": {
"type": "WEEKLY",
"interval": 1,
"daysOfWeek": [
"Monday",
"Tuesday"
]
},
"range": {
"type": "numbered",
"startDate": "2022-08-30",
"numberOfOccurences": 3
}
}
}
I'm trying to hit the /me/events endpoint from Graph Explorer and I'm getting this 400 error:
{
"error": {
"code": "UnableToDeserializePostBody",
"message": "were unable to deserialize "
}
}
Is there anything wrong with my payload? Is there any way to have more details about the failure?
Just spotted the culprit: numberOfOccurences was supposed to be numberOfOccurrences.
And if someone asks why I ended up making that mistake: I just referred to the docs, and it seems the official documentation has the same typo.
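For completeness, the corrected range block from the payload above (only the key spelling changes):

"range": {
    "type": "numbered",
    "startDate": "2022-08-30",
    "numberOfOccurrences": 3
}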
I am writing a GraphQL resolver that retrieves all vertices connected by a particular edge using the following query (created returns vertices with label person):
software {
created {
name
}
}
Which would resolve to the following Gremlin Query for each software node found:
g.V().hasLabel('software').has('name', 'ripple').in('created')
This returns a result that includes all properties of the object:
{
"result": [
{
"#type": "d",
"#rid": "#24:0",
"#version": 6,
"#class": "person",
"in_knows": [
"#35:0"
],
"name": "josh",
"out_created": [
"#32:0",
"#33:0"
],
"age": 32,
"#fieldTypes": "in_knows=g,out_created=g"
}
],
"dbStats": {
...
}
}
I realize that this will fall foul of GraphQL's N+1 query problem, so I'm trying to batch queries together using the DataLoader pattern. (I'm also hoping to do property selections, so I'm not asking the database to return too much info.)
So I'm trying to craft a query like so:
g.V().union(
__.hasLabel('software').has('name', 'ripple').
project('parent', 'child').by('id').
by(__.in('created').fold()),
__.hasLabel('software').has('name', 'lop').
project('parent', 'child').by('id').
by(__.in('created').fold())
)
But this results in the following, where the props are missing and it just includes the ids of the vertices I want:
{
"result": [
{
"parent": "ripple",
"child": [
"#24:0"
]
},
{
"parent": "lop",
"child": [
"#22:0",
"#23:0",
"#24:0"
]
}
],
"dbStats": {
...
}
}
My question is: how can I have the Gremlin query return all of the props for the found vertices and none of the other props? And should I even be doing batching this way?
For anyone else reading: the query I was trying to write wouldn't work because the TraversalSet created in the .by(__.in('created').fold()) step can't be cast from a List to an ElementMap, as the stream cardinality wouldn't be enforced. (You can only have one record per row, I think?)
My working query duplicates the keys for each row and specifies the props needed (the query below is fine for Gremlin 3.3 as used in ODB; on a newer Gremlin that has elementMap() you can simply replace the last by() step with by(elementMap('name', 'age'))):
g.V().union(
__.hasLabel('software').has('name', 'ripple').
as('parent').
in('created').as('child').
select('parent', 'child').
by(values('name')).
by(properties('id', 'name', 'age').
group().by(__.key()).
by(__.value())),
__.hasLabel('software').has('name', 'lop').
as('parent').
in('created').as('child').
select('parent', 'child').
by(values('name')).
by(properties('id', 'name', 'age').
group().by(__.key()).
by(__.value()))
)
So that you get a result like this:
{"data": [
{
"parent": "ripple",
"child": {
"id": 5717,
"name": "josh",
"age": 32
}
},
{
"parent": "lop",
"child": {
"id": 5709,
"name": "peter",
"age": 35
}
},
{
"parent": "lop",
"child": {
"id": 5713,
"name": "marko",
"age": 29
}
},
{
"parent": "lop",
"child": {
"id": 5717,
"name": "josh",
"age": 32
}
}
]
}
Which would allow you to create a lookup where you concat all results for "lop" and "ripple" into arrays.
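As a rough sketch of that lookup in Python (function and variable names here are illustrative, not part of any driver or DataLoader API), the batch function can fold the flat rows into one child list per parent:

from collections import defaultdict

def group_children_by_parent(rows):
    # rows is assumed to be the "data" array returned by the union query above
    children = defaultdict(list)
    for row in rows:
        children[row["parent"]].append(row["child"])
    return children

# group_children_by_parent(result["data"])["lop"] then holds the three child property maps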
I have updated a document in Elasticsearch. After the update, I fetch the same document by its ID, and it gives me the following response:
{
"_index": "b123456",
"_type": "documents",
"_id": "bltde56dd11ba998bab",
"_version": 3,
"found": true,
"_source": {
"title": "index.json",
"url": "/index1",
"tags": [],
"created_at": "2018-06-19T05:02:38.174Z",
"updated_at": "2018-06-19T05:07:57.155Z",
"version": 1,
"fields": [{
"uid": "fname",
"value": "john"
},
{
"uid": "lname",
"value": "test"
}
],
"class": "first"
}
}
Then I use update_by_query to update the document, sending the following request:
{
"script": {
"source": "ctx._source.title = params.title;ctx._source.url = params.url;ctx._source.created_at = params.created_at;ctx._source.updated_at = params.updated_at;ctx._source.version = params.version;ctx._source.fields = params.fields",
"params": {
"title": "Demo title",
"url": "/demo",
"created_at": "2018-06-19T05:02:38.174Z",
"updated_at": "2018-06-19T05:07:57.155Z",
"version": 2,
"fields": [{
"uid": "fname",
"value": "vicky"
},
{
"uid": "lname",
"value": "test"
}
]
}
},
"query": {
"bool": {
"must": [{
"term": {
"_id": "bltde56dd11ba998bab"
}
},
{
"range": {
"version": {
"lt": 2
}
}
}
]
}
}
}
But it is giving me status code 409 and the following error:
[documents][bltde56dd11ba998bab]: version conflict, current version
[3] is different than the one provided [2]
My document also contains a custom version key.
Can anyone help me with this?
For the sake of posterity, I'll submit an answer to this old question. The issue is occurring because ElasticSearch's internal version value in the _version field is actually 3 in your initial response, not 1.
You are then trying to update the document using external version value 2. Elastic sees this as a conflict, as internally it thinks version 3 is the most up-to-date version, not version 1. Effectively, something has caused your external version scheme and Elastic's internal version scheme to become out of sync.
Also note, the following parameter should be included in your update calls to indicate that the operation should follow the rules for external versioning as opposed to Elastic's internal versioning scheme.
"version_type":external
There is a subtle but important distinction that needs to be made by specifying this parameter.
With version_type set to external, Elasticsearch will store the
version number as given and will not increment it. Also, instead of
checking for an exact match, Elasticsearch will only return a version
collision error if the version currently stored is greater or equal to
the one in the indexing command.
More information on Elasticsearch's versioning can be found in their blog post.
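As a minimal sketch with the Python client, assuming the index and id from the question and that you have the full updated _source at hand, an external-versioned index call would look something like this:

from elasticsearch import Elasticsearch

es = Elasticsearch()

updated_source = {"title": "Demo title", "url": "/demo", "version": 4}  # illustrative; send the full updated _source

es.index(
    index="b123456",
    doc_type="documents",      # only needed on pre-7.x clusters
    id="bltde56dd11ba998bab",
    body=updated_source,
    version=4,                 # must be greater than the currently stored version (3)
    version_type="external",
)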
For me, it was the document id. I am using the Node.js Elasticsearch client; when I create a document I need to pass a document id,
and I was getting a version conflict because I was trying to create multiple documents with the same id.
await elasticWrapper.client.create({
index: ElasticIndexs.Payments,
id: data.id, // <-- id should be unique
body: {
...data,
},
});
Suppose I have these documents in a Things table:
{
"name": "Cali",
"state": "CA"
},
{
"name": "Vega",
"state": "NV",
},
{
"name": "Wash",
"state": "WA"
}
My UI is a state-picker where the user can select multiple states. I want to display the appropriate results. The SQL equivalent would be:
SELECT * FROM Things WHERE state IN ('CA', 'WA')
I have tried:
r.db('test').table('Things').filter(r.expr(['CA', 'WA']).contains(r('state')))
but that doesn't return anything, and I don't understand why it doesn't work.
This works for getting a single state:
r.db('test').table('Things').filter(r.row('state').eq('CA'))
r.db('test').table('Things').filter(r.expr(['CA', 'WA']).contains(r.row('state')))
seems to be working in some versions and returns
[
{
"id": "b20cdcab-35ab-464b-b10b-b2f644df73e6" ,
"name": "Cali" ,
"state": "CA"
} ,
{
"id": "506a4d1f-3752-409a-8a93-83385eb0a81b" ,
"name": "Wash" ,
"state": "WA"
}
]
Anyway, you can use a function instead of r.row:
r.db('test').table('Things').filter(function(row) {
return r.expr(['CA', 'WA']).contains(row('state'))
})
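If the table has (or can be given) a secondary index on state, get_all is a closer analogue of SQL's IN and avoids a full table scan. A sketch in Python, assuming the index doesn't exist yet (the JavaScript form is analogous):

import rethinkdb as r  # classic driver API; newer drivers use "from rethinkdb import RethinkDB"

conn = r.connect("localhost", 28015)

# one-time setup: create and wait for a secondary index on "state"
r.db("test").table("Things").index_create("state").run(conn)
r.db("test").table("Things").index_wait("state").run(conn)

# the IN-style lookup itself
cursor = r.db("test").table("Things").get_all("CA", "WA", index="state").run(conn)
print(list(cursor))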
I am developing a platform with a JSON API using Python Flask. In some cases I need to join several tables. How to join tables with an array of IDs gave me some guidance, but I need a solution beyond it.
Let's assume we have three tables for a messaging app.
Accounts
Conversations
Messages
Message Readers
Accounts table snippet
{
"id": "account111",
"name": "John Doe",
},
Conversations table snippet
{
"id": "conversation111",
"to": ["account111", "account222", "account333"], // accounts who are participating the conversation
"subject": "RethinkDB",
}
Messages table snippet
{
"id": "message111",
"text": "I love how RethinkDB does joins.",
"from": "account111", // accounts who is the author of the message
"conversation": "conversation111"
}
Message Readers table snippet
{
"id": "messagereader111",
"message": "message111",
"reader": "account111",
}
My question is "What's the magic query to get the document below when I receive a get request on an account document with id="account111"?"
{
"id": "account111",
"name": John Doe,
"conversations": [ // 2) Join account table with conversations
{
"id": "conversation111",
"name": "RethinkDB",
"to": [ // 3) Join conversations table with accounts
{
"id": "account111",
"name": "John Doe",
},
{
"id": "account222",
"name": "Bobby Zoya",
},
{
"id": "account333",
"name": "Maya Bee",
},
]
"messages": [ // 4) Join conversations with messages
{
"id": "message111",
"text": "I love how RethinkDB does joins.",
"from": { // 5) Join messages with accounts
"id": "account111",
"name": "John Doe",
},
"message_readers": [
{
"name": "John Doe",
"id": "account111",
}
],
},
],
},
],
}
Any guidance or advice would be fantastic. JavaScript or Python code would be awesome.
I had a hard time understanding what you want (you have multiple documents with the id 111), but I think this is the query you are looking for.
Python query:
r.table("accounts").map(lambda account:
account.merge({
"conversations": r.table("conversations").filter(lambda conversation:
conversation["to"].contains(account["id"])).coerce_to("array").map(lambda conversation:
conversation.merge({
"to": conversation["to"].map(lambda account:
r.table("accounts").get(account)).pluck(["id", "name",]).coerce_to("array"),
"messages": r.table("messages").filter(lambda message:
message["conversation"] == conversation["id"]).coerce_to("array").map(lambda message:
message.merge({
"from": r.table("accounts").get(message["from"]).pluck(["id", "name",]),
"readers": r.table("message_readers").filter(lambda message_reader:
message["id"] == message_reader["message"]).coerce_to("array").order_by(r.desc("created_on")),
})).order_by(r.desc("created_on"))
})).order_by(r.desc("modified_on"))
})).order_by("id").run(db_connection)