Issues with Quickbase API call

I am using the Quickbase JSON API documentation below:
Quickbase API
I am trying to update records with Quickbase via recordId as per below, and it's working fine:
{
  "to": "my-table-id-goes-here",
  "data": [
    {
      "6": {
        "value": "nancy more is the value to be updated"
      },
      "3": {
        "value": "recordId_to_be_used_to_make_updates"
      }
    }
  ]
}
My issue: I want to update records where email and userid are equal to certain values.
E.g. in normal SQL it would be something like "update mytable_name set name = 'nancy more' where email = 'nancy#gmail.com' and userid = 70".
Is it possible with Quickbase? Is there a way to achieve that based on the code above, assuming the email field is 7 and the userid field is 8 or whatever?

The end result is possible but not through a single API call. The insert/update records API call for Quick Base only updates records when the key field is included in the record payload (the key field is the record ID by default but can be changed to another field in the table). If you don't already know the value of the key field, you'll need to query for the matching records first and then use the returned record ID/key field to perform that update.
For example, you could query for records where email is "nancy#gmail.com" and userid is 70:
POST https://api.quickbase.com/v1/records/query
QB-Realm-Hostname: host
Authorization: QB-USER-TOKEN userToken
Content-Type: application/json
{
  "from": "tableId",
  "where": "{7.EX.'nancy#gmail.com'}AND{8.EX.70}"
}
You can then use the IDs of the returned records to perform your update. How you go about reading the response and making the upsert request will depend on the language you're using.
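For example, a rough Python sketch of that two-step flow using the requests library (realm hostname, user token, and table ID are placeholders, and field IDs follow the question: 3 = record ID, 6 = name, 7 = email, 8 = userid):
import requests

HEADERS = {
    "QB-Realm-Hostname": "host",
    "Authorization": "QB-USER-TOKEN userToken",
    "Content-Type": "application/json",
}
TABLE_ID = "my-table-id-goes-here"

# Step 1: query for the record IDs (field 3) of the matching records
query = {
    "from": TABLE_ID,
    "select": [3],
    "where": "{7.EX.'nancy#gmail.com'}AND{8.EX.70}",
}
resp = requests.post("https://api.quickbase.com/v1/records/query",
                     headers=HEADERS, json=query)
record_ids = [r["3"]["value"] for r in resp.json()["data"]]

# Step 2: upsert, using each returned record ID as the key field
upsert = {
    "to": TABLE_ID,
    "data": [{"3": {"value": rid},
              "6": {"value": "nancy more"}}
             for rid in record_ids],
}
requests.post("https://api.quickbase.com/v1/records",
              headers=HEADERS, json=upsert)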

Related

How To Update Entire Row In Supabase Table? (Getting 400 Error)

I am trying to make an input form where the user can change several values in one row simultaneously, which get submitted to the database using the update() method. The values are read from the Supabase table into several input fields as defaultValues, which the user can edit and later submit the form to update the values in the Supabase instance of the table.
The input values received from the user are stored in an object that has the shape:
inputFields = {
  "name": "Piramal Glassed", // NEW
  "email": "piramal#glass.com", // NEW
  "contact": "98203", // NEW
  "pin": 400066,
  "city": "Mumbai",
  "state": "Maharashtra",
  "country": "India",
}
The values marked as // NEW are the ones that have been changed by the user and need to be updated in the corresponding row of the Supabase table.
I keep getting a 400 error. RLS is disabled for now. This is the function I am using to send the data back to the Supabase table.
const { data, error } = await supabase
.from("company")
.update([inputFields.values])
.match([originalFields.values]); // Contains Original Values Of Row That Need To Be Replaced (same shape as inputFields.values)
};
What am I doing wrong here?
It looks like your match filter doesn't work.
I would suggest trying to match the row by its ID, since you want to update only one row. As written, you are trying to update all rows that match this data, and that might be causing the issue. I am not sure if Supabase supports batch updating like this at the moment.
This is something I'm using in my apps whenever I want to update data for a single row, and I don't have any issues with it. I would suggest trying this, if the question is still relevant. 😊
await supabase.from("company").update(inputFields).match({ id: company.id });
or
await supabase.from("company").update(inputFields).eq("id", company.id)
You could also pass only the new values to .update() so you don't update the whole row, only the data that has changed.

Elasticsearch dsl python, natural key for document?

I have a document which looks like
{
  "date_at": "2020-10-01",
  "foo_id": 3,
  "value": 5
}
Once date_at and foo_id are given, the document is uniquely identified.
So I'd like to do something like
MyDocument.update_or_create(date_at=date_at, foo_id=foo_id, {value: some_value})
If a document with the given date_at and foo_id exists, update it; otherwise create it.
In order to update or create a document (what ES calls "upsert"), you need to go through the update API and that API requires a document ID.
Selecting a document with a specific date_at and foo_id would be the job of the update by query API but that API doesn't support "upserting" (i.e. create or update).
So, if your documents are uniquely defined by date_at and foo_id, I'd suggest giving them IDs that contain those two values, like for instance 2020-10-01:3. Doing so would allow you to leverage the update API like this:
POST your-index/_update/2020-10-01:3
{
  "doc": {
    "value": "some_value",
    "date_at": "2020-10-01",
    "foo_id": 3
  },
  "doc_as_upsert": true
}
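Since the question is about the Python client, here is a minimal sketch of that upsert using the low-level elasticsearch package (a 7.x-style client is assumed; index name and values are placeholders):
from elasticsearch import Elasticsearch

es = Elasticsearch()

date_at, foo_id, some_value = "2020-10-01", 3, 5
doc_id = f"{date_at}:{foo_id}"  # composite natural key used as the document ID

# Update the document if it exists, otherwise create it
es.update(
    index="your-index",
    id=doc_id,
    body={
        "doc": {"date_at": date_at, "foo_id": foo_id, "value": some_value},
        "doc_as_upsert": True,
    },
)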
An alternative approach would be creating daily indices and using foo_id as document id. Then upserting would be as simple as:
PUT your-index-2020-10-01/_doc/3
{
  "value": "some_value",
  "date_at": "2020-10-01",
  "foo_id": 3
}
foo_id would always be unique within the index.
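The daily-index variant is just an index call with an explicit ID; a sketch under the same assumptions (the index API replaces any existing document with that ID):
from elasticsearch import Elasticsearch

es = Elasticsearch()
date_at, foo_id, some_value = "2020-10-01", 3, 5

# One index per day; foo_id alone identifies the document within it
es.index(index=f"your-index-{date_at}", id=foo_id,
         body={"value": some_value, "date_at": date_at, "foo_id": foo_id})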

Using numerics as type in Elasticsearch

I am going to store transaction logs in Elasticsearch. I am new to the ELK stack and not sure how I should implement this. My transaction prints lines of log sequentially (upserts), and instead of logging these to a file I want to store them in Elasticsearch and later query the logs by the transactionId I have created.
Normally the URI for querying will be
/bookstore/books/_search
but in my case it must be like
/transactions/transactionId/_search
because I don't want to store the lines as an array attached to a single transaction record, but I am not sure if it is good practice to create a new type at the beginning of every transaction. I am not even sure if this is possible.
Can you give advice about storing this transaction data in Elasticsearch?
If you want to query with a URI like /transactions/transactionId/_search, that means you are planning to create a new type every time a new transactionId comes in. Apart from this being a bad design, it's not even possible to have more than one type in an index (post version 5.x, I believe), and types have been completely removed since version 7.x.
One workaround is to use the transactionId itself as the document ID at creation time. Then you can get the log associated with one transactionId by querying GET transactions/_doc/transactionId (read about the length restrictions on document IDs, though). But this might cause another issue: there can be multiple log lines for the same transaction, so each log entry with the same ID would simply overwrite the previous entry.
The best solution here will be to change how you query those records.
For this, you can put transactionId as one of the fields in the JSON body, along with a created timestamp at insertion time (let ES create the documents with auto-generated IDs), and then query all logs associated with a transaction like:
POST transactions/_search
{
  "sort": [
    {
      "createdDate": {
        "order": "asc"
      }
    }
  ],
  "query": {
    "bool": {
      "must": [
        {
          "term": {
            "transactionId.keyword": "<transaction id>"
          }
        }
      ]
    }
  }
}
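A minimal Python sketch of that approach, assuming the official elasticsearch package and made-up field values (the transactions index and field names mirror the query above):
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch()

# Index each log line with its transactionId and a timestamp; let ES generate the _id
es.index(index="transactions", body={
    "transactionId": "tx-123",
    "createdDate": datetime.now(timezone.utc).isoformat(),
    "message": "first line of the transaction log",
})

# Fetch all lines for one transaction, oldest first
result = es.search(index="transactions", body={
    "sort": [{"createdDate": {"order": "asc"}}],
    "query": {"bool": {"must": [{"term": {"transactionId.keyword": "tx-123"}}]}},
})
for hit in result["hits"]["hits"]:
    print(hit["_source"]["message"])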
Hope this helps.

How to implement a partial resource rest api?

In order to limit the size of my REST API responses, I want to implement the Google performance tip of using the fields query string parameter to return partial resources.
If I have a full response for GET https://myapi.com/v1/users:
[
  {
    "id": 12,
    "first_name": "Angie",
    "last_name": "Smith",
    "address": {
      "street": "1122 Something St.",
      "city": "A city"
      ..and so on...
    }
  },
  ... and so on
]
I will be able to filter it with GET https://myapi.com/v1/users?fields=first_name:
[
  {
    "id": 12,
    "first_name": "Angie"
  },
  ... and so on
]
The concept is pretty easy to understand, but I can't find an easy way to implement it!
My API resources are all designed the same way:
use query string parameters for filtering, sorting, paging.
call a service with those parameters to build a SQL request (only the WHERE condition, the ORDER BY clause and the LIMIT are dynamic)
use a converter to format data back to JSON
But when using this new fields parameter, what do I need to do? Where do I filter the data?
Do I filter only the JSON output? Then I will still make (in that example) an unwanted JOIN query on the address table and fetch unwanted fields from the users table.
Do I build a dynamic SQL query to fetch exactly the requested fields, adding the JOIN only when the end user needs it? Then the converter will have to be smart enough to convert only the fields actually present in the SQL result.
In my opinion, this second solution will produce extremely dynamic, extremely complex code that is difficult to maintain.
So, how do you implement such a REST API with a partial resource feature? What are your best practices in that case?
(I'm a PHP developer, but I don't think it's relevant for that question)
If your backend is doing
GET https://myapi.com/v1/users
which results in SQL:
select * from users
which you then turn into JSON, can you not just do:
GET https://myapi.com/v1/users?fields=first_name,surname,email
and get all the required fields (rough idea of a PHP implementation):
$fields = explode(",", $_GET["fields"]); // split() was removed in PHP 7
$allowed = [];
foreach ($fields as $field) {
    // check each field against a whitelist first to avoid SQL injection
    if (checkField($field)) {
        $allowed[] = $field;
    }
}
$sql = "select " . implode(",", $allowed) . " from users";
to build SQL like:
select firstname,surname,email from users
and turn that limited dataset into JSON?

Changing data in every document

I am working on an application that has messages and I want to store all of them. But my problem is that each message has a from block containing a first name and last name, which could change. So if, for example, my JSON was:
{
  "subject": "Hello!",
  "message": "Hello there",
  "from": {
    "user_id": 1,
    "firstname": "George",
    "lastname": "Lastgeorge"
  }
}
The user could potentially change their last name or even first name, which would require looping over every record in Elasticsearch and updating every one with that user_id.
Is there a better way to go about doing this?
I feel you should use a parent mapping.
Keep the user info as the parent, with userID as the key.
/index/userinfo/userID
{
  "name": "George",
  "last": "Lastgeorge"
}
Next, you need to maintain each chat message as a child document and map its parent to the userinfo type.
This way, whenever you want to make a change to the user information, you simply make the change in the userinfo type.
With this feature in place, you can search your messages based on user information, or search users based on chat records.
Link - http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/parent-child.html
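As a rough sketch of that layout in Python, assuming a pre-6.x cluster and a matching elasticsearch client where the legacy _parent mapping is still supported (index, type, and field names are made up; on newer versions the join field type replaces this):
from elasticsearch import Elasticsearch

es = Elasticsearch()

# The child type "message" declares "userinfo" as its parent type
es.indices.create(index="chat", body={
    "mappings": {
        "userinfo": {},
        "message": {"_parent": {"type": "userinfo"}},
    }
})

# Parent document: one per user, keyed by userID
es.index(index="chat", doc_type="userinfo", id=1,
         body={"firstname": "George", "lastname": "Lastgeorge"})

# Child document: each message points at its parent user
es.index(index="chat", doc_type="message", parent=1,
         body={"subject": "Hello!", "message": "Hello there"})

# A name change only touches the single parent document
es.update(index="chat", doc_type="userinfo", id=1,
          body={"doc": {"lastname": "Newlast"}})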
