When to disable AWS Amplify or AppSync conflict resolution - graphql

I noticed that in the new Amplify GraphQL transformer v2, AppSync Conflict Resolution is enabled for all tables by default (https://docs.aws.amazon.com/appsync/latest/devguide/conflict-detection-and-sync.html). Will it do any harm if I disable conflict resolution for my API?
I'm building a Yelp-like rating app, and if two clients try to mutate the same object, I think it's fine to just let them mutate concurrently and have the later request override the earlier one. So I don't really understand what this conflict resolution is useful for.
I also find it really inconvenient that I have to pass in a _version field when mutating an object, and that deletes are not immediate: the item just gets its _deleted field set to true and is scheduled for removal after the TTL expires.
Thanks very much!
Pro tip: to disable conflict resolution in Amplify, run amplify update api and you will be prompted with an option to disable it.
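For context, here is a minimal sketch of what a versioned update looks like from the client while conflict detection is still on, assuming an Amplify JS v5-style API client and a hypothetical Restaurant model:

    // Hypothetical model and fields; _version is the only point of interest here.
    import { API, graphqlOperation } from 'aws-amplify';

    const updateRestaurant = /* GraphQL */ `
      mutation UpdateRestaurant($input: UpdateRestaurantInput!) {
        updateRestaurant(input: $input) {
          id
          rating
          _version   # the server bumps and returns the new version
          _deleted
        }
      }
    `;

    async function rateRestaurant(id: string, rating: number, lastSeenVersion: number) {
      // The mutation must carry the _version the client last read; a stale version
      // produces a conflict error instead of a silent overwrite.
      return API.graphql(
        graphqlOperation(updateRestaurant, {
          input: { id, rating, _version: lastSeenVersion },
        })
      );
    }

Disabling conflict resolution gives you plain last-writer-wins semantics and removes the _version/_deleted bookkeeping, but note that Amplify DataStore's offline/delta sync relies on this versioning being enabled.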

Versioned Data Sources
AWS AppSync currently supports versioning on DynamoDB data sources. Conflict Detection, Conflict Resolution, and Sync operations require a Versioned data source. When you enable versioning on a data source, AWS AppSync will automatically:
Enhance items with object versioning metadata.
Record changes made to items with AWS AppSync mutations to a Delta table.
Maintain deleted items in the Base table with a “tombstone” for a configurable amount of time.
Versioned Data Source Configuration
When you enable versioning on a DynamoDB data source, you specify the following fields:
BaseTableTTL
The number of minutes to retain deleted items in the Base table with a “tombstone” - a metadata field indicating that the item has been deleted. You can set this value to 0 if you want items to be removed immediately when they are deleted. This field is required.
DeltaSyncTableName
The name of the table where changes made to items with AWS AppSync mutations are stored. This field is required.
DeltaSyncTableTTL
The number of minutes to retain items in the Delta table. This field is required.
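As a rough illustration (not copied from the docs above), here is how those three fields might be supplied when creating a versioned DynamoDB data source with the AWS SDK for JavaScript v3; the API id, role ARN, table names, and TTL values are placeholders:

    import { AppSyncClient, CreateDataSourceCommand } from "@aws-sdk/client-appsync";

    const appsync = new AppSyncClient({ region: "us-east-1" });

    await appsync.send(new CreateDataSourceCommand({
      apiId: "abcdefghijklmnopqrstuvwxyz",        // placeholder
      name: "CommentsTable",
      type: "AMAZON_DYNAMODB",
      serviceRoleArn: "arn:aws:iam::123456789012:role/AppSyncDynamoDBRole", // placeholder
      dynamodbConfig: {
        tableName: "Comments",
        awsRegion: "us-east-1",
        versioned: true,                          // enables Conflict Detection / Sync
        deltaSyncConfig: {
          baseTableTTL: 43200,                    // minutes to keep tombstones in the Base table
          deltaSyncTableName: "ChangeLog",        // the Delta table
          deltaSyncTableTTL: 1440,                // minutes to keep entries in the Delta table
        },
      },
    }));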
Delta Sync Table
AWS AppSync currently supports Delta Sync Logging for mutations using PutItem, UpdateItem, and DeleteItem DynamoDB operations.
When an AWS AppSync mutation changes an item in a versioned data source, a record of that change will be stored in a Delta table that is optimized for incremental updates. You can choose to use different Delta tables (e.g. one per type, one per domain area) for other versioned data sources or a single Delta table for your API. AWS AppSync recommends against using a single Delta table for multiple APIs to avoid the collision of primary keys.
The schema required for this table is as follows:
ds_pk
A string value that is used as the partition key. It is constructed by concatenating the Base data source name and the ISO8601 format of the date the change occurred. (e.g. Comments:2019-01-01)
ds_sk
A string value that is used as the sort key. It is constructed by concatenating the ISO8601 format of the time the change occurred, the primary key of the item, and the version of the item. The combination of these fields guarantees uniqueness for every entry in the Delta table (e.g. for a time of 09:30:00, an ID of 1a, and a version of 2, this would be 09:30:00:1a:2)
_ttl
A numeric value that stores the timestamp, in epoch seconds, when an item should be removed from the Delta table. This value is determined by adding the DeltaSyncTableTTL value configured on the data source to the moment when the change occurred. This field should be configured as the DynamoDB TTL Attribute.
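To make the key construction concrete, here is an illustrative sketch (not AppSync code, just the same string-building rules described above) of how a Delta table entry's keys could be derived:

    // Illustrative only: mirrors the documented key format for Delta table entries.
    function deltaTableKeys(
      dataSourceName: string,          // e.g. "Comments"
      changedAt: Date,                 // when the mutation happened (UTC)
      itemId: string,                  // primary key of the changed item, e.g. "1a"
      version: number,                 // item version after the change, e.g. 2
      deltaSyncTableTTLMinutes: number
    ) {
      const iso = changedAt.toISOString();       // e.g. "2019-01-01T09:30:00.000Z"
      const date = iso.slice(0, 10);             // "2019-01-01"
      const time = iso.slice(11, 19);            // "09:30:00"
      return {
        ds_pk: `${dataSourceName}:${date}`,      // "Comments:2019-01-01"
        ds_sk: `${time}:${itemId}:${version}`,   // "09:30:00:1a:2"
        _ttl: Math.floor(changedAt.getTime() / 1000) + deltaSyncTableTTLMinutes * 60,
      };
    }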
The IAM role configured for use with the Base table must also contain permission to operate on the Delta table; for example, for a Base table called Comments and a Delta table called ChangeLog, the role's policy must grant the same DynamoDB actions on both tables.
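A minimal sketch of what such a policy could look like (the exact action list, account ID, and region are assumptions rather than something copied from the AWS docs):

    // Assumed policy shape: standard DynamoDB CRUD/query actions on both tables.
    const deltaSyncPolicy = {
      Version: "2012-10-17",
      Statement: [
        {
          Effect: "Allow",
          Action: [
            "dynamodb:GetItem",
            "dynamodb:PutItem",
            "dynamodb:UpdateItem",
            "dynamodb:DeleteItem",
            "dynamodb:Query",
            "dynamodb:Scan",
          ],
          Resource: [
            "arn:aws:dynamodb:us-east-1:123456789012:table/Comments",
            "arn:aws:dynamodb:us-east-1:123456789012:table/Comments/*",
            "arn:aws:dynamodb:us-east-1:123456789012:table/ChangeLog",
            "arn:aws:dynamodb:us-east-1:123456789012:table/ChangeLog/*",
          ],
        },
      ],
    };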

Related

AWS DMS with CDC. The update records only include the updated field. How to include all?

We recently started the process of continuous migration (initial load + CDC) from an Oracle database on RDS to S3 using AWS DMS. The DB is using LogMiner.
The problem we have detected is that the CDC records of type Update only contain the data that was updated, leaving the rest of the fields empty, so we lose the ability to simply treat the record with the latest timestamp as the complete, current version of the row.
Does anyone know if this can be changed, or which part of the DMS or RDS configuration to adjust so that the update records contain all the fields of the row?
Thanks in advance.
Supplemental Logging at the table level may increase what is logged, but it will also increase the total volume of log data written for a given workload.
Many log-based data replication products from various vendors require additional supplemental logging at the table level to ensure that the full row data for updates, with before and after change data, is written to the database logs.
re: https://docs.oracle.com/database/121/SUTIL/GUID-D857AF96-AC24-4CA1-B620-8EA3DF30D72E.htm#SUTIL1582
Pulling data through LogMiner may be possible, but you will need to evaluate if it will scale with the data volumes you need.
DMS full load/CDC also supports Binary Reader, which is a better option than LogMiner. In order to capture updates with all the columns, use "ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS" on the Oracle side.
This will push all the columns of an update record to the endpoint from Oracle RAC/non-RAC DBs. Also, a pointer for CDC: use TRANSACT_ID on the DMS side to generate a unique sequence for each record. Redo volume will be a little higher, but it is what it is; you can keep an eye on it and drop the supplemental logging at the table level if required.
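For what it's worth, here is a sketch of how the Binary Reader switch might be applied to the source endpoint with the AWS SDK for JavaScript v3; the endpoint ARN is a placeholder and the attribute values should be double-checked against the DMS documentation for your Oracle version:

    import {
      DatabaseMigrationServiceClient,
      ModifyEndpointCommand,
    } from "@aws-sdk/client-database-migration-service";

    const dms = new DatabaseMigrationServiceClient({ region: "us-east-1" });

    // Switch the Oracle source endpoint from LogMiner to Binary Reader.
    await dms.send(new ModifyEndpointCommand({
      EndpointArn: "arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE", // placeholder
      ExtraConnectionAttributes: "useLogminerReader=N;useBfile=Y",
    }));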
Cheers!

Hasura GraphQL update trigger repeats many times

My table columns look like name, email, phone, and pin.
I'm using Hasura for collecting user details.
Problem:
I want to hash the pin field using some hashing algorithm. So I decided to have a separate AWS Lambda function that converts the plain pin to a hashed one and writes it back to the same column.
Now I set up a trigger (when the pin gets updated it will trigger the webhook). I successfully update the hashed value in my database. But the problem is that after the Lambda updates the field, Hasura triggers the webhook again. This keeps going until I shut down my Hasura instance.
In the Hasura documentation they mention the following:
In case of UPDATE, the events are delivered only if new data is distinct from old data. The composite type comparison is used to compare the old and new rows. If rows contain columns, which cannot be compared using <> operator, then internal binary representation of rows by Postgres is compared.
However, after the Lambda update the data is the same as the old data, so why does it keep getting called?
I think you should use an Action for this instead of an event trigger. That way, the database only ever stores the hashed pin.
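As a rough sketch of that approach (names are hypothetical: an insertUser action forwarding to a users table, with the Hasura endpoint and admin secret coming from environment variables), the action handler hashes the pin before the row is ever written, so no event trigger or Lambda round trip is needed:

    import { createServer } from "node:http";
    import { randomBytes, scryptSync } from "node:crypto";

    // Hash the pin; scrypt is used here purely as an example algorithm.
    function hashPin(pin: string): string {
      const salt = randomBytes(16).toString("hex");
      return `${salt}:${scryptSync(pin, salt, 64).toString("hex")}`;
    }

    // Action handler: Hasura POSTs { action, input, session_variables } here.
    createServer(async (req, res) => {
      let body = "";
      for await (const chunk of req) body += chunk;
      const { input } = JSON.parse(body); // { name, email, phone, pin }

      // Insert the row ourselves via Hasura's GraphQL API, already hashed.
      const result = await fetch(process.env.HASURA_GRAPHQL_ENDPOINT!, {
        method: "POST",
        headers: {
          "content-type": "application/json",
          "x-hasura-admin-secret": process.env.HASURA_ADMIN_SECRET!,
        },
        body: JSON.stringify({
          query: `mutation ($object: users_insert_input!) {
            insert_users_one(object: $object) { id }
          }`,
          variables: { object: { ...input, pin: hashPin(input.pin) } },
        }),
      });

      const { data, errors } = (await result.json()) as {
        data?: any;
        errors?: { message: string }[];
      };
      res.setHeader("content-type", "application/json");
      if (errors) {
        res.statusCode = 400;
        res.end(JSON.stringify({ message: errors[0].message }));
      } else {
        res.end(JSON.stringify(data.insert_users_one));
      }
    }).listen(3000);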

AppSync update two tables with one mutation

I am a little confused about the best approach for updating two tables with one GraphQL mutation. I am using AWS AppSync.
I have an application where I need a User to be able to register for an Event. Given I am using DynamoDB as the database, I had thought about a denormalized data structure for the User and Event tables. I am thinking of storing an array of brief Event details, such as eventID and title, in the User table, and an array of entrants in the Events table, holding only brief user info, such as userID and name. Firstly, is this a good approach or should I have a third join table to hold these 'relationships'?
If it's OK, I need to update both tables during the signUp mutation, but I am struggling to get my head around how to update two tables with one mutation and, in turn, one request mapping template.
Am I right in thinking I need to use a Pipeline resolver? Or is there another way to do this?
There are multiple options for this:
AppSync supports BatchWrite operations to update multiple DynamoDB tables at the same time
AppSync supports DynamoDB transactions to update multiple DynamoDB tables transactionally at the same time (see the sketch below)
Pipeline resolvers
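To get a feel for the transaction option, this is roughly what the underlying DynamoDB transaction looks like, shown with the AWS SDK for JavaScript v3 purely for illustration (inside AppSync you would express the same thing with the resolver's TransactWriteItems request); the table and attribute names are assumptions:

    import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
    import { DynamoDBDocumentClient, TransactWriteCommand } from "@aws-sdk/lib-dynamodb";

    const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

    // Register a user for an event by appending brief details to both tables
    // in a single all-or-nothing transaction.
    async function signUp(userID: string, userName: string, eventID: string, title: string) {
      await ddb.send(new TransactWriteCommand({
        TransactItems: [
          {
            Update: {
              TableName: "Users",
              Key: { id: userID },
              UpdateExpression: "SET #ev = list_append(if_not_exists(#ev, :empty), :event)",
              ExpressionAttributeNames: { "#ev": "events" },
              ExpressionAttributeValues: { ":event": [{ eventID, title }], ":empty": [] },
            },
          },
          {
            Update: {
              TableName: "Events",
              Key: { id: eventID },
              UpdateExpression: "SET #en = list_append(if_not_exists(#en, :empty), :entrant)",
              ExpressionAttributeNames: { "#en": "entrants" },
              ExpressionAttributeValues: { ":entrant": [{ userID, name: userName }], ":empty": [] },
            },
          },
        ],
      }));
    }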

Simulating server-side group and sort in Azure table storage

I have a table to which I add records whenever the user views a particular resource. The key fields are
Username
Resource
Date Viewed
On a history page of my app, I want to present a set number (e.g., top 5) of the user's most recently viewed Resources, but I want to group by Resource, so that if some were viewed several times, only the most recent of each one is shown.
To be clear, if the raw data looked like this:
UserA | ResourceA | Jan 1
UserA | ResourceA | Jan 2
UserA | ResourceB | Jan 3
UserA | ResourceA | Jan 4
...
...only the bottom two records would appear in the history page.
I know you can get server-side chronological sorting by using a string derived from the date in the PartitionKey or RowKey fields.
I also see that you could enable a crude grouping mechanism by using Username and Resource as your PartitionKey and RowKey fields, and then using Insert-or-Update to maintain a table in which you keep pointers to the most recent value for each combination. However, those records wouldn't be sorted chronologically.
Is there any way to design a set of tables so that I can get the data I need without retrieving tons of extra entities and sorting on the client? I'm willing to get elaborate with the design if that's what it takes. Thanks in advance!
First, I would strongly recommend that you read the excellent Azure Storage Table Design Guide: Designing Scalable and Performant Tables document from the Storage team.
Yes, I would agree that it is somewhat tricky with Azure Table Storage but it is doable :).
What you have to do is keep multiple copies of the same data. Each copy will serve a different purpose.
Considering the scenario where you want to fetch most recent lines for Resource A and B, here's what your entity structure would look like:
PartitionKey: Date/Time (in ticks) reversed, i.e. DateTime.MaxValue.Ticks - LastAccessedDateTime.Ticks. Reversed ticks are required so that the most recent entries show up at the top of the table.
RowKey: Resource name.
AccessDate: Indicates the last access date/time.
User: Name of the user who accessed that resource.
So when you are interested in just finding out most recently used resources, you could start fetching records from the top.
In short, your data storage approach should be primarily governed by how you want to fetch the data. It would even mean you will have to save the same data multiple times.
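Here's a sketch of what writing such an entity might look like with the @azure/data-tables JavaScript SDK (the table name, connection string, and tick helpers are my own placeholders):

    import { TableClient } from "@azure/data-tables";

    // .NET DateTime helpers: 1 tick = 100 ns, Unix epoch = 621355968000000000 ticks.
    const EPOCH_TICKS = 621355968000000000n;
    const MAX_TICKS = 3155378975999999999n; // DateTime.MaxValue.Ticks
    const toTicks = (d: Date) => EPOCH_TICKS + BigInt(d.getTime()) * 10000n;

    const client = TableClient.fromConnectionString(
      process.env.STORAGE_CONNECTION_STRING!, // placeholder
      "RecentResourceAccess"                  // placeholder table name
    );

    // Record an access; the reversed-tick PartitionKey makes the newest rows sort first.
    async function recordAccess(user: string, resource: string, accessedAt: Date) {
      const reversed = MAX_TICKS - toTicks(accessedAt);
      await client.createEntity({
        partitionKey: reversed.toString().padStart(19, "0"),
        rowKey: resource,
        AccessDate: accessedAt,
        User: user,
      });
    }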
UPDATE
As discussed in the comments below, Table Service doesn't directly support Server Side Grouping. This is something that you would need to do on your own. What you could do is create a separate table to store the access counts. As and when the resources are accessed, you basically either insert a new record in that table or update the count for that resource in that table.
Assuming you're always interested in finding out resource access count within a date/time range, here's what your entity structure would look like:
PartitionKey: Date/Time (in Ticks). The precision would depend on your reporting requirement. For example, if you want to maintain access counts by day then your precision would be a day.
RowKey: Resource name.
AccessCount: This field will constantly update as and when a resource is accessed.
LastAccessDateTime: This field will denote when a resource was last accessed.
For updating access counts, I would recommend that you make use of a background process. Basically in this approach, as a resource is accessed you add a message in a queue. This message will have resource name and date/time resource was last accessed. Then have a background process poll this queue and fetch messages. As the messages are received, you first get the current count and last access date/time for that resource. If no records are found, you simply insert a record in this table with count as 1. If a record is found then you compare the date/time from the table with the date/time sent in the message. If the date/time from the table is smaller than the date/time sent in the message, you update both count (increase that by 1) and last access date/time. If the date/time from the table is more than the date/time sent in the message, you only update the count.
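Roughly, the queue consumer's compare-and-update step could look like this (again with the @azure/data-tables SDK; the table name and message shape are assumptions, and the queue plumbing itself is omitted):

    import { TableClient } from "@azure/data-tables";

    const counts = TableClient.fromConnectionString(
      process.env.STORAGE_CONNECTION_STRING!, // placeholder
      "ResourceAccessCounts"                  // placeholder table name
    );

    // Called for each dequeued message; dayBucket is e.g. the day's ticks, per the scheme above.
    async function applyAccess(dayBucket: string, resource: string, accessedAt: Date) {
      try {
        const existing = await counts.getEntity<{ AccessCount: number; LastAccessDateTime: Date }>(
          dayBucket,
          resource
        );
        await counts.updateEntity(
          {
            partitionKey: dayBucket,
            rowKey: resource,
            AccessCount: existing.AccessCount + 1, // always bump the count
            // only move the last-access timestamp forward
            LastAccessDateTime:
              new Date(existing.LastAccessDateTime) < accessedAt
                ? accessedAt
                : existing.LastAccessDateTime,
          },
          "Merge"
        );
      } catch (err: any) {
        if (err?.statusCode === 404) {
          // first access for this resource in this bucket
          await counts.createEntity({
            partitionKey: dayBucket,
            rowKey: resource,
            AccessCount: 1,
            LastAccessDateTime: accessedAt,
          });
        } else {
          throw err;
        }
      }
    }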
Now to find the most accessed resources in a time span, you simply query this table. Assuming there is a limited number of resources (say in the 100s), you can get this information from the table in a single request. Since you're dealing with a small amount of data, you can simply download it on the client side and order it any way you see fit. However, to see the access details for a particular resource, you would have to fetch the detailed data (1000 entities at a time).
Part of your brain might still be unconsciously trapped in relational-table design paradigms; I'm still getting to grips with that issue myself.
Rather than think of table storage as a database table (with the "query-ability" that goes with it) try visualizing it in more simple (dumb) terms.
A design problem I'm working on now is storing financial transaction data, and I want to know what the total $ amount of these transactions is. Because Azure table storage doesn't (yet?) offer aggregate functions I can't simply go .Sum(). To get around that I'm going to:
Sum the values of the transactions in my app before I pass them to Azure.
I'll then pass the result of that sum into Azure as a separate piece of information, called RunningTotal.
Later on I can just return RunningTotal rather than pulling down all the transactions, and I can repeat the process by incrementing the value of RunningTotal each time I get new transactions.
Of course there are risks to this but the app is a personal one so the risk level is low and manageable, at least as a proof-of-concept.
Perhaps you can use a similar approach for the design of your system: compute useful values in advance. I'd almost be using table storage as a long-term cache rather than a database.
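In code the idea is barely more than a precomputed sum (names invented for this sketch):

    // Keep the aggregate up to date as transactions arrive, instead of summing at read time.
    interface Txn { amount: number }

    function updateRunningTotal(currentRunningTotal: number, newTransactions: Txn[]): number {
      return currentRunningTotal + newTransactions.reduce((sum, t) => sum + t.amount, 0);
    }
    // Persist the returned value back to the summary entity's RunningTotal property.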

Can I capture Performance Counters for an Azure Web/Worker Role remotely...?

I am aware of how Performance Counters and Diagnostics are generated for web roles and worker roles in Azure.
My question is: can I get the Performance Counter data from a remote place or a remote app, given the subscription ID and the relevant certificates (a 3rd-party app that exposes the Performance Counters)?
In other words, can I get the Performance Counter data the way I use the Service Management API for any hosted service?
What pre-configuration is required on the server to get CPU data?
Following is the description of the attributes for Performance counters table:
EventTickCount: Stores the tick count (in UTC) when the log entry was recorded.
DeploymentId: Id of your deployment.
Role: Role name
RoleInstance: Role instance name
CounterName: Name of the counter
CounterValue: Value of the performance counter
One of the key things here is to understand how to effectively query this table (and the other diagnostics tables). One of the things we would want from the diagnostics tables is to fetch the data for a certain period of time. Our natural instinct would be to query this table on the Timestamp attribute. However that's a BAD DESIGN choice, because in an Azure table the data is indexed on PartitionKey and RowKey. Querying on any other attribute will result in a full table scan, which will create a problem when your table contains a lot of data.
The good thing about these logs table is that PartitionKey value in a way represents the date/time when the data point was collected. Basically PartitionKey is created by using higher order bits of DateTime.Ticks (in UTC). So if you were to fetch the data for a certain date/time range, first you would need to calculate the Ticks for your range (in UTC) and then prepend a "0" in front of it and use those values in your query.
If you're querying using REST API, you would use syntax like:
PartitionKey ge '0<from date/time ticks in UTC>' and PartitionKey le '0<to date/time ticks in UTC>'.
You could use this syntax if you're querying table storage in our tool Cloud Storage Studio, Visual Studio or Azure Storage Explorer.
Unfortunately I don't have much experience with the Storage Client library, but let me work something out. Maybe I will write a blog post about it. Once I do that, I will post the link to my blog post here.
Gaurav
Since the performance counters data gets persisted in Windows Azure Table Storage (WADPerformanceCountersTable), you can query that table from a remote app (either by using Microsoft's Storage Client library or by writing your own custom wrapper around the Azure Table Service REST API) to retrieve the data. All you will need is the storage account name and key.
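For example, a sketch of such a remote query with the current JavaScript Table SDK (account name and key are placeholders; the '0' + ticks PartitionKey filter is the one described above):

    import { TableClient, AzureNamedKeyCredential, odata } from "@azure/data-tables";

    // .NET ticks for the PartitionKey range filter (1 tick = 100 ns).
    const toTicks = (d: Date) => 621355968000000000n + BigInt(d.getTime()) * 10000n;

    const client = new TableClient(
      "https://<storageaccount>.table.core.windows.net",               // placeholder account
      "WADPerformanceCountersTable",
      new AzureNamedKeyCredential("<storageaccount>", "<storage-key>") // placeholders
    );

    async function fetchCounters(from: Date, to: Date) {
      const lower = "0" + toTicks(from).toString();
      const upper = "0" + toTicks(to).toString();
      const filter = odata`PartitionKey ge ${lower} and PartitionKey le ${upper}`;
      const rows = [];
      for await (const e of client.listEntities({ queryOptions: { filter } })) {
        rows.push(e); // EventTickCount, Role, RoleInstance, CounterName, CounterValue, ...
      }
      return rows;
    }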
