Partition Error when uploading to CosmosDB from CRM

Background:
I have set up my Logic App and everything appears to work fine, except when I try to upload back to Cosmos DB from CRM.
This is what I have done so far (screenshots omitted). The problem appears to happen at the step that writes the document, which fails with this error:
["PartitionKey extracted from document doesn't match the one specified in the header"]
I have placed the partition key within the dynamic field, so I am unsure why I am getting this error.
My container's partition key is OwnerInCRM, and the partition key value to which I plan to send my data is also OwnerInCRM.
My dilemma/question:
Why am I getting this partition error when I am putting the correct value in the partition key value field (OwnerInCRM)?
Any suggestion is appreciated.

In the screenshot of your container, your partition key path is OwnerInCRM. However, the OwnerInCRM value in your document is "", and the partition key value you supply is not the same as "", which leads to your error. (Your partition key value should match your document's OwnerInCRM value, not the value of its id.)
Please try this: set both your partition key value and your OwnerInCRM value to Richard, as in the sketch below.
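To illustrate, here is a minimal sketch using the azure-cosmos Python SDK (the account, names, and values are all illustrative; the Logic App's "Partition key value" field plays the same role as the value sent alongside the document):

import azure.cosmos
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
database = client.create_database_if_not_exists("crm")
container = database.create_container_if_not_exists(
    id="people",
    partition_key=PartitionKey(path="/OwnerInCRM"),
)

# The value stored at the partition key path is the document's partition value.
doc = {"id": "42", "OwnerInCRM": "Richard"}

# Succeeds: Cosmos DB extracts "Richard" from the document, which matches
# the partition key value sent with the request.
container.upsert_item(doc)

# In the Logic App, the "Partition key value" field must likewise be
# "Richard" here, not the document id "42"; otherwise the request fails
# with the error quoted above.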

Related

DynamoDB delete with sort key

I have the fields below in a DynamoDB table:
event_on -- string type
user_id -- number type
event_name -- string type
Since this table may have multiple records per user_id, and event_on is the only field that can be unique, I made event_on the primary (partition) key and user_id the sort key.
Now I want to delete all records of a user, so my code is:
response = dynamodb.delete_item(
    TableName="events",
    Key={
        "user_id": {"N": str(userId)}
    },
)
It throws this error:
Exception occurred: An error occurred (ValidationException) when calling the DeleteItem operation: The provided key element does not match the schema
Also, is there any way to delete with a range?
Can someone suggest what I should do with the DynamoDB table structure to make this code work?
Thanks,
It sounds like you've modeled your data using a composite primary key, which means you have both a partition key (event_on) and a sort key (user_id).
In DynamoDB, the most efficient way to access items (aka "rows" in RDBMS language) is by specifying either the full primary key (getItem) or the partition key (query). If you want to search by any other attribute, you'll need to use the scan operation. Be very careful with scan, since it can be a costly way (both in performance and money) to access your data.
When it comes to deletion, you have a few options.
deleteItem - Deletes a single item in a table by primary key.
batchWriteItem - The BatchWriteItem operation puts or deletes multiple items in one or more tables. A single call to BatchWriteItem can write up to 16 MB of data, which can comprise as many as 25 put or delete requests.
TimeToLive - You can use DynamoDB's Time to Live (TTL) feature to delete items you no longer need. Keep in mind that TTL only marks your items for deletion; actual deletion can take up to 48 hours.
In order to effectively use any of these options, you'll first need to identify which items you want to delete. Because you want to fetch using the value of the sort key alone, you have two options:
Use scan to find the items of interest. This is not ideal but is an option if you cannot change your data model.
Create a global secondary index (GSI) that swaps your partition key and sort key values. This pattern is called an inverted index. This would allow you to identify all items with a given user_id.
If you choose option 2, the index would hold the same items keyed the other way around: user_id as the partition key and event_on as the sort key.
This would allow you to fetch all items for a given user, which you could then delete using one of the methods outlined above; a sketch follows below.
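A rough sketch of that flow with boto3, assuming an inverted-index GSI (here called user_id-event_on-index; the table name and user id are also illustrative) already exists:

import boto3

dynamodb = boto3.client("dynamodb")

# 1. Find every item for the user via the GSI (user_id as its partition key).
keys = []
paginator = dynamodb.get_paginator("query")
for page in paginator.paginate(
    TableName="events",
    IndexName="user_id-event_on-index",
    KeyConditionExpression="user_id = :uid",
    ExpressionAttributeValues={":uid": {"N": "123"}},
):
    for item in page["Items"]:
        # Deletes must target the base table's full primary key.
        keys.append({"event_on": item["event_on"], "user_id": item["user_id"]})

# 2. Delete in chunks of 25, BatchWriteItem's per-call limit.
for i in range(0, len(keys), 25):
    dynamodb.batch_write_item(
        RequestItems={
            "events": [{"DeleteRequest": {"Key": key}} for key in keys[i:i + 25]]
        }
    )
    # A production version should also retry any UnprocessedItems in the response.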
As the documentation shows, delete_item needs the full primary key, not the sort key alone. You would have to do a full scan and delete everything that contains the given sort key value.
If you created a DynamoDB table with both a partition key and a sort key, you must provide both values to remove items from that table, as in the sketch below.
If the sort key was not added to the primary key when the table was created, a record can be removed by the partition key alone.
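For the table in the question (event_on as partition key, user_id as sort key), a minimal delete sketch looks like this (attribute values are illustrative):

import boto3

dynamodb = boto3.client("dynamodb")

# With a composite primary key, Key must contain both attributes.
dynamodb.delete_item(
    TableName="events",
    Key={
        "event_on": {"S": "2021-07-01T12:00:00Z"},  # partition key
        "user_id": {"N": "123"},                    # sort key
    },
)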
How I solved it:
In my case, I deliberately did not add a sort key when creating the table, and I use indexes for sorting and fetching items.

DynamoDB with AWS Lambda function: order by descending order with a scan

I am trying to create an AWS Lambda function using Node.js and scan records from DynamoDB, but it gives me records in random order. I would like to fetch the top 5 records which were most recently added to the table, i.e. sort by Timestamp so I can get the latest 5 records. If anyone has an idea, please help me out.
DynamoDB does not (and does not intend to) support ordering in its scan operation. Ordering is supported in query operations.
To get the behavior you want, you can do the following (with one caveat; see below):
Make sure that each record in your table has an attribute (let's call it x) which always holds the same value (it does not matter which value; let's say it is always "y").
Define a global secondary index on your table. The key of that index should use x as the partition key (aka "hash key") and the timestamp field as the sort key.
Then you can issue a query action on that index; "Query results are always sorted by the sort key value" (see the Query documentation), which is exactly what you need. A sketch follows after the caveat below.
The caveat: this means that your index will hold all records of your table under the same partition key. This goes against DynamoDB best practices (see Choosing the Right DynamoDB Partition Key) and will not scale for large tables (more than tens of GB).
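The question uses Node.js, but the pattern is easiest to show compactly with Python/boto3 (the table, index, and attribute names are assumptions):

import boto3

dynamodb = boto3.client("dynamodb")

response = dynamodb.query(
    TableName="events",
    IndexName="x-Timestamp-index",  # hypothetical GSI: hash key x, sort key Timestamp
    KeyConditionExpression="x = :y",
    ExpressionAttributeValues={":y": {"S": "y"}},  # the constant value shared by all items
    ScanIndexForward=False,  # descending sort key order, i.e. newest first
    Limit=5,                 # only the 5 most recent records
)
latest_five = response["Items"]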

PostgreSQL custom primary key

I am building a project using Hibernate and Postgres as the DB. The problem I have is that I need to store the primary key as something like 22/2017 or 432/1990.
Let's say the first number is object_id and the second is year_added.
What I want to achieve is to make the first number and the second number together a composite primary key, so 22/2017 is different from 22/2016.
The only idea I have is that when a user adds a new object, I generate the current year, find the last id for that year, and increment it.
So the first object added next year should be: 1/2018.
So far only object_id is stored as the primary key in my DB.
This solution seems to work fine:
PostgreSQL: Auto-increment based on multi-column unique constraint
Thanks for helping me anyway.
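For reference, a minimal sketch of the composite-key idea with psycopg2 (table and column names are assumptions; for heavy concurrent inserts, the trigger-based counter from the linked answer is safer than this plain MAX()+1 insert):

import psycopg2

conn = psycopg2.connect("dbname=mydb")
with conn, conn.cursor() as cur:
    # object_id and year_added together form the primary key,
    # so 22/2017 and 22/2016 are distinct rows.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS objects (
            object_id  integer NOT NULL,
            year_added integer NOT NULL,
            payload    text,
            PRIMARY KEY (object_id, year_added)
        )
    """)
    # Pick the next object_id for the current year (restarts at 1 each year).
    cur.execute("""
        INSERT INTO objects (object_id, year_added, payload)
        SELECT COALESCE(MAX(object_id), 0) + 1,
               EXTRACT(YEAR FROM now())::int,
               %s
        FROM objects
        WHERE year_added = EXTRACT(YEAR FROM now())::int
        RETURNING object_id, year_added
    """, ("example",))
    print(cur.fetchone())  # e.g. (1, 2018) for the first object of a new year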

HBase row key design for reads and updates

I'm trying to understand the best way to design the row key for my HBase table.
My use case:
Structure right now:
PersonID | BatchDate | PersonJSON
When something about a person is modified, a new PersonJSON and a new BatchDate are inserted into HBase, updating the old record. And every 4 hours, a scan of all the people who were modified is pushed to Hadoop for further processing.
If my key is just PersonID, it is great for updating the data, but my scan performance suffers because I have to add a filter on the BatchDate column to find all the rows newer than a given batch date.
If my key is a composite key like BatchDate|PersonID, I could use startrow and endrow on the row key and get all the rows that have been modified, but then I would have a lot of duplicates, since the key is no longer unique, and I could no longer update a person in place.
Is a bloom filter on row+col (PersonID+BatchDate) an option?
Any help is appreciated.
Thanks,
Abhishek
In addition to the table with PersonID as the rowkey, it sounds like you need a dual-write secondary index, with BatchDate as the rowkey.
Another option would be Apache Phoenix, which provides support for secondary indexes.
I usually do it in two steps:
Create table one whose key is a combination of BatchDate+PersonID; the value can be empty.
Create table two as you normally would: the key is PersonID and the value is the whole data.
For a date-range query, query table one first to get the PersonIDs, and then use the HBase batch get API to fetch the data in one batch; it is very fast.
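A sketch of those two steps with happybase, the Python HBase client (table names, column layout, and the BatchDate encoding are assumptions):

import happybase

conn = happybase.Connection("hbase-host")
index = conn.table("person_index")  # rowkey: BatchDate|PersonID, value empty
people = conn.table("person")       # rowkey: PersonID, value: PersonJSON

# 1. Range-scan the index for everything modified in a 4-hour window.
person_ids = [
    key.split(b"|", 1)[1]
    for key, _ in index.scan(row_start=b"20200101T000000",
                             row_stop=b"20200101T040000")
]

# 2. Batch-get the full records from the main table.
records = people.rows(person_ids)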

getUserBy() phpfox

While working, I ran into a very weird thing in the phpfox script.
I added a new field to the user table; this field is a tinyint with default value 0. I then worked on giving the user the ability to set the value through links, and that finally succeeded. But when I tried to read this value with getUserBy('name_of_the_field'), it gave me a null value, although I checked the database table and found that the field has a value. Could you help me, please?
getUserBy() does not fetch every field in the user table; there is a predefined list of columns that it will fetch.
You will need to get this field in a different way, or write a plug-in for the hook "user.service_auth___construct_query" so it loads your new field. I have not tried this, but I believe it should work as a plug-in for that hook:
// Inside a plug-in for the hook "user.service_auth___construct_query":
$this->database()->select('u.my_new_field,');
