How can I prevent users from submitting a timestamp with their transaction?
When I create a transaction, the timestamp field is added to the transaction and can be supplied by the user.
While the timestamp on the transaction is populated automatically if the client does not send it, the user can send in their own timestamp to back-date a transaction.
I think you would have to enforce this in the client tier (prior to submitting the transaction using the Node.js or REST API).
This is an interesting requirement however. Can you elaborate more on the use case and what you are worried about?
I am using Spring Boot and don't have much experience with transactions...
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
@Transactional
public class FundTransferService {

    public void doSomeFunds() {
        if (realPaymentGateway()) {
            // then do the DB call to update the user's transaction details
            // WHAT IF THE SERVER GOES DOWN HERE, OR ANY EXCEPTION IS THROWN??
        }
    }

    public boolean realPaymentGateway() {
        // use Braintree to transfer the funds
        return true; // placeholder
    }
}
Above, two things are happening: the payment gateway call (which is a REST call) and, only if it succeeds, the DB update with the user's transaction details.
I want these two things (REST and DB) to happen atomically: either both succeed, or everything is rolled back.
My problem is:
Q1) While updating the DB details, an exception or a server crash might cause a rollback of the DB work only, not of the REST call.
Q2) Should I first update the DB and then call the payment gateway to transfer the funds, or is the reverse order correct? Please advise.
I hope you understand my query; what is the solution to this problem?
You cannot do what you want atomically, e.g. by spanning a distributed transaction, because Braintree is a REST service outside of your control.
You can, however, call Braintree and afterwards update your database accordingly. This way there is only a minimal time window, right between your Braintree call and your transaction commit, in which your server could be unexpectedly "killed", but there is no easy way around that within the scope of this answer.
You could also keep some sort of write-ahead log, where you record which REST call you are about to make and later reconcile it with the REST calls that actually happened, but this is a bit more elaborate and quite possibly overkill.
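A minimal sketch of that ordering (the PaymentGateway wrapper and the SQL are hypothetical, not the original code): call Braintree first, outside any DB transaction, then record the result in a short DB-only transaction; the small window between the two steps is exactly the one discussed above.

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.support.TransactionTemplate;

@Service
public class OrderedFundTransferService {

    // Hypothetical wrapper around the Braintree call; returns the gateway's
    // transaction id on success and throws on failure.
    public interface PaymentGateway {
        String transfer(String userId, long amountCents);
    }

    private final PaymentGateway gateway;
    private final TransactionTemplate txTemplate; // DB-only transaction
    private final JdbcTemplate jdbcTemplate;

    public OrderedFundTransferService(PaymentGateway gateway,
                                      TransactionTemplate txTemplate,
                                      JdbcTemplate jdbcTemplate) {
        this.gateway = gateway;
        this.txTemplate = txTemplate;
        this.jdbcTemplate = jdbcTemplate;
    }

    public void transferFunds(String userId, long amountCents) {
        // 1. Non-transactional REST call to the payment provider.
        String gatewayTxId = gateway.transfer(userId, amountCents);

        // 2. Short DB transaction recording the result. If the server dies
        //    between step 1 and step 2, the gateway call cannot be rolled
        //    back; that is the unavoidable window mentioned above.
        txTemplate.executeWithoutResult(status -> jdbcTemplate.update(
                "INSERT INTO user_transaction (user_id, amount_cents, gateway_tx_id, status) "
              + "VALUES (?, ?, ?, 'COMPLETED')",
                userId, amountCents, gatewayTxId));
    }
}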
You can apply transactions only to transactional resources. As I understand it, you have one non-transactional (non-TX) input (the call with the payment request) and two outputs (a non-TX remote REST call and a TX SQL database). If you want to keep real-time consistency, I can propose the following architectural approach:
- put the payment request into a message queue. The MQ broker must support the XA protocol for two-phase commit (2PC), for instance ActiveMQ;
- your service reads the message from the queue using an XA connection, sends the REST request to the remote server and saves the data into the DB (which must also support the XA protocol) using an XA connection;
- if the REST call fails, or anything else goes wrong, you roll back the changes and start again by processing the payment request from the queue. By the way, your remote REST resource must be idempotent;
If you're using non-transactional resources, there is no way to keep them consistent.
Also, you can look up more information on using 2PC, for instance on the Atomikos site.
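As a rough sketch of that approach (this assumes Spring with a JTA transaction manager such as Atomikos, an XA-capable broker connection factory and an XA DataSource already configured; the queue name, URL and SQL are illustrative assumptions):

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.client.RestTemplate;

@Component
public class PaymentRequestListener {

    private final JdbcTemplate jdbcTemplate;
    private final RestTemplate restTemplate = new RestTemplate();

    public PaymentRequestListener(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // The JMS session and the DataSource are both XA resources enlisted in
    // one JTA transaction; if anything throws, the message is rolled back
    // onto the queue and redelivered.
    @JmsListener(destination = "payment.requests")
    @Transactional
    public void onPaymentRequest(String paymentId) {
        // Non-transactional REST call: it must be idempotent, because the
        // message may be redelivered after a rollback.
        restTemplate.postForEntity(
                "https://payments.example.com/transfer/" + paymentId, null, Void.class);

        // Transactional DB update, committed together with the JMS acknowledgement.
        jdbcTemplate.update(
                "UPDATE user_transaction SET status = 'COMPLETED' WHERE id = ?", paymentId);
    }
}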
Using Hot Chocolate GraphQL server 10.3.5 with pure code-first. Clients currently query and subscribe to receive changes; this works fine.
However, the query schema and subscription payload schema are often identical. I'd prefer that clients need only do the subscribe alone -- they'd get an immediate 'push' and updates as before.
Presumably, hooking into a Hot Chocolate 'user subscribed' event and doing a push there would be the solution, if this isn't already supported. But I'm unsure where/how to approach it.
Per the Hot Chocolate author, in v10.4 and v11.0 this will be straightforward; closing this pending the final release of those versions.
I want to save the identity of the user invoking a transaction in Hyperledger Composer. Is there a way of getting the user identity inside a transaction without passing it as a transaction parameter?
It depends on how your users are managed. Typically an organization has a couple of Fabric users that invoke transactions on the blockchain, and such a user can be determined from the ledger. However, if you authenticate users at the application level and then invoke with the same Fabric client, there is no way of drilling down to know which user within an organization invoked the transaction without passing the user as part of the transaction.
Answering my own question: Hyperledger Composer has a global function getCurrentParticipant() that can be called inside a transaction processor to get the participant invoking the transaction. It also has getCurrentIdentity(), which can be used to get the identity of the current participant. See the Hyperledger Composer API documentation for more information.
Currently learning Spring Integration, I want to retrieve information from a MySQL database to use inside an int:service-activator or an int:splitter.
Unfortunately, it would seem that most examples and documentation are based around the idea of using an int-jdbc:inbound-channel-adapter, which in itself requires a poller. I don't want to poll the database, but rather retrieve specific data based on the payload of an existing message originating from an int:gateway. This data would then be used to further modify the payload, or to assist in how the message is split.
I tried using int-jdbc:outbound-gateway, as the description states:
... jdbc.JdbcOutboundGateway' for updating a database in response to a message on the request channel, and/or for retrieving data from the database ...
This implies that it can be used just for retrieving data rather than only for updates, but as I implement it, there's a complaint that at least one update statement is required:
And so I'm currently sitting with a faulty prototype that initially looks like so:
The circled piece being the non-functioning int-jdbc:outbound-gateway.
My end goal is, based on the payload coming from the incomingGateway (in the picture above), to retrieve some information from a MySQL database and use that data to split the message in the analyzerSplitter, or perhaps to modify the payload using an int:service-activator. This should then all be linked up to an int-jdbc:message-store, which I believe could assist with performance. I do not wish to poll the database on a regular basis, and I do not wish to update anything in the database.
By testing using the polling int-jdbc:inbound-channel-adapter, I am confident that my datasource bean is set up correctly and the query can execute.
How would I go about correctly setting up such behaviour in spring integration?
If you want to proceed with the flow after updating the database, you can simply use a JdbcTemplate in a method invoked by a service activator, or, if it's the end of the flow, use an outbound channel adapter.
The outbound channel adapter is the inverse of the inbound: its role is to handle a message and use it to execute a SQL query. By default, the message payload and headers are available as input parameters to the query, as the following example shows:
...
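For the non-polling, payload-driven lookup itself, a minimal sketch of the JdbcTemplate-in-a-service-activator idea (annotation-based here; the channel names, table and SQL are illustrative assumptions, not from the original flow):

import java.util.List;
import java.util.Map;

import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

@Component
public class OrderLookupService {

    private final JdbcTemplate jdbcTemplate;

    public OrderLookupService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Queries the database on demand, driven by the incoming message payload,
    // and sends the result downstream (e.g. to a splitter); no poller involved.
    @ServiceActivator(inputChannel = "lookupChannel", outputChannel = "analyzerSplitterChannel")
    public List<Map<String, Object>> lookup(String orderId) {
        return jdbcTemplate.queryForList(
                "SELECT * FROM order_item WHERE order_id = ?", orderId);
    }
}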
I have often heard that in a microservices architecture every single microservice should have its own database.
But then I cannot maintain a foreign key constraint across the different databases. For example, I have a user table in the authentication microservice and I want to reference it (the userid column from the user table) in my catalog service.
So how can this be resolved?
Thanks in advance.
You can maintain a shadow copy (with only the useful information, e.g. just the userid column) of the user table in the catalog service via event sourcing (for example, you can use RabbitMQ or Apache Kafka for async messaging).
The catalog service will use the user information in read-only mode. This solution is only effective when the user information doesn't change frequently; otherwise the async communication can be inefficient and costly.
In that case you can instead implement API calls from the catalog service to the user service for any validations to be done on user data.
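As a rough sketch of the shadow-copy idea (this assumes Spring Kafka and a MySQL-style upsert; the topic name, consumer group, table and event payload are illustrative assumptions):

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class UserShadowUpdater {

    private final JdbcTemplate jdbcTemplate;

    public UserShadowUpdater(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // The catalog service consumes user events and keeps only the columns it
    // needs (here just the user id) in a local, read-only shadow table.
    @KafkaListener(topics = "user-events", groupId = "catalog-service")
    public void onUserEvent(String userId) {
        jdbcTemplate.update(
                "INSERT INTO user_shadow (user_id) VALUES (?) "
              + "ON DUPLICATE KEY UPDATE user_id = user_id",
                userId);
    }
}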
Use the Saga Pattern to maintain data consistency across services.
A saga is a sequence of local transactions. Each local transaction updates the database and publishes a message or event to trigger the next local transaction in the saga. If a local transaction fails because it violates a business rule then the saga executes a series of compensating transactions that undo the changes that were made by the preceding local transactions.
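For illustration, a minimal, framework-free sketch of the compensation idea (all names here are hypothetical): each local step is executed in order and, if one fails, the compensations of the already-completed steps run in reverse order.

import java.util.ArrayDeque;
import java.util.Deque;

public class Saga {

    public interface Step {
        void execute();     // the local transaction in one service
        void compensate();  // undoes that local transaction if a later step fails
    }

    public void run(Step... steps) {
        Deque<Step> completed = new ArrayDeque<>();
        try {
            for (Step step : steps) {
                step.execute();
                completed.push(step);
            }
        } catch (RuntimeException failure) {
            // Compensate the already-committed local transactions in reverse order.
            while (!completed.isEmpty()) {
                completed.pop().compensate();
            }
            throw failure;
        }
    }
}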