Hyperledger Composer - transaction failed - hyperledger-composer

I'm getting stuck while making a transaction in composer-playground (Github Link). It throws the error:
t: Instance org.hcsc.network.Commodity#ts1 has property company with type org.hyperledger.composer.system.NetworkAdmin that is not derived from org.hcsc.network.Trader

In your definition of Trace you have a --> Trader company, and in your code you assign me (the current participant) - BUT you processed the transaction using an ID that is bound to the Network Admin (org.hyperledger.composer.system.NetworkAdmin).
You need to run the transaction as a Trader
Create a new Trader participant
Issue an ID to the participant
Select and use that ID
Run the transaction
BTW I notice that you are using new Date() in your transaction - this is an example of a 'non-deterministic' value, and when you move to a multi-peer configuration it will fail. It will fail because when Fabric runs the transaction on multiple peers and tries to reach consensus, the timestamps will be fractionally different on each peer and the transaction will be rejected. For the same reason you can't use random numbers in transactions.
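A minimal sketch of that failure mode, in plain Java rather than a Composer transaction processor function (the clock values, field names, and helper methods here are all made up for illustration):

```java
public class DeterminismDemo {
    // A transaction as serialized at submission time: the timestamp is fixed
    // once, before any peer executes the logic. (Names are hypothetical.)
    static final String COMMODITY_ID = "ts1";
    static final long SUBMITTED_TS = 1700000000000L;

    // Non-deterministic logic: each peer stamps the asset with its own clock.
    static String runOnPeer(long peerLocalClock) {
        return COMMODITY_ID + "@" + peerLocalClock;
    }

    // Deterministic logic: every peer uses the timestamp carried by the
    // transaction itself, so all peers produce the same result.
    static String runDeterministically() {
        return COMMODITY_ID + "@" + SUBMITTED_TS;
    }

    public static void main(String[] args) {
        // Peer clocks are fractionally different, so their write sets disagree
        // and the transaction is rejected at consensus:
        System.out.println(runOnPeer(1700000000001L).equals(runOnPeer(1700000000003L))); // false
        // With the submitted timestamp, all peers agree:
        System.out.println(runDeterministically().equals(runDeterministically()));       // true
    }
}
```

The deterministic fix is always the same shape: use a value fixed at submission time (such as the transaction's own timestamp) instead of reading the peer's local clock inside the transaction logic.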

Related

Unexpected behavior in Spring partition when using synchronized

I am using Spring Batch partitioning to do parallel processing, with Hibernate and Spring Data JPA for the db. For the partitioned step, the reader, processor, and writer are step-scoped, so I can inject the partition key and range (from-to) into them. Now in the processor, I have one synchronized method and expected this method to run once at a time, but that is not the case.
I set it up to have 10 partitions, and all 10 item readers read the right partitioned range. The problem comes with the item processor. The code below has the same logic I use.
public class AccountProcessor implements ItemProcessor<Item, Item> {
    @Override
    public Item process(Item item) {
        createAccount(item);
        return item;
    }

    // account has unique constraints on username, gender, and email
    /*
     * When one thread executes this method, it will create an account
     * and save it. If the next thread comes in and tries to save the same
     * account, it should find the account created by the first thread and
     * do an update instead. But that doesn't happen: findIfExist returns
     * null and another insert of duplicate data is attempted.
     */
    private synchronized void createAccount(Item item) {
        Account account = accountRepo.findIfExist(item.getUsername(), item.getGender(), item.getEmail());
        if (account == null) {
            // account doesn't exist yet
            account = new Account();
            account.setUsername(item.getUsername());
            account.setGender(item.getGender());
            account.setEmail(item.getEmail());
            account.setMoney(10000);
        } else {
            account.setMoney(account.getMoney() - 10);
        }
        accountRepo.save(account);
    }
}
The expected output is that only 1 thread runs this method at any given time, so there is no duplicate insertion in the db and no DataIntegrityViolationException.
The actual result is that the second thread can't find the first account, so it tries to create a duplicate account and save it to the db, which causes a DataIntegrityViolationException (unique constraint error).
Since I synchronized the method, threads should execute it in order: the second thread should wait for the first thread to finish and then run, which means it should be able to find the first account.
I tried many approaches, like a volatile Set containing all unique accounts, calling saveAndFlush to make commits happen as soon as possible, and using ThreadLocal, but none of these work.
Need some help.
Since you made the item processor step-scoped, you don't really need synchronization, as each step will have its own instance of the processor.
But it looks like you have a design problem rather than an implementation issue. You are trying to synchronize threads to act in a certain order in a parallel setup. When you decide to go parallel and divide the data into partitions, giving each worker (either local or remote) a partition to work on, you must accept that these partitions will be processed in an undefined order and that there should be no relation between the records of each partition or between the work done by each worker.
When 1 thread execute that method, it will create 1 account
and save it. If next thread comes in and try to save the same account,
it should find the account created by first thread and do one update. But now it doesn't happen, instead findIfExist return null and it try to do another insert of duplicate data
That's because the transaction of thread1 may not be committed yet, hence thread2 won't find the record you think has been inserted by thread1.
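That visibility gap can be simulated without Spring or a database at all. Below is a toy sketch (all class and method names are made up): reads see only committed rows, and each "transaction" buffers its writes until commit, the way a JPA persistence context flushes at transaction commit.

```java
import java.util.HashMap;
import java.util.Map;

public class CommitVisibilityDemo {
    // rows visible to everyone, i.e. committed data
    static final Map<String, Integer> committed = new HashMap<>();

    // each "transaction" buffers its writes until commit
    static class Tx {
        final Map<String, Integer> pending = new HashMap<>();
        Integer findIfExist(String username) { return committed.get(username); }
        void save(String username, int money) { pending.put(username, money); }
        void commit() { committed.putAll(pending); }
    }

    // same shape as the question's createAccount: synchronized, find-then-save
    static synchronized void createAccount(Tx tx, String username) {
        Integer money = tx.findIfExist(username);
        if (money == null) {
            tx.save(username, 10000);      // "insert"
        } else {
            tx.save(username, money - 10); // "update"
        }
    }

    public static void main(String[] args) {
        Tx tx1 = new Tx();
        Tx tx2 = new Tx();
        createAccount(tx1, "alice"); // thread 1 finishes the method first...
        createAccount(tx2, "alice"); // ...thread 2 runs strictly afterwards
        // ...but both transactions commit only later, so tx2 never saw tx1's
        // insert and also issued an insert. In a real database, this second
        // insert is the unique-constraint violation.
        tx1.commit();
        tx2.commit();
        System.out.println(committed.get("alice")); // 10000, not 9990
    }
}
```

So synchronized really does serialize the method calls; it just cannot serialize the transaction commits, which happen after the method returns.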
It looks like you are trying to create or update some accounts with a partitioned setup. I'm not sure if this setup is suitable for the problem at hand.
As a side note, I would not call accountRepo.save(account); in an item processor but rather do that in an item writer.
Hope this helps.

Referencing object should be updated if referenced object is saved?

Imagine the following situation: we have two database tables, Tenant and House. Tenant references House with a @ManyToOne mapping.
Tenant tenant = tenantRepository.findById(id).orElseThrow();
House house = tenant.getHouse();
house.setPrice(340_000);
house = houseRepository.save(house); // A new instance is returned by the CrudRepository::save() method
// TODO Is this necessary for further use?
tenant.setHouse(house);
// Further use...
tenant.setAge(23);
tenant = tenantRepository.save(tenant); // Otherwise it is saved with the old reference where house's ID can be null?
...
Is it necessary to update the Tenant with the new reference of House?
EDIT: For clarification, you may assume the entities were loaded (therefore, in managed state) immediately before the above code. And because this "transaction" is a part of a Spring #RequestMapping function, the transaction will be implicitly committed in the end of it.
EDIT 2: The question is not whether I should or not save the house at all in the beginning to avoid this situation. It is about understanding better how the objects are managed.
--- But you may also tell me: should I just update everything first and save at the end, as a common practice?
The critical question is: are house and tenant already managed entities?
If yes (because they were loaded in the same transaction, which is still running), all the House instances involved are the same and you don't need to set the house on the tenant.
But in that case, you don't even need to call save anyway.
If they are detached instances, then yes, you need to call tenant.setHouse(house);.
Without it, you will either get an exception or overwrite the changes to house, depending on your cascade settings on the relation.
The preferred way to do all this is:
Within a single transaction:
Load the entities
manipulate them as desired
commit the transaction
JPA will track the changes to the entities and flush them to the database before actually committing the database transaction.
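A toy sketch of that tracking behaviour (the persistence context below is a hand-rolled stand-in, not the JPA API; all names are made up): loading the same row twice inside one transaction yields the same managed instance, and changes are flushed at commit without any save() call.

```java
import java.util.HashMap;
import java.util.Map;

public class DirtyCheckingDemo {
    static class House {
        final long id;
        int price;
        House(long id, int price) { this.id = id; this.price = price; }
    }

    static class ToyPersistenceContext {
        final Map<Long, Integer> table;               // the "database" table
        final Map<Long, House> managed = new HashMap<>();
        ToyPersistenceContext(Map<Long, Integer> table) { this.table = table; }

        House find(long id) {
            // first-level cache: one managed instance per row per transaction
            return managed.computeIfAbsent(id, i -> new House(i, table.get(i)));
        }
        void commit() {
            // dirty checking: flush every managed entity's state at commit
            managed.values().forEach(h -> table.put(h.id, h.price));
        }
    }

    public static void main(String[] args) {
        Map<Long, Integer> table = new HashMap<>();
        table.put(1L, 300_000);

        ToyPersistenceContext em = new ToyPersistenceContext(table);
        House house = em.find(1L);
        house.price = 340_000;                    // mutate; no save() anywhere
        System.out.println(em.find(1L) == house); // true: same managed instance
        em.commit();
        System.out.println(table.get(1L));        // 340000: flushed at commit
    }
}
```

This is why, for managed entities, re-setting the reference on tenant is redundant: any lookup within the transaction hands back the very instance you already mutated.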

Want to refer to previous transactions in transaction processor function

I have 2 types of transaction:
orderrequest (provides details including the orderer name and the transaction id)
ordereraccept (provides the id of the orderrequest transaction being accepted)
Within the transaction processor function of orderaccept I want to refer to the previous orderrequest transaction, using its id, to perform validation.
I was thinking of using some form of historian but have not been able to get anything to work.
In the Test section of Composer Playground I am able to view previous transactions, so I just need a way to do this within a transaction processor function.
Thanks a lot

Using TransactionScope with Entity Framework code first and universal providers

I'm trying to set up a transaction where my code creates a new user using the membership provider and then goes on to create an object and put it into one of my Entity Framework tables. If the EF operation fails, I want to be able to roll back to before the user was created. I have a single connection string for both EF and membership, so I think both operations should be using the same SQL connection.
When I first run it, I get an
"MSDTC on server ... is unavailable."
exception on the Membership.CreateUser line. When I start the DTC service, I get an
"The underlying provider failed on open"
exception with an inner exception
"The operation is not valid for the state of the transaction."
on the same line. If I change the order around and do the EF save first and then the membership, the EF part works, but CreateUser fails with the same exceptions.
It seems like 2 sql connections are being used even though I have one connection string. Is there a way to force both the membership and EF operations to use the same connection or is there some other way to put this inside of a transaction?
Here's the code
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required))
{
    MembershipCreateStatus createStatus;
    MembershipUser user = Membership.CreateUser(model.UserName, model.Password, model.UserName, null, null, true, null, out createStatus);
    // add objects to the DbContext db
    db.SaveChanges();
    scope.Complete();
}
As you say, even though you use the same connection string for both EF and membership, they open separate connections, so the transaction scope automatically escalates to DTC.
And as far as I know, it is not possible to put a custom connection into the SqlMembershipProvider (according to this link), so there is no way to make both use the same connection and avoid involving the DTC.

Inventory Material ID and Oracle Install Base

1) Does the inventory material transaction ID get populated when any standard transaction is made in Oracle Install Base?
2) I defined a custom transaction type and passed it to the public API; at that point the material transaction ID is not getting populated.
Please let me know whether the material transaction ID is populated only for standard transactions or also for custom transactions.
For each and every transaction done in Inventory, the material transaction ID is populated; you can check it under Inventory > Transactions > Material Transactions.
