Relation check in Hyperledger Composer - hyperledger-composer

I have noticed one thing in Hyperledger Composer: after relating two entities, even if the input gives an ID that does not exist, the entry is accepted without any check on whether the related asset or participant exists. Is this a bug?

This isn't a bug; it's by design.
This question is best answered by the comments here: https://github.com/hyperledger/composer/issues/3065#issuecomment-354953014
Hyperledger Composer doesn't enforce relationships (nor clean up those that have since become 'disconnected' / 'orphaned' on the ledger); trying to preserve relationship integrity would be a near-impossible task (CouchDB is a key/value store, not a relational DB :-) ).
So it's perfectly feasible to have an asset with an owner field (a relationship field in the asset) that still references a participant record/instance that no longer exists. It is up to the application or client to enforce any 'referential integrity' checks, if that is desired.

Related

Entity Framework 6 and Oracle: The table/view does not have a primary key defined. The Entity is read-only

I have an ASP.NET Core application that uses EF6 for dealing with a third-party application's database.
Everything is working as expected, but I'm unable to insert rows into a joining table.
I have two tables, Users and Groups, and a joining table GroupUser that identifies which users are members of which groups. Users has a PK of UserId, and Groups has a PK of GroupId.
GroupUser has only 3 columns: GroupId, UserId and another column (which is irrelevant for this post). The two foreign keys in this table identify a unique record.
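For reference, the structure involved is roughly the following (a sketch only; column types are assumptions, and the third, irrelevant column is left out):

-- Users and Groups each have a single-column primary key.
CREATE TABLE Users  (UserId  NUMBER PRIMARY KEY);
CREATE TABLE Groups (GroupId NUMBER PRIMARY KEY);
-- The joining table's two foreign keys together identify a unique row,
-- but there is no declared primary key, which is what EF6 complains about.
CREATE TABLE GroupUser (
  GroupId NUMBER NOT NULL REFERENCES Groups (GroupId),
  UserId  NUMBER NOT NULL REFERENCES Users (UserId)
);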
Every time I try to insert into GroupUser, I get the inner exception
The table/view does not have a primary key defined. The entity is read-only
The error is correct. There is no PK, but both of the FKs are marked as keys in the model. Shouldn't VS be able to use those as a PK somehow?
The inserts used to work at some point, but required some manual modification of the .edmx file as XML in order to work. Unfortunately, our version control records containing this modification have been lost (and I wasn't the one originally working on this).
I've looked at and tried about a dozen articles around this, but they generally have to do with views instead of tables, so don't seem applicable to my case. The ones that did seem applicable didn't solve the issue.
The only other clue I have for a solution is this comment I found in the code:
// Important note: If you have updated the edmx file in the [redacted]
// project and suddenly start having problems, the edmx file may need to be
// edited as an xml file so that you can make changes necessary to make
// VS believe that the GroupUser table has a primary key. See revision #[redacted]
I'm able to insert into User and Group tables just fine, and as I've said, I don't have access to the revision log mentioned.
Edit: The database is for a third-party application, and unfortunately, it's not as simple as just modifying the table to add a PK. I wish it was. Problem would be solved. But I've been advised by the vendor not to make this change, as it may have unexpected consequences, and would void our support.
How can I 'trick' EF into thinking the table has a key? I'm also open to other workarounds. Modifying the DB structure is currently out of the question.

Is there a lib for laravel or lumen 8 to handle filtering for any table?

If I have n tables in Laravel and need a list with filtering on all of them, is there a lib that can do this magic without having to code the filters for each table?
Edit: I forgot to mention that it should also handle filters on relations.
Magic is not the solution; instead, put some work into it.
I'm happy to announce that there is such a library now. But it is not public (and maybe never will be, with this kind of attitude towards "magic"). If the Laravel/Lumen community is curious about it, let's talk.
Key features:
CRUD REST operations, including filtering capabilities (by any column) over at most 9 tables via Laravel/Lumen relations (including ones like "How to create Laravel 8 custom relation HasManyThrough 2 and 3 Link Tables", so involving 4 or 5 tables in total). Filtering includes: in, not in, starts with, contains, from, to, is null, is not null, multi-sorting filters on the resource's relations, etc.
It can be used together with https://github.com/jarektkaczyk/eloquence/wiki/Mappable so that database column names are not exposed to the front end.
PS. May I remind you that the question was: "Is there a lib for laravel or lumen 8 to handle filtering for any table?" So the answer is: YES, there is (this PS is just for the haters).

Achieve one to many relationship Spring MVC

I am trying to achieve a one-to-many relationship. I know how to do a basic one-to-many relationship between requestor ID and user ID.
My question is: how do I make gtlUserId (resourceRequestTable) refer to gtlUserId (User table)? By default, Spring maps gtlUserId (resourceRequestTable) to userId in the User table.
There are a few ways to do this.
I think you should pick one consistent approach throughout the project.
In my experience, each many-to-one should be a drop-down on the client side.
In your case, ResourceTypeEntity should be a drop-down inside ResourceRequestTable, where the option value is the Id (primary key).
Also, your table design doesn't look right: why do two many-to-one relations map to the same table? That can cause 3NF problems in the database. Also pay attention to cascades: cascading when a parent table is related to another parent is not good design. Keep it simple with a unidirectional many-to-one and make the user delete the parent manually; cascade-deleting when the parent is related to other tables makes exception handling and testing too hard.
Please take a look at https://examples.javacodegeeks.com/enterprise-java/spring/mvc/spring-mvc-dropdown-box-example/

Using two different slugs on a route

I'm upgrading an old procedural site to Laravel 5.2, and I'm struggling with the old routes I made.
On this website, the routes were made like this: {user_slug}/{content_slug}.html. For the moment, I use cviebrock/eloquent-sluggable to generate the slugs, but I'm open to another package if this one cannot meet my needs.
I have two questions:
Can I make the content slug unique, but per user?
How can I write the route and the controller so that they match the correct user slug and the correct content slug?
I have not done this myself but I believe there would be a way in the validation rules to do this. Here is an untested rough draft to check content_slug in the posts table but only check uniqueness where the user_id field equals a variable:
'content_slug' => "unique:posts,content_slug,NULL,id,user_id,$user->id"
Depending on who you ask, they may advise you (either instead of or as well as doing the above) to set up a unique key in the database based on the user_id and content_slug fields. This way the database returns an error if a duplicate insert is attempted, and it also gives a performance boost when running a query off that index; queries served by an index can be dramatically faster.
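For illustration, a database-level constraint for the rule sketched above might look like this (MySQL syntax; the posts table and column names come from the validation example, and the index name is an assumption):

-- Rejects a second row with the same (user_id, content_slug) pair,
-- i.e. the content slug only has to be unique per user.
ALTER TABLE posts ADD UNIQUE INDEX posts_user_id_content_slug_unique (user_id, content_slug);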

Very slow search of a simple entity relationship

We use CRM 4.0 at our institution and have no plans to upgrade at present, as we've spent the last year and a half customising and extending the CRM to work with our processes.
A tiny part of the model is a simple hierarchy: we have a group of learning rooms, each of which has a one-to-many relationship with another entity that describes the courses available for that learning room.
Another entity has a list of all potential and enrolled students who have expressed an interest in whichever course.
That bit's all straightforward and works pretty well and is modelled into 3 custom entities.
Now, we've got an Admin application that reads the rooms and then wants to show the courses for that room, but only where there are enrolled students.
In SQL this is simplified to:
SELECT DISTINCT r.CourseName, r.OtherInformation
FROM Rooms r
INNER JOIN Students S
ON S.CourseId = r.CourseId
WHERE r.RoomId = #RoomId
And this indeed is very close to the eventual SQL that CRM generates.
We use a Crm QueryEntity, a Filter and a LinkEntity to represent this same structure.
The problem now is that CRM normalizes a customised entity into a Base table, which holds the standard CRM entity data that all entities share, plus an ExtensionBase table, which holds our customisations. To give flattened access to these, it creates a view that merges both tables.
This view is what is used by the Generated SQL.
Now the base tables have indices but the view doesn't.
The problem we have is that all we want to do is return Courses where the inner join is satisfied; it's enough to prove there are entries, and CRM makes it SELECT DISTINCT, so we only get one item back for the Room.
At first this worked perfectly well, but now that there are thousands of queries it takes well over 30 seconds, and of course causes a timeout in anything but SMS.
I'm given to believe that we can create and alter indices on tables in CRM, and that's not considered an unsupported modification; but what about views?
I know that if we alter an entity then its views are recreated, which would of course make us redo our indices when that happens.
Is there any way to hint to CRM 4.0 that we want a specific index in place?
Another source recommends that where you get problems like this, then it's best to bring data closer together, but this isn't something I'd feel comfortable in trying to engineer into our solution.
I had considered adding a new entity that only has RoomId, CourseId and an enrolment count in it, but that smacks of being incredibly hacky too; after all, an index would remove the need to duplicate this data and to maintain some kind of trigger that updates it after every student operation.
Lastly, whilst I know we're stuck on CRM 4 at the moment, is this the kind of thing we could expect to have resolved in CRM 2011? It would certainly add more weight to the argument for upgrading this five-year-old product.
Since views are "dynamic" (conceptually, their contents are generated on-the-fly from the base tables every time they are used), they typically can't be indexed. However, SQL Server does support something called an "indexed view". You need to create a unique clustered index on the view, and the query analyzer should be able to use it to speed up your join.
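To make the indexed-view suggestion concrete, here is a minimal, untested sketch using the simplified table names from the question (the real CRM Base/ExtensionBase tables and the CRM-generated view are named differently, and the CRM view itself cannot be indexed because it isn't schema-bound):

-- An indexed view must be schema-bound, use two-part table names,
-- and include COUNT_BIG(*) when it has a GROUP BY.
CREATE VIEW dbo.vw_RoomCourseEnrolment
WITH SCHEMABINDING
AS
SELECT r.RoomId, r.CourseId, COUNT_BIG(*) AS EnrolmentCount
FROM dbo.Rooms r
INNER JOIN dbo.Students s ON s.CourseId = r.CourseId
GROUP BY r.RoomId, r.CourseId;
GO
-- The unique clustered index is what materialises the view and lets the optimiser use it.
CREATE UNIQUE CLUSTERED INDEX IX_vw_RoomCourseEnrolment
ON dbo.vw_RoomCourseEnrolment (RoomId, CourseId);

Bear in mind that on non-Enterprise editions of SQL Server the optimiser only uses an indexed view when the query references it WITH (NOEXPAND), which you can't add to CRM-generated SQL, so treat this strictly as something to test first.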
Someone asked a similar question here and I see no conclusive answer. The cited concerns from Microsoft are referential integrity (a non-issue here) and upgrade complications. You mention the unsupported option of adding the view and managing it over upgrades and entity changes. That is an option; as unsupported and hackish as it is, it should work.
FetchXML does have aggregation, but the query execution plans still use the views; here is the SQL generated from a simple select count from incident:
select top 5000 COUNT(*) as "rowcount"
     , MAX("__AggLimitExceededFlag__") as "__AggregateLimitExceeded__"
from (select top 50001
             case when ROW_NUMBER() over(order by (SELECT 1)) > 50000 then 1 else 0 end as "__AggLimitExceededFlag__"
      from Incident as "incident0" ...
I don't see a supported solution for your problem.
If you are building an external admin app and you are hosting CRM 4 on-premise, you could go directly to the database for your query, bypassing the CRM API. Not supported, but it would allow you to solve the problem.
I'm going to add this as a potential answer, although I don't believe it's a sustainable or indeed valid long-term solution.
After analysing the indexes that CRM had defined automatically, I realised that selecting more information in my query would be enough to fulfil the column requirements of an index, and now the query runs in less than a second.
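The underlying idea is index covering: once every column the query touches is present in an index, SQL Server can answer it from the index alone. As a purely illustrative sketch against the simplified tables from the question (the real work goes through the Base/ExtensionBase view, and all names here are assumptions):

-- Hypothetical covering index for the simplified query: keyed on the filter/join
-- columns, with the selected columns carried as included (leaf-only) columns.
CREATE NONCLUSTERED INDEX IX_Rooms_RoomId_CourseId_Covering
ON dbo.Rooms (RoomId, CourseId)
INCLUDE (CourseName, OtherInformation);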
