Hyperledger Composer - How are permissions ACLs applied when a different business network is invoked from one business network? - hyperledger-composer

I have a question.
- I have multiple business networks, each with its own ACL (say I have three such networks: A, B, and C).
- I also have a top-level business network, which calls one of these networks (A, B, or C) as needed.
- Now assume I call network A via the top-level network, using the bound participant identity X.
- If I use getNativeAPI().invokeChaincode() to call network A, will network A's permissions.acl be evaluated correctly for X?
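For reference, Composer transaction processors are JavaScript, where getNativeAPI() exposes the underlying Fabric stub. Purely to pin down the call being asked about, here is the equivalent cross-chaincode invocation via the Fabric Java chaincode shim; the network name, function name and channel below are illustrative assumptions, not values from my setup:

import java.util.Arrays;
import org.hyperledger.fabric.shim.Chaincode;
import org.hyperledger.fabric.shim.ChaincodeStub;

public class TopLevelNetwork {
    // Sketch: invoke business network A's chaincode from within the
    // top-level network's transaction, under the calling identity X.
    Chaincode.Response callNetworkA(ChaincodeStub stub) {
        return stub.invokeChaincodeWithStringArgs(
                "network-a",                          // assumed target network name
                Arrays.asList("myTransaction", "{}"), // hypothetical function + payload
                "composerchannel");                   // assumed channel name
    }
}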

Related

Microservices communication within bounded context

As part of our DDD design, we are working on a bounded context and have identified two microservices, A and B.
Service A needs to call Service B via its REST API. Service B already provides an OpenAPI spec describing how to get any data, and we use the OpenAPI generator to auto-generate client-side DTOs.
There is an exact one-to-one mapping between B's DTOs and A's domain objects.
As far as I understand, we should use an anti-corruption layer (using hexagonal architecture) if our service is communicating with an untrusted third-party service. But what should we do if the communication is between internal services, as in the case above?
1. Should microservice B's API DTOs be used directly as domain objects in microservice A? This would mean we do not create separate domain classes in service A; we treat B's DTOs as A's domain objects.
2. Should we create an adapter layer in microservice A that converts B's DTOs into A's own domain objects?
3. Should we create domain objects in service A that also serve as manually written output DTOs for service B? This is the opposite of option 1: there, DTOs are treated as domain objects, whereas here domain objects are treated as DTOs.
In my opinion, you are mixing two separate concepts:
The "anticorruption layer", which is a strategic DDD pattern
The layers in a layered or onion architecture, which are tactical patterns to separate concerns within an application
The goal of the anticorruption layer is to translate the ubiquitous language from another Bounded Context into your ubiquitous language so that your code doesn't get polluted with concepts not belonging to your Bounded Context.
The goal of the layers in a layered or onion architecture, and specifically of the contracts between them (the DTOs you are talking about), is to prevent changes in one part of the code (for example, adding or removing a property of a domain object in the Core) from causing issues somewhere else (like accidentally modifying the public API contract).
If I understand it correctly, your two microservices belong to the same Bounded Context. If that is the case, you shouldn't need any anticorruption layer because both microservices share the same ubiquitous language.
Now, regarding the options that you propose: I'm not sure I fully understand options 2 and 3, but if you are doing real microservices, you can't use option 1, as the microservices wouldn't be autonomous and independently deployable; a change in microservice B's API would require a coordinated change and deployment of microservice A.
So, design your microservices so that you have control over how their parts evolve (you can make non-breaking changes to their APIs, change storage strategies without having to change the Core, change the Core without having to change the API, etc.). Of course, if a microservice is very simple, don't over-engineer it.
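To make option 2 concrete, here is a minimal sketch of such an adapter in Java (all class and field names are hypothetical):

// Hypothetical DTO generated from service B's OpenAPI spec.
class CustomerDto {
    public String id;
    public String fullName;
}

// Service A's own domain object, owned and evolved by team A.
class Customer {
    private final String id;
    private final String name;
    Customer(String id, String name) { this.id = id; this.name = name; }
    String id() { return id; }
    String name() { return name; }
}

// The mapping lives at service A's boundary: if B renames or reshapes a
// field, only this adapter changes, not A's Core.
class CustomerClientAdapter {
    Customer toDomain(CustomerDto dto) {
        return new Customer(dto.id, dto.fullName);
    }
}

The point of the extra class is exactly the decoupling described above: the generated DTO can change with B's API while A's domain object evolves on its own schedule.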
The answer here is simple: there is no magic that makes breaking changes non-breaking, so treat internal dependencies as you would external ones. That requires high discipline in versioning APIs and moving versions forward. Service B needs to publish new API version specifications ahead of time, and Service A needs to be given time to adopt them. Depending on the size of your organization, it can be useful to support multiple API versions concurrently (for a while, that is). You may also want team B to monitor usage of its API versions (e.g., how many consumers are still using an old version).
I would not put a high focus on how exactly the API specification is documented or which technology is used to generate client bindings (if at all). This should be up to the Service B owners, and Service A should be able to adopt whatever is offered. In general, I would focus on simplicity and avoid additional layers unless they provide a clear benefit. And team A should be free to design its internals as it sees fit (even with respect to data that comes from Service B). The whole point of a microservices architecture is to be able to move forward independently.
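As a minimal illustration of supporting two API versions concurrently, sketched here with Spring MVC (paths and payload shapes are hypothetical):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CustomerController {

    record CustomerV1(String id, String fullName) {}                   // old contract
    record CustomerV2(String id, String firstName, String lastName) {} // new contract

    // Deprecated version, kept alive (and monitored) until consumers migrate.
    @GetMapping("/api/v1/customers/{id}")
    public CustomerV1 getV1(@PathVariable String id) {
        CustomerV2 c = getV2(id);
        return new CustomerV1(c.id(), c.firstName() + " " + c.lastName());
    }

    // Current version.
    @GetMapping("/api/v2/customers/{id}")
    public CustomerV2 getV2(@PathVariable String id) {
        return new CustomerV2(id, "Ada", "Lovelace"); // stand-in for a real lookup
    }
}

Note how v1 is served by adapting the v2 result, so the provider maintains one implementation while both contracts stay published.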

Dependency between adapters in hexagonal architecture Spring Boot

I've been trying to refactor a brand-new project to follow hexagonal architecture and DDD patterns.
This is the structure of my domain: I have files and customer data. Entity-wise, it makes sense for these to be separated. The "facade" objects connect the ports with the domain. Quick example:
Controller (application layer) --uses--> Facade --uses--> Ports <--implement-- Adapters (infrastructure layer)
The problem I have is a third adapter (not in the picture) for an external OCR app. It's an external client (we use a Feign client to call their API) that provides customer data (the first adapter) but also serves us the raw image data (the second adapter).
My first two adapters have entities, repositories and databases on our local systems; this third one, given the theory behind hexagonal architecture, makes sense to me as a separate adapter of its own.
But then how do I use it from my other two adapters? Should all three be in the same adapter, since they depend on each other? CustomerData and File have a one-to-many relationship as well, so maybe that makes sense?
I have only implemented the File part so far and have yet to refactor the CustomerData part since I'm trying to wrap my head around the concepts first.
I've seen a lot of articles, but most of them are really simple, with no real-world examples, and their domains are clearly separated.
Thanks a lot for the clarification in advance.
For lack of a better idea, since the interface ports are beans implemented by the facades, I'm wiring the other domain's ports into my facades and using them the same way a controller of that domain would. The diagram would be something like:
Facade (domain1) --uses--> Port (of domain2) <--implement-- Adapters (infrastructure layer)
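In Spring terms, that wiring is plain constructor injection of the other domain's port bean; a sketch with hypothetical names:

import org.springframework.stereotype.Service;

// Port owned by domain2 (the OCR side), implemented by an
// infrastructure adapter somewhere else.
interface OcrClientPort {
    byte[] fetchRawImage(String documentId);
}

// Facade in domain1, consuming domain2's port exactly as a controller would.
@Service
class FileFacade {

    private final OcrClientPort ocrClient;

    FileFacade(OcrClientPort ocrClient) { // Spring injects the adapter bean
        this.ocrClient = ocrClient;
    }

    public void importFile(String documentId) {
        byte[] raw = ocrClient.fetchRawImage(documentId);
        System.out.println(raw.length + " bytes fetched");
        // ...store the file, update domain state, etc.
    }
}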
Edit:
I've found a very extensive article that is very useful for understanding hexagonal architecture, and it goes even deeper.
Long story short, I'll copy the relevant part:
Triggering logic in other components
When one of our components (component B) needs to do something whenever something else happens in another component (component A), we cannot simply make a direct call from component A to a class/method in component B, because A would then be coupled to B.
However, we can make A use an event dispatcher to dispatch an application event that will be delivered to any component listening for it, including B, and the event listener in B will trigger the desired action. This means that component A will depend on an event dispatcher, but it will be decoupled from B.
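With Spring, that pattern maps directly onto application events; a sketch (event and component names are made up):

import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

// Event published by component A. A depends only on the dispatcher.
record FileImportedEvent(String fileId) {}

@Component
class FileImporter {
    private final ApplicationEventPublisher events;

    FileImporter(ApplicationEventPublisher events) { this.events = events; }

    void importFile(String fileId) {
        // ...import logic...
        events.publishEvent(new FileImportedEvent(fileId)); // no reference to B
    }
}

// Component B reacts without A knowing it exists.
@Component
class CustomerDataUpdater {
    @EventListener
    void on(FileImportedEvent event) {
        System.out.println("updating customer data for file " + event.fileId());
    }
}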
Hexagonal architecture doesn't forbid relationships between adapters.
Anyway, usually we will have a port for each external actor interacting with our business logic, and an adapter to translate to/from the actor.
You can take a look at this:
https://jmgarridopaz.github.io/content/hexagonalarchitecture-ig/chapter1.html

SignalR WebSockets and multi-node servers

I am mapping users to connections as described in the following link https://learn.microsoft.com/en-us/aspnet/signalr/overview/guide-to-the-api/mapping-users-to-connections so I can find which users to send messages to.
I was wondering whether any additional work is required for this to work smoothly on multi-node servers / behind load balancing. I'm not experienced on the infrastructure side, but I'm assuming that if multiple servers are spun up, there would be multiple static hashmaps storing the mappings of users to connections, i.e., one per server.
Would this mean users who have connected from their browser to node A will not be able to communicate with users who've connected to node B?
If this is the case, how would we go about making this possible?
In that same link, just below the Introduction section, it discusses 4 different mapping methods:
The User ID Provider (SignalR 2)
In-memory storage, such as a dictionary
SignalR group for each user
Permanent, external storage, such as a database table or Azure table storage
And after that there is a table that shows which of these work in different scenarios, one of those scenarios being "more than one server".
Since you don't mention it, the answer depends on which mapping method you are following.
From there, you can check out "scaling out" on the same site you noted, which covers several methods you can follow depending on what suits your needs. This is where sending messages to clients, regardless of which server they connected to, is handled.
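SignalR itself is .NET, but the idea behind the fourth method (permanent, external storage) is language-neutral: keep the user-to-connection mapping somewhere every node can reach, instead of in a per-process dictionary. A sketch of that idea in Java/JDBC (table and column names are hypothetical):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

// Because the table is shared, a lookup performed on node A also finds
// connections that were opened against node B.
public class SharedConnectionStore {
    private final DataSource db;

    public SharedConnectionStore(DataSource db) { this.db = db; }

    public void add(String user, String connectionId) throws SQLException {
        try (Connection c = db.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "INSERT INTO signalr_connections (user_name, connection_id) VALUES (?, ?)")) {
            ps.setString(1, user);
            ps.setString(2, connectionId);
            ps.executeUpdate();
        }
    }

    public void remove(String connectionId) throws SQLException {
        try (Connection c = db.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "DELETE FROM signalr_connections WHERE connection_id = ?")) {
            ps.setString(1, connectionId);
            ps.executeUpdate();
        }
    }

    public List<String> connectionsFor(String user) throws SQLException {
        List<String> ids = new ArrayList<>();
        try (Connection c = db.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "SELECT connection_id FROM signalr_connections WHERE user_name = ?")) {
            ps.setString(1, user);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) ids.add(rs.getString(1));
            }
        }
        return ids;
    }
}

Note this only solves the lookup; actually delivering a message to a connection living on another node is what the scale-out backplane is for.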

What exactly is Software-Defined Networking (SDN)?

I was poring over the docs for OpenDaylight, and can't seem to wrap my head around what software-defined networking even is. All the media hype, blogs and articles I can find on SDN are riddled with buzzwords that don't mean anything to me as an engineer. So I ask: what (exactly) is SDN? What are some specific use cases/problems it solves? Is it:
Just making proprietary networking hardware serve network APIs, thus allowing programs to configure it (instead of IT staff using a console or web interface)?; or
Implementing (traditionally proprietary) networking hardware as software; or
Writing software that somehow integrates with the virtual networking gear used by virtualization platforms (VLANs, vSwitches, etc.)?; or
Something else completely?!?
BONUS: How does OpenDaylight fit into this equation?
First of all, you are right: there is no official definition from NIST or a similar standardization body, and the fact that its meaning is fuzzy is exploited by marketing people.
The main point of SDN is that it lets you program network functions through APIs.
In the past, networking devices like switches and routers were only configurable via proprietary interfaces (be it vendor-specific tools or just the CLI on the device), and there were no APIs allowing highly dynamic configuration of OSI L2-L3 aspects like VLANs and routes, or of L6-L7 aspects like load balancing. By the way, for L6-L7 functions the term NFV (Network Functions Virtualization) seems to be established by now.
This is needed especially for multi-tenancy-capable virtualized IaaS systems. You can create new VPCs and arrange them together at will. To really isolate tenants from each other, you need L2 isolation, so the same dynamics offered for VPCs has to be propagated to the networking that interconnects them.
Conclusion: it is about your first bullet, with the extension that the APIs need not be offered by a hardware appliance; they can also be offered by a pure software implementation.
Regarding OpenDaylight:
It is the OpenStack counterpart for SDN, and the project actively pushes integration with OpenStack. They describe it as an "open, reference framework for programmability and control through an open source SDN and NFV solution". This means it provides (as you say) a façade for the manifold aspects of networking.
They have all the big names as members, which probably means they have the power to establish a de facto standard, as OpenStack did. Members benefit in that they can provide plugins, integrations and adaptations for their products so that these integrate seamlessly with OpenDaylight, and you only need to care about a single standard API.
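To make "program network functions with APIs" concrete, here is what pushing a flow rule through OpenDaylight's RESTCONF northbound interface can look like; the URL layout, credentials and JSON body vary by ODL version and are only an assumed example:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Sketch: install "forward port 1 -> port 2" on switch openflow:1 over HTTP,
// i.e., the network is configured by a program, not a box-by-box CLI session.
public class PushFlow {
    public static void main(String[] args) throws Exception {
        String flowJson = """
            {"flow": [{"id": "1", "table_id": 0, "priority": 100,
                       "match": {"in-port": "1"},
                       "instructions": {"instruction": [{"order": 0,
                         "apply-actions": {"action": [{"order": 0,
                           "output-action": {"output-node-connector": "2"}}]}}]}}]}
            """;
        String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes());
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://controller:8181/restconf/config/"
                        + "opendaylight-inventory:nodes/node/openflow:1/table/0/flow/1"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Basic " + auth)
                .PUT(HttpRequest.BodyPublishers.ofString(flowJson))
                .build();
        HttpResponse<String> resp = HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.statusCode() + " " + resp.body());
    }
}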
SDN is programmable networks. Different SDN solutions provide different functions in their APIs towards the app developer.
There is a good overview of SDN for software developers here:
https://github.com/BRCDcomm/BVC/wiki/SDN-applications
The most common elements of SDN solutions are:
North-bound API: A programming interface used by an application/script to monitor, manage and control the network topology and packet flows within the network.
Network elements: Switching or routing network elements that enforce the rules provided by the application via the north-bound API. These elements may be physical (Cisco, Brocade, Tallac, etc) or virtual (Open VSwitch, Brocade Vyatta vrouter, Cisco 1000, etc) or a combination.
Controller-based solutions have a clustered architectural element (the 'controller') that provides the north-bound API towards applications and an extensible set of south-bound APIs to which network devices connect. Some controllers available today are OpenDaylight, Open Network Operating System (ONOS), Juniper Open Contrail, Brocade Vyatta Controller (an ODL distribution), HP VAN Controller, and more.
Best rules of thumb to understand an SDN offering:
Read its north-bound API - this tells you what you will be able to monitor, manage and control in your network.
Find out which south-bound APIs it supports - this tells you which switches/routers it might work with.
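"Read its north-bound API" can often be taken literally; for instance, asking an ODL-style controller for its current topology view is a single HTTP GET (endpoint path, port and credentials are assumptions):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Sketch of the "monitor" side of a north-bound API: dump the controller's
// operational view of the network topology as JSON.
public class ReadTopology {
    public static void main(String[] args) throws Exception {
        String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes());
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://controller:8181/restconf/operational/"
                        + "network-topology:network-topology"))
                .header("Accept", "application/json")
                .header("Authorization", "Basic " + auth)
                .build();
        System.out.println(HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString()).body());
    }
}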
Some SDN use cases/applications:
DevOps/Admin automation - Applications and scripts that make a network admin's or DevOps engineer's life easier through automation. OpenStack Neutron is a common example.
Security - HP provides 'Network Protector' that learns the topology of the network and then monitors activity providing alerts and/or remediation of non-compliant behaviors.
Network optimization
Brocade offers 'Traffic Manager' that monitors network utilization and modifies traffic flows in real time to optimize quality based on defined policies.
HP provides 'HP Network Optimizer' that provides an end-to-end voice optimized path for enterprise Microsoft Lync users.
Lyatiss provisions AWS networks in real time to meet application needs.
Monitoring classroom time-on-task - Elbrys provides an application that provides a teacher with a dashboard to monitor student's time-on-task in real time and cause redirects of individual students to web pages of their choosing. (Disclaimer: I work for Elbrys Networks)
OpenDaylight project proposals page - https://wiki.opendaylight.org/view/Project_Proposals:Main
The concept of SDN is very simple. SDN decouples the control plane (i.e., decision making) from the data plane (the actual forwarding actions) and provides an API between them (e.g., the OpenFlow API).
[Image: SDN architecture with decoupled control and data planes. Source: https://www.commsbusiness.co.uk/features/software-defined-networking-sdn-explained/]
With an SDN architecture, network engineers no longer have to learn proprietary CLI commands for different vendors. They can focus on developing logically centralized control programs that make global network decisions and push them down to the network switches (the data plane). Dumb network switches (data plane) receive the controller's rules/decisions and process network packets accordingly; if no matching decision is found, they ask the controller.
For example: in an SDN architecture, the routing algorithm is developed as a program in the controller. It collects all the required metadata (e.g., switches, ports, host connections, links, speeds) from the network and then makes a routing decision for each switch in the network. In a conventional network, by contrast, the routing algorithm is implemented in a distributed fashion in all switches (generally, each switch has its own intelligence and makes its own routing decisions).
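That table-miss behavior can be sketched in a few lines; everything below is a toy model to show the control/data-plane split, not a real controller API:

import java.util.HashMap;
import java.util.Map;

// Data plane: only matches packets against installed rules and forwards.
class ToySwitch {
    record Match(String dstMac) {}
    record Action(int outPort) {}

    private final Map<Match, Action> flowTable = new HashMap<>();
    private final ToyController controller;

    ToySwitch(ToyController controller) { this.controller = controller; }

    void handlePacket(String dstMac) {
        Action a = flowTable.get(new Match(dstMac));
        if (a == null) {
            // Table miss: punt to the controller ("packet-in"); it decides,
            // and the rule is installed locally ("flow-mod") for next time.
            a = controller.decide(dstMac);
            flowTable.put(new Match(dstMac), a);
        }
        System.out.println("forward out port " + a.outPort());
    }
}

// Control plane: centralized logic; in a real deployment this is where a
// global routing algorithm runs over the whole topology.
class ToyController {
    ToySwitch.Action decide(String dstMac) {
        return new ToySwitch.Action(Math.floorMod(dstMac.hashCode(), 4) + 1);
    }
}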
SDN explained by Nick Feamster
Here is a good paper that illustrates the road map to SDN

Determine whether a name is a Workgroup or a Domain in a Windows forest

My app runs on a client's network, where they have a Windows forest with multiple domains and workgroups. We're using the NetServerEnum function with the SV_TYPE_DOMAIN_ENUM flag to enumerate all of these "sub-networks": domains and workgroups.
After that, we need to determine for each name whether it is a WORKGROUP or a DOMAIN.
One option I have is to use DsGetDcName, knowing it should fail for a workgroup, but I'm quite sure there are better ways.
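For what it's worth, the DsGetDcName probe looks roughly like this from Java via JNA (assuming JNA's Netapi32/DsGetDC platform bindings; error handling trimmed to the minimum):

import com.sun.jna.platform.win32.DsGetDC;
import com.sun.jna.platform.win32.Netapi32;
import com.sun.jna.platform.win32.W32Errors;

// A name for which a domain controller can be located is a domain;
// failure (typically ERROR_NO_SUCH_DOMAIN, 1355) suggests a workgroup,
// though an unreachable domain fails the same way.
public class DomainOrWorkgroup {
    public static boolean looksLikeDomain(String name) {
        DsGetDC.PDOMAIN_CONTROLLER_INFO info = new DsGetDC.PDOMAIN_CONTROLLER_INFO();
        int rc = Netapi32.INSTANCE.DsGetDcName(null, name, null, null, 0, info);
        if (rc == W32Errors.ERROR_SUCCESS) {
            Netapi32.INSTANCE.NetApiBufferFree(info.dci.getPointer());
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(looksLikeDomain(args[0]) ? "Domain" : "Workgroup (probably)");
    }
}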
