What is the difference between Microservices and Decentralized applications - spring-boot

I am new to decentralized applications. After going through some articles I am confused between microservices and decentralized applications. Can someone help me understand the difference between them? I know that microservices can be built using Spring Boot & Docker. Is there any other technology to build them? I think Ethereum is used to develop decentralized applications.

A microservice application still runs on your infrastructure: you still control all of its nodes, state and infrastructure. So, despite being distributed (and even though the infrastructure might not be yours, such as a 3rd party cloud), you still have the power to interfere in all of its aspects.
The main selling point of a decentralized application is that, theoretically, no one can actually interfere in its infrastructure, since it's not owned by a single entity. Theoretically anyone in the world can become a node in the infrastructure (and the larger its user base, the more resilient the decentralized application becomes), and the "current valid state" is calculated based on a kind of agreement between the nodes. So, unless you can interfere in a majority of the nodes, which you don't own, you can't change the application's state on your own.
In a certain sense, you're right that they seem similar, since they're both distributed applications. The decentralized ones just go a step further: they are not "owned" and "controlled" by a single entity, but are the product of an anonymous community.
EDIT
So suppose you/your company makes a very cool microservice application and you host it on a bunch of 3rd party clouds around the world to make sure it's very redundant and always available. A change of heart on your part (or maybe being forced by government regulations) can shut down the application out of the blue, ban certain users from it, or edit/censor content currently being published on it. You're in full control, since it's your app. As good as your intentions might be, you are a liability, a single point of failure in the ecosystem.
Now, if your app is decentralized... there's no specific person/entity to be hunted down and forced into such behavior. You would need to hunt down thousands/millions of owners of single independent nodes providing infrastructure to the app and enforcing its agreed set of rules. So how would you go about banning users / censoring content / etc.? You (theoretically) can't... unless you can reach a majority of its nodes, and that has already proven to be quite difficult; even brute force might be nearly impossible to achieve.

Microservice
Microservices is rather a software architecture style. The idea is that you have many small applications - microservices - each focused on addressing only a single goal, but doing it really well.
A specific instance of a microservice could be, for example, an application running an HTTP server for managing users. It could have HTTP endpoints for adding, viewing and deleting users in a database. You would then deploy such an application, together with its database, on some server.
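To make that concrete, here is a minimal sketch of such a user-management service in Spring Boot. The class and endpoint names are made up for the example, and an in-memory map stands in for the real database so the sketch is self-contained:

```java
// Minimal sketch of the user-management microservice described above.
// Names are illustrative; a ConcurrentHashMap stands in for the database.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.*;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

@SpringBootApplication
@RestController
@RequestMapping("/users")
public class UserServiceApplication {

    private final Map<Long, String> users = new ConcurrentHashMap<>();
    private final AtomicLong nextId = new AtomicLong(1);

    @PostMapping
    public long addUser(@RequestBody String name) {   // add a user
        long id = nextId.getAndIncrement();
        users.put(id, name);
        return id;
    }

    @GetMapping("/{id}")
    public String viewUser(@PathVariable long id) {   // view a user
        return users.get(id);
    }

    @DeleteMapping("/{id}")
    public void deleteUser(@PathVariable long id) {   // delete a user
        users.remove(id);
    }

    public static void main(String[] args) {
        SpringApplication.run(UserServiceApplication.class, args);
    }
}
```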
With a fair degree of simplification, we could say that a microservice is not all that different from the web browser you are running on your computer. The difference is that a microservice runs on a server, exposing some sort of network interface, whereas your browser runs on your personal computer and doesn't expose a network interface for others to interact with.
The bottom line is that a single microservice is just an application running on a server: you can modify its code anytime, you can stop it anytime, and you can change the data in the database it's using.
Decentralized application
A decentralized application is deployed to a blockchain. A blockchain is a network of computers (the Ethereum MainNet has tens of thousands of nodes), all running the same program. When you write a decentralized application (called a smart contract in Ethereum terms) and you "deploy it", what happens is that you basically insert your code into this network of computers, and each of them will have it available.
Once the code of your application is in the network, you can interact with it - you can call the interface you defined in your decentralized application by sending JSON-RPC requests to a server which is part of this blockchain network.
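For a feel of what such a request looks like, here is a sketch in plain Java. The node URL and contract address are placeholders, and the "data" field would normally carry the ABI-encoded function call; `eth_call` itself is part of the standard Ethereum JSON-RPC API:

```java
// Sketch: calling a deployed contract's read-only function through a node's
// JSON-RPC endpoint. Node URL and contract address are placeholders; "data"
// would normally contain the ABI-encoded function selector and arguments.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EthCallExample {
    public static void main(String[] args) throws Exception {
        String body = """
            {"jsonrpc":"2.0","method":"eth_call","params":[
              {"to":"0x0000000000000000000000000000000000000000","data":"0x"},
              "latest"],"id":1}""";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://your-node.example/rpc")) // placeholder node
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // raw JSON-RPC result
    }
}
```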
It then takes some time until your request for execution is picked up by the network. If everything goes right, your request is eventually distributed to the network and executed by every single computer connected to the blockchain.
The consequence of that is that if some computer in the network tries to lie about the result, the fraudulent attempt will be noticed by the rest of the network.
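To make that intuition concrete, here is a toy illustration. This is not how on-chain consensus actually works; it just shows the idea of a majority of honest nodes outvoting a lying one:

```java
// Toy illustration: ask several independent nodes for a result and trust the
// majority answer, so a single fraudulent node is outvoted. Real blockchains
// use consensus protocols, not this naive client-side vote.
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class MajorityCheck {
    public static String majorityResult(List<String> nodeResults) {
        return nodeResults.stream()
            .collect(Collectors.groupingBy(r -> r, Collectors.counting()))
            .entrySet().stream()
            .max(Map.Entry.comparingByValue())   // most common answer wins
            .map(Map.Entry::getKey)
            .orElseThrow();
    }

    public static void main(String[] args) {
        // Two honest nodes and one fraudulent one.
        System.out.println(majorityResult(List.of("42", "42", "999"))); // 42
    }
}
```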
The bottom line here is that a decentralized application is not executed on one computer but on many (possibly thousands), and even as its creator you cannot modify its code or data (or only to a limited degree).

Related

Can a micro-service interact with a downstream service through localhost origin

Can a microservice interact with a downstream service through a localhost origin, since all my services are running on the same server? Is that a correct approach? I found that interacting with a downstream service via its domain name takes much more time compared to localhost. I was curious to know whether we can do it like this.
You're right, you can communicate with other services running on the same host via localhost. It's completely fine, and when thinking about network round trips, it's beneficial.
But,
what if you want to scale the services?
What if you want to move any of the services to a different host?
Considering at least these scenarios, binding to a specific host is not worth it. The same applies if you are using the IP of the host; the sketch below shows one way to avoid that binding.
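One way to keep the localhost speed today without binding to it permanently is to make the downstream address configurable. A minimal sketch, with made-up names and a hypothetical /orders endpoint:

```java
// Sketch (assumed names): keep the downstream host configurable instead of
// hard-coding "localhost", so the service can move to another host by
// changing configuration only.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DownstreamClient {
    // In a Spring Boot app this could come from application.properties,
    // e.g. downstream.base-url=http://localhost:8081
    private final String baseUrl =
        System.getenv().getOrDefault("DOWNSTREAM_BASE_URL", "http://localhost:8081");

    public String fetchOrders() throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(baseUrl + "/orders")) // hypothetical endpoint
            .build();
        return HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString())
            .body();
    }
}
```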
*I found that interacting with a downstream service via its domain name takes much more time compared to localhost.*
I see what you're saying.
Microservices architecture is not a silver bullet for software design and always comes with trade-offs.
And about your deployment strategy: it follows the Multiple Service Instances per Host pattern.
How are you going to handle it if your services have different resource requirements?
Say, what if one of your services utilizes all of the host's resources?
What if you need to scale out one independent service?
How are you going to ensure the availability of your services?
So there are many questions you must consider before going with a pattern in microservices. It all depends on your requirements.
If your services are on the same server, you should be using a message broker or a mechanism like gRPC to talk between your services, so it doesn't matter what your origin is. If you are using HTTP to communicate between your microservices, then you don't gain any of the advantages of the microservices architecture, and your architecture is flawed.
Microservices is a concept; it does not dictate where you deploy your applications or how they call each other. You may deploy your microservices on different virtual machines that are hosted on the same physical server. The whole point is that you need to have a reason for everything you decide to do with your architecture.
The first question is: why have you split your application into different microservices? Only to carry the word "microservice" in your architecture, or to have better control over the business logic, scalability, and maintainability of the project?
These are important things you need to take care of when you are designing an application. Draw the big picture of your product and how it's going to be used. Which service/component is used most by the customers? Does keeping it with other microservices on the same server cause performance issues or not? What if an issue happens to the server and the whole application becomes unreachable?

Clustering Microservice Components

We have a set of microservices collaborating with each other in the ecosystem. We used to have occasional problems where one or more of these microservices would go down accidentally. Thankfully, we have some monitoring built around them which would notice this and take corrective action.
Now, we would like to have redundancy built around each of those microservices. I'm thinking of a master/slave approach where a slave is always on standby, and when the master goes down, the slave picks up.
Should we consider using a framework as a service registry, where we register each of those microservices and allow them to be controlled? Any other suggestions on how to achieve this kind of master/slave architecture with microservices that would give us failover redundancy?
I thought about this for a couple of minutes and this is what I currently think is the best method, based on experience.
There are a couple of problems you will face with availability. The first is always having at least one endpoint up. This is easy enough to do by installing on multiple servers. In the enterprise space, you would use a name for the endpoint and then have it resolve to multiple servers (virtual or hardware). You would also load balance it.
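The "one name resolving to multiple servers" part is usually handled by DNS or a load balancer, but the core idea can be sketched as a client-side rotation over an endpoint list (the addresses are placeholders):

```java
// Sketch: naive client-side round-robin over a fixed endpoint list,
// illustrating "one name, several servers". A real load balancer would
// also do health checks and remove dead endpoints.
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinEndpoints {
    private final List<String> endpoints = List.of(
        "http://10.0.0.1:8080", "http://10.0.0.2:8080", "http://10.0.0.3:8080");
    private final AtomicInteger next = new AtomicInteger();

    public String nextEndpoint() {
        // Rotate through the list; floorMod keeps the index valid on overflow.
        int i = Math.floorMod(next.getAndIncrement(), endpoints.size());
        return endpoints.get(i);
    }
}
```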
The second is the registry. This is a very easy problem to solve with API management software. The really good software in this space is not cheap, so it is not weekend-hobbyist type software. But there are open source API management solutions out there. As I work in the enterprise space, I am very familiar with options like Apigee, CA, Mashery, etc., so I cannot recommend an open source option and feel good about myself.
You could build your own registry, if you desire. Just be careful how you design it, as a "registry of all interface points" can lead to a system that becomes more tightly coupled.
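If you do roll your own, the core of a registry is small; the hard parts (health checks, TTLs, replication) are what the commercial products sell. A deliberately tiny sketch with made-up names:

```java
// Deliberately tiny sketch of a hand-rolled service registry. It only maps
// a service name to its live endpoints; real registries also handle health
// checks, leases/TTLs and replication.
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class ServiceRegistry {
    private final Map<String, List<String>> services = new ConcurrentHashMap<>();

    public void register(String name, String endpoint) {
        services.computeIfAbsent(name, k -> new CopyOnWriteArrayList<>())
                .add(endpoint);
    }

    public void deregister(String name, String endpoint) {
        List<String> endpoints = services.get(name);
        if (endpoints != null) {
            endpoints.remove(endpoint);
        }
    }

    public List<String> lookup(String name) {
        return services.getOrDefault(name, List.of());
    }
}
```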

SOA and APIs - are the APIs endpoint services, or do they require a distinct service?

I'm designing a RESTful service-oriented-architecture web application to make it scale as well as possible, putting different kinds of services on different machines (separating resource-intensive operations from other services).
I also want users to be able to access their data to make their own applications.
I'm not sure if I have to design these services to be open to the world, so that it's just a matter of making them listen on a web domain (like AWS), or create another service to handle API requests.
It makes sense to me to have secured, open web services, but it does add a lot of complexity to the architecture itself, because each service becomes a client that has to be recognized (trusted) by the other services in the same suite, just as I have to recognize 3rd party applications trying to access their own data.
Is this the right SOA approach? What I want to be sure of is that I'm not mixing up concepts and designing a wrong service-oriented architecture.
All services have CRUD interfaces so they can be queried using REST principles.
Depending on the nature of your system, it may be viable to have unsecured web services, so they can all talk to each other without the security overhead. To make the services available to 3rd parties, you could then use a Service Perimeter Guard as the only mechanism for accessing the services externally, and apply security at this layer. This has the benefit of providing consistent security across all of your services; however, if the perimeter is compromised, access to all of the services is obtained.
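To illustrate the perimeter idea, here is a sketch of a single externally-facing gateway that checks an API key and forwards to an unsecured internal service. It uses only the JDK's built-in HttpServer; the header name, key, ports and internal address are all made up for the example:

```java
// Sketch of a "perimeter guard": one public gateway authenticates requests,
// then forwards them to unsecured internal services. All names/addresses
// are placeholders; real deployments would use TLS and proper auth.
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PerimeterGuard {
    public static void main(String[] args) throws Exception {
        HttpClient internal = HttpClient.newHttpClient();
        HttpServer server = HttpServer.create(new InetSocketAddress(8443), 0);
        server.createContext("/", exchange -> {
            String key = exchange.getRequestHeaders().getFirst("X-Api-Key");
            if (!"expected-key".equals(key)) {          // stand-in for real auth
                exchange.sendResponseHeaders(401, -1);  // reject at the perimeter
                exchange.close();
                return;
            }
            HttpRequest forward = HttpRequest.newBuilder() // forward internally
                .uri(URI.create("http://10.0.0.5:8080" + exchange.getRequestURI()))
                .build();
            try {
                HttpResponse<byte[]> res =
                    internal.send(forward, HttpResponse.BodyHandlers.ofByteArray());
                byte[] body = res.body();
                exchange.sendResponseHeaders(res.statusCode(),
                                             body.length == 0 ? -1 : body.length);
                if (body.length > 0) {
                    exchange.getResponseBody().write(body);
                }
            } catch (InterruptedException e) {
                exchange.sendResponseHeaders(502, -1);  // upstream failure
            } finally {
                exchange.close();
            }
        });
        server.start();
    }
}
```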
This approach may not be viable for all services. For instance information that is considered "personal-in-confidence" (e.g., employee data such as home addresses, emergency contact details, health data, etc), will need to be secured so that unauthorised staff cannot access it.
Regarding your comment about putting different services on different machines: this will result in under-utilised resources on some machines and possibly over-utilised resources on others. To avoid this, deploy all services to all machines and use a load balancer. This provides more optimal resource usage and simplifies deployments (e.g., using Chef or Puppet), as all of the nodes are the same. As resource usage increases, you can then simply add more nodes. Similarly, if resource usage is low, you can remove nodes.
Regarding your last sentence, there is a whole lot more to REST than CRUD (such as HATEOAS).

Java EE App Design

I am writing a Java EE application which is supposed to consume SAP BAPIs/RFCs using JCo and expose them as web services to other downstream systems. The application needs to scale to huge volumes, on the order of tens of thousands of simultaneous users.
I would like to have suggestions on how to design this application so that it can meet the required volume.
It's good that you are thinking of scalability right from the design phase. Martin Abbott and Michael Fisher (of PayPal/eBay fame) lay out a framework called the AKF Scale Cube for scaling web apps. The main principle is to scale your app along 3 axes.
X-axis: cloning of services/data so that work can be easily distributed across instances. For a web app, this implies the ability to add more web servers (clustering).
Y-axis: separation of work by responsibility, action or data. For example, in your case, you could have different API calls on different servers.
Z-axis: separation of work by customer or requester. In your case you could say requesters from region 1 will access Server 1, requesters from region 2 will access Server 2, etc. (a small routing sketch follows after this list).
Design your system so that you can follow all 3 axes above if you need to. But when you initially deploy, you may not need to use all three methods.
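As a toy illustration of the Z-axis idea, here is a sketch that routes requesters to a server pool by region. Region names and pool addresses are placeholders:

```java
// Sketch of Z-axis (requester-based) partitioning from the AKF Scale Cube:
// route each request to a server pool based on the requester's region.
import java.util.Map;

public class RegionRouter {
    private static final Map<String, String> POOLS = Map.of(
        "region-1", "http://pool1.internal",
        "region-2", "http://pool2.internal");

    public static String routeFor(String region) {
        // Fall back to a default pool for unknown regions.
        return POOLS.getOrDefault(region, "http://pool-default.internal");
    }
}
```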
You can check out the book "The Art of Scalability" by the above authors: http://amzn.to/oSQGHb
A final answer is not possible, but based on the information you provided, this does not seem to be a problem as long as your application is stateless, i.e. it only forwards requests to SAP and returns the responses, maintaining no state at all. If it comes to, e.g., asynchronous message handling, temporary database storage or session state management, it becomes more complex. If there really is no need to maintain state, you can easily scale out your application to dozens of application servers without changing your application architecture.
In my experience this is not necessarily the case when it comes to SAP integration. Think of a shopping cart you want to fill based on products available in SAP: you may want to maintain this cart in your application and only submit the final cart to SAP. Otherwise you end up building an e-commerce application inside your backend.
Most important is that you reduce CPU utilization in your application, to avoid a "too-large" cluster, and reduce all kinds of I/O wherever possible, e.g. keep SOAP messages small to reduce network I/O.
Furthermore, I recommend designing a proper abstraction layer on top of JCo, including the JCO.PoolManager for connection pooling. You may also need a well-thought-out authorization concept if you work with a connection pool managed by only one technical user.
Just some (not well structured) thoughts...

Should cluster support be at the application or framework level?

Let's say you're starting a new web project that requires the website to run on an MVC framework on Mono. A couple of major requirements are that it has to scale easily, be stable, and work with multiple servers that may or may not be in the same place or even on the same local network.
The first thing I thought of was a sort of cluster communication between servers. Each server would act as a node, be its own standalone application, and query other nodes in a known list for session information and things like that.
But one major design question I have is: should this functionality be built into the supporting framework, or should the application handle the synchronization of the data?
Or am I just way off and this would never work?
Normally, clustering belongs to some kind of middleware layer, thus at your framework level. However, it can also be implemented at the application level.
It depends on your exact use case: whether you want load balancing, scalability, etc.
