Closed. This question is opinion-based. It is not currently accepting answers. Closed 3 years ago.
I work in a small organization (50 employees), and we need to manage business processes. Currently we use email, and it is terrible.
We could buy a big system like IBM BPM or Pega, or try to use Redmine. I have extensive experience with Redmine, but I cannot get a clear picture of what IBM BPM or Pega actually are.
Which would be more effective?
Could you suggest the pros and cons of each solution?
I know Redmine is a project management application, not a BPM system, but maybe it can sometimes be used as a very simple BPM system?
A business process typically involves multiple people (participants / roles) and systems. A BPMS, for instance:
manages the task lists of the process participants
orchestrates the control flow between the different manual and system tasks
manages the process context information throughout the process (data, documents, persistence, versioning - ideally all OOTB, without coding)
provides rollback and error compensation features
creates an audit trail which is important for compliance / processes that need to be auditable (QA, regulators)
provides dashboards for operational monitoring
and reports for analysis and reporting of KPIs, such as average process execution times or volumes grouped by different business data
allows you to model your business process in a graphical way, preferably in a standard notation (BPMN), which is much more user-friendly and a good basis for the communication between business and IT.
supports the evaluation of simple or complex business rules to determine process flow and work assignment with user-friendly means
allows versioning of the process definitions
may help with generating correspondence from process data and collecting required documents
...
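To make the orchestration, context, and audit-trail points above concrete, here is a minimal sketch of what a process engine does under the hood. This is a hypothetical toy, not the API of any real BPMS; all names are invented for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Process:
    # Ordered control flow: each step is (task_name, assigned_role)
    steps: list
    context: dict = field(default_factory=dict)       # process data carried between tasks
    audit_trail: list = field(default_factory=list)   # who did what, when -- for compliance
    current: int = 0

    def complete_current_task(self, user, **data):
        task, role = self.steps[self.current]
        self.context.update(data)                     # persist task output into the context
        self.audit_trail.append({
            "task": task, "role": role, "user": user,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        self.current += 1                             # advance the control flow

    @property
    def finished(self):
        return self.current >= len(self.steps)

# A two-step approval process spanning two roles
p = Process(steps=[("submit_request", "employee"), ("approve_request", "manager")])
p.complete_current_task("alice", amount=500)
p.complete_current_task("bob", approved=True)
```

A real BPMS adds everything this sketch omits: graphical BPMN modeling, persistence, versioning, compensation, dashboards, and reporting - which is exactly why building this yourself rarely pays off.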
If Redmine's basic workflow and integration features are sufficient to cover your requirements then you could (ab)use it as a WfMS. It will certainly ease the pain you currently have with just unstructured email communication. However, the list above may give you an idea of the actual extent of the usual business requirements. The Redmine approach is rather limited. I would use a more comprehensive tool from the start so you don't have to switch systems again after the expectations and requirements have increased.
If your motivation to use Redmine is driven only by cost, then you might as well consider one of the open-source BPMSs, such as Camunda BPM.
Some of these BPMS are also offered cloud-based, fully managed, available in minutes and with flexible consumption-based pricing and subscription models.
Closed. This question is opinion-based. It is not currently accepting answers. Closed 1 year ago.
What is the recommended approach for deciding which technology to use when creating microservices?
e.g., all 50 microservices running on the .NET platform, using SQL Server as the DB for each one of them
OR
Mixing and matching different technologies
e.g., 15 Spring-based microservices with MongoDB, 15 .NET with SQL Server, 20 Node.js microservices with Redis
Microservices with different technologies
I know this will again come down to which technologies the developers are familiar with, but all I am looking to know is which approach you would take if you had more than 50 microservices.
It really depends on the role of each microservice. If all of them are REST APIs with a pretty similar functionality (but completely different scope), then it would be helpful to use the same tech stack, because:
You can optimize your development workflows
You get more homogeneity across your entire system, which translates into a number of benefits down the road (identify/fix issues faster, optimize resource usage, etc).
However, if you have some microservices which have different constraints in terms of performance (or consistency, or any other vector), you can use a different tech stack just for that one. The architectural model of microservices allows that - it doesn't matter what's behind a microservice as long as it exposes an API that can be used by other microservices.
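That contract-first idea can be shown with a toy illustration (the service names and JSON shape here are invented): the caller depends only on the agreed contract, never on what implements it.

```python
import json

# Two hypothetical implementations of the same /orders/{id} contract --
# one might be .NET + SQL Server, the other Node.js + Redis.
# The caller cannot tell, and must not care.

def dotnet_orders_service(order_id):
    return json.dumps({"id": order_id, "status": "shipped"})

def node_orders_service(order_id):
    return json.dumps({"id": order_id, "status": "shipped"})

def get_order_status(service, order_id):
    # Depends only on the agreed JSON shape, not on the implementation stack
    return json.loads(service(order_id))["status"]

assert get_order_status(dotnet_orders_service, 42) == "shipped"
assert get_order_status(node_orders_service, 42) == "shipped"
```

As long as both implementations honor the same contract, you can swap one stack for another without touching any consumer.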
TL;DR - if you have strong reasons to use different tech stacks for some microservices, you should do it, but keep in mind that it doesn't come without a cost.
Closed. This question is opinion-based. It is not currently accepting answers. Closed 1 year ago.
After doing rigorous research and analysis, I finally arrived at a point that is confusing me: "Is microservices a design pattern or an architecture?"
Some say it's a pattern that evolved as a solution to monolithic applications, and hence a design pattern.
And some insist it is without doubt an architecture, one that speaks to development, management, scalability, autonomy, and being full-stack.
Any thoughts or suggestions to clarify this are welcome.
Microservices can best be described as an architectural style. Besides architectural decisions, the style also includes organizational and process-related considerations.
The architectural elements include:
Componentizing by business concern.
Strict decoupling in terms of persistence.
Well defined interfacing and communication.
Aim for smaller service sizes.
The organizational elements include:
Team organization around components (Conway's Law).
Team size limitations (two-pizza team).
The process relevant elements include:
Less centralized governance.
Smaller, more frequent releases.
Higher degree of freedom for technology decisions.
Product oriented development (agile, MVP, lean, etc).
For more details I recommend reading the articles from Martin Fowler.
I would describe it as a software architectural style that requires functional decomposition of an application.
Usually, it involves breaking a monolithic application down into multiple smaller services, each deployed in its own archive, and then composed into a single application using standard lightweight communication, such as REST over HTTP or some asynchronous communication (of course, at some point microservices are written from scratch).
The term “micro” in microservices is no indication of the lines of code in the service; it only indicates that its scope is limited to a single functionality.
Each service is fully autonomous and full-stack. Thus, changing a service's implementation has no impact on other services, as they communicate using well-defined interfaces. There are several advantages to such an application, but it's not a free lunch and requires a significant effort in NoOps.
It's important to stress that each service must have the following properties:
Single purpose — each service should focus on one single purpose and do it well.
Loose coupling — services know little about each other. A change to one service should not require changing the others. Communication between services should happen only through public service interfaces.
High cohesion — each service encapsulates all related behaviors and data together. If we need to build a new feature, all the changes should be localized to just one single service.
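The three properties above can be sketched in a few lines. This is an illustrative toy (all class and method names are invented), with each "service" shown as a class whose internal state is private and whose only surface is its public interface:

```python
class InventoryService:
    """Single purpose: stock levels, and nothing else (high cohesion --
    all stock-related behavior and data live here)."""

    def __init__(self):
        self._stock = {}  # private persistence: no other service touches this directly

    # --- public service interface ---
    def add_stock(self, sku, qty):
        self._stock[sku] = self._stock.get(sku, 0) + qty

    def reserve(self, sku, qty):
        if self._stock.get(sku, 0) < qty:
            return False
        self._stock[sku] -= qty
        return True


class OrderService:
    """Knows InventoryService only through its public interface (loose
    coupling): internal changes to inventory storage never ripple here."""

    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, sku, qty):
        return "confirmed" if self.inventory.reserve(sku, qty) else "rejected"
```

In a real deployment the method calls would be network calls against a published API, but the boundary discipline is the same: all communication goes through the public interface.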
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 6 years ago.
What are the advantages and disadvantages of microservices and monolithic architecture?
When should one choose microservice architecture over monolithic architecture?
This is a very important question because a few people get lured by all the buzz around microservices, and there are tradeoffs to consider. So, what are the benefits and challenges of microservices (when compared with the monolithic model)?
Benefits
Deployability: more agility to roll out new versions of a service due to shorter build+test+deploy cycles. Also, flexibility to employ service-specific security, replication, persistence, and monitoring configurations.
Reliability: a microservice fault affects that microservice alone and its consumers, whereas in the monolithic model a service fault may bring down the entire monolith.
Availability: rolling out a new version of a microservice requires little downtime, whereas rolling out a new version of a service in the monolith requires a typically slower restart of the entire monolith.
Scalability: each microservice can be scaled independently using pools, clusters, grids. The deployment characteristics make microservices a great match for the elasticity of the cloud.
Modifiability: more flexibility to use new frameworks, libraries, datasources, and other resources. Also, microservices are loosely-coupled, modular components only accessible via their contracts, and hence less prone to turn into a big ball of mud.
Management: the application development effort is divided across teams that are smaller and work more independently.
Design autonomy: the team has freedom to employ different technologies, frameworks, and patterns to design and implement each microservice, and can change and redeploy each microservice independently
Challenges
Deployability: there are far more deployment units, so there are more complex jobs, scripts, transfer areas, and config files to oversee for deployment. (For that reason, continuous delivery and DevOps are highly desirable for microservice projects.)
Performance: services more likely need to communicate over the network, whereas services within the monolith may benefit from local calls. (For that reason, the design should avoid "chatty" microservices.)
Modifiability: changes to the contract are more likely to impact consumers deployed elsewhere, whereas in the monolithic model consumers are more likely to be within the monolith and will be rolled out in lockstep with the service. Also, mechanisms to improve autonomy, such as eventual consistency and asynchronous calls, add complexity to microservices.
Testability: integration tests are harder to setup and run because they may span different microservices on different runtime environments.
Management: the effort to manage operations increases because there are more runtime components, log files, and point-to-point interactions to oversee.
Memory use: several classes and libraries are often replicated in each microservice bundle and the overall memory footprint increases.
Runtime autonomy: in the monolith the overall business logic is collocated. With microservices the logic is spread across microservices. So, all else being equal, it's more likely that a microservice will interact with other microservices over the network--that interaction decreases autonomy. If the interaction between microservices involves changing data, the need for a transactional boundary further compromises autonomy. The good news is that to avoid runtime autonomy issues, we can employ techniques such as eventual consistency, event-driven architecture, CQRS, cache (data replication), and aligning microservices with DDD bounded contexts. These techniques are not inherent to microservices, but have been suggested by virtually every author I've read.
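The event-driven technique mentioned above can be sketched in miniature. This is a hypothetical in-process stand-in for a real broker (Kafka, RabbitMQ, and the like); all names are invented for illustration:

```python
from collections import defaultdict

class EventBus:
    """In-process stand-in for a real message broker."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # In a real broker this would be durable and asynchronous
        for handler in self.subscribers[event_type]:
            handler(payload)

# The order service publishes an event and moves on; it never calls the
# shipping service directly, so its runtime autonomy is preserved.
bus = EventBus()
shipments = []
bus.subscribe("order_placed", lambda event: shipments.append(event["order_id"]))
bus.publish("order_placed", {"order_id": 1})
```

The publisher does not know, or care, who consumes the event, which is precisely how this style reduces the point-to-point coupling that erodes autonomy.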
Once we understand these tradeoffs, there's one more thing we need to know to answer the other question: which is better, microservices or monolith? We need to know the non-functional requirements (quality attribute requirements) of the application. Once you understand how important performance is vs. scalability, for example, you can weigh the tradeoffs and make an educated design decision.
While I'm relatively new to the microservices world, I'll try to answer your question as completely as possible.
When you use the microservices architecture, you get increased decoupling and separation of concerns, since you are literally splitting up your application.
As a result, your codebase will be easier to manage (each application is independent of the others in staying up and running). Therefore, if you do this right, it will be easier in the future to add new features to your application, whereas with a monolithic architecture this might become very hard to do if your application is big (and you can assume that at some point in time it will be).
Deploying the application is also easier, since you are building the independent microservices separately and deploying them on separate servers. This means that you can build and deploy services whenever you like, without having to rebuild the rest of your application.
Since the different services are small and deployed separately, it's obviously easier to scale them, with the advantage that you can scale specific services of your application (with a monolith you scale the complete "thing", even if it's just one specific part of the application that is getting an excessive load).
However, for applications that are not intended to become too big to manage, it is better to stay with the monolithic architecture, since microservices come with some serious difficulties. I stated that it is easier to deploy microservices, but this is only true in comparison with big monoliths. With microservices you have the added complexity of distributing the services to different servers at different locations, and you need to find a way to manage all of that. Building microservices will help you in the long run if your application gets big, but for smaller applications it is just easier to stay monolithic.
@Luxo is spot on. I'd just like to offer a slight variation and bring in the organizational perspective. Not only do microservices allow applications to be decoupled, but they may also help on an organizational level. The organization, for example, would be able to divide into multiple teams, where each may develop a set of microservices that the team provides.
For example, in larger shops like Amazon, you might have a personalization team, an ecommerce team, an infrastructure services team, etc. If you'd like to get into microservices, Amazon is a very good example. Jeff Bezos made it a mandate for teams to communicate with another team's services if they needed access to shared functionality. See here for a brief description.
In addition, engineers from Etsy and Netflix also had a small debate back in the day about microservices vs. monolith on Twitter. The debate is a little less technical but can offer a few insights as well.
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 5 years ago.
What would be a good way for .NET microservices to communicate with each other? Would peer-to-peer communication be better (for performance), using NetMQ (a port of ZeroMQ), or would it be better via a bus (NServiceBus or RhinoBus)?
Also would you break up your data access layer into microservices too?
-Indu
A Service Bus-based design allows your application to leverage the decoupling middleware design pattern. You have explicit control in terms of how each Microservice communicates. You can also throttle traffic. However, it really depends on your requirements. Please refer to this tutorial on building and testing Microservices in .NET (C#).
We are starting down this same path. Like all new hot new methodologies, you must be careful that you are actually achieving the benefits of using a Microservices approach.
We have evaluated Azure Service Fabric as one possibility. As a place to host your applications it seems quite promising. There is also an impressive API if you want your applications to integrate tightly with the environment. This integration could likely answer your questions. The caveat is that the API is still in flux (though improving) and documentation is scarce. It also feels a bit like vendor lock-in.
To keep things simple, we have started out by letting our microservices be simple stateless applications that communicate via REST. The endpoints are well-documented and contain a contract version number as part of the URI. We intend to introduce more sophisticated ways of interaction later as the need arises (i.e., performance).
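The version-in-the-URI idea could be sketched as follows. This is a hypothetical routing table, not the poster's actual code; the paths, handlers, and fields are invented for illustration:

```python
import re

def handle_v1(order_id):
    return {"id": order_id, "status": "shipped"}

def handle_v2(order_id):
    # v2 adds a field without breaking existing v1 consumers
    return {"id": order_id, "status": "shipped", "carrier": "UPS"}

# The contract version is part of the URI, so v1 consumers keep
# working while v2 evolves independently.
ROUTES = {
    ("GET", 1): handle_v1,
    ("GET", 2): handle_v2,
}

def dispatch(method, path):
    match = re.fullmatch(r"/api/v(\d+)/orders/(\d+)", path)
    if not match:
        return {"error": "not found"}
    handler = ROUTES.get((method, int(match.group(1))))
    return handler(int(match.group(2))) if handler else {"error": "unknown version"}
```

Keeping old versions routable until their consumers migrate is what lets each service deploy on its own schedule.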
To answer your question about "data access layer", my opinion would be that each microservice should persist state in whatever way is best for that service to do so. The actual storage is private to the microservices and other services may only use that data through its public API.
We've recently open-sourced our .NET microservices framework, which covers a couple of the patterns needed for microservices. I recommend at least taking a look to understand what is needed when you go into this kind of architecture.
https://github.com/gigya/microdot
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. Closed 7 years ago.
I've been looking for a discussion of ways to monitor and alert on production applications for a little while now, but haven't found much comprehensive information.
I'm in the process of converting a behemoth of an application into smaller microservices and thought now would be a great time to implement some better monitoring of this application. What are some ways, ideally without using paid applications, to monitor the health of the overall application, and individual microservices?
Some possibilities I've considered:
- Building a small application that periodically checks or receives heartbeats.
- Setting up Logstash with Kibana on OpenStack to monitor the various logs that the services emit.
Aaaannnddd that's about all I got.
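The heartbeat idea in the question could be sketched as below. This is a hypothetical minimal monitor (names and the timeout are invented); a real one would receive beats over HTTP and send alerts:

```python
import time

class HeartbeatMonitor:
    """Tracks the last heartbeat per service and flags the silent ones."""

    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self.last_seen = {}

    def beat(self, service):
        # Each service would call this periodically, e.g. via an HTTP POST
        self.last_seen[service] = time.monotonic()

    def unhealthy(self):
        # Any service silent for longer than the timeout is flagged
        now = time.monotonic()
        return [s for s, t in self.last_seen.items() if now - t > self.timeout]

mon = HeartbeatMonitor(timeout_seconds=30)
mon.beat("orders")
mon.beat("billing")
mon.last_seen["billing"] -= 60   # simulate a service that has gone silent
```

A checker loop would then poll `unhealthy()` and fire an email or pager alert for anything it returns.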
We're running a fairly large environment (hundreds of servers) which is microservices/docker based, multi-tier, highly available and completely elastic.
When it comes to monitoring and alerting, we're using two different tools:
Nagios for availability monitoring - it basically sends us an email if a service is down, lacks resources, or suffers from any other problem that prevents it from operating
ELK - we use it to find the root cause of a problem and to alert about issues and trends before they actually impact the application/business.
So when there is a significant issue, Nagios will alert and we will jump into the log analytics console to try to find the problem. In some cases, ELK will alert when issues start to build up, before they are visible in Nagios. That way we can prevent the issue from deteriorating. You can read more about creating your own ELK setup on AWS here - http://logz.io/blog/deploy-elk-production/
There are obviously many commercial tools for monitoring, alerting, and log analytics, but since you're looking for free/open-source tools, I've recommended these.
As a disclaimer, I'm the CEO and co-founder of Logz.io, which, amongst other things, offers enterprise ELK as a service.
There are two elements to monitoring:
Availability - will it work?
Performance - is it working properly?
Availability is easy: there are hundreds of tools that do synthetic transactions. You can use a service (I could provide a specific list, but there are so many out there, from Pingdom to Site24x7 to various other point solutions).
If you want to understand performance, have a look at the APM technologies. They range from simpler tracing products, which look at end-user and component-level performance, to more sophisticated tools that actually stitch the whole transaction path together, including the browser data.
Gartner has research on both of these markets (I wrote a lot of it before I left). I work for a company, AppDynamics, which does all of the above in a single product, including application availability and performance (mobile or web). We offer the solution as SaaS, or you can install it internally. Finally, we also pull the data together, including logs, into a backend.
You can build availability monitoring and log collection, and you can also collect client-side data and other telemetry you emit, but there is no good open-source APM tooling out there for true transaction-tracing technology. Also, how much time do you want to spend managing ELK, OpenTSDB, Graphite, StatsD, collectd, Nagios, etc. to get this done?
There are multiple ways to monitor your production servers. You can go with one of the free but limited server monitors like Nagios, which is hard to configure and not simple to work with. Or you can look at some of the players in this market, like Stackify or LogicMonitor, among several others. If you want additional capabilities like code-level monitoring, then you'll need to look at vendors that provide APM (application performance management), such as Stackify, New Relic, or AppDynamics. You'll find vast differences in price and features, so it really comes down to what your requirements are.