Jaeger integration effort in Node.js microservices

I am analysing, at a very high level, how much effort it would be to integrate Jaeger into Node.js microservices.
Does it require code changes or only deployment changes? And if code changes are required, are they needed only in the first service (i.e. the API gateway), or do all the services need to be changed?
I would really appreciate it if someone could give a rough idea of the tasks and effort involved.

Great question, and a very popular one too. In short, yes, code changes are required, and not just in one service but in every service that a request passes through. You need to instrument all of the services to get continuous traces that can tell you the story of a request as it travels through the system.
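For a rough sense of the per-service effort, here is a minimal sketch of what that instrumentation usually amounts to: initialise a tracer at startup and wrap request handlers in middleware that joins or starts a trace. It is written in Go with the jaeger-client-go and opentracing-go clients purely for illustration (this thread is about Node.js, where the jaeger-client npm package follows the same initialise-tracer-plus-middleware shape); the service name, route, and port are made up.

```go
package main

import (
	"io"
	"log"
	"net/http"

	opentracing "github.com/opentracing/opentracing-go"
	"github.com/opentracing/opentracing-go/ext"
	jaegercfg "github.com/uber/jaeger-client-go/config"
)

// initTracer creates a Jaeger tracer for one service; every service in the
// request path needs an equivalent few lines at startup.
func initTracer(service string) (opentracing.Tracer, io.Closer) {
	cfg := jaegercfg.Configuration{
		ServiceName: service,
		Sampler:     &jaegercfg.SamplerConfig{Type: "const", Param: 1},
		Reporter:    &jaegercfg.ReporterConfig{LogSpans: true},
	}
	tracer, closer, err := cfg.NewTracer()
	if err != nil {
		log.Fatal(err)
	}
	return tracer, closer
}

// traced is the per-service middleware: it continues the caller's trace if
// trace headers are present, or starts a new trace at the edge (API gateway).
func traced(tracer opentracing.Tracer, op string, next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		parent, _ := tracer.Extract(opentracing.HTTPHeaders,
			opentracing.HTTPHeadersCarrier(r.Header))
		span := tracer.StartSpan(op, ext.RPCServerOption(parent))
		defer span.Finish()
		next(w, r)
	}
}

func main() {
	tracer, closer := initTracer("api-gateway")
	defer closer.Close()
	http.Handle("/orders", traced(tracer, "list-orders", func(w http.ResponseWriter, r *http.Request) {
		// Calls to downstream services would Inject() the span into outgoing headers here.
		w.Write([]byte("ok"))
	}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Outgoing calls to downstream services then need the matching Inject step so that the trace context travels in the request headers, which is why every hop in the chain needs at least a small code change.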

Related

Micro Layers: Do not add functionality on top but simplify overall Dependencies

I was going through design principles but could not understand this principle (avoid micro layers) or what its significance would be. I tried to Google it but could not find any examples or explanation of this design principle. Could someone possibly explain it with an example, and what advantages it has in which scenarios? Doesn't layering localise changes and reduce the ripple effect of changes in software?
You’ve misinterpreted the way the principle is written. The author wasn’t trying to say “avoid micro services”. They were trying to say “When dealing with a micro service, don’t keep adding features or functionality to it. Instead, add an additional micro service to deliver the new functionality.”
The intent is to help you keep each micro service focused on a single task. This simplifies any system that depends on your service. And, it means you can more easily update your service — possibly by quickly rewriting it if you come up with a better performing design, for example. It’s hard to say “we’re going to rewrite our server” if that’s a six month task. It’s much easier when it’s only a one- or two-sprint task.
This thread needs a reboot for two reasons.
The Clean Code book that I have doesn't mention Micro Layers, so I'm not sure where the now "omni"-downloaded Clean Code cheat sheet got this from.
It would help if someone could point me to where in the Clean Code book I can read about this.
I am not fully satisfied that we're discussing Micro Layers in the scope of microservices. Bringing in an architectural pattern like microservices does not help discuss a topic from a book that was written at the base level of code, OOAD, and a bit of design.
Instead, for practical illustration purposes, a code-level example of the above statement is needed.

Conducting Chaos Engineering

I have 28 microservices, some of which communicate with each other. All of them are built with Spring Boot 2.x and they use their own resources (database, RabbitMQ, etc.). They are deployed in PCF.
I need to identify weaknesses in the overall system, which is why I turned to chaos engineering. As this is my first time, I could use some help on how to design the effort, what metrics I should collect, which tools I can use, how long I should run such tests, etc.
TIA
Done with my research. Posting it here so that someone else might find it useful.
A very good introduction, including how to start your first chaos experiment: https://www.gremlin.com/community/tutorials/chaos-engineering-the-history-principles-and-practice/
Good summary of people, tools, companies doing chaos experiments:
https://coggle.it/diagram/WiKceGDAwgABrmyv/t/chaos-engineeringcompanies%2C-people%2C-tools-practices/0a2d4968c94723e48e1256e67df51d0f4217027143924b23517832f53c536e62
Tools:
Spinnaker: https://www.spinnaker.io/. Netflix Chaos Monkey does not support deployments that are managed by anything other than Spinnaker. That makes it pretty hard to use Chaos Monkey from Netflix.
Chaos Monkey for Spring Boot. Very easy-to-follow instructions, and easy to turn on and off using a Spring profile.
Chaos Toolkit - https://docs.chaostoolkit.org/drivers/cloudfoundry/. This tool is particularly helpful in my situation since my applications are deployed in Cloud Foundry and this tool has a Cloud Foundry extension. Pretty elaborate, but easy-to-follow instructions. My preferred tool so far.
Chaos Lemur - https://content.pivotal.io/blog/chaos-lemur-testing-high-availability-on-pivotal-cloud-foundry. This tool has promise, but our network admin won't share AWS credentials for me to muck with the Pivotal cells.
You already found Chaos Toolkit, which is what I would recommend for the actual experimenting, especially when starting out.
The basis of chaos engineering is that you plan it out and analyse properly before starting. Then you need to work out what kind of information you will need in order to monitor the results of the experiment. What happens to service X if we inject latency (see the sketch below)? Monitor requests, errors, etc. Is there a functioning degraded state, or is it just dead?
Your question is very broad, and as such it is difficult to answer.
Make sure to try to prevent cascading failures whenever you start an experiment, and always have a stop condition that terminates the experiment and rolls everything back should something break really badly.
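To make the "inject latency and watch what happens" experiment above concrete, here is a toy latency-injecting reverse proxy, sketched in Go purely for illustration. In the asker's Spring Boot/PCF setup this is the kind of fault that Chaos Toolkit or Chaos Monkey for Spring Boot would introduce for you; the target address, delay, and probability below are made up.

```go
package main

import (
	"log"
	"math/rand"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	// The service under test; purely illustrative.
	target, err := url.Parse("http://localhost:8081")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(target)

	// Wrap the proxy so a fraction of requests get extra latency,
	// then watch request rates, errors, and timeouts in the callers.
	handler := func(w http.ResponseWriter, r *http.Request) {
		if rand.Float64() < 0.3 { // delay roughly 30% of requests
			time.Sleep(2 * time.Second)
		}
		proxy.ServeHTTP(w, r)
	}

	log.Fatal(http.ListenAndServe(":8080", http.HandlerFunc(handler)))
}
```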

How do I set up Google Cloud to do Behavior Driven Development (BDD) for a web app with an AngularJS client and a Java server?

I want to do Behavior Driven Development (BDD) on Google Cloud. I've written out my BDD stories and it looks like a basic web app will satisfy the requirements. I'd like to use AngularJS for writing client code and Java for the server because these are what I'm most familiar with. I'm also somewhat familiar with Maven.
How do I get started in a way that allows me to focus on writing the code?
1] Select a Google Cloud Service (App Engine, Compute Engine, Container Engine)?
2] Find and copy a Hello World example for any technology that also has as many of the other components as I want to use (JBehave for BDD, AngularJS, Java, a Google Cloud service above)? But which component's getting-started guide should I start with so that the other components integrate easily?
3] Find a suitable Maven archetype?
4] Investigate Spring.io? I've heard that Spring.io tries to make it easy for developers to focus on coding. But I don't know much else about it.
I'd like to spend as little time as possible setting up the project so that I can start doing Behavior Driven Development as quickly as possible. What I normally find happens with a project like this is I lock down one of the decisions about which technology to use, follow their getting started guide, but then run into a brick wall when I start integrating the other components.
How do I start this project so I can spend the least amount of time on non-coding aspects as possible?
Personally, I would not focus on where to execute the system. In my world, development is done on a local computer, CI is done somewhere else, and the final artifacts are executed somewhere. It must be possible to deploy to that somewhere from your CI build so you can verify that it actually works before deploying.
I would start by building something that works locally on my computer, then move forward. I would not spend any time searching for a Maven archetype; I would slowly build my project manually. This may sound like a slow way of doing it, but it gives me knowledge about what is happening. Any magic added is magic I have added, and therefore not really magic at all.
Where should you start, then? I suggest starting by cloning https://github.com/cucumber/cucumber-java-skeleton and extending it with the business functionality you need. If you need more technology, add it when you need it, not before. My experience is that I usually need less technical stuff than I could have imagined at the start, and definitely not the tooling I thought of before the project started.
One approach is to think front to back or back to front. Thinking front to back means starting with the user interface and, once that's built, creating the middle tier and finally the back end.
The problem with starting with the user interface though is that you can't really verify that it works without a backend. But I believe that's a problem Dependency Injection (DI) solves. You build the user interface and wherever it needs to call the next layer down in the stack (e.g. the server APIs), you instead give it a mock server to call. You can implement enough of the mock server to make the BDD stories pass for the user interface. When every BDD story passes for the user interface, you can then build the next layer down in the stack.
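The mock-the-next-layer-down idea can be illustrated with a small, language-agnostic sketch (shown in Go purely for illustration; in AngularJS the same role is typically played by an injected service or an $httpBackend mock from angular-mocks). The UI-level code depends only on an interface, and the BDD setup hands it a canned implementation; all names here are hypothetical.

```go
package main

import "fmt"

// OrdersAPI is the seam between the UI layer and the next layer down the
// stack. The UI code depends only on this interface, never on a real server.
type OrdersAPI interface {
	ListOrders(userID string) ([]string, error)
}

// mockOrders plays the role of the "mock server": it returns canned data so
// the UI-level BDD stories can pass before any backend exists.
type mockOrders struct{}

func (mockOrders) ListOrders(userID string) ([]string, error) {
	return []string{"order-1", "order-2"}, nil
}

// RenderOrdersPage stands in for a piece of UI logic; the test (or the real
// production wiring) decides which OrdersAPI implementation to inject.
func RenderOrdersPage(api OrdersAPI, userID string) (string, error) {
	orders, err := api.ListOrders(userID)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("You have %d orders", len(orders)), nil
}

func main() {
	page, _ := RenderOrdersPage(mockOrders{}, "user-123")
	fmt.Println(page)
}
```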
It should be possible to get started with developing the user interface by finding a Hello World example for the front-end technology (AngularJS). Look for a Hello World example that incorporates the two pieces necessary for testing: BDD and Dependency Injection. If you can't find one, just start with the AngularJS Hello World and get it running. Then, as a separate task, do a Hello World for BDD; once you have learned how to get BDD working on its own, it should be apparent how to get it working with the AngularJS project. Then do the same for Dependency Injection. Hopefully that gets you to a fully implemented AngularJS front-end that you can verify works with BDD and Dependency Injection.
Then you can work on the middle tier. You could set it up as a separate project, independent of the AngularJS project, so that you don't have to worry about the hassles of combining code from two layers of the stack into one project. Maven should be able to combine them, but Maven's documentation tends not to make that easy.
To develop the middle tier, find a Hello World example for developing a REST-based API server that runs on Google Cloud. You don't need the front or the back end at this point. The front end can be simulated by the BDD stories and the back end can be simulated by DI. Once all of the BDD stories pass for the middle layer, then you can build the back-end.
Developing the back end is similar to building the middle tier. Find a Hello World example for developing a database application that runs on Google Cloud. Most likely the relevant technology is the Google Datastore, using Objectify as an object-oriented wrapper. But let's call this layer the service layer, because there should be a layer of abstraction between the REST API and the datastore. The complication here is that it might not be straightforward to develop this layer independently of the middle tier, but try to do that if possible. In other words, create a separate project that's based on a Google Datastore Hello World example. Use BDD to simulate the middle tier. You might not need DI anymore because you're at the bottom of the stack and can just call the datastore directly. But DI might be useful anyway if it isn't possible to run the datastore on the local machine where you're developing.
Now that you have BDD stories functioning on all three layers (User Interface front-end, REST API middle-tier, service layer back-end), now start making it work on the production servers. I'm not confident this is the best approach though because it seems like a lot of complications could arise in this final step. Theoretically, if each layer passed the BDD tests, then it should all zip up together nicely. But integrating it all together might not go that smoothly. One strategy for making sure it goes smoothly is to map each layer onto its own dedicated production system. If each piece ran smoothly on a development machine, shouldn't it run smoothly on a production machine?
Well hopefully, but I'm hoping someone else will propose a better approach that allows someone to spend an even higher proportion of time on coding and a lower proportion of time on this DevOps stuff.

Looking for help on how to manage microservices in Golang

Currently, I deal with microservices on a daily basis at my 9-5. Almost everything that I touch is written in PHP, and since I am only a software engineer, SysOps manages everything to do with running the apps. I have a little familiarity with how the infrastructure and build pipeline are set up, but I am still not a SysOps or DevOps guy.
With that said, I love Golang, and for a side project I am creating a fairly large web application with a lot of moving parts. Writing and designing the code is easy, as I have learned a lot from my day job, but deploying and managing Golang web apps (as they are executables) is quite different from updating files for Apache to serve.
I have researched a lot on how I would build and deploy my microservice apps, but I keep thinking of more problems that will need to be solved along the way. I have tinkered with the idea of using Docker for all of this, but I would rather not have the added complexity of learning it and managing storage for all of the images, which could be large.
Is there a best practice or a good way to manage Golang applications after they have been deployed? I would need a way to keep track of all the microservice processes to be able to see if they are still up and to be able to stop them when a new build is going to be deployed.
As for the setup, just assume that all the microservices will run on the server, not in a container or in a VM. They will all need to be managed, but also be able to be acted on independently. Jenkins will be used for building and deploying. I will be using Consul for service discovery, possibly configuration, and most likely health checks on the services. I'm thinking of having each microservice register itself with Consul when it starts and deregister when it stops (see the sketch below).
Again, I am looking for a solution that is hopefully not just "Docker". I also had thoughts of creating a deploy service that manages the services (add and remove) as well as registering them in Consul. So if I cannot find a better solution, I might go that path. Any help is appreciated.
** Sorry if my question was confusing, but since a couple of people answered on the wrong topic, I will try to clarify. I don't need any help making the microservices, or any information about them; I brought that point up to explain why I need to ask my question. Basically, what I need is just the ability to manage the running Go processes of all my microservices, so I can do deployments and be able to stop and start processes to update the code. It is easier when you only have to worry about one app, but when you can have up to 10-15 different microservices they become harder to keep track of. After my own research, it seems that Supervisord is what I am looking for, but I'm not sure. That is the direction I am going in with this question. Thanks.
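Purely as a sketch of the register-on-start / deregister-on-stop idea mentioned in the question, here is roughly what it looks like with the github.com/hashicorp/consul/api client. The service name, ID, port, and health-check URL are made up, and the real service's HTTP server (including the /health endpoint) would be started alongside this.

```go
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"

	consul "github.com/hashicorp/consul/api"
)

func main() {
	// Connect to the local Consul agent (default address).
	client, err := consul.NewClient(consul.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Register this instance, including an HTTP health check that Consul
	// will poll; all names and ports are illustrative only.
	reg := &consul.AgentServiceRegistration{
		ID:   "orders-1",
		Name: "orders",
		Port: 8080,
		Check: &consul.AgentServiceCheck{
			HTTP:     "http://localhost:8080/health",
			Interval: "10s",
			Timeout:  "1s",
		},
	}
	if err := client.Agent().ServiceRegister(reg); err != nil {
		log.Fatal(err)
	}

	// The service itself would start its HTTP server here. On shutdown,
	// deregister so Consul stops routing traffic to this instance.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGINT, syscall.SIGTERM)
	<-stop
	if err := client.Agent().ServiceDeregister("orders-1"); err != nil {
		log.Println(err)
	}
}
```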
Golang is great to use for microservices, but I would say there is not a big difference between managing Go microservices and microservices written in other languages.
What I would say is Go-specific:
you don't need to install anything on the servers, since a Go service compiles to a single static binary
you can take advantage of the standard library net/rpc package and gob binary encoding instead of using a third-party solution (gorpc, Protocol Buffers, etc.); a sketch follows below
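A minimal sketch of that standard-library approach: net/rpc uses encoding/gob by default, so no extra wire format is needed. The Arith service, method, and addresses are made up, and in a real deployment the client code would live in a different microservice.

```go
package main

import (
	"log"
	"net"
	"net/http"
	"net/rpc"
)

// Args is the request payload; net/rpc encodes it with gob by default.
type Args struct{ A, B int }

type Arith int

// Multiply follows the net/rpc method convention:
// func (t *T) Method(args *Args, reply *Reply) error
func (t *Arith) Multiply(args *Args, reply *int) error {
	*reply = args.A * args.B
	return nil
}

func main() {
	// Server side: register the service and serve RPC over HTTP.
	rpc.Register(new(Arith))
	rpc.HandleHTTP()
	l, err := net.Listen("tcp", ":1234")
	if err != nil {
		log.Fatal(err)
	}
	go http.Serve(l, nil)

	// Client side: another microservice would dial the same address.
	client, err := rpc.DialHTTP("tcp", "localhost:1234")
	if err != nil {
		log.Fatal(err)
	}
	var product int
	if err := client.Call("Arith.Multiply", &Args{A: 6, B: 7}, &product); err != nil {
		log.Fatal(err)
	}
	log.Println("6*7 =", product)
}
```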
Other than that, you need to use your own judgement. There are plenty of ways of doing things in the microservices world; one day you will implement solution A, and when after three months you see that it is better to do B, do that.
There is a lot of reading about microservices on the internet. I will recommend two good resources: https://books.google.co.uk/books/about/Building_Microservices.html?id=RDl4BgAAQBAJ&source=kp_cover&redir_esc=y&hl=en
And an article: http://highscalability.com/blog/2014/4/8/microservices-not-a-free-lunch.html
Remember, microservices are not a silver bullet. They can often help make an application easier to maintain and grow, but on the other side they require a lot of additional work, consistency in specifying API contracts, and a strong DevOps culture.

Golang tour distributed pattern

According to this article, the App Engine front-end and the playground back-end communicate through RPC calls. Multiple App Engine front-end instances and playground back-end instances can be created to support scaling.
I am asking myself what pattern(s) could be used to load-balance work between front-end requests and back-end instances while keeping RPC.
One solution may be to use a single global work queue into which tasks are put with a 'Reply-To' header. This header should point to a per-front-end-instance queue where responses are put, much like the RPC schema from the RabbitMQ tutorial, with rpc_queue shared between the back-end instances.
I am not sure this would be a good way to do it, especially given that if the shared queue goes offline, the whole system fails (but how do you take care of this?).
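A minimal sketch of the front-end side of that pattern, using the github.com/streadway/amqp client: publish the task to the shared rpc_queue with ReplyTo and CorrelationId set, then wait for the matching reply on a private callback queue. The broker URL, correlation ID, and payload are made up; the back-end workers would consume rpc_queue and publish their results to the queue named in ReplyTo.

```go
package main

import (
	"log"

	"github.com/streadway/amqp"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}

	// Exclusive, auto-delete callback queue unique to this front-end instance.
	replyQ, err := ch.QueueDeclare("", false, true, true, false, nil)
	if err != nil {
		log.Fatal(err)
	}
	msgs, err := ch.Consume(replyQ.Name, "", true, true, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}

	corrID := "req-42" // in practice a random unique ID per request
	err = ch.Publish("", "rpc_queue", false, false, amqp.Publishing{
		ContentType:   "text/plain",
		CorrelationId: corrID,
		ReplyTo:       replyQ.Name,
		Body:          []byte("compile-and-run snippet 42"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Wait for the response whose CorrelationId matches our request.
	for msg := range msgs {
		if msg.CorrelationId == corrID {
			log.Printf("response: %s", msg.Body)
			break
		}
	}
}
```

The shared-queue outage concern is usually handled at the broker level (for example, a clustered RabbitMQ with mirrored or quorum queues) rather than in application code.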
Thank you.
As an answer and a follow-up to the comments I received on the first post, I developed Indenter, a small proof of concept based on the proposed idea of a service-discovery daemon (I use etcd instead of ZooKeeper for simplicity, however).
I wrote an article about it and released the code, in case it is of interest to someone one day:
Indenter: a scalable, fault-tolerant, distributed web service copying the go playground architecture.
