I decided to move to a microservices architecture, dividing a project into multiple services and running those services on DCOS. It really gives a good story for project deployment and maintenance, but it makes the development process complex.
For a developer it was easy to run the application locally while implementation was in progress. Now the project is divided into multiple services that run on DCOS, which requires a good deal of configuration, so testing the application in the middle of implementation has become a nightmare for developers.
Is anyone here using DCOS with microservices? Can you please suggest what process you follow for internal development?
DCOS is just a tool for your deployment, so in order to get a better answer you'd have to share more about your technology stack in your question. But here are some general thoughts: there are different types/levels of testing, and there are different considerations for each.
On the unit level - This depends on what technology you use for implementing your services, and it is the reason why languages like Go have become more and more popular for server development. If you use Go, for example, you can easily run any service you are currently developing locally (not containerized) on the dev machine, and you can easily run the attached unit tests. You would either run dependent services locally or mock them up (probably depending on effort). You may also prefer asking each service team to provide mock services as part of their regular deliveries.
You will also require special environment settings and service configuration for the local environment. So, summarized, this approach requires you to have the means in place to run services locally, and depending on the implementation technologies you use it will be easier or harder.
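As an illustration, here is a minimal sketch of running a service under development next to a mocked dependency, assuming Docker Compose and WireMock for the stub; the service names and paths are hypothetical:

    # docker-compose.yml - local dev sketch; "orders" is the service being
    # developed, "users-mock" stands in for a dependent service
    version: "3"
    services:
      orders:
        build: .                                # build the local source tree
        ports:
          - "8080:8080"
        environment:
          USERS_URL: http://users-mock:8080     # point the dependency at the mock
      users-mock:
        image: wiremock/wiremock:latest
        volumes:
          - ./mocks/users:/home/wiremock        # canned mappings under ./mocks/users/mappings

With docker-compose up the developer gets the service plus its stubbed dependencies without touching the cluster.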
On the deployment/integration level - Set up a minimal cluster on the dev's local machine and/or use dedicated testing and staging clusters. This allows you to test the services including containerization, and in a more final deployment environment with dependencies. You would probably write special test clients for this type of test. This approach will also require you to have separate environment settings, configuration files, etc. for the different environments. Tools like Jaeger are becoming more popular here for helping to debug errors across multiple services.
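Keeping those per-environment settings apart can be as simple as one config file per environment; a small sketch with hypothetical keys:

    # config.local.yaml - used when running on the dev machine
    users_service_url: http://localhost:9001
    log_level: debug

    # config.staging.yaml - used in the dedicated staging cluster
    users_service_url: http://users.staging.internal:8080
    log_level: info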
Canary testing - Like the name suggests, you deploy your latest service version to a small portion of your production cluster in order to test it on a limited number of users first, before rolling it out to the masses. In this stage you can run user-level tests, and in fact your users become the testers, so it is to be used carefully. Some organizations prefer to have special beta-type users who are the only ones given access to those environments.
I want to be able to use https://web.dev/measure/ on our staging environments, but unfortunately our staging environments are heavily restricted, allowing only certain IPs to access them.
My question is: is it possible to whitelist https://web.dev/measure/ so that I can run these tests on our staging environments?
I believe the web.dev/measure tool uses the PageSpeed Insights API under the hood, so you could look at your server logs for the API accessing it and see if the IPs are consistent from run to run. If they are, then you could whitelist them.
Honestly, though, I'd recommend looking into Lighthouse CI, as it's designed for your use case. The web.dev/measure tool is designed for learning.
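Lighthouse CI runs from inside your own network (e.g. on a CI runner that can already reach staging), so no IP whitelisting of an external service is needed. A minimal sketch of a config, assuming a YAML rc file; the URL and threshold are placeholders:

    # lighthouserc.yml
    ci:
      collect:
        url:
          - https://staging.example.com/
        numberOfRuns: 3
      assert:
        assertions:
          "categories:performance":
            - warn
            - minScore: 0.8
      upload:
        target: temporary-public-storage

Running lhci autorun next to this file collects the runs and applies the assertions.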
The question is tied more to CI/CD practices and infrastructure. In the release process we follow, we club a set of microservice Docker image tags together as a single release, run it through the CI/CD pipeline, and promote that version.yaml to staging and production - a sort of mono-release pattern. The problem with this is that at some point we need to serialize: other changes have to wait until a mono-release is tested and tagged as ready for the next stage. A little more description regarding this here.
An alternative would be a micro-release strategy, where each microservice is released in parallel through the CI/CD pipeline to production. But would this mean that there are as many pipelines as there are microservices? An alternative could be a single pipeline, but with parallel test cases and a polling CD - sort of the GitOps way, which picks up the latest production-tagged Docker images.
There seems to be precious little information regarding the way microservices are released. Most discussions are about interface-level or API-level versioning and releasing, which is not really what I am after.
Assuming your organization develops services in a microservices architecture and deploys into a Kubernetes cluster, you should use some CD (continuous delivery) tool to release new microservices, or even to update a microservice.
Take a look at tools like Jenkins (https://www.jenkins.io) and DroneIO (https://drone.io)... Some organizations use Python scripts, or Go, and so on... I personally do not like that approach; I think the best solution is to pick a tool from the CNCF Landscape (https://landscape.cncf.io/zoom=150) in the Continuous Integration & Delivery group, as these are tools tested and used in the market.
An alternative would be a micro-release strategy, where each microservice is released in parallel through the CI/CD pipeline to production. But would this mean that there are as many pipelines as there are microservices?
That's OK; some tools have parameterized pipelines that build projects based on received parameters. But I think the best solution is to have one pipeline per service, plus some parameterized pipelines to deploy, run specific tests, archive assets and so on - like the micro-release strategy you describe.
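Since Drone was mentioned above, a minimal sketch of such a per-service pipeline might look like this; the image names, registry, and steps are placeholders, not a definitive setup:

    # .drone.yml for one service; each service repo carries its own copy
    kind: pipeline
    type: docker
    name: orders-service

    steps:
      - name: test
        image: golang:1.21
        commands:
          - go test ./...                     # unit tests gate the build

      - name: publish
        image: plugins/docker                 # Drone's Docker publish plugin
        settings:
          repo: registry.example.com/orders-service
          tags: ${DRONE_COMMIT_SHA:0:8}       # immutable per-commit tag

A separate parameterized deploy pipeline can then promote whichever tag passed testing.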
Agreed, there is little information about this out there. From all I understand, the approach of keeping one pipeline per service sounds reasonable. With a growing number of microservices you will run into several problems:
how do you keep track of changes in the configuration?
how do you test your services efficiently with regression and integration tests?
how do you efficiently set up environments?
The key here is most probably to make better use of parameterized environment variables that you then version in an efficient manner, which allows you to keep track of the changes. To achieve this, make sure to a) strictly parameterize all variables in the container configs and the code, and b) organize the config variables in a way that allows you to inject them at runtime. This is a piece of content that I found helpful in regard to my point a).
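As a sketch of point a), here is a Kubernetes Deployment fragment where nothing is hard-coded in the image and all settings are injected at runtime; the names are hypothetical:

    # deployment.yaml (fragment)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders-service
    spec:
      replicas: 2
      selector:
        matchLabels: { app: orders-service }
      template:
        metadata:
          labels: { app: orders-service }
        spec:
          containers:
            - name: orders-service
              image: registry.example.com/orders-service:1.4.2
              envFrom:
                - configMapRef:
                    name: orders-config          # per-environment values live here
              env:
                - name: DB_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: orders-secrets
                      key: db-password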
As for point b), this is slightly trickier. As it looks like you are using Kubernetes, you might just want to pick something like Helm charts. The question is how you structure your config files, and you have two options:
Use something like Kustomize, a configuration management tool that allows you to version to a certain degree following a GitOps approach (a minimal example follows below). This comes, in my biased opinion, with a good number of flaws: Git is ultimately not meant for configuration management, and it is hard to follow changes, build diffs, and identify the relevant history if you handle that number of services.
Use a Continuous Delivery API (I work for one, so make sure you question this sufficiently). CD APIs connect to all your systems: CI pipelines, clusters, image registries, external resources (DBs, file storage), internal resources (Elasticsearch, Redis), etc. They dynamically inject environment variables at run-time and create the manifests with each deployment, and they cache these as so-called "deployment sets". Deployment sets are the representation of the state of an environment at deployment time. This approach has several advantages: it allows you to share, version, diff, and relaunch any state that any service and application were in at any given point in time, and it provides a very clear and bulletproof audit of everything in the setup. QA environments or test-feature environments can be spun up through the API or UI, allowing for fully featured regression and integration tests.
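To make the Kustomize option above concrete, a typical layout versions a shared base plus one overlay per environment; a sketch with hypothetical paths and tags:

    # overlays/staging/kustomization.yaml - patches the shared base for staging
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - ../../base                  # the common manifests
    images:
      - name: registry.example.com/orders-service
        newTag: 1.4.2-rc1           # environment-specific image tag
    configMapGenerator:
      - name: orders-config
        literals:
          - LOG_LEVEL=debug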
I have worked in an organisation where we used both Puppet and Ansible for configuration management... but I always wondered why they would use both tools... What can Puppet do that Ansible cannot?
The only thought that came to my mind was:
- Puppet was used to check whether the system is in the desired state at regular intervals, while Ansible was used to deploy one-time things (code, scripts, packages, etc.)
Can someone please explain why an organisation would use both tools? Can regular config checks be done by Ansible?
Cheers
In the interest of full disclosure, I'm an upstream community contributing developer to Ansible but I will do my best to keep my response neutral.
I think this is largely a matter of opinion and you'll get varied results depending on who you talk to, but I think about it effectively like this:
Ansible is an automation tool and Puppet is a configuration management tool. I don't consider them to be direct competitors the way they seem to get compared by tech journalists, except for the fact that there's some overlap in their abilities to perform the functions you would want out of a configuration management tool: service/system state, configuration file templating, application lifecycle management, etc.
The main place where I see these tools in a completely different light is that Ansible performs automation of tasks, and those tasks can be one of many "types" of things that you don't really expect from a configuration management tool, such as IaaS provisioning (AWS, GCE, Azure, RAX, Linode, etc.), physical network configuration (Cisco IOS/ASA, JunOS, Arista, VyOS, Netscaler, etc.), virtual machine creation/management, physical load balancer configuration (F5 BigIP), and the list goes on. Effectively, Ansible is your "automation glue" to create and automate a process that you and your team might have otherwise had to do by hand. As a tool it gets compared to things like Puppet, Chef, and SaltStack because one of the many "types" of task you would automate more or less adds up to configuration management.
On the flip side, though, configuration management tools such as Puppet generally have a daemon running on the nodes, which needs to be provisioned/bootstrapped (maybe with Ansible), and which has its advantages and disadvantages (which I won't debate here; it's largely out of scope). One thing that daemon provides you is continuous eventual consistency. You can set configuration authoritatively on the Puppet Master, and then the agent will maintain that state on the systems and provide reporting when it has to change something, which can be wired up to alert monitoring to notify you when something's wrong. While Ansible will also report when something needed changing, it only does this when you run the Ansible playbook. It's a push model, not a pull model (nor is it a continuously running daemon that will enforce system state). This has its advantages for reporting and the like. I will note that something like Ansible Tower/AWX can more or less emulate this functionality, but it's not a "baked in" feature. Just something to keep in mind.
Ultimately, I think it boils down to a matter of familiarity with the technologies, the desired feature set, and whether you have a pre-existing investment (both time and money) in a toolchain. If you have been using Puppet for 5 years, there's no real motivation to fork-lift replace it with something else when you can use Ansible to augment it (there's even a puppet module in Ansible) and allow each to play nicely with the other, getting the features you want from both. However, if you're starting from scratch, then you may consider actually doing a pros/cons or feature comparison for what you really want out of the tool(s), to find out whether it's worth the investment of picking up two tools from scratch or finding one that can fulfill all your needs. While I'm biased towards Ansible in this regard, the choice ultimately lies with the person who's going to have to use the utility to maintain the infrastructure.
A good example of the hybrid approach: I know of a few companies that use Puppet for configuration management and Ansible for the software lifecycle release process, where one of the tasks in their playbooks literally calls the puppet module to bring all the systems into configuration consistency. The Ansible component here is to automate/orchestrate between the various systems. The basic outline of the process is this: start by removing a group of hosts from the load balancer, ensure database connections have stopped, perform upgrades/migrations, run Puppet for configuration/state consistency, and then bring things back online in whatever order they've deemed appropriate. This all happens from a single command (or the click of a button in Tower/AWX).
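A sketch of what such an orchestration playbook could look like; the module choices (an HAProxy load balancer) and the host/service names are my assumptions, not those companies' actual playbooks:

    # release.yml - rolling release that hands state management back to Puppet
    - hosts: app_servers
      serial: 2                          # roll through the fleet two hosts at a time
      tasks:
        - name: Drain host from the load balancer
          haproxy:
            state: disabled
            host: "{{ inventory_hostname }}"
            backend: app
          delegate_to: "{{ lb_host }}"

        - name: Stop the application so database connections close
          service:
            name: myapp
            state: stopped

        - name: Perform the upgrade
          package:
            name: myapp
            state: latest

        - name: Run Puppet for configuration/state consistency
          puppet:                        # the puppet module mentioned above

        - name: Start the application
          service:
            name: myapp
            state: started

        - name: Re-enable host in the load balancer
          haproxy:
            state: enabled
            host: "{{ inventory_hostname }}"
            backend: app
          delegate_to: "{{ lb_host }}"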
Anyhoo, I know that was kind of long winded but hopefully it was helpful.
Both Marathon and Aurora are built on Mesos and supposedly engineered for running long-running services. My questions are:
What are their differences? I have struggled to find any good explanations of their key differences.
Do these frameworks run anything that runs on Linux? For Marathon they state that it can run anything that "is executable in a shell" but this is sort of vague :)
Thanks!
Disclaimer: I am the VP of Apache Aurora, and have been the tech lead of the Aurora team at Twitter for ~5 years. My likely-biased opinions are my own and do not necessarily represent those of Twitter or the ASF.
Do these frameworks run anything that runs on Linux? For Marathon they state that it can run anything that "is executable in a shell" but this is sort of vague :)
Essentially, yes. Ultimately these systems are sophisticated machinery to execute shell code somewhere in a cluster :-)
What are their differences? I have struggled to find any good explanations of their key differences.
Aurora and Marathon do indeed offer similar feature sets, both being classified as "service schedulers". In other words, you hand us instructions for how to run your application servers, and we do our best to keep them up.
I'll offer some differences in broad strokes. When it comes to the shortcomings mentioned for each, I think it's safe to say that the communities are aware of them and intend to fix them.
Ease of use
Aurora is not easy to install. It will likely feel like you are trailblazing while setting it up. It exposes a Thrift API, which means you'll need a Thrift client to interact with it programmatically (a REST-like API is coming, but is vaporware at the moment), or you can use our command line client. Aurora has a DSL for configuration which can be daunting, but it allows you to easily share templates and common patterns as you use the system more.
Marathon, on the other hand, helps you to run 'Hello World' as quickly as possible. It has great docs for doing this in many environments, and there's little overhead to getting going. It has a REST API, making it easier to adapt to custom tools. It uses JSON for configuration, which is easy to start with but more prone to cargo culting.
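For a flavor of that JSON, a minimal Marathon app definition might look like the following (the command is just a placeholder); you POST it to Marathon's /v2/apps endpoint:

    {
      "id": "/hello-world",
      "cmd": "python3 -m http.server $PORT0",
      "cpus": 0.1,
      "mem": 64,
      "instances": 2,
      "ports": [0]
    }

Marathon assigns the host port at runtime and exposes it to the task as $PORT0.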
Targeted use cases
Aurora has always been designed to handle a large engineering organization. The clusters at Twitter have tens of thousands of machines and hundreds of engineers using them, and it is critical to Twitter's business. As a result, we take our requirements for scale, stability, and security very seriously, and we make sure to endorse only features that we believe are trustworthy at scale in production (for example, we have our Docker support labeled as beta because of known issues with Docker itself and the Mesos-Docker integration). We also have features like preemption that make our clusters suitable for mixing business-critical services with prototypes and experiments.
I can't make any claim for or against Marathon's scalability. On the feature front, Marathon has built out features quickly, but this can feel bleeding-edge in practice (Docker support is a good example). This is not always due to Marathon itself, but also to layers further down the stack. Marathon does not provide preemption.
Ownership
To some, ownership and governance of a project is important. I feel that in practice it does not define the openness of a project, but for some people/companies the legal fine print can be a deal-breaker.
Marathon is owned by a company (Mesosphere)
To some this is beneficial, to others it is not. It means that you can pay for support and features. It also means that there is something to be sold, and the project direction is ultimately decided by Mesosphere's interests.
Aurora is owned by the Apache Software Foundation
This means it is subject to the governance model of the ASF, driven by the community. Aurora does not have paying customers, and there is not currently a software shop that you can pay for development.
tl;dr If you are just getting your feet wet with running services on Mesos, I would suggest Marathon as your first port of call. It will be easier for you to get running and poke around the ecosystem. If you are forming the 'private cloud strategy' for a company, I suggest seriously considering Aurora, as it is proven and specifically designed for that.
So I've been evaluating both and this is my summary.
Aurora
[+] also handles recurring jobs
[+] finer grained, extensive file-based configuration
[+] has namespaces so multiple environments can co-exist
[-] read-only UI, no official API
[~] file-based configuration and CLI-based execution bring overhead (which can be justified by the more extensive feature set)
Marathon
[+] very easy to setup and use
[+] a UI that provides control, and an extensive API (even with features missing from the UI at the moment)
[+] event bus to listen in on api calls
[-] handles only long-running jobs
[-] does not have separate deployment-run-cleanup steps; if necessary, these need to be combined in a script or one-liner
Even though Aurora has better capabilities, I prefer Marathon due to Aurora's complexity/overhead and its lack of a UI (for control) and API.
I have more experience with Marathon.
Ideological:
Marathon is a relatively tested product that is used in production at AirBnB. Aurora is an early Apache project (so YMMV).
Both are open source and active. Feel free to contribute pull requests or file issues!
Technical:
Marathon doesn't schedule batch tasks or cron jobs
Marathon has a friendly UI and better health indicators (in 0.8.x)
In regard to your second question: you can run any command or Docker container, and Mesos will do the resource isolation for you. If you have 50% CentOS nodes and 50% Ubuntu nodes and you run a task that executes apt-get, the task will have a 50% chance of failure. Mesos and Marathon have no awareness of the actual machines.
Disclaimer: I don't have hands-on experience with Aurora, only with Marathon.
ad Q1: In a nutshell, Apache Aurora is capable of doing what Marathon + Chronos can provide, that is, scheduling both long-running services and recurring (batch) jobs; see also the Aurora user guide.
ad Q2: Yes, anything. Currently based on cgroups and Docker but hey, you can roll your own.
I want to create a production management system to be used by a small manufacturing firm. The system will make it possible to document the different stages in the manufacturing of equipment. The requirements are as follows:
1. Non-browser-based interface. I need something Swing- or AWT-based. While I understand the convenience of implementing a browser-based solution, the business owner insists on a non-browser interface.
2. Accessed from multiple systems. These systems will perform CRUD operations against the central system (thin clients?).
3. The application will not have more than 3 concurrent users.
I need some advice regarding what would be a good path for this kind of application. Currently I'm thinking of using Griffon with RMI. However, I don't have much development experience. I have read a bit about Apache River (Jini) too. Would it be a good idea to use Griffon with RMI?
Please provide some advice. Thanks.
EDIT: After some reading, I've decided to use more mainstream frameworks, so Griffon is not an option. How about Jini (Apache River) or OSGi (Apache Felix)?
Hmm, how can a project that recently moved out of the incubation phase be considered mainstream versus a project that's been used in production for more than 3 years now? Anyway, Apache River gives you access to Jini technology and nothing more, meaning you can't achieve item #1 of your list with River alone. River may use RMI for accessing remote resources; however, you can also use RMI directly, or try out DRMI, Kryonet, Hessian/Burlap, Spring's HTTP Invoker, Protocol Buffers, Avro/Thrift, REST, SOAP, ZMQ, and many more.
Even if you choose one of these options and/or River, you still have to define the following things:
application structure (file structure and runtime behavior)
build setup
dependency management
testing profiles
packaging
deployment strategies
These things and more are what Griffon brings to the table. As you may have noticed, the framework allows you to build up applications by adding plugins, reducing the amount of time you must allot to hunting down dependencies, setting up bootstrap mechanisms, and getting things done. On the subject of remoting technologies, have a look at the different options Griffon has to offer: http://artifacts.griffon-framework.org/tags/plugin/remoting
Even more, you can also combine OpenDolphin (http://open-dolphin.org/dolphin_website/Home.html) with Griffon. There's even an example application in the OpenDolphin repository showing a full client-server application (built with Griffon, Grails and OpenDolphin): https://github.com/canoo/open-dolphin/tree/master/dolphin-griffon-crud
With what seems to be your current understanding of the problem, I would not recommend OSGi, especially for a small manufacturing firm (possible maintenance issues, depending on the personnel).
The main reason why I wouldn't advocate Jini or OSGi in your case is because of what you said:
However, I don't have much development experience.
Jini (Apache River) is a viable option as long as you fully understand the concepts of the LookupService and service registrations, etc. There's tons of RMI going on here, with possible firewall implications...
OSGi is not difficult, but you may have issues deciding how to structure your applications, as well as how they interact with services, etc.
Try to stick to the simplest approach that you can handle for the implementation (keeping a flexible design in mind): make it work, and then improve it.
There are simple web services options such as Spring Remoting (over HTTP/HTTPS, for example), unless Spring introduces too many concepts and headaches for your app.