Helm chart - long and boring deployment - performance

I have a huge application with many microservices, developed by different teams.
There is a chart which includes tens of subcharts.
Deploying this chart takes 10 minutes just to render all the values before the deployment even starts.
As I mentioned, different teams have different styles when writing Helm charts.
So my question: are there any useful Helm performance tools which could show "problematic" places? For example, point out that charts have many tpl function calls, which lead to slowness...
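I'm not aware of a dedicated Helm profiler, but as a rough starting point you could time each subchart's render in isolation to find the slow ones. A minimal Node.js sketch (it assumes the umbrella chart's subcharts sit in charts/ and can render standalone with their default values; add -f/--set overrides as needed):

const { execSync } = require('child_process');
const fs = require('fs');
const path = require('path');

// Time `helm template` for each subchart to spot the slow renderers.
const chartsDir = path.resolve('charts');
for (const name of fs.readdirSync(chartsDir)) {
  const chartPath = path.join(chartsDir, name);
  if (!fs.statSync(chartPath).isDirectory()) continue;
  const start = Date.now();
  try {
    execSync('helm template ' + chartPath, { stdio: 'ignore' });
    console.log(name + ': ' + (Date.now() - start) + ' ms');
  } catch (err) {
    console.log(name + ': failed to render (' + err.message + ')');
  }
}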


How to decompose a monolith into microservices by business capability?

I have a monolith application that uses one database, and at my company we decided to rewrite the application and use microservices in the backend.
At this time, we decided NOT to split the database, because other applications and processes are using it, and it would take two years to change.
The difficult part of the process is decomposing the system and identifying the right microservices.
I'll try to explain our system by starting with the UI. Please read carefully, because I am trying to explain it in detail.
The system displays stock market data. Companies, funds, and fund managers in the market post daily reports about their activities: status, information for investors, and more.
The "breaking announcement" page displays a list of today's priority reports. Each row contains the subject from the PDF document (the report) that the company is publishing, and the company the report belongs to.
When the user clicks on a row, we redirect to the "report page", which contains the report details.
In the database, we have entities such as report, company, company_report, event, public_offers, upcoming_offering, and more.
So to get the list, we run an inner join query like this:
SELECT ... FROM report r
INNER JOIN company_report cr ON r.reportid = cr.reportid
INNER JOIN company c ON cr.company_cd = c.company_cd
WHERE ...
Most of our server endpoints don't change anything; they are only used to retrieve data.
So I'll create the endpoint /reports/breaking-announcement to get the list, and it returns an object like this:
[{ reportId, subject, createAt, updateAt, pdfUrl, company: { id, name } }]
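For instance, the mapping from the joined rows to that shape could look like this (the row field names are assumptions based on the columns above):

// Map flat rows from the inner-join query to the nested response shape.
function toBreakingAnnouncements(rows) {
  return rows.map(function (row) {
    return {
      reportId: row.reportid,
      subject: row.subject,
      createAt: row.createat,
      updateAt: row.updateat,
      pdfUrl: row.pdfurl,
      company: { id: row.company_cd, name: row.company_name },
    };
  });
}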
The "today's company reports" page acts like the "breaking announcement" page, but it displays all of today's reports (not just the priority ones).
Disclosures are reports.
On this page we also have a search to get reports by criteria, for example by company name. For that we have an autocomplete, so the user types the company name or id.
To support that, we think there should be an API endpoint /companies/by-autocomplete whose response will be [{ companyId, companyName, isCompany }].
The "etf" page is the same as before, but this time we display the funds' reports (not the companies' reports).
The list contains the fund name and the subject of the report. Each click on a row leads to the report details page (the same page as before).
On this page we have a search by criteria such as date-from/date-to and the name or id of the fund by autocomplete, with an endpoint /funds/by-autocomplete that returns [{ fundId, fundName, ... }].
The "foreign etf" page is the same as before: a list of items, each showing the fund name and the subject of the report. Only the query is different.
Okay, this was a very long description. Thank you for reading.
Now I want to identify the microservices for this application.
I ended up with:
Report microservice: responsible for getting and handling all the reports in the system. It has endpoints like getall, getbyid, getbreakingannouncement, getcompanytodayreports, getfunds, getforeignfunds. The report microservice will make a request to the company or funds microservice to join in the company data and build the response (see the sketch after this list).
Company microservice: handles all company data, with endpoints such as getall, getByIds (for the report service), getByAutocomplete.
Funds microservice: handles all funds data, with endpoints such as getall, getByIds (for the report service), getByAutocomplete.
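To make the cross-service join concrete, here is a minimal sketch of the composition step in the report service (Node 18+ with global fetch; the service URL, the by-ids path and the loadTodaysPriorityReports() helper are all hypothetical):

// Compose company data into the /reports/breaking-announcement response.
async function getBreakingAnnouncements() {
  // The report service's own data access, stubbed here as a hypothetical helper.
  const reports = await loadTodaysPriorityReports();
  // One batched getByIds call to the company service, not one call per row.
  const ids = [...new Set(reports.map(r => r.companyId))];
  const res = await fetch('http://company-service/companies/by-ids?ids=' + ids.join(','));
  const companies = await res.json(); // [{ id, name }, ...]
  const byId = new Map(companies.map(c => [c.id, c]));
  return reports.map(({ companyId, ...rest }) => ({
    ...rest,
    company: byId.get(companyId) || null, // tolerate a company this service can't resolve
  }));
}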
There are other services, such as a notification service or email service, but those are not business services. I want to split up my business logic into microservices in order to deploy and maintain them easily.
I'm not sure I'm decomposing this right. Maybe I am. But does it fit the microservice ideas? Does it fit the pattern "Decompose by business capability"? If not, what are the business capabilities in my system?
I don't think a query-oriented decomposition of your current application monolith will lead to a good microservice (MS) design. Two of your proposed microservices have the same endpoint query API, which suggests to me that you are viewing your first-generation microservices as just entity servers.
Your idea to perform joins in cross-MS query operations indicates these first-gen "microservices" are closely coupled and hence fall short of a genuine MS architecture.
One technique to verify an MS design is to ask yourself, "how would the whole system cope if one MS were unavailable for 3 minutes?". Solving that design challenge leads down a path towards decoupled, message-based interactions between the microservices. This in turn leads to interactions between microservices being expressed as business operations, where one MS raises messages that trigger a mutation in the state of another MS.
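For illustration only, a sketch of that style with an assumed (not real-library) message bus API: the company MS raises a business event, and the report MS mutates its own read-optimized copy, so its queries keep working even while the company MS is down.

// Company MS: raise a business event when a company changes (hypothetical bus API).
bus.publish('company.renamed', { companyId: 'C42', name: 'Acme Ltd' });

// Report MS: react by updating a local copy of the fields it needs for its responses.
bus.subscribe('company.renamed', function (event) {
  localCompanies.set(event.companyId, event.name); // a Map owned by the report MS
});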
Maybe you should reduce the scope of your MS ambitions and instead look at Schema Stitching in GraphQL. Reading between the lines of your question I think a more realistic first step towards a distributed system would be to create specialised query services with a GraphQL endpoint.
At this time, we decided NOT to split the database, because other applications and processes are using it, and it would take two years to change.
I'll stop you right here. In the general case, a shared database is a huge antipattern in a microservices architecture and should be avoided as much as possible. There are multiple problems here: less transparent dependencies between services, which can cause high coupling with all the consequences for development and deployment; an increasing chance of eventually ending up with a distributed monolith instead of microservices; etc.
Other applications and processes using the database should not stop you from moving away from it. There are ways to mitigate that: you just sync data between the services and the "legacy" database, asynchronously, using basically the same approaches you will use between your microservices, for example transaction log tailing with something like Debezium. This has its own costs, but I would argue it is usually better to pay them upfront than to keep paying ever bigger interest on the tech debt.
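As a rough sketch of the consuming side, assuming Debezium streams the legacy database's changes into Kafka (the topic name, the table and the upsertLocalCompany() helper are hypothetical; kafkajs is used for the consumer):

const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'report-service', brokers: ['kafka:9092'] });
const consumer = kafka.consumer({ groupId: 'report-service-company-sync' });

async function run() {
  await consumer.connect();
  // Debezium publishes one topic per table, typically <server>.<schema>.<table>.
  await consumer.subscribe({ topic: 'legacydb.public.company', fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value.toString());
      // Default Debezium envelope: { payload: { before, after, op, ... } }.
      const row = event.payload && event.payload.after;
      if (row) await upsertLocalCompany(row); // hypothetical write to this service's own store
    },
  });
}

run().catch(console.error);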
I ended up with: ...
I would argue that this split looks more like decomposition by subdomain than by business capability, which can actually be quite fine and also suits a microservices architecture.
Based on your description, I see at least the following business capabilities in your system:
View (manage?) breaking announcements
View (manage?) reports
Search (reports?)
Potentially "today's reports" and "Funds reports" can be considered as separate business capabilities.
I want to split up my business logic into microservices in order to deploy and maintain them easily.
Then again, I highly recommend reconsidering the decision not to move away from the shared database.
I'm not sure I'm decomposing this right
Without a whole overview of the system, including the amount of data, the data flows, the resources available for development, the competences in the teams, the amount of incoming new business requirements, potential vectors of change, etc., it is hard to actually tell.
P.S.
Note that despite its popularity, a microservices architecture is not always the right solution for a concrete project. If you have quite a small team and/or do not handle high loads or large amounts of data with various access patterns, then you potentially do not need microservices. You can still leverage a lot of the approaches used in microservices architecture, though.

Add/Merge 2 Stages within a single Stage in MS Dynamics (on-premise or online)

Is there a way to merge/show 2 stages within a bigger stage?
Some of our business processes have 2 stages within a certain macro-stage, and we don't want our sales team to think they are in a separate stage.
If there isn't a point & click customization way, can it be hard-coded?
There are no sub-stages in BPF. You can always have a sub-status dropdown on each stage, or split them into two different stages.
If you are talking about branching or concurrent processes on the same record, then it's a little different. In that case, you have to retrieve the other stages/processes and display them in a web resource to give a clearer picture.
I strongly believe you can fix this with simple user training :)

Sonarqube report in graph/chart for time (weekly/daily) and number of issues

I want to display a graphical report based on time (weekly/daily) which shows the status of static code analysis over a period of time. E.g. a vertical bar will denote the number of issues, and the horizontal axis will display the time in days/weeks/months. This will help to keep a watch on code quality over time (something like a scrum burndown chart). Can someone help me with this?
The 5.1.2 issues search web service includes parameters which let you query for issues by creation date. Your best bet is to use AJAX requests to get the data you need and build your widget from there.
Note that you can query iteratively across a date range using &p=1&ps=1 (page=1 and page size=1) to limit the volume of data flying around, and just mine the total value at the top level of the response to get your answer.
Here's an example on Nemo
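For instance, a small sketch of that iterative approach (the createdAfter/createdBefore parameter names are taken from the SonarQube web service docs; verify them against your server's version):

// Count issues created in a given window by reading only the top-level "total".
async function issueCount(baseUrl, from, to) {
  const url = baseUrl + '/api/issues/search'
    + '?createdAfter=' + from + '&createdBefore=' + to
    + '&p=1&ps=1'; // minimal page size: we only need the "total" field
  const response = await fetch(url);
  const data = await response.json();
  return data.total;
}

// Usage: one data point per day for the bar chart.
issueCount('https://sonar.example.com', '2015-06-01', '2015-06-02').then(console.log);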

EmberJS: Good separation of concerns for Models, Stores, Controllers, Views in a rather complex application?

I'm doing a fairly complex emberjs application, and tying it to a backend of APIs.
The API calls are not usually tied to any particular model, but may return objects of various types in different sections of the response, e.g. a call to Events API would return events, but also return media assets and individuals involved in those events.
I've just started with the project, and I'd like to get some expert guidance on how best to separate concerns to have a clean maintainable code base.
The way I am approaching this is:
Models: essentially handle records with their fields and other computed properties. However, models are not responsible for making requests.
e.g. Individual, Event, Picture, Post, etc.
Stores: they are essentially caches. For example, an eventStore would store all events received from the server so far (from possibly different requests) in an array, and also in a hash of events indexed by id (see the sketch after this list).
e.g. individualStore, eventStore, etc.
Controllers: they tie to a set of related API calls, e.g. an eventsController would be responsible for fetching events or a particular event, or creating a new event. They 'route' the response to different stores for later retrieval. They don't keep the response once it has been sent to the stores.
e.g. eventsController, userSearchController, etc.
Views: they are tied to a particular view. In general, my application may have several views in different places, e.g. a latestEventsView on the Dashboard in addition to a separate events page.
Templates: are what they are.
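A minimal sketch of what one of those stores could look like (Ember 1.x-era API; all names here are illustrative):

// A cache of events received so far, kept both as an array and as an id-indexed hash.
App.EventStore = Ember.Object.extend({
  init: function () {
    this._super();
    this.set('content', []); // all events received so far, across requests
    this.index = {};         // the same events, keyed by id
  },
  load: function (event) {
    if (!this.index[event.id]) {
      this.get('content').pushObject(event); // pushObject keeps bindings updated
      this.index[event.id] = event;
    }
  },
  loadMany: function (events) {
    events.forEach(this.load, this);
  }
});

App.eventStore = App.EventStore.create();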
Quite often, my templates need to be bound directly to the stores (e.g. a peopleView wants to list all the individuals in the individualStore, sorted in some order).
And sometimes they bind to a computed property:
alivePeople: function () { ... }.property('App.individualStore.content.@each'),
The various filtering and sorting options 'chosen' in the view should return different lists from the store. You can see my last question: what is the right emberjs way to switch between various filtering options?
Who should do this filtering, the views themselves or the stores?
Is this kind of binding across layers okay, or a code smell? Is the separation of concerns good, or am I missing something? Shouldn't the controllers be doing something more here? Should my views bind directly to stores?
Any particular special case of MVC more suited to my needs?
Update 17 April 2012
My research goes on, particularly from http://vimeo.com/user7276077/videos and http://jzajpt.github.com/2012/01/17/emberjs-app-architecture.html and http://jzajpt.github.com/2012/01/24/emberjs-app-architecture-data.html
Some issues with my design that I've figured out are:
controllers making requests (stores or models or something else should do it, not controllers)
statecharts are missing -- they are important for view-controller interactions (after some time you realize your interactions are no longer simple)
This is a good example of state charts in action: https://github.com/DominikGuzei/ember-routing-statechart-example
UPDATE 9th JANUARY 2013
Yes, it's been a long time, but this question has lately been getting lots of views, so I'd like to edit it to give people a sense of the current state of things.
Ember's landscape has changed a lot since this question was framed, and the new guides are much improved. EmberJS has come up with conventions (like Rails), and the MVC is much better defined now.
Anybody still confused should read all the guides, and watch some videos:
Seattle Ember.js Meetup
At the moment, I'm upgrading my application to Ember.js 1.0.0-pre2.
You should think of your application in terms of states. Have a look at this
Initially, only a route and a template are required to describe something and finally display it in the browser; that's what the new API of Emberjs tries to enforce. As your requirements get more elaborate, you can throw in a view, a controller or an object. Each, though, answers a specific need.
Consider a view if you need to handle any browser events or wrap any 3rd-party javascript lib you're using for animation, styling, etc.
Consider an object if you need to capture domain-specific information, most likely mimicking backend information.
A controller is merely a proxy for the domain object and may encapsulate logic that doesn't necessarily pertain to the object.
That's all there is to it. If you learn how to design your application in terms of states, the rest will fall into place, provided you're using the latest API and enforcing the rules I mentioned previously.
Since the release of Ember 1.0.0-pre4 with the new router implementation, I've seen two good references describing a standardised EmberJS app structure.
Those of you coming from Rails will find it fairly familiar.
https://github.com/trek/ember-todos-with-build-tools-tests-and-other-modern-conveniences
http://reefpoints.dockyard.com/ember/2013/01/07/building-an-ember-app-with-rails-api-part-1.html
The ember-rails project at https://github.com/emberjs/ember-rails includes a Rails generator for creating an EmberJS application directory structure that is essentially the same as the structure described in the two links above.
The EmberJS guides also now describe the new routing structure. http://emberjs.com/guides/
UPDATE 21/08/2013
If you are using Rails then the ember-rails gem is great. I've used it with a lot of success.
There are two efforts within the ember community to assist in providing a standardised ember application layout. Apparently they are going to be merged, but for now check out:
https://github.com/rpflorence/ember-tools
https://github.com/stefanpenner/ember-app-kit
See also http://addyosmani.com/largescalejavascript/. It is not about EmberJS in particular, but it's a great article that gives you an idea of how to write large-scale JavaScript apps.

Is there a web-based generic metric monitoring service?

First of all... I'm not looking for New Relic :-)
I'm looking for something very similar to Munin but hosted, and accessible (at least for pushing data) via an HTTP API. I want to monitor some custom metrics on a web-application and I'm looking for nice graphs, historical data, ease of setup and obviously the ability to use custom metrics that I'll measure and report myself. I'll be using it to monitor aspects of a NodeJS app, but the source of the data shouldn't matter much.
Try AlertGrid. It has an extremely simple API (via HTTP), with only one method, which is used to push any custom data. Then you build rules in a nice and simple editor to handle the incoming data (e.g. if metric1 > 10 and metric2 not in ['a','b','c'], then send an email to X and an SMS to Y) or to handle situations when an expected event did not occur within a timeframe (e.g. when no data has been received from X for 15 minutes, email Y and SMS Z). It can also automatically draw simple graphs from the received data (for integer and float fields). Everything is web-based.
Unlike Nagios, AlertGrid is extremely simple to use and integrate, and requires no installation. If you know how to make an HTTP request, then in 5 minutes you can have a working solution (examples and wrapper classes are available). I'm on the dev team, so if you have any questions, feel free to ask.
You can try Nagios or write a plugin for Munin.
I really like DataDog. I think it would check the boxes on all of your requirements. We've been using it to set up dashboards for a number of services at Mobify and so far it's been a pleasure to use.
I've recently released a NodeJS library that might be helpful: datadog-metrics.
Here's some example code:
var metrics = require('datadog-metrics');

// Report metrics to Datadog under the given host name and metric prefix.
metrics.init({ host: 'myhost', prefix: 'myapp.' });

function collectMemoryStats() {
  // Report the Node process's current memory usage as gauges.
  var memUsage = process.memoryUsage();
  metrics.gauge('memory.rss', memUsage.rss);
  metrics.gauge('memory.heapTotal', memUsage.heapTotal);
  metrics.gauge('memory.heapUsed', memUsage.heapUsed);
}

// Sample memory usage every 5 seconds.
setInterval(collectMemoryStats, 5000);
