Application Insights vs Elastic (ELK) - elasticsearch

Either I am really bad at searching, or there is no detailed comparison between App Insights and the ELK stack?
All the monitoring will be used for a simple Web API; there will be tons of endpoints, but user traffic should not be too high.
So my question: are there any general points/differences to consider when choosing between ELK and App Insights? I've personally never had a chance to set up either, but before setting up a test environment it would be nice to know in advance what to expect and look for.

I'm from the App Insights team. I think the link provided by #rickvdbosch in a comment gives quite a good perspective. It is over a year old at this point, so some items regarding App Insights have evolved since then.
I think App Insights and ELK are quite different offerings. The former is a managed offering (you can set it up within a couple of minutes), focused on a very broad range of out-of-the-box experiences (collecting incoming/outgoing requests and exceptions, smart alerts, availability monitoring, analytics, live metrics, an application map, and end-to-end transactions across apps).
My understanding of ELK is that it has very powerful UI visualizations and dashboards (and there are adapters for Kibana to work with Azure Monitor). For scenarios where you need to store a lot of data (highly loaded apps with adaptive sampling still store a limited amount of data), an ELK solution might be cheaper to run.

The final decision was to use ELK, as the servers already have all the configuration in place (another team uses it) and, mainly, because the logging will need a lot of customization.

Related

Laravel Containerization or Serverless

Just looking for opinions on which deployment model is more suitable for Laravel apps. We currently deploy to EC2 and have recently been looking at modernising our approach.
Discussing it with the dev teams, there seems to be a real divide over which technology to use. While I can see the pros and cons of each approach, I am edging towards a containerized deployment, as it provides a more comfortable dev environment, and tech like ECS Fargate can remove a lot of the infrastructure maintenance overhead.
Serverless, while it may be quicker to scale, seems to have certain limitations in terms of response size. Some of our APIs have pretty huge response bodies (a problem for another day). API Gateway also has some limitations on the timeout which, I think, could cause issues for us under heavy load.
Does anyone recommend one deployment method over the other? What experiences have you had? Anything to keep an eye out for?

Graphite vs Elastic Metrics Beat for Windows Performance Counters

I work with a web API that makes heavy use of Windows Performance Counters. Until now this data has not been collected in a good tool.
I would like to start making this data available in a place where we can create dashboards etc.
We already have an Elasticsearch cluster. I am only an end user when it comes to Elastic; I do not have administrator knowledge. But I have heard about Metricbeat, which, as far as I can understand, is intended for exactly this kind of data, including Windows Performance Counters.
But I have also worked with Graphite and Grafana for these types of data in the past.
I have also heard that you can use Grafana as a dashboard tool on top of the data Metricbeat collects. Is that correct?
I don't know what the best choice is, and I haven't been able to find comparisons on this on the web, so I am hoping someone here can enlighten me.
I also have a sneaking suspicion that I might have misunderstood something, since I cannot find comparisons out there.
Thanks
This is quite a subjective question; you'll get different answers depending on who you ask.
Anyway, there are three parts:
1) collection of metrics
2) storage of metrics
3) display of metrics
Metricbeat is a collector. I do not know which collectors are suitable for Windows; popular collectors are collectd, Telegraf, Beats, and Diamond. You basically need to find one which collects the data you are interested in. If you are interested in application metrics, you can also plug a library into your application. A popular choice for Java is Dropwizard Metrics.
Then you'll need some database to store those metrics in. For data storage you can use Graphite, InfluxDB, Elastic, etc, whatever suits your requirements.
And then for displaying the metrics you can basically choose between Grafana and Kibana; I think InfluxData has something as well.
If you don't have any specific requirements most of the mentioned tools will do fine.

Passively Logging React App Performance in Production

I'm wondering if there are any utilities/patterns/paradigms/standards for monitoring React applications in production.
I've seen a lot of documentation about React performance debugging that recommends the Chrome DevTools (which are great, but aren't a passive way to monitor end-user performance).
How could I log data to know how long users are waiting for components to mount or render?
The only thing I've thought of so far is creating a Loggable[Pure]Component that extends React.[Pure]Component, whose constructor, componentWillMount/Update, and componentDidMount/Update methods log render/mount times to a server. Then, components I want to monitor can extend these components and, if need be, call super() in the lifecycle methods before doing their own work. To know specifically which component each metric belongs to, I'd have to expose a method on the Loggable[Pure]Component class that does something silly like setUniqueId, and each derived class would have to call it in its constructor.
This all seems terrible and I'm very much hoping there are some things people out there have implemented, but I haven't found anything thus far.
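The timing part of the pattern described above can be factored out of the component hierarchy entirely: mark when a component begins mounting and report the elapsed time when componentDidMount fires. This is a hypothetical sketch (the component name, the reporter, and the injectable clock are all made up for illustration, the clock so the sketch stays testable):

```javascript
// Minimal mount-timing helper. A component calls start() in its
// constructor and end() in componentDidMount; the elapsed time is
// handed to a reporter (e.g. a function that POSTs to your server).
function createMountTimer(reporter, clock = () => Date.now()) {
  const starts = new Map();
  return {
    start(componentId) {
      starts.set(componentId, clock());
    },
    end(componentId) {
      const began = starts.get(componentId);
      if (began === undefined) return null; // end() without start()
      starts.delete(componentId);
      const elapsedMs = clock() - began;
      reporter({ componentId, elapsedMs });
      return elapsedMs;
    },
  };
}

// Usage sketch with a fake clock that advances 100 ms per call:
let t = 0;
const reported = [];
const timer = createMountTimer((m) => reported.push(m), () => (t += 100));

timer.start('UserList'); // would go in the constructor
timer.end('UserList');   // would go in componentDidMount
console.log(reported[0]); // { componentId: 'UserList', elapsedMs: 100 }
```

In a real app the reporter would batch measurements and flush them with `navigator.sendBeacon` or a plain fetch, so logging never blocks rendering.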
I would have a look at some APM tools; they handle frontend monitoring as well as backend monitoring. They all support React, and folks use these all the time for that use case. It really depends on your goals for the monitoring: are you doing this for fun? Do you have a startup? Are you working for a large enterprise? There are three major players in this market.
AppDynamics - Enterprise APM, handles the most complex apps. Unified product offering delivered SaaS or on-premises. Has deep database, server, and other monitoring.
Dynatrace - Enterprise APM, handles complex apps well. Fragmented portfolio, but the SaaS product is good. The SaaS product has limited depth in some ways. Handles server and cloud infrastructure monitoring well.
New Relic - Easy and cheap(er than others), not as in-depth as some other options. Tends to be popular with small companies. Does a good job monitoring cloud infrastructure services.
These products all do what you are looking for, but it depends on your goals with the data and how you plan to analyze it.
If you want something free and less functional, there are ways to do this with open source, but you'll have to stand up and manage a pretty complex stack. Here is one option.
Check out boomerang, which can log/extract the metrics you are looking for. It doesn't "understand" React, but it should work. This data can be posted to many different systems; the best suited is likely the ELK stack (open-source log analytics, and more). Here is one of several examples which marries the two together to provide analysis of browser performance: https://github.com/naukri-engineering/NewMonk

Monitoring & Alerting on production applications [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 7 years ago.
I've been looking for a discussion on ways to monitor and alert on production applications for a while now, but haven't found much comprehensive information.
I'm in the process of converting a behemoth of an application into smaller microservices and thought now would be a great time to implement some better monitoring of this application. What are some ways, ideally without using paid applications, to monitor the health of the overall application, and individual microservices?
Some possibilities I've considered:
- Building a small application that periodically checks or receives heartbeats.
- Setting up Logstash with Kibana on OpenStack to monitor the various logs that the services spit out.
Aaaannnddd that's about all I got.
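For what it's worth, the heartbeat idea above reduces to a small piece of logic: each service reports a timestamp periodically, and a checker flags any service whose last heartbeat is older than some timeout. A sketch, where the service names and the 30-second timeout are arbitrary choices:

```javascript
// Given a map of service -> last heartbeat time (ms since epoch),
// return the names of services whose heartbeat is older than timeoutMs.
function findStaleServices(lastSeen, nowMs, timeoutMs = 30000) {
  return Object.entries(lastSeen)
    .filter(([, ts]) => nowMs - ts > timeoutMs)
    .map(([name]) => name)
    .sort();
}

// Example: "orders" last reported 45 s ago, "users" 5 s ago.
const now = 1_000_000;
const heartbeats = { orders: now - 45000, users: now - 5000 };
console.log(findStaleServices(heartbeats, now)); // [ 'orders' ]
```

The checker itself then just runs this on an interval and pages/emails when the returned list is non-empty; the hard part in practice is hosting the checker somewhere that is more reliable than the services it watches.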
We're running a fairly large environment (hundreds of servers) which is microservices/Docker-based, multi-tier, highly available, and completely elastic.
When it comes to monitoring and alerting, we're using two different tools:
Nagios for availability monitoring - it basically sends us an email if a service is down, lacks resources, or suffers from any other problem which prevents it from operating.
ELK - we use it to find the root cause of problems and to alert on issues and trends before they actually impact the application/business.
So when there is a significant issue, Nagios will alert and we will jump into the log analytics console to try to find the problem. In some cases, ELK will alert when issues start to build up, before anything shows in Nagios. That way we can prevent the issue from deteriorating. You can read more about setting up your own ELK stack on AWS here: http://logz.io/blog/deploy-elk-production/
There are obviously many commercial tools for monitoring, alerting, and log analytics, but since you're looking for free/open-source tools, I've recommended these.
As a disclaimer, I'm the CEO and co-founder of Logz.io, which among other things offers enterprise ELK as a service.
There are two elements to monitoring:
Availability - will it work
Performance - is it working properly
Availability is easy; there are hundreds of tools which do synthetic transactions. You can use a service (I could provide a specific list, but there are so many out there, from Pingdom to Site24x7 to various other point solutions).
If you want to understand performance, have a look at the APM technologies. They range from simpler tracing products, which look at end-user and component-level performance, to more sophisticated tools which stitch the whole transaction path together, including the browser data.
Gartner has research on both of these markets (I wrote a lot of it before I left). I work for AppDynamics, which does all of the above in a single product, including application availability and performance (mobile or web). We offer the solution as SaaS, or you can install it internally. Finally, we also pull all the data together, including logs, into a single backend.
You can build availability monitoring and log collection, and you can also collect client-side data and other telemetry you emit, but there is no good open-source APM tooling for true transaction tracing. Also, how much time do you want to spend managing ELK, OpenTSDB, Graphite, StatsD, collectd, Nagios, etc., to get this done...
There are multiple ways to monitor your production servers. You can go with one of the free but limited server monitors like Nagios, which is hard to configure and not as simple to work with, or you can look at some of the players in this market like Stackify, LogicMonitor, or several others. If you want additional capabilities like code-level monitoring, then you'll need to look at vendors that provide APM (application performance management), such as Stackify, New Relic, and AppDynamics. You'll find vast differences in price and features, so it is really about what your requirements are.

Why are Elasticsearch service providers that pricey?

Why are Elasticsearch service providers like Bonsai so expensive?
What is my advantage of using them?
What stops me from building and configuring Elasticsearch on my own, using a much cheaper hosting server with no constraints?
(Full disclosure: I am a founder of Qbox, which provides Elasticsearch as a service)
It is indeed possible to run Elasticsearch on your own infrastructure or in any of the various cloud infrastructure providers. For some, this might be a requirement due to compliance restrictions, regulatory restrictions, or maybe you have your own pricing negotiated.
However, if your nodes become unresponsive, these infrastructure providers will only be able to tell you whether your server is available or not, and there are a million reasons why an Elasticsearch node might be unresponsive. So, if you want production-quality uptime and availability support, a hosted provider is not a bad choice.
I cannot speak for Bonsai, but I can speak for Qbox Elasticsearch hosting. In our case, the pricing is based on the on-demand price of the underlying infrastructure provider; anybody can do back-of-the-envelope calculations to see what our margin is. We feel that this is the best way to ensure that the pricing model doesn't conflict with the myriad ways in which Elasticsearch is used.
The same questions have been asked of managed service providers since the dawn of computing. You could definitely do it yourself, but is it the best use of your resources? We will concede that for some companies, self-hosting will always be the right decision, but we know that a meaningful percentage of the market appreciates the time and money saved by not having to hire full-time ES expertise or consultants.
