I want to know how I can monitor the Apache status of clients using Icinga2.
I have successfully set up SQL and other basic monitoring, but I have no idea how to monitor Apache status.
You can use Nagios plugins in Icinga2; this plugin looks like it could work.
Most of the time it should be enough to monitor your HTTP and/or HTTPS port.
Additionally, you can check for strings that should appear on the page, or the remaining validity of your certificate.
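As a sketch, assuming the Icinga Template Library's http CheckCommand (which wraps check_http) is available — the host filter, expected string, and certificate threshold below are placeholders you'd adapt:

```
apply Service "apache-http" {
  import "generic-service"
  check_command = "http"

  vars.http_vhost = host.name
  vars.http_uri = "/"
  vars.http_string = "Welcome"       // string expected on the page

  assign where host.vars.apache == true
}

apply Service "apache-cert" {
  import "generic-service"
  check_command = "http"

  vars.http_ssl = true
  vars.http_certificate = "30"       // warn when the cert expires within 30 days

  assign where host.vars.apache == true
}
```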
I need to implement a server to client communication. Long polling sounds like a non-optimal solution. Sockets would be great. I'm looking at this package:
https://github.com/beyondcode/laravel-websockets
My server is running on AWS Elastic Beanstalk (with a second worker environment for queue and cron).
Does anyone have experience setting up a socket connection in Elastic Beanstalk? In particular, how can I start up a socket server using ebextensions (or any way at all)? Looks like I should be using supervisor for the server.
Should this server live in the worker environment? Can it? I don't know much about the moving parts here. Anything is helpful :)
Laravel comes with an out-of-the-box broadcasting tool: Laravel Echo.
You can use it either with a self-hosted instance (by deploying Redis, for example) or with an external service (Socket.IO, Pusher, ...).
Take a look at the documentation: https://laravel.com/docs/5.8/broadcasting
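On the supervisor point in the question: a minimal sketch of a supervisor program entry for laravel-websockets, assuming the app lives in /var/www/html and runs as the webapp user (paths, user, and port are placeholders). An .ebextensions config file can write this to the supervisor conf.d directory and run supervisorctl reread/update in a container_command:

```
[program:laravel-websockets]
command=php /var/www/html/artisan websockets:serve --port=6001
autostart=true
autorestart=true
user=webapp
redirect_stderr=true
stdout_logfile=/var/log/laravel-websockets.log
```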
The PerfMon Server Agent is working fine with JMeter's listener, jp@gc - PerfMon Metrics Collector.
But can it act as a standalone application performance monitoring (APM) agent?
I saw that I can connect and request specific metrics:
Server Agent uses simple plain-text protocol, anyone can use agent's capabilities implementing client, based on kg.apc.perfmon.client.Transport interface. If anyone's interested, start the topic on the support forums and I'll describe how to connect third-party client app to agent.
But can I start the PerfMon Server Agent when my application starts and save metrics continuously (until the application goes down) without any listener?
Also, can I display the results in a tool other than jp@gc - PerfMon Metrics Collector?
If you look into the Server Agent documentation you'll learn that it can be used by any application capable of sending a plain-text message over TCP or UDP (i.e. telnet or netcat), so you can trigger metrics collection just by sending a metric name to the running Server Agent.
With regards to starting PerfMon when your application starts: the Server Agent is a normal pure-Java application, so the approach will vary depending on the operating system you're using and the nature of your application. Most likely you will need to come up with a shell script which starts both.
For the moment you won't be able to use the Server Agent without a "client" - an application which requests metrics from it over TCP or UDP. So if you don't plan to use JMeter you will need to come up with a TCP or UDP client which periodically queries the Server Agent for metrics. The output is a normal CSV file which can be visualised using any tool (LibreOffice Calc, Grafana, Google Charts, whatever).
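A minimal sketch of such a client in Python, assuming the agent's default port (4444) and the plain-text commands described in the Server Agent documentation ("metrics:..." to start collection, "exit" to close); check your agent version for the exact command set:

```python
import socket

# Default Server Agent address; adjust to your deployment.
AGENT_HOST, AGENT_PORT = "localhost", 4444

def parse_metrics_line(line):
    """Parse one tab-separated line of agent output into floats."""
    return [float(v) for v in line.strip().split("\t") if v]

def collect(metric_spec, samples=5):
    """Connect to the agent, request metrics, and read a few samples."""
    with socket.create_connection((AGENT_HOST, AGENT_PORT)) as sock:
        f = sock.makefile("rw")
        f.write("metrics:%s\n" % metric_spec)   # e.g. "cpu\tmemory"
        f.flush()
        rows = [parse_metrics_line(f.readline()) for _ in range(samples)]
        f.write("exit\n")                        # tell the agent we're done
        f.flush()
        return rows
```

From there, appending each row to a CSV file gives you something Grafana or a spreadsheet can chart.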
More information: How to Monitor Your Server Health & Performance During a JMeter Load Test
I am trying to monitor NiFi using Splunk. I need to send JMX metrics to a Splunk index so that it can process the data. Is there a way to monitor NiFi using a JMX port?
FYI: I am not looking for the REST API / REST endpoints.
Yes. If you set the system property com.sun.management.jmxremote.port in $NIFI_HOME/conf/bootstrap.conf, as described in Monitoring and Management Using JMX Technology, you can connect to this port with a remote monitoring client.
To monitor locally with JConsole, you don't need to do anything; just start Apache NiFi as normal and select the "org.apache.nifi.NiFi" process when presented with the list of running processes in JConsole.
It is strongly recommended to secure your monitoring connection with TLS and to use a strong authentication mechanism, such as client certificates, to keep the application safe.
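As a sketch, the JMX system properties go in as extra java.arg.* lines in $NIFI_HOME/conf/bootstrap.conf (the argument numbers and port below are placeholders — pick numbers not already used in your file, and note that ssl/authenticate disabled like this is only acceptable on a trusted network):

```
# $NIFI_HOME/conf/bootstrap.conf
java.arg.20=-Dcom.sun.management.jmxremote
java.arg.21=-Dcom.sun.management.jmxremote.port=9010
java.arg.22=-Dcom.sun.management.jmxremote.rmi.port=9010
java.arg.23=-Dcom.sun.management.jmxremote.ssl=false
java.arg.24=-Dcom.sun.management.jmxremote.authenticate=false
```

Restart NiFi afterwards for the settings to take effect.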
We configured a centralized Nagios / Mod-Gearman setup with multiple Gearman workers to monitor our servers. I need to monitor remote servers by deploying Gearman workers on the remote site. But, for security reasons, I would like to reverse the direction of the initial connection between these workers and the NEB module (incoming flows from the workers to our network are forbidden). The Gearman proxy seems to be the solution, since it just moves jobs from the "central" gearmand into another gearmand.
I would like to know if it's possible to configure the Gearman proxy to send information to a remote gearmand and get check results back from it without having to open inbound flows.
Unfortunately, the documentation does not give use cases for that. Do you know where I could find more documentation about Gearman proxy configurations?
I want to stop serving requests to my backend servers if the load on those servers goes above a certain level. Anyone who is already surfing the site will still get routed, but new connections will be sent to a static "server busy" page until the load drops below a predetermined level.
I can use cookies to let the current customers in, but I can't find information on how to do routing based on a custom load metric.
Can anyone point me in the right direction?
Nginx has an HTTP Upstream module for load balancing. Checking the responsiveness of the backend servers is done with the max_fails and fail_timeout options. Routing to an alternate page when no backends are available is done with the backup option. I recommend translating your load metrics into the options that Nginx supplies.
Let's say, though, that Nginx still sees the backend as "up" when the load is higher than you want. You may be able to adjust that further by tuning the max connections of the backend servers. So, if the backend servers can only handle 5 connections before the load is too high, you tune them to allow only 5 connections. Then on the front end, Nginx will time out immediately when trying to send a sixth connection, and mark that server as inoperative.
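A minimal sketch of that upstream setup (addresses, thresholds, and the busy-page server are placeholders):

```
upstream backend {
    server 10.0.0.1:80 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:80 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8080 backup;   # serves the static "server busy" page
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```

When both primaries have failed (or refuse connections at their limit), new requests fall through to the backup server.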
Another option is to handle this outside of Nginx. Software like Nagios can not only monitor load, but can also proactively trigger actions based on the monitoring it does.
You can generate your Nginx config from a template that has options to mark each upstream node as up or down. When a monitor detects that the upstream load is too high, it can regenerate the Nginx config from the template and then reload Nginx.
A lightweight version of the same idea could be done with a script that runs on the same machine as your Nagios server and performs the simple monitoring as well as the config-file updates.
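The regenerate-and-reload idea above can be sketched in Python; the node list, load threshold, and config path are assumptions — a real version would pull current load values from your monitoring system:

```python
# Render an nginx upstream block, marking overloaded nodes "down".
NGINX_CONF = "/etc/nginx/conf.d/upstream.conf"   # assumed config path
LOAD_THRESHOLD = 5.0                             # assumed 1-min load limit

def render_upstream(nodes):
    """nodes: list of (address, current_load) tuples."""
    lines = ["upstream backend {"]
    for addr, load in nodes:
        flag = " down" if load > LOAD_THRESHOLD else ""
        lines.append("    server %s%s;" % (addr, flag))
    lines.append("    server 127.0.0.1:8080 backup;  # static busy page")
    lines.append("}")
    return "\n".join(lines)

# After writing the rendered text to NGINX_CONF, reload nginx, e.g.:
#   subprocess.run(["nginx", "-s", "reload"], check=True)
```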