OpenNMS passive node log scanning

I would like to set up an OpenNMS monitoring system where the OpenNMS server does all the work, because I cannot modify the nodes that must be scanned. I can, however, SSH and FTP to those nodes.
I am thinking about using a plugin that will SSH in and tail the logs.
Any suggestions for a plugin I could use, or a good tutorial on how to write my own?

OpenNMS is at its best when you have SNMP access to your nodes. SNMP gives you the ability to monitor, collect, and graph; without it, I believe you will need to create a custom collector in order to collect and graph useful information, if that is your aim. There are some standard collectors:
http://www.opennms.org/wiki/Docu-overview#Data_Collection
http://www.opennms.org/wiki/Documentation:Features_DataCollection
With regard to generic approaches to collecting data, you could use expect scripts. Alternatively, you could write some scripts on the clients (if you have the relevant access) that collect data which the server can then retrieve. You can use key-based SSH connections to ease the authentication burden, as long as you look after your keys.
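For illustration, here's a minimal Go sketch of the key-based SSH approach, assuming golang.org/x/crypto/ssh; the host, user, key path, and log file are all placeholders:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Load the private key used for key-based authentication.
	keyBytes, err := os.ReadFile("/home/opennms/.ssh/id_rsa") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}

	config := &ssh.ClientConfig{
		User:            "monitor", // placeholder user
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // pin host keys in production
	}

	// Connect to the node and run a one-shot collection command.
	client, err := ssh.Dial("tcp", "node1.example.com:22", config)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Grab the last lines of a log; the path is a placeholder.
	out, err := session.Output("tail -n 50 /var/log/messages")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}
```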


How to fetch information from process_events and file_events tables from osquery using golang?

I am new to osquery. I want to fetch real-time OS information using osquery (from two tables: process_events and file_events). I understood that we can retrieve this information by running osquery in daemon mode, and I was even able to do so.
My question now is, "How do I do the same thing in Golang?"
I do not want to create an extension; I simply want to start the osquery daemon, fetch the information, and store it.
To clarify something... Osquery gathers events from various APIs. Depending on the OS and version, those events might come from any of Auditd, BPF, OpenBSM, EndpointSecurity, ETW... To do the same thing with golang, you'd need to implement something that talks to those APIs.
But, I think the more interesting part of your question is how do you leverage osquery to get that data into something else, ideally with golang. There are (at least) 3 routes to pursue.
First, if you're doing this across a fleet of nodes, it is common to run osquery as an agent talking to a remote TLS server. The remote TLS server is responsible for distributing configuration and collecting logs. This is a common scenario, and there are both commercial and OSS tools in this space.
Second, if you're working locally, you can query a running osquery over the Thrift socket. This is the same interface extensions use, but it is not an extension. In the Go SDK this is exposed as ExtensionManagerClient.
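A minimal sketch of that, assuming the osquery-go SDK and its QueryRows helper; the socket path is whatever --extensions_socket your daemon was started with:

```go
package main

import (
	"fmt"
	"log"
	"time"

	osquery "github.com/osquery/osquery-go"
)

func main() {
	// Connect to the extensions socket of a running osqueryd
	// (e.g. started with --extensions_socket=/var/osquery/osquery.em).
	client, err := osquery.NewClient("/var/osquery/osquery.em", 10*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Query the events table; osqueryd must have the relevant
	// event publisher enabled (e.g. --disable_events=false).
	rows, err := client.QueryRows("SELECT pid, path, cmdline FROM process_events LIMIT 10")
	if err != nil {
		log.Fatal(err)
	}
	for _, row := range rows {
		fmt.Println(row["pid"], row["path"], row["cmdline"])
	}
}
```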
Third, also locally, you can have osquery run scheduled queries and log to a local file. Osquery's filesystem logging is JSON, which can then be ingested.
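A rough sketch of ingesting that file, assuming the default Linux results-log path:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

func main() {
	// Default filesystem-logger location on Linux; adjust to your install.
	f, err := os.Open("/var/log/osquery/osqueryd.results.log")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Each line is one JSON result event from a scheduled query.
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		var event map[string]any
		if err := json.Unmarshal(scanner.Bytes(), &event); err != nil {
			log.Printf("skipping malformed line: %v", err)
			continue
		}
		fmt.Println(event["name"], event["action"], event["columns"])
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```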
Generally speaking, I'd recommend the first or second approach.
Note that to use the events tables, osquery has to be running as a daemon, so you'll need to either have it running on its own or otherwise manage it as a persistent process.

Collect event logs remotely

In your opinion, what is the best approach for collecting event logs remotely from several Windows machines on a network?
I need to collect the event logs remotely and have several approaches in mind (WMI, the EventLog class, etc.), but I don't know which is best.
Can you help me?
Thanks
EDIT: Are you programming the remote event log access into an app? If so, what language are you programming in? Maybe you can elaborate on that.
Check out OSSEC, perhaps in concert with Logstash and ElasticSearch.
Or you could look at wevtutil to pull event log data to a management workstation and then push it into a database.
There's also PsLogList from the Sysinternals guys, which you could likewise use to pull the event log data and then push it into a database.
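As a sketch of that pull-then-push pattern, you could drive wevtutil from a small Go program on the management workstation; the remote host and credentials below are placeholders:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Pull the 100 most recent System events from a remote machine.
	// Host and credentials are placeholders; /rd:true returns newest first.
	out, err := exec.Command("wevtutil", "qe", "System",
		"/r:SERVER01", "/u:DOMAIN\\collector", "/p:secret",
		"/c:100", "/rd:true", "/f:text").Output()
	if err != nil {
		log.Fatal(err)
	}
	// From here you would parse the text (or use /f:xml instead)
	// and insert the records into your database.
	fmt.Println(string(out))
}
```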
In my opinion the best approach would be to configure Redis, RabbitMQ, or ZeroMQ (all well-supported Logstash plugins) and send all your logs to a queue server, from which your Logstash indexer will pick them up and process them.
This way all your logs end up on a central server, and the messaging systems mentioned above can also persist them. Your existing systems stay the same and require no additional packages beyond a simple client that pushes to the message queue.
http://logstash.net/docs/1.4.0/
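For illustration, a minimal Go sketch of such a client using go-redis; the queue address and list key are placeholders and must match the redis input (data_type => "list") in your Logstash config:

```go
package main

import (
	"context"
	"log"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()

	// Queue server address and list key are placeholders; they must
	// match the redis input configured in Logstash.
	rdb := redis.NewClient(&redis.Options{Addr: "queue.example.com:6379"})
	defer rdb.Close()

	// Push one log line as a JSON event; Logstash's json codec can parse it.
	event := `{"host":"win-node-7","source":"eventlog","message":"service restarted"}`
	if err := rdb.RPush(ctx, "logstash", event).Err(); err != nil {
		log.Fatal(err)
	}
}
```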

Flume to fetch logs over the network

I have been working with Flume to fetch logs from a server machine into HDFS. I was able to achieve this when the server and client machines are on the same network, but how can I achieve the same when the server and client are on different networks?
Do I need to write a custom source for this? [I just checked the Twitter example from Cloudera, in which they use their own custom source to fetch tweets.]
Any help would be greatly appreciated.
Thanks,
Kalai
If you have a multihomed host joining two otherwise non-talking networks you'd like to ship logs across, you can run a Flume agent there to bridge logs coming in from one network and deliver them to the other. Your multihomed host then acts as a sort of proxy. I don't know if this is necessarily a good idea, as that host is probably already busy doing other things if it's the only link between the networks, but if you can set this up, you won't need custom sinks or sources.
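A sketch of what the bridging agent's configuration might look like on the multihomed host, assuming Avro hops between agents; all names, addresses, and ports are placeholders:

```properties
# Bridging agent on the multihomed host.
bridge.sources = in
bridge.channels = mem
bridge.sinks = out

# Listen for Avro events arriving from agents on network A.
bridge.sources.in.type = avro
bridge.sources.in.bind = 10.0.1.5
bridge.sources.in.port = 4141
bridge.sources.in.channels = mem

# Buffer events in memory (use a file channel for durability).
bridge.channels.mem.type = memory
bridge.channels.mem.capacity = 10000

# Forward events to the collector agent on network B.
bridge.sinks.out.type = avro
bridge.sinks.out.hostname = 192.168.2.10
bridge.sinks.out.port = 4141
bridge.sinks.out.channel = mem
```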
If you have two disjoint networks that can both see the internet, you can have one agent post to a web server over HTTP (or TCP for that matter, but it's more work), and another fetch it from the same website. You'll need to write two custom agents (source & sink) for that to work in a performant, reliable and secure fashion, not to mention the web service itself.
Finally, if you have two networks that are completely disconnected (with an air gap), then you may consider writing a custom sink that will, for example, autodetect an inserted tape and copy logs to the tape. Then you take the tape, walk over to another network, plug it in, and have another agent there autodetect it as well and ingest the data :)
Flume agents need to be able to connect to one another to transport events. This means they need to be on the same network.
I'm not sure I understand your question. Why would you expect it to work at all?

Remote Execution in Ruby (Capistrano or MCollective) to collect cloud server performance metrics

I am looking for a way to collect data remotely from various cloud instances (EC2, Rackspace). The Rackspace API provides no way to collect server performance metrics (i.e. load average, CPU usage, memory), otherwise this would never have been asked.
I started looking at solutions like Capistrano or MCollective (I have also considered collectd), but I am unsure which one would best suit my application. I am trying to avoid using SSH keys for trending purposes (I don't want to have to keep logging in to collect these metrics). The script I am writing is a Ruby script which reboots a cloud server if its load average is over a certain number. Because these providers don't expose these metrics via their APIs, I am looking at a way to gather them myself. I am new to the Ruby community, and after reading over the documentation for all of these tools, I still haven't been able to get a sense of which framework would work best, or whether there are other alternatives.
It sounds like Capistrano is more suited to being a deployment tool; although it can perform remote tasks, after I read its documentation it was pretty much out for the purposes of my script.
MCollective looks really attractive for what I am trying to do but it seems I would have to write my own RPC style plugin for this purpose.
I've also considered plugging into some larger monitoring system such as Nagios, Munin, Zenoss, or Hyperic, but I'd rather not install a large, bulky monitoring system when all I want to collect is a few simple metrics.
If your intention is to trigger certain actions based on the system performance (like restarting when cpu usage is too high), you should check out god.
I'm not sure if this is also useful when you want to generate some performance statistics over a longer time period. Personally, I'm using Munin for this, but if you don't like it maybe you can find something on Ruby Toolbox | Server Monitoring.

How to extract information from client/server communication with no documentation?

What methods are there for capturing and analyzing undocumented client/server communication for the information you want, and then having your program look for that information in real time? For example, programs that look at online game client/server communication and get information and use it to do things like show location on a 3rd party map, etc.
Wireshark will allow you to inspect communication between the client and server (assuming you're running one of them on your machine). As you want to perform this snooping in your own application, look at WinPcap. Being able to reverse engineer the protocol is a whole other kettle of fish, mind.
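For illustration, a minimal Go sketch using gopacket (which sits on top of libpcap/WinPcap-style capture); the interface name and port are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/google/gopacket"
	"github.com/google/gopacket/pcap"
)

func main() {
	// Open the capture device; "eth0" and the port below are placeholders
	// for your interface and the game's server port.
	handle, err := pcap.OpenLive("eth0", 1600, true, pcap.BlockForever)
	if err != nil {
		log.Fatal(err)
	}
	defer handle.Close()

	// Only capture the client/server conversation we care about.
	if err := handle.SetBPFFilter("udp port 27015"); err != nil {
		log.Fatal(err)
	}

	// Decode packets and dump application payloads; reverse engineering
	// the protocol then starts from these bytes.
	src := gopacket.NewPacketSource(handle, handle.LinkType())
	for packet := range src.Packets() {
		if app := packet.ApplicationLayer(); app != nil {
			fmt.Printf("% x\n", app.Payload())
		}
	}
}
```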
In general, Wireshark is an excellent recommendation for traffic/protocol analysis; however, you seem to be looking for something else:
For example, programs that look at online game client/server communication and get information and use it to do things like show location on a 3rd party map, etc.
I assume you are referring to multiplayer games and game servers?
If so, these programs usually use a dedicated service connection on a different port to query the corresponding server for positional updates and other meta information; they don't actually intercept or inspect client/server communications in real time, and they don't interfere with these updates, either.
So, you'll find that most game servers provide support for a very simple passive connection (i.e. output only) that's merely there for exposing certain runtime state, which in turn is often simply polled by a corresponding external script/webpage.
Similarly, there's often also a dedicated administration interface provided on a different port, as well as another one that publishes server statistics, so that these can be easily queried for embedding neat stats in webpages.
Depending on the type of game server, these may offer public/anonymous use, or they may require certain credentials to access such a data port.
More complex systems will also allow you to subscribe only to specific state and updates, so that you can dynamically configure what data you are interested in.
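As a concrete example of such a data port, Valve's Source servers answer a simple UDP A2S_INFO query. A rough Go sketch follows; the server address is a placeholder, and newer servers may first reply with a challenge that must be echoed back, which is omitted here:

```go
package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// Server address is a placeholder; 27015 is the usual Source query port.
	conn, err := net.Dial("udp", "game.example.com:27015")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// A2S_INFO: 4 x 0xFF header, 'T' (0x54), then "Source Engine Query\0".
	query := append([]byte{0xFF, 0xFF, 0xFF, 0xFF, 0x54}, []byte("Source Engine Query\x00")...)
	if _, err := conn.Write(query); err != nil {
		log.Fatal(err)
	}

	// Read the reply (or the challenge, on newer servers).
	_ = conn.SetReadDeadline(time.Now().Add(3 * time.Second))
	buf := make([]byte, 1400)
	n, err := conn.Read(buf)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("got %d bytes: % x\n", n, buf[:n])
}
```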
So, even if you had complete documentation on the underlying protocol, you wouldn't really be able to directly inspect client/server communications without sitting in between them, and that cannot easily be achieved. In theory, it would basically require a SOCKS proxy server to be set up and used by all clients, so that you can actually inspect the communications going on.
Programs like wireshark will generally only provide useful information for communications going on on your own machine/network, and will not provide any information about communications going on in between machines that you do not have access to.
In other words, even if you used wireshark to a) reverse engineer the protocol, b) come up with a way to inspect the traffic, c) create a positional map - all this would only work for those communications that you have access to, i.e. those that are taking place on your own machine/network. So, a corresponding online map would only show your own position.
Of course, there's an alternative: you can emulate a client so that you are provided with the server-side updates about other clients; this will mostly have to be in spectator mode.
This in turn means you are a passive client that just consumes server-side state without providing any.
You can then use all these updates to populate an online map, or for whatever else you have in mind.
This will however require your spectator/client to be connected to the server all the time, possibly taking up precious game server slots.
Some game servers provide dedicated spectator modes, so that you can observe the whole game play using a live feed. Most game servers will however automatically kick spectators after a certain idle timeout.
