Collect event logs remotely - Windows

In your opinion, what is the best approach for collecting event logs remotely from several Windows machines on a network?
I need to collect the event logs remotely and I have several approaches in mind (WMI, the EventLog class, etc.), but I don't know which is the best one.
Can you help me?
Thanks

EDIT: Are you programming the remote event log access into an app? Maybe you can elaborate on that. If so, what language are you programming in, etc.
Check out OSSEC, perhaps in concert with Logstash and Elasticsearch.
Or you could look at wevtutil: pull the event log data to a management workstation, then push it into a database (sketched below).
There's also PsLogList from the Sysinternals guys, which you could likewise use to pull the event log data and push it into a database.
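If you end up scripting the wevtutil approach into your own app, the collection step can be a plain subprocess call from whatever language you're using. A minimal sketch in Go, assuming wevtutil is on the PATH, the current user has remote-access rights on the target, and REMOTE-HOST is a placeholder:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // Query the 100 most recent System-channel events from a remote host.
        out, err := exec.Command("wevtutil", "qe", "System",
            "/r:REMOTE-HOST", // remote machine to read from (placeholder)
            "/c:100",         // maximum number of events
            "/rd:true",       // newest first
            "/f:text",        // human-readable; use /f:xml if you plan to parse
        ).Output()
        if err != nil {
            log.Fatalf("wevtutil failed: %v", err)
        }
        // In a real collector you would parse this output and insert it
        // into your database rather than printing it.
        fmt.Println(string(out))
    }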

In my opinion the best way to do this would be to configure Redis, RabbitMQ, or ZeroMQ (all well-supported Logstash plugins) and send all your logs to a queue server, from which your Logstash indexer will pick up and process them.
This way all your logs end up on a central server, and the messaging systems mentioned above can persist them as well. Your existing systems stay the same and require no additional packages beyond a simple client that pushes to the message queue (sketched below).
http://logstash.net/docs/1.4.0/
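As a hedged illustration of that client side in Go (the Redis address, the list key "logstash", and the event fields are all assumptions; go-redis is just one of several suitable clients, and the matching Logstash redis input would use data_type => "list" with the same key):

    package main

    import (
        "context"
        "encoding/json"
        "log"
        "time"

        "github.com/redis/go-redis/v9"
    )

    func main() {
        ctx := context.Background()
        // Assumed address of the Redis queue server.
        rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

        // A minimal event payload; the field names are arbitrary and only
        // need to match whatever your Logstash filters expect.
        event, _ := json.Marshal(map[string]string{
            "host":      "app-server-01",
            "message":   "user login failed",
            "timestamp": time.Now().Format(time.RFC3339),
        })

        // RPUSH onto the list that the Logstash redis input drains.
        if err := rdb.RPush(ctx, "logstash", event).Err(); err != nil {
            log.Fatalf("could not enqueue log event: %v", err)
        }
    }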

Related

How to fetch information from process_events and file_events tables from osquery using golang?

I am new to osquery. I want to fetch real-time OS information using osquery (from these two tables: process_events and file_events). I understood that we can retrieve this information by running osquery in daemon mode, and I was even able to do so.
My question now is, "How do I do the same thing in Golang?"
I do not want to create an extension. I simply want to start the osquery daemon, fetch the information, and store it.
To clarify something... Osquery gathers events from various APIs. Depending on the OS and version, those events might come from any of Auditd, BPF, OpenBSM, EndpointSecurity, ETW... To do the same thing in Go, you'd need to implement something that talks to those APIs.
But I think the more interesting part of your question is how you leverage osquery to get that data into something else, ideally with Go. There are (at least) three routes to pursue.
First, if you're doing this across a fleet of nodes, it is common to run osquery as an agent talking to a remote TLS server. The remote TLS server is responsible for distributing configuration and collecting logs. This is a common scenario, and there are both commercial and OSS tools in this space.
Second, if you're working locally, you can query a running osquery over the Thrift socket. This is the same interface the extensions use, but it does not require writing an extension. In the Go SDK this is exposed as ExtensionManagerClient.
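A minimal sketch of that second route with the osquery-go SDK; the socket path is the common default for osqueryd's --extensions_socket flag and may differ on your system:

    package main

    import (
        "fmt"
        "log"
        "time"

        osquery "github.com/osquery/osquery-go"
    )

    func main() {
        // Connect to the Thrift socket of an already-running osqueryd.
        client, err := osquery.NewClient("/var/osquery/osquery.em", 10*time.Second)
        if err != nil {
            log.Fatalf("could not connect to osqueryd socket: %v", err)
        }
        defer client.Close()

        resp, err := client.Query("SELECT * FROM process_events LIMIT 10;")
        if err != nil {
            log.Fatalf("query failed: %v", err)
        }
        if resp.Status.Code != 0 {
            log.Fatalf("osquery error: %s", resp.Status.Message)
        }
        for _, row := range resp.Response {
            fmt.Println(row["pid"], row["path"], row["cmdline"])
        }
    }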
Third, also local, you can have osquery run scheduled queries and log to a local file. Osquery's filesystem logging is newline-delimited JSON, and this can be ingested.
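A rough sketch of that ingestion in Go; the path is the usual Linux default for osquery's filesystem logger, the fields follow osquery's result-log format, and a real collector would tail the file rather than read it once:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Default filesystem-logger path on Linux; adjust for your install.
        f, err := os.Open("/var/log/osquery/osqueryd.results.log")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Each line is one JSON result event from a scheduled query.
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            var event map[string]interface{}
            if err := json.Unmarshal(scanner.Bytes(), &event); err != nil {
                log.Printf("skipping malformed line: %v", err)
                continue
            }
            fmt.Println(event["name"], event["columns"])
        }
        if err := scanner.Err(); err != nil {
            log.Fatal(err)
        }
    }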
Generally speaking, I'd recommend the first or second approach.
Note that to use the events tables, osquery has to be running as a daemon, so you'll need to either have it running on its own or otherwise manage it as a persistent process.

What would be the advantages of using ELK for log management over a simple Python logging + existing database log table combo?

Assuming I have many Python processes running on an automation server such as Jenkins, let's say I want to use Python's native logging module and, other than writing to the Jenkins console or to a log file, I want to store & centralize the logs somewhere.
I thought of using ELK for that, but then I realized that I could just as well create a dedicated log table in an existing database (I'm using Redshift), use something like Grafana for log dashboards/visualization, and save myself the trouble of deploying a new system (most of the people on my team are familiar with Redshift but not with Elasticsearch).
Although it sounds straightforward, I feel like I'm not looking at the big picture and that I would be missing some powerful capabilities that components like Logstash were written for in the first place. What would those capabilities be, and how would it be advantageous to use ELK instead of my solution?
Thank you!
I have implemented a full ELK stack in my company in the past year.
The project was huge and took a lot of time to properly implement. The advantages of using ELK and not implementing our own centralized logging solution would be:
Not needing to reinvent the wheel: there is already a product that does exactly this (and the installation part is extremely easy).
It is battle-tested and can withstand huge amounts of logs in a short time.
As your business and product grow and shift, you will need to parse more logs with different structures, which would mean schema changes in a self-built system. Logstash gives you endless possibilities for filtering and parsing those newly formatted logs.
It has cluster and HA capabilities, and you can scale your logging system vertically and horizontally.
It is very easy to maintain and change over time.
It can send the needed output to a variety of products, including Zabbix, Grafana, Elasticsearch, and many more.
Kibana will give you the ability to view the logs, build graphs and dashboards, set up alerts, and more...
The options with ELK are really endless, and the more I work with it, the more I find new ways it can help me: not just viewing logs from distributed remote systems, but also security alerts, SLA graphs, and many other insights.

Transaction Log in Aerospike

What I have?
A lot of different microservices managed by different teams. All microservices persist data in an Aerospike database.
What I want to achieve?
I'm building a new microservice that relies on data handled by other services. I want to listen for changes to entities, but unfortunately those microservices don't put anything on a message queue; they only have the usual REST APIs, so I can't just subscribe to events.
The idea is to listen to the database's transaction log (event log/commit log/WAL). This approach is also used in various event-sourcing systems, but I can't find any Aerospike API that would stream this log. So the question: does Aerospike provide any similar functionality, maybe under a different name?
Aerospike, in its enterprise edition, has a feature called the change notification framework, which may fit your requirements. It informs an external agent about all write operations. This is built on top of the XDR functionality, which is meant for replicating across data centers using a digest log.
If you are not planning on the enterprise edition, you should consider putting your own message queue in front of Aerospike.
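If you go that route, a minimal sketch of the write path in Go might look like the following; it assumes local Aerospike and RabbitMQ instances, and the namespace ("test"), set ("users"), exchange ("entity-changes"), and routing key are made-up placeholders. The writing services would publish a change event alongside each write:

    package main

    import (
        "context"
        "encoding/json"
        "log"

        aerospike "github.com/aerospike/aerospike-client-go/v6"
        amqp "github.com/rabbitmq/amqp091-go"
    )

    func main() {
        // Assumed local Aerospike and RabbitMQ instances.
        as, aerr := aerospike.NewClient("127.0.0.1", 3000)
        if aerr != nil {
            log.Fatal(aerr)
        }
        defer as.Close()

        conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        ch, err := conn.Channel()
        if err != nil {
            log.Fatal(err)
        }

        // 1) Persist the entity in Aerospike, as before.
        key, _ := aerospike.NewKey("test", "users", "user-42")
        bins := aerospike.BinMap{"name": "alice", "status": "active"}
        if aerr := as.Put(nil, key, bins); aerr != nil {
            log.Fatal(aerr)
        }

        // 2) Publish a change event so interested services can subscribe
        // instead of polling REST APIs.
        body, _ := json.Marshal(map[string]interface{}{"key": "user-42", "bins": bins})
        if err := ch.PublishWithContext(context.Background(),
            "entity-changes", "users.updated", false, false,
            amqp.Publishing{ContentType: "application/json", Body: body}); err != nil {
            log.Fatal(err)
        }
    }

The obvious drawback, as with any dual-write scheme, is that the database write and the queue publish can diverge on failure, which is exactly what a real transaction-log stream would avoid.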

Logstash to receive logs from Android? Or is this Elasticsearch?

I'm still a bit confused after reading the documentation provided by Logstash. I'm planning on writing an Android app, and I want to log the app's activity. Logs will be sent over the network. Is Logstash not the right solution, given that it needs an "agent" installed on the systems that produce logs?
I want a system that can store logs from the app's activity, but it also needs to be able to export the collected logs into a plain-text file. I know Logstash can output to Elasticsearch, but I'm not sure if it can export to a plain-text file at the same time. Or is this a task that Elasticsearch should do?
Thanks a ton for any input you can provide.
Logstash forwarder isn't currently available for Android/iOS, unfortunately, nor could I find any existing solution for it from the community. (I asked the same question here but was voted off-topic because it was deemed to be asking for tool/library suggestions.)
Your best bet, unfortunately, is either to write one yourself (which isn't trivial: you'll need to factor in offline connectivity, batching, scheduling, compression, file tracking, and so on), or to use other (usually commercial) logging services such as LogEntries.
By the way, the Android/iOS clients for LogEntries are open source. I'm not clear on their OSS licensing, but if you're going to write an agent for Logstash yourself, you could perhaps start by looking at LogEntries' Android agent implementation, which already solves all the technical problems mentioned above: https://github.com/logentries/le_android.
And to answer your other question: yes, Logstash can receive your logs (from the mobile device), usually via the lumberjack input (aka logstash-forwarder). Logstash can then persist and index these logs to Elasticsearch, provided it's configured that way.
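On the plain-text question specifically: Logstash outputs are not mutually exclusive, so one pipeline can write to Elasticsearch and to a flat file at the same time. A hedged configuration sketch (the tcp input on port 5000 and the json_lines framing are assumptions about how the app would ship events, and option names vary a little between Logstash versions):

    input {
      tcp {
        port  => 5000
        codec => json_lines   # assumes the app sends one JSON event per line
      }
    }
    output {
      # both outputs run for every event; they are not either/or
      elasticsearch { hosts => ["localhost:9200"] }
      file { path => "/var/log/app/mobile-%{+YYYY.MM.dd}.log" }
    }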

Is there an API for listing queues and exchanges on RabbitMQ?

I've looked quite a bit, but I haven't been able to find a good programmatic way to list the queues on a RabbitMQ server.
This is important because I need to clean up my queues and exchanges when I'm done with them. I don't always have a good "done" event that can be used to trigger a cleanup, so I'd like to do it with more of a garbage collection model. If I can list the queues, I can verify that the objects that they're related to shouldn't be producing more entries and clean them up.
I know I can use rabbitmqctl to do it, but that needs elevated privileges.
Since I haven't been able to find a way to list the queues programmatically, I've been keeping a list of names in the database. That works, but it's ugly.
You could use Alice (http://github.com/auser/alice). It is a REST interface for executing rabbitmqctl commands.
2012 update
RabbitMQ development has probably made the question and the other answers out of date. The Management Plugin, which provides a REST API, is now part of RabbitMQ. The plugin may be disabled by default, though.
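Once the plugin is enabled (rabbitmq-plugins enable rabbitmq_management), listing queues is a plain authenticated HTTP GET. A minimal sketch in Go; the port and credentials are the plugin's defaults on recent RabbitMQ versions, and guest/guest only works from localhost:

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
    )

    func main() {
        // The management plugin listens on port 15672 by default.
        req, err := http.NewRequest("GET", "http://localhost:15672/api/queues", nil)
        if err != nil {
            log.Fatal(err)
        }
        req.SetBasicAuth("guest", "guest")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        // The response is a JSON array of queue objects (name, vhost,
        // messages, consumers, ...); decode it rather than print it in real code.
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body))
    }

There is a matching /api/exchanges endpoint for the exchanges.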
If what you need is to auto-delete the exchange and the queues when you are done, then you can accomplish that with the options you pass to exchange_declare and queue_declare, as in the sketch below.
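A hedged sketch of those declare options using the Go AMQP client (the exchange and queue names are placeholders; the flag order follows the amqp091-go signatures):

    package main

    import (
        "log"

        amqp "github.com/rabbitmq/amqp091-go"
    )

    func main() {
        conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ch, err := conn.Channel()
        if err != nil {
            log.Fatal(err)
        }

        // autoDelete=true: the exchange goes away once its last binding is removed.
        err = ch.ExchangeDeclare("jobs", "topic",
            false, // durable
            true,  // autoDelete
            false, // internal
            false, // noWait
            nil)
        if err != nil {
            log.Fatal(err)
        }

        // autoDelete=true: the queue is removed when its last consumer
        // unsubscribes, so no garbage-collection pass is needed.
        _, err = ch.QueueDeclare("jobs.worker-1",
            false, // durable
            true,  // autoDelete
            false, // exclusive
            false, // noWait
            nil)
        if err != nil {
            log.Fatal(err)
        }
    }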
Coming back to your question of listing queues and exchanges, you can use a tool like this one: http://github.com/tnc/rac
With a little tweaking you can write a PHP script to get what you need. Just check under the lib folder of that project.
