I am using Elastic/Filebeat/Kibana and specifically want to monitor users who SSH into a jump box:
What IPs are they SSHing to?
Which users are connecting to those IPs?
What are the most connected-to machines?
Which user is creating the most outbound connections?
I have the system module enabled, and all I can see is related.user to tell me who connects to the server via SSH, but that's it.
You need to adjust your configuration in order to see all the information that you want.
What IPs are they SSHing to?
You are missing destination.ip; you can easily pick the destination up from that field. Chances are you want to write some code: you can also extract it from the ssh command itself, since the command contains the user, other arguments, and the destination IP, but you will need to parse that list (process.parent.args). For instance, you can take the list and grab the last element, which is usually the IP, but I think it is easier to use destination.ip itself.
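If you do go the parsing route, here is a minimal Python sketch of the logic (the function name and example arguments are made up; in practice you would do this in an ingest pipeline or processor rather than in a standalone script):

# Minimal sketch of the parsing idea above: given process.parent.args from an
# ssh event, take the last non-option element as the likely destination.
def destination_from_ssh_args(args):
    # e.g. args = ["ssh", "-p", "2222", "userB@10.1.2.3"]
    candidates = [a for a in args if not a.startswith("-")]
    if len(candidates) < 2:
        return None
    target = candidates[-1]            # usually "user@host" or just the host
    return target.split("@")[-1]       # strip the account name if present

print(destination_from_ssh_args(["ssh", "-p", "2222", "userB@10.1.2.3"]))  # 10.1.2.3

Note that if the command carries a remote command (e.g. "ssh host uptime"), the last element is no longer the host, which is another reason to prefer destination.ip when it is available.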
Which users are connecting to those IPs?
For this, once you have the source and destination details, you create the report in Kibana: you can run several aggregations and add different panels. A simple aggregation by IP, split by user, will show you this; how you display it is a matter of preference.
What are the most connected-to machines?
The same idea: first run a count on the sources or destinations (or both), then take the maximum, or sort the counts in descending order.
Which user is creating the most outbound connections?
Here you can do all the users at once: run a count grouped by user, then list the results in descending order.
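To make the aggregation idea concrete, here is a minimal sketch using the Python Elasticsearch client (the index pattern, the process.name filter and the field names are assumptions; adjust them to whatever your events actually contain). Swapping the fields gives you the other breakdowns, e.g. a top-level terms aggregation on destination.ip for the most connected-to machines:

# Minimal sketch: top users by outbound connection count, with their destinations.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="filebeat-*",                                # placeholder index pattern
    size=0,
    query={"term": {"process.name": "ssh"}},           # placeholder filter
    aggs={
        "by_user": {                                   # which user connects the most
            "terms": {"field": "user.name", "size": 10, "order": {"_count": "desc"}},
            "aggs": {
                "by_destination": {                    # ...and to which machines
                    "terms": {"field": "destination.ip", "size": 10}
                }
            }
        }
    },
)

for user in resp["aggregations"]["by_user"]["buckets"]:
    print(user["key"], user["doc_count"])
    for dest in user["by_destination"]["buckets"]:
        print("  ->", dest["key"], dest["doc_count"])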
You can see a full list of properties here (ECS fields).
Summary:
You need some extra fields (destination.ip, source.ip, and possibly the parsed arguments); then, for reporting, you count and aggregate them. Once you have that data you can easily pull it and run the aggregations on it. I think related.user is a good field since it is the only one shown in the event itself, but what if user A actually uses account B to connect over SSH? In that case you need to parse the arguments from process.parent.args.
Cheers.
Related
I want to have a graph where all recent IPs that requested my webserver are shown with their total request count. Is something like this doable? Can I add a query and remove it afterwards via Prometheus?
Technically, yes. You will need to:
Expose some metric (probably a counter) in your server, say requests_count, with a label, say ip
Whenever you receive a request, increment the metric with the label set to the requester's IP (a minimal sketch of these two steps follows this list)
In Grafana, graph the metric, likely summing it by the IP address to handle the case where you have several horizontally scaled servers handling requests: sum(your_prometheus_namespace_requests_count) by (ip)
Set the Legend of the graph in Grafana to {{ ip }} to 'name' each line after the IP address it represents
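A minimal sketch of the first two steps, assuming a Python server and the prometheus_client library (the metric and label names follow the ones above; the request handler is just a stub):

from prometheus_client import Counter, start_http_server

# Step 1: a counter with an "ip" label (recent client versions expose it as
# requests_count_total).
REQUESTS = Counter("requests_count", "Requests received, by client IP", ["ip"])

def handle_request(client_ip):
    # ... your actual request handling would go here ...
    REQUESTS.labels(ip=client_ip).inc()   # step 2: increment with the requester's IP

if __name__ == "__main__":
    start_http_server(8000)               # expose /metrics for Prometheus to scrape
    handle_request("192.168.0.1")         # hypothetical example call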
However, every different label value a metric has causes a whole new time series to exist in the Prometheus time-series database; you can think of a metric like requests_count{ip="192.168.0.1"}=1 as being somewhat similar to requests_count_ip_192_168_0_1{}=1 in terms of how it consumes memory. Each metric instance currently held in the Prometheus TSDB head takes something on the order of 3 kB to exist. That means that if you're handling millions of requests, you're going to swamp Prometheus' memory with gigabytes of data from this one metric alone. A more detailed explanation of this issue exists in this other answer: https://stackoverflow.com/a/69167162/511258
With that in mind, this approach makes sense if you know for a fact that only a small number of IP addresses will connect (maybe on an internal intranet, or a client you distribute to a small number of known customers), but if you are planning to deploy to the web, this gives people a very easy way to (most likely unknowingly) crash your monitoring system.
You may want to investigate an alternative -- for example, Grafana is capable of ingesting data from some common log aggregation platforms, so perhaps you can do some structured (e.g. JSON) logging, hold that in e.g. Elasticsearch, and then create a graph from the data held within that.
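As a rough illustration of that alternative, here is a minimal sketch of structured JSON logging with Python's standard library (the field names are arbitrary); a log shipper could then forward these lines to Elasticsearch:

# Each log line is a JSON document that a shipper can forward to Elasticsearch.
import json
import logging

logger = logging.getLogger("access")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())

def log_request(client_ip, path, status):
    logger.info(json.dumps({"ip": client_ip, "path": path, "status": status}))

log_request("192.168.0.1", "/index.html", 200)   # hypothetical example call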
I currently have syslog-ng set up to aggregate my logs. I want to show these logs in real time to my web frontend users, but I have no clue how to do this. Is it possible to connect directly to syslog-ng using WebSockets? Or do I need to first pass the logs on to something like Elasticsearch, and if so, how do I get my data live from Elasticsearch?
I found this table in the syslog-ng documentation, but I could not find any output destination that would solve my problem.
Unfortunately, there is currently no mechanism to export real-time log traffic to a generic destination. You could, however, write your configuration in a way that places log information where a frontend can read it.
For instance, if you have a log statement delivering messages to elastic:
log {
    source(s_network);
    destination(d_elastic);
};
you could add an alternative destination to the same log statement, which would only serve as a buffer for exporting real-time log data. For instance:
log {
    source(s_network);
    destination(d_elastic);
    destination { file("/var/log/buffers/elastic_snapshot.$SEC" overwrite-if-older(59)); };
};
Notice the second destination in the log statement above: with the curly braces you tell syslog-ng to use an in-line destination instead of a predefined one (you could also use a full-blown destination declaration, but I omitted that for brevity).
This new file destination would write all messages that elastic receives to a file as well. The file name contains the time-based macro $SEC, meaning that you'd get a series of files: one for each second in a minute.
Your frontend could just try to find the file with the latest timestamp and present that as the real-time traffic (from the last second).
The overwrite-if-older() option tells syslog-ng that if the file is older than 59 seconds, then it should overwrite it instead of appending to it.
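As a rough sketch of what the frontend side could look like (assuming the file destination above; that the reading happens in a Python backend is my assumption, not part of the original setup):

# Find the most recently written snapshot file and return its contents,
# i.e. roughly the last second of traffic.
from pathlib import Path

def latest_snapshot(directory="/var/log/buffers"):
    files = list(Path(directory).glob("elastic_snapshot.*"))
    if not files:
        return None
    newest = max(files, key=lambda p: p.stat().st_mtime)
    return newest.read_text()

print(latest_snapshot())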
This is a bit hacky, and I actually intend to implement something like what you have asked for in a generic way, but it is doable even today, as long as the syslog-ng configuration is in your control.
As explained by this post, when the client first connects to the server, a “main socket/process” gets created and holds its assigns. Later, when the client joins specific channels/topics, each channel’s socket/process copies those assigns and can add to them as it sees fit.
I now have a use case where, upon the user joining their own individual channel (i.e. user:#{user_id}), I retrieve some information about the user from the DB, which should then be globally available to all channels this user later joins. However, I haven’t been able to find a way to put that information into socket.assigns so that it is available everywhere. If I try to assign it, it is only available in the socket.assigns of this particular user:#{user_id} channel.
Is there a way to do this? Or should I instead simply fetch all that information in one go when the user first connects, rather than when they join the individual user channel?
Different channels mean different sockets.
The easiest solution would be to maintain a permanent state (Agent, ETS, DETS, Mnesia, ...) holding a map of user_id => user_info, and query that state whenever you need this info.
Problem:
Our deploy build configuration requires a small number of typed parameters to allow for exclusion/inclusion of some service deploys. The parameters are set to prompt for review, and the build is triggered manually from TeamCity's Run Custom Build button.
I haven't yet found any documentation for (or hackish example of) ordering or sorting rules that TeamCity uses to display those typed parameters.
As a quick sketch of an example, we're hoping to display this:
1. Stop service X
2. Start service X
3. Stop service Y
4. Start service Y
Or:
1. Stop service X
2. Stop service Y
3. Start service X
4. Start service Y
Note: the actual order of the build steps is fine and is not part of the objective here. We don't need to re-order those; I'm hoping to avoid user error by keeping either the services grouped together or the choices grouped together.
It almost seems that the Run Custom Build dialog is ordered by the internal id (or creation time) of each parameter.
We're not using TeamCity's internal database but rather a MySQL installation on the same host; we're open to the option of re-ordering the parameters directly in the database if necessary.
Is there another way to influence the sorting or display order of these parameters when prompting the user for their review?
I would suggest one of these approaches:
Remove the parameters altogether and create a separate build for each specific action: builds are sorted alphabetically, so you can order them however you want. Besides that, you could trigger each build automatically without worrying about selecting parameters, and you will see who performed a specific action for a specific service and when (with a single build you have to look at parameters or logs to get this information).
If you need the parameters and would like to select them, then the most obvious choice is a Select box from Typed Parameters. You can change the order in the build configuration, and that should automatically produce the correct order in the UI.
You can try my plugin for dynamic select parameters -- this way you can control the order of the parameters from a remote service.
OK, so this question is a hard one to answer for those who do not have much experience with Novell eDirectory.
What I am trying to do is create a workflow that will delete user objects in my eDirectory tree once certain requirements are met. The problem is that one of these requirements is dependent on a timestamp. The attribute I am looking at is the LoginDisabled attribute. This is a Boolean value, so either True or False. When looking at this attribute via LDAP you only get back the Boolean, which is fine.
However, my requirements as set forth by internal policy state that only accounts that have been set to True for a minimum of 30 days can have actions performed against them. The only place I can see this timestamp is through the NDS iMonitor tool.
So my question is how do I query this timestamp that is stored outside of LDAP without having to look up each user individually in iMonitor?
If possible I would prefer a script that utilizes Powershell, but I can also use C# or Python.
Yes, there are other things that could be done to extend the schema and what have you, but to avoid running down a rabbit hole, let's just say that modifications to the server configuration are not authorized. I am only allowed to query, and it appears I need to be able to query NDS directly.
We are using the loginTime attribute.
After connections are banned (the LoginDisabled attribute is set to TRUE), the user cannot connect to the tree. We wait 35 days after the last login and then delete the user.
It is not possible to get this attribute over LDAP by default. However, you can try to add the attribute to LDAP. I cannot test this, as I have no test server at the moment, but the theory is that you go into iManager and find the LDAP Group object for the server you want to use for LDAP.
Then click the object, go to General, tab Attribute Map.
In there, add the attribute you want and map it.
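As a rough Python sketch of the resulting query, using the ldap3 library (the server address, credentials, base DN and 35-day cutoff are placeholders; it assumes loginDisabled and loginTime are both visible over LDAP, either by default or after mapping them as described above):

# Find disabled accounts whose last login is older than the cutoff.
from datetime import datetime, timedelta
from ldap3 import Server, Connection, SUBTREE

cutoff = (datetime.utcnow() - timedelta(days=35)).strftime("%Y%m%d%H%M%SZ")

server = Server("ldaps://edir.example.com")                      # placeholder host
conn = Connection(server, user="cn=admin,o=org", password="secret", auto_bind=True)

conn.search(
    search_base="o=org",                                         # placeholder base DN
    search_filter=f"(&(loginDisabled=TRUE)(loginTime<={cutoff}))",
    search_scope=SUBTREE,
    attributes=["cn", "loginTime"],
)

for entry in conn.entries:
    print(entry.entry_dn, entry.loginTime)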