Exclude information from metrics endpoint - spring-boot

The metrics endpoint can respond with a lot of data, especially the cache.* and gauge.* metrics. Is there a way to exclude these and only show the system metrics?
I'm asking this because the endpoint is polled by Logstash and it exceeds the maximum number of fields per index.

I found a solution for the cache.* metrics. Add this to one of your configuration classes:
@Bean
public CachePublicMetrics emptyCachePublicMetrics() {
    return null;
}
Also, in my particular case with Logstash, the problem can be solved by adding a prune filter to the Logstash config:
prune {
  blacklist_names => [ "^cache\.", "^gauge\." ]
}

Related

Hazelcast persisting and loading data on all nodes

I have a two-node distributed cache setup which needs persistence configured for both members.
I have MapStore and MapLoader implemented, and the same code is deployed on both nodes.
The MapStore and MapLoader work absolutely fine on a single-member setup, but after another member joins, they continue to work only on the first member, and all inserts or updates made by the second member are persisted to disk via the first member.
My requirement is that each member should be able to persist to disk independently, so that the distributed cache is backed up on all members and not just the first one.
Is there a setting I can change to achieve this?
Here is my Hazelcast Spring configuration.
@Bean
public HazelcastInstance hazelcastInstance(H2MapStorage h2mapStore) throws IOException {
    MapStoreConfig mapStoreConfig = new MapStoreConfig();
    mapStoreConfig.setImplementation(h2mapStore);
    mapStoreConfig.setWriteDelaySeconds(0);
    YamlConfigBuilder configBuilder = null;
    if (new File(hazelcastConfiglocation).exists()) {
        configBuilder = new YamlConfigBuilder(hazelcastConfiglocation);
    } else {
        configBuilder = new YamlConfigBuilder();
    }
    Config config = configBuilder.build();
    config.setProperty("hazelcast.jmx", "true");
    MapConfig mapConfig = config.getMapConfig("requests");
    mapConfig.setMapStoreConfig(mapStoreConfig);
    return Hazelcast.newHazelcastInstance(config);
}
Here is my Hazelcast YAML config - this is placed at /opt/hazlecast.yml, which is picked up by my Spring config above.
hazelcast:
  group:
    name: tsystems
  management-center:
    enabled: false
    url: http://localhost:8080/hazelcast-mancenter
  network:
    port:
      auto-increment: true
      port-count: 100
      port: 5701
    outbound-ports:
      - 0
    join:
      multicast:
        enabled: false
        multicast-group: 224.2.2.3
        multicast-port: 54327
      tcp-ip:
        enabled: true
        member-list:
          - 192.168.1.13
The entire code is available here: https://bitbucket.org/samrat_roy/hazelcasttest/src/master/
This might just be bad luck and low data volumes, rather than an actual error.
On each node, try running the localKeySet() method and printing the results.
This will tell you which keys are on which node in the cluster. The node that owns key "X" will invoke the map store for that key, even if the update was initiated by another node.
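For example, a minimal sketch of checking this from code (assuming the map is the "requests" map configured above and that the HazelcastInstance is available):

// Run the same check on every member; localKeySet() returns only the keys
// owned by the member it is called on.
IMap<String, Object> map = hazelcastInstance.getMap("requests");
System.out.println("Keys owned by this member: " + map.localKeySet());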
If you have low data volumes, it may not be a 50/50 data split. At an extreme, 2 data records in a 2-node cluster could both end up on the same node.
If you have 1,000 data records, it's pretty unlikely that they'll all be on the same node.
So the other thing to try is to add more data and update all of it, to see whether both nodes participate.
OK, after struggling a lot I noticed a teeny tiny but critical detail:
"Datastore needs to be a centralized system that is accessible from all Hazelcast members. Persistence to a local file system is not supported."
This is absolutely in line with what I was observing.
https://docs.hazelcast.org/docs/latest/manual/html-single/#loading-and-storing-persistent-data
However, not to be discouraged, I found out that I could use event listeners to do the same thing I needed to do.
@Component
public class HazelCastEntryListner
        implements EntryAddedListener<String, Object>, EntryUpdatedListener<String, Object>, EntryRemovedListener<String, Object>,
        EntryEvictedListener<String, Object>, EntryLoadedListener<String, Object>, MapEvictedListener, MapClearedListener {

    @Autowired
    @Lazy
    private RequestDao requestDao;
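The listener method bodies were omitted above; a minimal sketch of what they might look like (the save and delete methods on RequestDao are assumptions, not the actual code from the repository):

@Override
public void entryAdded(EntryEvent<String, Object> event) {
    // Persist the new entry to this member's local database.
    requestDao.save(event.getKey(), event.getValue());
}

@Override
public void entryUpdated(EntryEvent<String, Object> event) {
    // Overwrite the locally persisted copy with the new value.
    requestDao.save(event.getKey(), event.getValue());
}

@Override
public void entryRemoved(EntryEvent<String, Object> event) {
    // Remove the entry from the local database as well.
    requestDao.delete(event.getKey());
}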
I created this class and hooked it into the config like so:
MapConfig mapConfig = config.getMapConfig("requests");
mapConfig.addEntryListenerConfig(new EntryListenerConfig(entryListner, false, true));
return Hazelcast.newHazelcastInstance(config);
This worked flawlessly; I am able to replicate data to the embedded databases on both nodes.
My use case was to cover HA failover edge cases. During HA failover, the slave node needed to know the working memory of the active node.
I am not using Hazelcast as a cache; rather, I am using it as a data-syncing mechanism.

Ship only a percentage of logs to logstash

How can I configure filebeat to only ship a percentage of logs (a sample if you will) to logstash?
In my application's log folder the logs are chunked to about 20 megs each. I want filebeat to ship only about 1/300th of that log volume to logstash.
I need to pare down the log volume before I send it over the wire to Logstash, so I cannot do this filtering in Logstash; it needs to happen on the endpoint before the data leaves the server.
I asked this question in the ES forum and someone said it was not possible with filebeat: https://discuss.elastic.co/t/ship-only-a-percentage-of-logs-to-logstash/77393/2
Is there really no way I can extend Filebeat to do this? Can nxlog or another product do this?
To the best of my knowledge, there is no way to do that with FileBeat. You can do it with Logstash, though.
filter {
  drop {
    percentage => 99.7
  }
}
This may be a use-case where you would use Logstash in shipping mode on the server, rather than FileBeat.
input {
  file {
    path => "/var/log/hugelogs/*.log"
    tags => [ 'sampled' ]
  }
}
filter {
  drop {
    percentage => 99.7
  }
}
output {
  tcp {
    host => 'logstash.prod.internal'
    port => '3390'
  }
}
It means installing Logstash on your servers, but you configure it as minimally as possible: just an input, enough filters to get your desired effect, and a single output (TCP in this case, but it could be anything). Full filtering will happen further down the pipeline.
There's no way to configure Filebeat to drop arbitrary events based on a probability. But Filebeat does have the ability to drop events based on conditions. There are two ways to filter events.
Filebeat has a way to specify lines to include or exclude when reading the file. This is the most efficient place to apply the filtering because it happens early. This is done using include_lines and exclude_lines in the config file.
filebeat.prospectors:
- paths:
    - /var/log/myapp/*.log
  exclude_lines: ['^DEBUG']
All Beats have "processors" that allow you to apply an action based on a condition. One action is drop_events and the conditions are regexp, contains, equals, and range.
processors:
- drop_event:
    when:
      regexp:
        message: '^DEBUG'
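Filebeat cannot sample randomly, but as a rough workaround you could abuse a regexp condition to approximate a fixed sampling rate. For example, if your log lines start with a timestamp that includes milliseconds (an assumption about your log format), keeping only lines whose milliseconds match a few fixed values passes roughly 3 in 1,000 events (~1/333):

processors:
- drop_event:
    when:
      not:
        regexp:
          # Assumes messages start with e.g. "12:34:56.789"; keeps only
          # milliseconds 000, 333 or 666 and drops everything else.
          message: '^\d{2}:\d{2}:\d{2}\.(000|333|666)'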

Logstash - Elasticsearch filter unable to fetch start events

I'm trying to replicate the exact use case for the elasticsearch filter detailed in the docs:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-elasticsearch.html
My output is also the same elasticsearch server.
I need to compute the time duration between two events, and the end events appear less than 10 ms after the start events.
What I'm observing is that Logstash fails to fetch the start event for some end events.
My guess is that such start events are still buffered when Logstash looks for them in ES.
I have tried setting the flush_size property to a low value on the output plugin, but this only helped a little: there were fewer "miss" cases when it was configured to a low value. I tried setting it to 1 too, just to confirm this; there were still a few exit events that couldn't find their entry events.
Is there anything else I should look at that could be causing the issue, as setting flush_size to too low a value didn't help and doesn't look like an optimal solution either?
Here's my Logstash config:
filter {
  elasticsearch {
    hosts => ["ES_SERVER_IP:9200"]
    index => "logstash-filebeat-*"
    query => "event:ENTRY AND id:%{[id]}"
    fields => {"log-timestamp" => "started"}
    sort => ["@timestamp:desc"]
  }
  ruby {
    code => "event['processing_time'] = event['log-timestamp'] - event['started']"
  }
}
output {
  elasticsearch {
    hosts => ["ES_SERVER_IP:9200"]
  }
}

Advice on ElasticSearch Cluster Configuration

Brand new to Elasticsearch. I've been doing tons of reading, but I am hoping that the experts on SO might be able to weigh in on my cluster configuration to see if there is something that I am missing.
Currently I am using ES (1.7.3) to index some very large text files (~700 million lines per file), and I am looking to create one index per file. I am using Logstash (v2.1) as my method of choice for indexing the files. The config file for my first index is below:
input {
  file {
    path => "L:/news/data/*.csv"
    start_position => "beginning"
    sincedb_path => "C:/logstash-2.1.0/since_db_news.txt"
  }
}
filter {
  csv {
    separator => "|"
    columns => ["NewsText", "Place", "Subject", "Time"]
  }
  mutate {
    strip => ["NewsText"]
    lowercase => ["NewsText"]
  }
}
output {
  elasticsearch {
    action => "index"
    hosts => ["xxx.xxx.x.xxx", "xxx.xxx.x.xxx"]
    index => "news"
    workers => 2
    flush_size => 5000
  }
  stdout {}
}
My cluster contains 3 boxes running Windows 10, each running a single node. ES is not installed as a service, and I am only standing up one master node:
Master node: 8GB RAM, ES_HEAP_SIZE = 3500m, Single Core i7
Data Node #1: 8GB RAM, ES_HEAP_SIZE = 3500m, Single Core i7
This node is currently running the logstash instance with LS_HEAP_SIZE= 3000m
Data Node #2: 16GB RAM, ES_HEAP_SIZE = 8000m, Single Core i7
I have ES currently configured with the default of 5 shards + 1 replica per index.
At present, each node is configured to write data to an external HD and logs to another.
In my test run, I am averaging 10K events per second with Logstash. My main goal is to optimize the speed at which these files are loaded into ES. I am thinking that I should be closer to 80K based on what I have read.
I have played around with changing the number of workers and flush size, but can't seem to get beyond this threshold. I think I may be missing something fundamental.
My questions are two fold:
1) Is there anything that jumps out as fishy about my cluster configuration or some advice that may improve the process?
2) Would it help if I ran an instance of logstash on each data node indexing separate files?
Thanks so much for any and all help in advance and for taking the time to read.
-Zinga
I'd first have a look at whether Logstash or ES is the bottleneck in your setup. Try ingesting the file without the ES output: what throughput are you getting from plain/naked Logstash?
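For example, a minimal sketch of such a test (same input and filters as your config, with the elasticsearch output swapped for stdout using the dots codec, which prints one dot per event so you can eyeball throughput):

input {
  file {
    path => "L:/news/data/*.csv"
    start_position => "beginning"
    # Point sincedb_path somewhere disposable so the test re-reads the file.
  }
}
# Keep the same csv / mutate filters as in the original config here.
output {
  stdout { codec => dots }
}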
If this is considerably higher, then you can continue on the ES side of things. A good starter might be:
https://www.elastic.co/guide/en/elasticsearch/guide/current/indexing-performance.html
If plain Logstash doesn't yield a significant increase in throughput, you can try increasing / parallelising Logstash across your machines.
Hope that helps.

How do I configure elasticsearch to retain documents up to 30 days?

Is there a default data retention period in elasticsearch? If yes can you help me find the configuration?
Setting a TTL on documents is no longer supported in Elasticsearch 5.0.0 or greater. The best practice is to create indices periodically (daily is most common) and then delete each index when its data gets old enough.
Here's a reference to how to delete an index (https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-delete-index.html)
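For example, assuming daily indices named like logstash-YYYY.MM.DD, deleting a day that has aged out is a single request:

DELETE /logstash-2016.11.01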
This article (though it's old enough to reference _ttl) also gives some insight: https://www.elastic.co/blog/using-elasticsearch-and-logstash-to-serve-billions-of-searchable-events-for-customers
As a reminder, it's best to protect your Elasticsearch cluster from the outside world via a proxy and restrict the methods that can be sent to your cluster. This way you can prevent your cluster from being ransomed.
Yeah you can set a TTL on the data. Take a look here for the configuration options available.
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-ttl-field.html
Elasticsearch curator is the tool to use if you want to manage your indexes: https://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html
Here's an example of how to delete indices based on age: https://www.elastic.co/guide/en/elasticsearch/client/curator/current/ex_delete_indices.html
Combine with cron to have this done at regular intervals.
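For reference, a minimal sketch of a Curator action file for this (the logstash- prefix and the %Y.%m.%d naming are assumptions about your index names):

# delete_old_indices.yml -- run e.g. daily from cron:
#   curator --config curator.yml delete_old_indices.yml
actions:
  1:
    action: delete_indices
    description: Delete indices older than 30 days, based on the date in the index name
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 30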
There is no default retention period, but newer versions of Elasticsearch have index lifecycle management (ILM), which among other things lets you delete stale indices to enforce data retention standards. See the ILM documentation for details.
Simple policy example:
PUT _ilm/policy/my_policy
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
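The policy only applies to indices that reference it; a minimal sketch of attaching it to new indices through an index template (the template name and the logstash-* pattern are assumptions, and this composable-template syntax requires a recent Elasticsearch):

PUT _index_template/my_template
{
  "index_patterns": ["logstash-*"],
  "template": {
    "settings": {
      "index.lifecycle.name": "my_policy"
    }
  }
}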
If you use OpenSearch on AWS, take a look at the AWS documentation for its equivalent feature.
Pretty old question, but I ran into the same thing just now. Maybe this will be helpful for somebody else.
Just FYI, if you are using AWS's Elasticsearch service, they have a great doc on Using Curator to Rotate Data, which includes some sample Python code that can be used in a Lambda function.
