I am using the ELK stack to retain and monitor the nginx-ingress logs of my Kubernetes cluster. Instead of Kibana I'm using Grafana, and instead of Fluentd I'm using Fluent Bit. I found one piece of documentation saying Elasticsearch retains logs for 7 days, and I also found an article saying it retains logs forever.
All I want is the logs for the last 6 months; any logs older than that are not needed.
I have checked the values.yaml file of Elasticsearch to see if I can find a configuration option to change the log retention time, but to no avail.
Has anyone worked with a similar stack and knows how to change the log retention time?
Your time will be highly appreciated.
For data retention, you need to configure an Index Lifecycle Management (ILM) policy. If you have not configured an ILM policy, Elasticsearch will retain log data forever and will not delete it automatically. You can create a policy from Kibana as well, but since you mention you are not using Kibana, you can use the API below.
To create a lifecycle policy from Kibana, open the menu and go to
Stack Management > Index Lifecycle Policies. Click Create policy.
You can configure an ILM policy using the following API call:
PUT _ilm/policy/my_policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "7d"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
The above policy rolls the index over once it is at least 7 days old and deletes the index 30 days after rollover.
You can assign the created policy to your index using the following request:
PUT logs-my_app-default/_settings
{
  "index": {
    "lifecycle": {
      "name": "my_policy"
    }
  }
}
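Since you want roughly 6 months of retention, a minimal sketch for your case is a policy whose delete phase fires at 180d, attached to new indices through an index template. The policy name, template name, and the logstash-* index pattern below are assumptions; adjust the pattern to whatever index names Fluent Bit actually writes. This also assumes Elasticsearch 7.8+ for the _index_template API; on older versions use the legacy _template API.
PUT _ilm/policy/delete-after-180d
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "180d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}

PUT _index_template/fluent-bit-logs
{
  "index_patterns": ["logstash-*"],
  "template": {
    "settings": {
      "index.lifecycle.name": "delete-after-180d"
    }
  }
}
With daily indices (Fluent Bit's default output), min_age is measured from index creation, so each index is deleted about 180 days after it was created.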
Update
You can use the Explain lifecycle API to check whether ILM is working properly for an index:
GET my-index-000001/_ilm/explain
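If you also want to confirm the stored policy definition itself, you can fetch it back with the get lifecycle policy API:
GET _ilm/policy/my_policy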
We are a team that uses a Heroku pipeline. By default, each review app uses a hobby dyno. Is it possible to configure the pipeline to use a free dyno for each review app by default?
Yes. One of the configuration options you can place inside app.json controls the number and type of dynos used when spinning up your review app.
Documentation: app.json, Configuring Review Apps
The majority of the configuration options you can use at the root of app.json can be overridden under the environments.review key.
Example:
{
  "environments": {
    "review": {
      "formation": {
        "web": {
          "quantity": 1,
          "size": "free"
        }
      }
    }
  }
}
I'm using Elastic Cloud and I need to configure synonyms. I've done that successfully by uploading a zip file containing an elasticsearch/dictionaries/synonyms.txt file and creating an index that uses them:
PUT /synonym_test
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "synonym_analyzer": {
            "tokenizer": "lowercase",
            "filter": ["porter_stem", "my_synonyms"]
          }
        },
        "filter": {
          "my_synonyms": {
            "type": "synonym",
            "synonyms_path": "elasticsearch/dictionaries/synonyms.txt",
            "updateable": true
          }
        }
      }
    }
  }
}
GET /synonym_test/_analyze
{
  "analyzer": "synonym_analyzer",
  "text": "notebook"
}
This works correctly. Now I want to update my synonyms.txt file with new entries, which I can do using the Extensions API. Once I've uploaded the new file, I need all the nodes to pick up the new configuration, and this is where I'm running into trouble.
How do I restart the nodes so that they pick up the new config files?
I've tried using the API /deployments/:deployment_id/elasticsearch/:ref_id/_restart, which does restart the deployment, but the new synonyms file is not picked up.
I'm looking for a way to programmatically update the deployment, i.e. without going to the UI and rebuilding it manually.
After updating your extension with the new synonyms file, you need to Edit your deployment, click "Settings and Plugins" to make sure your extension is still checked (it should be), and then simply Save your deployment again.
A popup will tell you that no significant changes have been detected, but you can ignore that (i.e. ES Cloud most probably doesn't inspect the content of the files) and simply Save your deployment; it will restart again with the new synonyms file.
When updating your extension, you could also change its name (e.g. with a version number), then Edit your deployment, click "Settings and Plugins" and swap the old extension for the new one. That way ES Cloud will tell you that it detected a change and will restart the nodes upon saving the deployment.
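Also, since your my_synonyms filter is declared with "updateable": true, once the nodes have the new file on disk you should be able to reload the search-time analyzers without a full cluster restart. This is the stock Elasticsearch reload API (7.3+), not anything Elastic Cloud specific:
POST /synonym_test/_reload_search_analyzers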
I was following the experimental Built-in log pull feature:
https://github.com/Azure/iotedge/blob/master/doc/built-in-logs-pull.md
When I try to upload logs using the following payload from the Azure portal (using a Direct Method under each module),
PAYLOAD:
{
  "schemaVersion": "1.0",
  "sasUrl": "https://veeaiotcentralstorage.blob.core.windows.net/iotedgeruntimelogs/iotedgeruntimelogs.txt?sv=2019-02-02&st=2020-08-08T08%3A56%3A00Z&se=2020-08-14T08%3A56%3A00Z&sr=b&sp=rw&sig=xyz",
  "items": [
    {
      "id": "zigbee_template-arm64v8",
      "filter": {
        "tail": 10
      }
    }
  ],
  "encoding": "none",
  "contentType": "text"
}
I get the error below after checking the task status.
ERROR:
{
  "status": 200,
  "payload": {
    "status": "Failed",
    "message": "Task upload logs failed because of error Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.",
    "correlationId": "b85002d8-d8f9-49d5-851d-9123a8d7d740"
  }
}
Please let me know where the issue is.
Digging into the code some more, I noticed the UploadLogs implementation doesn't create a container, but rather a folder structure within the container that you supply. As far as I can tell, the casing restriction applies when creating a blob container, but there is no such restriction on creating folders within the container.
So please check the SAS URL that you supplied, or something on the storage end. Double-check that your SAS URL is generated for a pre-existing blob container.
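Note that the sr=b in your URL indicates a blob-level SAS, whereas the upload expects a SAS URL that points at a container. As a sketch (the account and container names are taken from your URL; the key and expiry are placeholders), a container-level SAS can be generated with the Azure CLI and appended to the container URL:
az storage container generate-sas \
  --account-name veeaiotcentralstorage \
  --name iotedgeruntimelogs \
  --account-key <storage-account-key> \
  --permissions racwl \
  --expiry 2020-08-14T08:56Z \
  --output tsv
# sasUrl would then be: https://veeaiotcentralstorage.blob.core.windows.net/iotedgeruntimelogs?<token printed above>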
We have a lot of AWS QuickSight reports in one account, which need to be migrated to another account.
Within the same account, we can use the 'save-as' feature of the dashboard to create a copy of a report, but is there any way to export an analysis from one account and import it into another account?
At present, it appears the only way is to recreate all the reports from scratch in the new account. Does anyone have any other options?
You can do this programmatically through the API:
QuickSight API
However, it will take a bit of scripting. You will need to pull out the pieces with the API and then rebuild them in the new account.
For example, DescribeTemplate will pull the JSON defining a template. Then you can use CreateTemplate to create it in the other account.
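For example, you can inspect a template from the CLI before recreating it in the other account (the account and template IDs below are placeholders):
aws quicksight describe-template --aws-account-id <AccountID-1> --template-id <templateId>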
In my organization, we are using the QuickSight APIs in AWS Lambda Functions and save the Analysis template in JSON format in an S3 bucket. This S3 bucket has access to multiple environments like Dev, QA, Staging, and Production. Leveraging the API again, we create analysis in other environments by using the template JSON file. We also store version information of the templates in a PostgreSQL database.
PS - The dataset in each environment needs to be created prior to migrating the analysis.
1. Create a template in Account1 from the analysis
template.json:
{
  "SourceAnalysis": {
    "Arn": "arn:aws:quicksight:us-east-1:<AccountID-1>:analysis/<AnalysisID>",
    "DataSetReferences": [
      {
        "DataSetPlaceholder": "DS1",
        "DataSetArn": "arn:aws:quicksight:us-east-1:<AccountID-1>:dataset/<DatasetID>"
      }
    ]
  }
}
aws quicksight create-template --aws-account-id <AccountID-1> --template-id <templateId> --source-entity file://template.json --profile default
2. Update the template permissions in Account1 to grant access to Account2 (root)
TemplatePermission.json:
[
  {
    "Principal": "arn:aws:iam::<AccountID-2>:root",
    "Actions": ["quicksight:UpdateTemplatePermissions", "quicksight:DescribeTemplate"]
  }
]
aws quicksight update-template-permissions --aws-account-id <AccountID-1> --template-id <templateId> --grant-permissions file://TemplatePermission.json --profile default
3. Create an analysis in Account2 using the Account1 template
createAnalysis.json:
{
  "SourceTemplate": {
    "DataSetReferences": [
      {
        "DataSetPlaceholder": "DS1",
        "DataSetArn": "arn:aws:quicksight:us-east-1:<AccountID-2>:dataset/<DatasetID in account 2>"
      }
    ],
    "Arn": "arn:aws:quicksight:us-east-1:<AccountID-1>:template/<templateId>"
  }
}
aws quicksight create-analysis --aws-account-id <AccountID-2> --analysis-id testanalysis --name test1 --source-entity file://createAnalysis.json
4. Update the analysis permissions so your user can view it
UpdateAnalysisPermission.json:
[
  {
    "Principal": "arn:aws:quicksight:us-east-1:<AccountID-2>:user/default/<username>",
    "Actions": [
      "quicksight:RestoreAnalysis",
      "quicksight:UpdateAnalysisPermissions",
      "quicksight:DeleteAnalysis",
      "quicksight:DescribeAnalysisPermissions",
      "quicksight:QueryAnalysis",
      "quicksight:DescribeAnalysis",
      "quicksight:UpdateAnalysis"
    ]
  }
]
aws quicksight update-analysis-permissions --aws-account-id <AccountID-2> --analysis-id testanalysis --grant-permissions file://UpdateAnalysisPermission.json
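As a quick sanity check (same placeholder IDs as above), you can confirm the analysis was created and inspect its status:
aws quicksight describe-analysis --aws-account-id <AccountID-2> --analysis-id testanalysis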
UPDATE:
As @yottabrain clarified, at the moment (February 2020) you can only share an analysis with other users within the same Amazon QuickSight account.
Sure, you can share your analysis as well:
Go to Share > Share analysis > Manage analysis access > Invite Users
See the detailed manual from AWS: Sharing an Analysis
I am trying to set up streaming from an Azure VM scale set to an event hub via Diagnostics configuration.
I have my public config which includes the SinksConfig as follows (I have omitted the rest of the config for the sake of brevity):
{
  "WadCfg": {
    "DiagnosticMonitorConfiguration": {
      *** config for performance counters and ETW ***
      "SinksConfig": {
        "Sink": [
          {
            "name": "eventhub",
            "EventHub": {
              "Url": "sb://myhub.servicebus.windows.net/mycompanyapplication",
              "SharedAccessKeyName": "RootManageSharedAccessKey"
            }
          }
        ]
      }
    }
  },
  "StorageAccount": "<storageaccount>"
}
and the private config:
{
  "storageAccountName": "<storageaccountname>",
  "storageAccountKey": "<storageaccountkey>",
  "storageAccountEndPoint": "https://core.windows.net",
  "EventHub": {
    "Url": "sb://myhub.servicebus.windows.net/mycompanyapplication",
    "SharedAccessKeyName": "RootManageSharedAccessKey",
    "SharedAccessKey": "<sharedaccesskey>"
  }
}
However, nothing is being received by the event hub. I can see in the storage account logs that the Diagnostics extension is running, but in the substatus there are many errors around the SAS key and the event hub. When I check back in the Visual Studio Diagnostics configuration on the scale set, I see a warning about not being able to read the SAS key.
I have checked the naming convention on the SharedAccessKeyName (which is the default provided when the event hub was set up) and I know that the SAS key works, as I wrote a console app that sends messages to the same event hub with the same credentials and it worked fine.
So there is obviously a problem with the authentication to the event hub, as it can't read the access key from the config file. However, I can't see any other way of providing it.
Am I missing something obvious here in my config?
It turns out the problem was quite simple: I had grabbed the URL from the connection string in the portal, which was
sb://myhub.servicebus.windows.net/mycompanyapplication
when it should have been
https://myhub.servicebus.windows.net/mycompanyapplication
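For reference, the working sink section is the same as above with just the scheme changed (presumably in both the public and private config):
"EventHub": {
  "Url": "https://myhub.servicebus.windows.net/mycompanyapplication",
  "SharedAccessKeyName": "RootManageSharedAccessKey"
}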
Now the data is flowing freely into the event hub.
However, the diagnostics config in VS still shows the warning about not being able to read the SAS key, which now looks like a "red herring" that ended up costing me a lot of time :(