I'm using Elastic Cloud and I need to configure synonyms. I've done that successfully by uploading a zip file containing an elasticsearch/dictionaries/synonyms.txt file and creating an index that uses them:
PUT /synonym_test
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "synonym_analyzer": {
            "tokenizer": "lowercase",
            "filter": ["porter_stem", "my_synonyms"]
          }
        },
        "filter": {
          "my_synonyms": {
            "type": "synonym",
            "synonyms_path": "elasticsearch/dictionaries/synonyms.txt",
            "updateable": true
          }
        }
      }
    }
  }
}
GET /synonym_test/_analyze
{
  "analyzer": "synonym_analyzer",
  "text": "notebook"
}
This works correctly. Now I want to update my synonyms.txt file with new entries, which I can do by using the Extensions API. Once I've uploaded the new file correctly, I need all the nodes to pick up the new configuration, and this is where I'm running into trouble.
How do I restart the nodes so that they pick up the new config files?
I've tried using the API /deployments/:deployment_id/elasticsearch/:ref_id/_restart, which does restart the deployment, but the new synonyms file is not picked up.
I'm looking for a way to update the deployment programmatically, i.e. without going to the UI and rebuilding manually.
After updating your extension with the new synonyms file, you need to Edit your deployment, click on "Settings and Plugins" to make sure your extension is still checked (it should be), and then simply Save your deployment again.
A popup will tell you that no significant changes have been detected, but you can ignore that (ES Cloud most probably doesn't inspect the content of the files); simply Save your deployment and it will restart with the new synonyms file.
When updating your extensions, you could also change the name of your extension (e.g. with a version number), then Edit your deployment, click "Settings and Plugins" and swap the old extension for the new one. That way ES Cloud will tell you that it detected a change and will restart the nodes upon saving the deployment.
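One complementary note: since the my_synonyms filter is declared with "updateable": true, once a node actually has the updated synonyms.txt on disk you may be able to reload the search-time analyzers without a full restart via the reload search analyzers API (available since Elasticsearch 7.3), e.g.:
POST /synonym_test/_reload_search_analyzers
Keep in mind that updateable synonym filters can only be used in search-time analyzers (search_analyzer), not at index time, and whether this picks up an extension update on Elastic Cloud without re-saving the deployment is something to verify.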
Related
I am using the ELK stack to retain and monitor the nginx-ingress logs of my k8s cluster, except that instead of Kibana I'm using Grafana and instead of Fluentd I'm using Fluent Bit. I found one piece of documentation saying Elasticsearch retains logs for 7 days, and I also found an article saying it retains logs forever.
All I want is the logs for the last 6 months; anything older is not needed.
I have checked the Elasticsearch values.yaml file to see if I can find a configuration option to change the log retention time, but to no avail.
Has anyone worked with a similar stack and knows how to change the log retention time?
Your help will be highly appreciated.
To control data retention, you need to configure an index lifecycle management (ILM) policy. If you have not configured an ILM policy, Elasticsearch will retain log data forever; it will not be deleted automatically. You can create the policy from Kibana as well, but since you mention you are not using Kibana, you can use the commands below.
To create a lifecycle policy from Kibana, open the menu and go to
Stack Management > Index Lifecycle Policies. Click Create policy.
You can configure an ILM policy using the following API:
PUT _ilm/policy/my_policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "7d"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
The above policy rolls an index over once it is at least 7 days old and deletes it 30 days after rollover.
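Since you want roughly 6 months of retention, the same structure works; you mainly need to raise the delete phase's min_age, e.g. to 180 days (the thresholds below are illustrative, and total retention is roughly max_age plus min_age):
PUT _ilm/policy/my_policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "7d"
          }
        }
      },
      "delete": {
        "min_age": "180d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}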
You can assign the created policy to your index using the following command:
PUT logs-my_app-default/_settings
{
  "index": {
    "lifecycle": {
      "name": "my_policy"
    }
  }
}
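Note that the _settings call only affects the existing index. For indices created later (e.g. by rollover) to pick up the same policy, you can also set it in an index template. A minimal sketch with an illustrative template name and index pattern (the data_stream block assumes you are writing to a data stream, which a name like logs-my_app-default suggests; for classic indices you would instead need an index.lifecycle.rollover_alias and a write alias):
PUT _index_template/my_logs_template
{
  "index_patterns": ["logs-my_app-*"],
  "data_stream": {},
  "template": {
    "settings": {
      "index.lifecycle.name": "my_policy"
    }
  }
}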
Update
You can use the Explain Lifecycle API to check whether ILM is working properly:
GET my-index-000001/_ilm/explain
I am using Azure API for FHIR. I have a Claims payload that requires some additional fields, which I am adding to the extension structure like:
"extension": [
  {
    "url": "ROW_ID",
    "valueString": "1"
  },
  {
    "url": "LOB",
    "valueString": "MAPD"
  }
]
To perform a search on ROW_ID and LOB, I need to publish this extension, which I would then use in my SearchParameter.
How and where do I publish the extension?
To publish an extension, you post the structure definition to your FHIR server. In addition, if you need to search for it you will need to create a custom search parameter and reindex the database. You can read more about that here: https://learn.microsoft.com/en-us/azure/healthcare-apis/fhir/how-to-do-custom-search
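As an illustration of the custom search parameter step, a SearchParameter for the ROW_ID extension might look roughly like this (the url, name, code, and FHIRPath expression here are assumptions you would adapt; see the linked doc for the exact Azure workflow). You POST it to the FHIR service and then run a reindex so it becomes usable in searches:
POST /SearchParameter
{
  "resourceType": "SearchParameter",
  "url": "http://example.org/fhir/SearchParameter/claim-row-id",
  "name": "ClaimRowId",
  "status": "active",
  "description": "Search Claim resources by the ROW_ID extension",
  "code": "row-id",
  "base": ["Claim"],
  "type": "string",
  "expression": "Claim.extension.where(url = 'ROW_ID').value"
}
A second SearchParameter with a different code (e.g. lob) would cover the LOB extension in the same way.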
I was following the experimental built-in log pull feature:
https://github.com/Azure/iotedge/blob/master/doc/built-in-logs-pull.md
When I try to upload logs using the following payload from the Azure portal (using a Direct Method under each module),
PAYLOAD:
{
  "schemaVersion": "1.0",
  "sasUrl": "https://veeaiotcentralstorage.blob.core.windows.net/iotedgeruntimelogs/iotedgeruntimelogs.txt?sv=2019-02-02&st=2020-08-08T08%3A56%3A00Z&se=2020-08-14T08%3A56%3A00Z&sr=b&sp=rw&sig=xyz",
  "items": [
    {
      "id": "zigbee_template-arm64v8",
      "filter": {
        "tail": 10
      }
    }
  ],
  "encoding": "none",
  "contentType": "text"
}
I am getting the error mentioned below after checking the task status
ERROR:
{"status":200,"payload":{"status":"Failed",
"message":"Task upload logs failed because of error Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.",
"correlationId":"b85002d8-d8f9-49d5-851d-9123a8d7d740"}}
Please let me know where I am having an issue
Digging into the code some more, I noticed the UploadLogs implementation doesn't create a container, but rather a folder structure within the container that you supply. As far as I can tell, the casing restriction applies when creating a blob container, but there is no such restriction on creating folders within the container.
Please check the SAS URL that you supplied, or the configuration on the storage end. Double-check that your SAS URL is generated for a pre-existing blob container.
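For reference, in the payload above the sasUrl points at an individual blob (.../iotedgeruntimelogs.txt, note sr=b in the token), while the upload expects a SAS for the container itself. A sketch of the expected shape, with a placeholder container-level SAS token (sr=c, with create/write/list permissions) generated against an existing container:
{
  "schemaVersion": "1.0",
  "sasUrl": "https://veeaiotcentralstorage.blob.core.windows.net/iotedgeruntimelogs?sv=2019-02-02&se=2020-08-14T08%3A56%3A00Z&sr=c&sp=racwl&sig=xyz",
  "items": [
    {
      "id": "zigbee_template-arm64v8",
      "filter": {
        "tail": 10
      }
    }
  ],
  "encoding": "none",
  "contentType": "text"
}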
I am trying to create linked ARM templates using Visual Studio. For creating a VM I need to pass variables such as the vnet prefix, subnet name, etc. to a different template using a parameter file or a template file. I couldn't find a relevant example on the Microsoft site. Please help.
There is a well-defined way of doing this. You have a 'deployments' resource in your template that references another template via a URI:
"resources": [
{
"name": "myNestedTemplate",
"type": "Microsoft.Resources/deployments",
"apiVersion": "2015-01-01",
"properties": {
"mode": "Incremental",
"templateLink": {
"uri": "[concat(variables('template').base, 'nested/', variables('template').nested2)]",
"contentVersion": "1.0.0.0"
},
"parameters": {
"apiVersion": {
"value": "[variables('sharedState')]"
}
}
}
So you need to have the other template accessible. In Visual Studio, you can make sure it gets uploaded along with the rest of your artifacts to the storage account.
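For the vnet prefix and subnet name specifically, the same pattern applies: pass them in the parameters block of the deployments resource and declare them in the linked template. A rough sketch, where the URI is built from the _artifactsLocation / _artifactsLocationSasToken parameters that the Visual Studio resource group project provides (the linked template name and parameter names here are illustrative):
{
  "name": "vmTemplate",
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2015-01-01",
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "[concat(parameters('_artifactsLocation'), '/nested/vm.json', parameters('_artifactsLocationSasToken'))]",
      "contentVersion": "1.0.0.0"
    },
    "parameters": {
      "vnetPrefix": { "value": "[variables('vnetPrefix')]" },
      "subnetName": { "value": "[variables('subnetName')]" }
    }
  }
}
The linked nested/vm.json then declares matching parameters:
"parameters": {
  "vnetPrefix": { "type": "string" },
  "subnetName": { "type": "string" }
}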
Check out Mark van Eijk's blog for this particular solution, but also the quick start templates on GitHub are a great resource for finding how to do something.
Also, you must not have looked very hard on the Microsoft website...: Linked Template Example on msft
My company's IP range seems to be blocked by packagist.org's hosting service and I can't reach that domain. I've already contacted them, but I don't know how long it will take for the block to be removed. Moreover, every external web proxy I try to use is caught by my company's firewall, so I'm stuck.
Is there any public mirror for composer packages so I don't have to depend on packagist.org domain?
Any other solution is welcome as well.
I couldn't find a public mirror, but I was able to work around the packagist.org dependency by editing ~/.composer/config.json and adding the dependent projects' GitHub links as repositories, e.g.:
{
  "repositories": [
    { "type": "vcs", "url": "https://github.com/smarty-php/smarty" },
    { "type": "vcs", "url": "https://github.com/sebastianbergmann/phpunit" },
    { "type": "vcs", "url": "https://github.com/sebastianbergmann/php-code-coverage" },
    { "type": "vcs", "url": "https://github.com/sebastianbergmann/php-file-iterator" },
    { "type": "vcs", "url": "https://github.com/sebastianbergmann/php-text-template" },
    { "type": "vcs", "url": "https://github.com/sebastianbergmann/php-timer" },
    { "type": "vcs", "url": "https://github.com/sebastianbergmann/phpunit-mock-objects" },
    { "type": "vcs", "url": "https://github.com/phpspec/prophecy" },
    ...
    { "packagist": false }
  ]
}
The drawback is pretty obvious: I had to map every dependency, and each dependency's dependencies, and point to their GitHub links. At least it was faster to do this than to wait for the OVH hosting service to resolve the blocking problem.
The team responsible for packagist.org states that they don't block anyone on their server. They cannot vouch for the hosting company, though.
There is no mirror server that I know of. Are you positively sure that this isn't an issue with your firewall? If you say that you cannot use public proxies because of it, I would say that it might block too much.
On the other hand, relying on certain external servers to be up when you need them is probably an expectation that cannot be met all the time. This isn't just packagist.org, but also all the other web servers hosting the software you want, like GitHub, Bitbucket, etc. I'd say this would be an ideal opportunity to start creating a local copy for yourself, though of course that would need a working first-time contact with packagist.org.
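If you do go the local-copy route, one common tool (not mentioned above, so treat this as a suggestion) is Satis, Composer's static repository generator: you run it on a machine that can reach packagist.org and point your projects at the generated repository. A minimal satis.json sketch with illustrative names and URLs:
{
  "name": "mycompany/mirror",
  "homepage": "http://composer.mycompany.local",
  "repositories": [
    { "type": "composer", "url": "https://repo.packagist.org" }
  ],
  "require": {
    "smarty/smarty": "*",
    "phpunit/phpunit": "*"
  },
  "require-dependencies": true,
  "archive": {
    "directory": "dist"
  }
}
You would then run satis build satis.json web/ and serve the web/ directory internally; client projects add it as a composer repository and disable packagist as shown above.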