MinIO: Is it possible to set automatic deletion of an object from a bucket after its retention date?

I want to delete an object after its retention date has passed.
Can I do it with a bucket lifecycle? If so, how?
And a second question: is it possible to automatically delete an object once a newer version of it is available?

Note that this will only work with a versioned bucket.
Enable object lifecycle configuration on buckets to set up automatic deletion of objects after a specified number of days or on a specified date.
Example:
Create a bucket lifecycle configuration that expires the objects under the prefix old/ on 2020-01-01T00:00:00.000Z and the objects under temp/ after 7 days.
Enable the bucket lifecycle configuration using mc:
{
  "Rules": [
    {
      "Expiration": {
        "Date": "2020-01-01T00:00:00.000Z"
      },
      "ID": "OldPictures",
      "Filter": {
        "Prefix": "old/"
      },
      "Status": "Enabled"
    },
    {
      "Expiration": {
        "Days": 7
      },
      "ID": "TempUploads",
      "Filter": {
        "Prefix": "temp/"
      },
      "Status": "Enabled"
    }
  ]
}
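To apply it, save the JSON above to a file and import it with mc. A minimal sketch, where the alias myminio, the bucket mybucket, and the file name lifecycle.json are all assumptions for illustration:
mc ilm import myminio/mybucket < lifecycle.json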
The same can be found in https://docs.min.io/docs/minio-bucket-lifecycle-guide.html
For the second question, you can remove noncurrent versions of objects after a specified number of days:
{
  "Rules": [
    {
      "ID": "Removing all old versions",
      "Filter": {
        "Prefix": "users-uploads/"
      },
      "NoncurrentVersionExpiration": {
        "NoncurrentDays": 365
      },
      "Status": "Enabled"
    }
  ]
}
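This is imported the same way; afterwards you can list the configured rules to verify them (again assuming the illustrative myminio alias and mybucket bucket):
mc ilm import myminio/mybucket < lifecycle-versions.json
mc ilm ls myminio/mybucket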


Elasticsearch alias not being created on index creation

I'm using the go-elasticsearch API in my application to create indices in an Elastic.co cloud cluster. The application dynamically creates an index with a template and then starts indexing documents. The template includes an alias name and looks like this:
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text"
      },
      "created_at": {
        "type": "date"
      },
      "updated_at": {
        "type": "date"
      },
      "status": {
        "type": "keyword"
      }
    }
  },
  "aliases": {
    "rollout-nodes-f0776f0": {}
  }
}
The name of the alias can change, so we pass it to the template when we create a new index. This is done with the Create indices API in Go:
indexTemplate := getIndexTemplate()
res, err := n.client.Indices.Create(
    indexName,
    n.client.Indices.Create.WithBody(indexTemplate),
    n.client.Indices.Create.WithContext(ctx),
    n.client.Indices.Create.WithTimeout(time.Second),
)
In testing, this code works against localhost (without security enabled) but not against the Elastic.co cluster: the index is created, but the alias is not.
I think the problem is related to either the API key permissions or some configuration on the server, but I have not yet found which permission I'm missing.
For more context, this is the API Key I'm using:
{
  "id": "fakeID",
  "name": "index-service-key",
  "creation": 1675350573126,
  "invalidated": false,
  "username": "fakeUser",
  "realm": "cloud-saml-kibana",
  "metadata": {},
  "role_descriptors": {
    "logstash_writer": {
      "cluster": [
        "monitor",
        "transport_client",
        "read_ccr",
        "read_ilm",
        "manage_index_templates"
      ],
      "indices": [
        {
          "names": [
            "*"
          ],
          "privileges": [
            "all"
          ],
          "allow_restricted_indices": false
        }
      ],
      "applications": [],
      "run_as": [],
      "metadata": {},
      "transient_metadata": {
        "enabled": true
      }
    }
  }
}
Any ideas? I know I can use the POST _aliases API instead, but creating the alias at index creation should work too.
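In the meantime, here is a minimal sketch of that _aliases fallback with go-elasticsearch, reusing the n.client, ctx, and indexName from the snippet above; the alias name is the illustrative one from the template:
// Fallback: attach the alias explicitly after index creation via the
// _aliases API. Requires the "fmt" and "strings" imports.
aliasBody := strings.NewReader(fmt.Sprintf(
    `{"actions":[{"add":{"index":%q,"alias":"rollout-nodes-f0776f0"}}]}`,
    indexName,
))
res, err := n.client.Indices.UpdateAliases(
    aliasBody,
    n.client.Indices.UpdateAliases.WithContext(ctx),
)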

Azure Data Factory: how to deploy Alerts & Metrics to other environments with DevOps

We have an Azure Data Factory fully integrated with DevOps. Every change I make to the data factory is deployed to all environments (OTAP), except Alerts & Metrics. I cannot find anything on how to deploy these to the other environments. Is this possible at all?
"Is this possible at all?"
The quick answer is NO so far. I contacted the Microsoft ADF team and got the response below:
"Azure Data Factory utilizes Azure Resource Manager templates to store the configuration of your various ADF entities. Entities on Alerts & Metrics do not get exported in the ARM template, so Alerts & Metrics won't be integrated using DevOps."
I did two verifications:
1. Checked the entities supported by the ARM template in ADF; Alerts & Metrics is not among them.
2. Tried to export the ARM template from the ADF UI; still no Alerts & Metrics.
I really understand that you would like to integrate all elements of Data Factory, including Alerts & Metrics, with DevOps. I suggest submitting feedback here to push improvements to ADF; any voice is welcome.
There is a way to work around this one.
An ADF alert is a "Microsoft.Insights/metricalerts" resource that you can deploy using an ARM deployment operation from Azure DevOps.
You can create an alert in ADF, then go to the Azure Portal, navigate to Monitor > Alerts > Alert rules, and find the alert you created in ADF. In my case there is an alert called Test.
Here is the ARM template exported from the alert:
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "metricalerts_Alert_name": {
      "defaultValue": "Alert",
      "type": "String"
    },
    "factories_test_externalid": {
      "defaultValue": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/yyyyy/providers/Microsoft.DataFactory/factories/test",
      "type": "String"
    },
    "actionGroups_actiongroup1_externalid": {
      "defaultValue": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/yyyyy/providers/microsoft.insights/actionGroups/actiongroup1",
      "type": "String"
    }
  },
  "variables": {},
  "resources": [
    {
      "type": "Microsoft.Insights/metricalerts",
      "apiVersion": "2018-03-01",
      "name": "[parameters('metricalerts_Alert_name')]",
      "location": "global",
      "tags": {
        "CreatedTimeUtc": "2022-09-13T05:28:46.0663823Z"
      },
      "properties": {
        "severity": 0,
        "enabled": true,
        "scopes": [
          "[parameters('factories_test_externalid')]"
        ],
        "evaluationFrequency": "PT1M",
        "windowSize": "PT15M",
        "criteria": {
          "allOf": [
            {
              "threshold": 1,
              "name": "PipelineFailedRuns",
              "metricNamespace": "Microsoft.DataFactory/factories",
              "metricName": "PipelineFailedRuns",
              "dimensions": [
                {
                  "name": "Name",
                  "operator": "Include",
                  "values": [
                    "pipeline2"
                  ]
                },
                {
                  "name": "FailureType",
                  "operator": "Include",
                  "values": [
                    "UserError",
                    "SystemError",
                    "BadGateway"
                  ]
                }
              ],
              "operator": "GreaterThanOrEqual",
              "timeAggregation": "Total",
              "criterionType": "StaticThresholdCriterion"
            }
          ],
          "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria"
        },
        "actions": [
          {
            "actionGroupId": "[parameters('actionGroups_actiongroup1_externalid')]",
            "webHookProperties": {}
          }
        ]
      }
    }
  ]
}
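One way to run this from a pipeline is a plain ARM deployment, for example with the Azure CLI. A sketch, assuming the template is saved as alert.json and the target resource group is the yyyyy placeholder from the template:
az deployment group create \
  --resource-group yyyyy \
  --template-file alert.json \
  --parameters metricalerts_Alert_name=Alert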
For the action group, you can refer to this template:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "groupName": {
      "defaultValue": "actiongroup1",
      "type": "string"
    },
    "email_receiver_address": {
      "defaultValue": "someEmail@gmail.com",
      "type": "string"
    }
  },
  "variables": {},
  "resources": [
    {
      "type": "microsoft.insights/actionGroups",
      "apiVersion": "2019-03-01",
      "name": "[parameters('groupName')]",
      "location": "global",
      "tags": {
        "CreatedTimeUtc": "2020-10-21T07:24:08.2808723Z"
      },
      "properties": {
        "groupShortName": "test",
        "enabled": true,
        "emailReceivers": [
          {
            "name": "test email received",
            "emailAddress": "[parameters('email_receiver_address')]",
            "useCommonAlertSchema": false
          }
        ],
        "smsReceivers": [],
        "webhookReceivers": [],
        "itsmReceivers": [],
        "azureAppPushReceivers": [],
        "automationRunbookReceivers": [],
        "voiceReceivers": [],
        "logicAppReceivers": [],
        "azureFunctionReceivers": []
      }
    }
  ]
}
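Note that the alert template references the action group by resource ID, so deploy the action group first. The same kind of deployment works for it (a sketch, assuming the file is saved as actiongroup.json):
az deployment group create \
  --resource-group yyyyy \
  --template-file actiongroup.json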

Merge profiles based on 2 properties in Apache Unomi

I am trying to build custom logic in an action for profile merging. Can anybody suggest how to create a rule that merges profiles based on both email and phone number? As of now I am able to do it with only one property value, email. You can find the sample rule below:
"metadata": {
"id": "exampleLogin",
"name": "Example Login",
"description": "Copy event properties to profile properties on login"
},
"condition": {
"parameterValues": {
"subConditions": [
{
"type": "eventTypeCondition",
"parameterValues": {
"eventTypeId": "click"
}
}
],
"operator": "and"
},
"type": "booleanCondition"
},
"actions": [
{
"parameterValues": {
"mergeProfilePropertyValue": "eventProperty::target.properties(email)",
"mergeProfilePropertyName": "mergeIdentifier"
},
"type": "mergeProfilesOnPropertyAction"
},
{
"parameterValues": {
},
"type": "allEventToProfilePropertiesAction"
}
]
}
To be able to merge based on multiple identifiers, you would have to extend the default built-in action to support that.
This can be done by creating a module, but it requires some Java knowledge, since that is how Unomi is implemented.
The code for the default merge action is available here:
https://github.com/apache/unomi/blob/master/plugins/baseplugin/src/main/java/org/apache/unomi/plugins/baseplugin/actions/MergeProfilesOnPropertyAction.java
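For illustration only, here is a rough sketch of what such a custom executor could look like. The class name, the parameter names, and the composed identifier format are all assumptions, and the actual lookup-and-merge logic would still need to be adapted from the built-in action linked above:
import org.apache.unomi.api.Event;
import org.apache.unomi.api.actions.Action;
import org.apache.unomi.api.actions.ActionExecutor;
import org.apache.unomi.api.services.EventService;

// Hypothetical custom action that composes a single merge identifier from
// two parameters instead of one. The rule would pass both as eventProperty::
// expressions, e.g. "eventProperty::target.properties(email)" and
// "eventProperty::target.properties(phoneNumber)".
public class MergeProfilesOnTwoPropertiesAction implements ActionExecutor {

    @Override
    public int execute(Action action, Event event) {
        // Parameter names are made up for this sketch; Unomi resolves the
        // eventProperty:: expressions before the action executes.
        String email = (String) action.getParameterValues().get("mergeProfilePropertyValue1");
        String phone = (String) action.getParameterValues().get("mergeProfilePropertyValue2");
        if (email == null || phone == null) {
            return EventService.NO_CHANGE;
        }
        // Compose a deterministic identifier from both values...
        String mergeIdentifier = email + "|" + phone;
        // ...then reuse the lookup-and-merge logic from the built-in
        // MergeProfilesOnPropertyAction: search for profiles whose merge
        // property equals mergeIdentifier and merge them.
        return EventService.PROFILE_UPDATED;
    }
}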

scrapoxy - How to limit the number of instances that get created

I am using this library to set up a proxy on DigitalOcean, and I am looking for a way to limit the number of droplets that get created. While looking at the documentation I came across this link, which suggests that to limit the droplets I need to add max. I did that, but it does not seem to work: right now a new droplet is being created every few seconds, and I want to reduce that to 2, if possible, just to test things out.
This is what I have in my configuration:
{
  "commander": {
    "password": "password"
  },
  "instance": {
    "port": 3128,
    "scaling": {
      "min": 1,
      "max": 2
    },
    "log": {
      "path": "/log"
    }
  },
  "providers": [
    {
      "type": "digitalocean",
      "token": "the-token-goes-here",
      "region": "SGP1",
      "size": "s-1vcpu-2gb",
      "sshKeyName": "ubuntu",
      "imageName": "name-of-image",
      "tags": "proxy,instance",
      "max": 2
    }
  ]
}
My understanding is that by adding max I should have been able to limit the droplets, but I see no difference. Is this the right place to add it? What am I missing here?

How to get the number of returning users over a custom date range using the V4 Reporting API

I am trying to figure out how to get the number of returning users over a custom date range using the V4 Reporting API.
Like the 'All Users' row in this screenshot, but instead of the last 14 days, it should be any date range.
After reading through the documentation, the only thing I came up with was to create cohorts for every day I want to query, and then sum up the values for each day to get the totals:
"reportRequests": [
{
"viewId": "147125344",
"dimensions": [
{
"name": "ga:cohort"
},
{
"name": "ga:cohortNthDay"
}
],
"metrics": [
{
"expression": "ga:cohortActiveUsers"
}
],
"cohortGroup": {
"cohorts": [
{
"name": "date 1",
"type": "FIRST_VISIT_DATE",
"dateRange": {
"endDate": "2017-07-08",
"startDate": "2017-07-08"
}
},
{
"name": "date 2",
"type": "FIRST_VISIT_DATE",
"dateRange": {
"endDate": "2017-07-09",
"startDate": "2017-07-09"
},
{
"name": "more days",
"type": "FIRST_VISIT_DATE",
"dateRange": {
"endDate": "next day",
"startDate": "next day"
}
}
]
}
}
]
}
However, this is limited by the fact that there can only be 12 cohorts in total, so I could not query more than 12 days at once.
Is there any way to get this data directly through the API?
