How to set a WORM Strategy in OSS - alibaba-cloud

A WORM (Write Once Read Many) strategy specifies a retention period during which objects in an Alibaba Cloud OSS bucket are protected from deletion and modification.
How can I set a WORM strategy in OSS?
I would appreciate any assistance with this.

You can set a WORM strategy at the bucket level. To set it in the console, follow the procedure below (a scripted alternative follows):
1) Log on to the OSS console.
2) Create or select the bucket.
3) Click Basic Settings, locate the WORM settings, and configure them as per your requirements: set the retention period of the WORM strategy and click OK.
4) The WORM strategy is created and will be in the IN_PROGRESS state.
5) Click Lock.
Note: After a WORM strategy is locked, it cannot be deleted, but you can extend its retention period.
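If you prefer to script this, here is a minimal sketch using the Python SDK (oss2). The endpoint, credentials, and bucket name are placeholders, and the call names assume a recent oss2 release:

import oss2

# Placeholder credentials, endpoint, and bucket name - replace with your own.
auth = oss2.Auth('<access-key-id>', '<access-key-secret>')
bucket = oss2.Bucket(auth, 'https://oss-cn-hangzhou.aliyuncs.com', '<bucket-name>')

# Steps 3-4: create the WORM strategy (retention period in days);
# it starts out in the InProgress state.
result = bucket.init_bucket_worm(retention_period_days=365)
worm_id = result.worm_id

# Step 5: lock the strategy. Once locked, it cannot be deleted.
bucket.complete_bucket_worm(worm_id)

# A locked strategy's retention period can still be extended.
bucket.extend_bucket_worm(worm_id, 730)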

Related

How to have different workspaces (or canvases) in Apache NiFi?

I just started a project with Apache NiFi and I am new to this orchestration tool. Coming from Azure Data Factory (ADF), I would like to create a branch so that I can work on my own development, or at least create a separate pipeline in the workspace. Apache NiFi presents a single user interface that multiple people work on. Even though the activities (or processors) in NiFi seem dependent unless specified otherwise, I would like to have my own workspace as a separate canvas.
Is it possible to have multiple canvases as workspaces in Apache NiFi on a single address?
Kind regards,
Ken
What I would do is create a new process group with a unique name. Process groups give you a complete canvas to yourself that doesn't interfere with other canvases.
Using a "Process Group" is the easiest way.
If needed, you can apply policies on each "Process Group".
For this you need to add some users (put them in a group) and create policies that fit your needs. While creating the policies you can add users (and groups) to grant rights to access, view, modify, and so on.
By the way, you can route flowfiles IN and OUT of your "Process Group" using an "Input Port" and an "Output Port" (next to the processors in the menu bar of the NiFi canvas). A REST sketch for creating such a group is below.
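As a hedged sketch, a process group can also be created over the NiFi REST API. Host, port, and group name here are placeholders, and a secured instance would additionally need authentication:

import requests

# Placeholder NiFi endpoint - adjust host/port for your instance.
NIFI_API = "http://localhost:8080/nifi-api"

# Create a new process group (your private "canvas") under the root canvas.
payload = {
    "revision": {"version": 0},
    "component": {
        "name": "ken-dev-workspace",  # a unique name, as suggested above
        "position": {"x": 0.0, "y": 0.0},
    },
}
resp = requests.post(f"{NIFI_API}/process-groups/root/process-groups", json=payload)
resp.raise_for_status()
print("Created process group:", resp.json()["id"])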

Clean Up Azure Machine Learning Blob Storage

I manage a frequently used Azure Machine Learning workspace with several experiments and active pipelines. Everything is working well so far. My problem is getting rid of old data from runs, experiments, and pipelines. Over the last year the blob storage grew to an enormous size, because every pipeline's data is stored.
I have deleted older runs from experiments by using the GUI, but the actual pipeline data on the blob store is not deleted. Is there a smart way to clean up data on the blob store from runs which have been deleted?
On one of the countless Microsoft support pages, I found the following not very helpful post:
*Azure does not automatically delete intermediate data written with OutputFileDatasetConfig. To avoid storage charges for large amounts of unneeded data, you should either:
Programmatically delete intermediate data at the end of a pipeline run, when it is no longer needed
Use blob storage with a short-term storage policy for intermediate data (see Optimize costs by automating Azure Blob Storage access tiers)
Regularly review and delete no-longer-needed data*
https://learn.microsoft.com/en-us/azure/machine-learning/how-to-move-data-in-out-of-pipelines#delete-outputfiledatasetconfig-contents-when-no-longer-needed
Any idea is welcome.
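For reference, I assume the "programmatically delete intermediate data" option from the quoted guidance boils down to a few lines with the azure-storage-blob SDK; in this sketch the connection string, container name, and run prefix are all placeholders:

import os
from azure.storage.blob import ContainerClient

# Placeholder connection string and container - the AML default datastore
# is a blob container in the workspace's storage account.
container = ContainerClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"],
    container_name="azureml",
)

# Delete everything under an old run's output prefix ("old-run-id" is a
# hypothetical example).
for blob in container.list_blobs(name_starts_with="ExperimentRun/old-run-id/"):
    container.delete_blob(blob.name)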
Have you tried applying an Azure Storage account management policy on the storage account in question?
You could either change the tier of the blobs from hot -> cold -> archive and thereby reduce costs, or even configure an auto-delete policy after a set number of days.
Reference: https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview#sample-rule
If you use Terraform to manage your resources, this is available as well:
Reference : https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/storage_management_policy
resource "azurerm_storage_management_policy" "example" {
storage_account_id = "<azureml-storage-account-id>"
rule {
name = "rule2"
enabled = false
filters {
prefix_match = ["pipeline"]
}
actions {
base_blob {
delete_after_days_since_modification_greater_than = 90
}
}
}
}
A similar option is available via the portal settings as well.
Hope this helps!
Currently facing this exact problem. The most sensible approach is to enforce retention schedules at the storage account level. These are the steps you can follow:
Identify which storage account is linked to your AML instance and pull it up in the Azure portal.
Under Settings / Configuration, ensure you are using StorageV2 (which has the desired functionality).
Under Data management / Lifecycle management, create a new rule that targets your problem containers.
NOTE - I do not recommend a blanket enforcement policy against the entire storage account, because any registered datasets, models, compute info, notebooks, etc. would all be targets for deletion as well. Instead, use the prefix arguments to declare relevant paths such as: storageaccount1234 / azureml / ExperimentRun. (An SDK sketch of such a rule follows the documentation link below.)
Here is the documentation on Lifecycle management:
https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview?tabs=azure-portal
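For completeness, here is a hedged sketch of the same kind of rule created with the azure-mgmt-storage SDK instead of the portal. The subscription, resource group, and account names are placeholders, and the dict layout assumes a current track-2 SDK:

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

# Placeholder subscription id - substitute your own.
client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.management_policies.create_or_update(
    resource_group_name="<resource-group>",
    account_name="storageaccount1234",
    management_policy_name="default",  # the policy name must be "default"
    properties={
        "policy": {
            "rules": [{
                "name": "delete-old-pipeline-data",
                "enabled": True,
                "type": "Lifecycle",
                "definition": {
                    # Scope the rule to the AML run outputs only, not the
                    # whole account (registered datasets, models, notebooks
                    # and so on live in the same account).
                    "filters": {
                        "blob_types": ["blockBlob"],
                        "prefix_match": ["azureml/ExperimentRun"],
                    },
                    "actions": {
                        "base_blob": {
                            "delete": {"days_after_modification_greater_than": 90}
                        }
                    },
                },
            }]
        }
    },
)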

Trace failed login attempts Windows Server

We have noticed ~15k failed login attempts a day on one of our admin accounts in the domain.
The source server has been found and the event type is "Network"; the source is a DC that has not been touched (except for Windows Updates) for years, so a virus seems unlikely but is of course possible.
Is there a way to trace exactly what the failed attempts point at? We have recently changed FSMO roles between two other DCs in the domain; maybe that has something to do with it?
You can check the failed login attempts via the Audit Logon Events local computer policy.
Use the keyboard shortcut Windows Key + R, type gpedit.msc in the Run line, and hit Enter.
In Group Policy Editor, navigate to Windows Settings >> Security Settings >> Local Policy >> Audit Policy.
Then double-click Audit Logon Events.
From there, check the box to audit Failure attempts and click OK.
There you go! Now you'll be able to see the complete logon activities (including failed logons) for your Windows computer. A sketch for querying the resulting events follows the references below.
Please refer to this thread as well; based on the event ID you can determine exactly what the failed attempts point at: https://social.technet.microsoft.com/Forums/en-US/f49cd4d6-a7d5-4213-8482-72d1d5306dab/windows-server-2012-r2-help-finding-failed-logon-attempts-source?forum=winserversecurity
Reference: https://www.groovypost.com/howto/pin-windows-8-start-screen-programs-desktop/
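As a hedged sketch, once failure auditing is on you can pull the relevant events (ID 4625, "An account failed to log on") from the Security log, e.g. from Python via the built-in wevtutil tool:

import subprocess

# Query the 50 most recent failed-logon events (newest first) from the
# Security log. Run this from an elevated prompt.
query = "*[System[(EventID=4625)]]"
output = subprocess.run(
    ["wevtutil", "qe", "Security", f"/q:{query}", "/c:50", "/rd:true", "/f:text"],
    capture_output=True, text=True, check=True,
).stdout

# Each event's "Logon Type" and "Source Network Address" fields show where
# the attempts originate; logon type 3 corresponds to "Network".
print(output)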

Alfresco - Download statistics and user permissions report

Hey to every Alfresco pro out there!
Is there any way to create a report (graphical or textual, I don't care) to see the following information:
download count per file
how many times user X downloaded a specific file
which permissions the users have
Are my goals easy to realize? Is there any plugin out there that I can use for this? (I already searched for some but couldn't find one.) Hope that you can help me :)
mtzE
There is nothing out-of-the-box that is counting downloads. Maybe the audit service can be used to count reads, but you'll have to turn it on and configure it. Once turned on, the audit service writes records to a set of audit tables in your Alfresco database. You can then use any reporting tool to query those tables.
If you want to check the permissions a user has you can use something like OpenCMIS to connect to the repository, traverse a folder path, and then, for each object, you can inspect the ACL of that object to use as data in your report.
As Lista said, one way to create such reports is to use AAAR, but that is not required.
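As a rough sketch, once auditing is on you can also pull entries over the audit REST API instead of querying the database directly. The application name ("alfresco-access" is the stock access-audit application) and the value keys below depend entirely on your audit configuration, and the host and credentials are placeholders:

import requests
from collections import Counter

# Query the audit REST API and count READ events per user and file.
URL = "http://localhost:8080/alfresco/service/api/audit/query/alfresco-access"

resp = requests.get(URL, params={"verbose": "true", "limit": 1000},
                    auth=("admin", "admin"))
resp.raise_for_status()

downloads = Counter()
for entry in resp.json().get("entries", []):
    values = entry.get("values") or {}
    # A READ action on a node is the closest proxy for a "download".
    if values.get("/alfresco-access/transaction/action") == "READ":
        key = (entry.get("user"), values.get("/alfresco-access/transaction/path"))
        downloads[key] += 1

for (user, path), count in downloads.most_common(20):
    print(count, user, path)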

Script for Local Security Policy

I'm looking for some guidance on how to automate applying a set of permissions within the local security policy to multiple users on multiple servers.
For example, via a script, I want to apply "act as part of the operating system" and "adjust memory quotas for a process" to users TEST1 and TEST2.
Any feedback on how to get started would be appreciated. Thanks!
From a command line, the Microsoft-provided solution is secedit. AppDeploy is a great resource for packaging in general, and they have a good page on secedit here: http://www.osdeploy.com/tips/detail.asp?id=23
In short, change your policies using the Local Security Settings MMC snap-in, then export with secedit as described on this page (http://www.webservertalk.com/message534715.html -- also assuming this computer isn't a member of a domain), then import as usual. A scripted sketch of the import step is below.
Is this machine domain-joined? If so, you'll need to make sure no domain policies are applied; otherwise the domain policies will be exported along with the local ones.
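Here is a hedged sketch of that import step driven from Python with the built-in secedit tool. The privilege names are fixed ("act as part of the operating system" is SeTcbPrivilege, "adjust memory quotas for a process" is SeIncreaseQuotaPrivilege), but the TEST1/TEST2 users are the examples from the question:

import os
import subprocess
import tempfile

# Caution: secedit REPLACES the grantees of each listed right, so include
# any existing holders (e.g. from a prior export) alongside the new users.
INF_TEMPLATE = """[Unicode]
Unicode=yes
[Version]
signature="$CHICAGO$"
Revision=1
[Privilege Rights]
SeTcbPrivilege = TEST1,TEST2
SeIncreaseQuotaPrivilege = TEST1,TEST2
"""

with tempfile.TemporaryDirectory() as tmp:
    inf = os.path.join(tmp, "rights.inf")
    sdb = os.path.join(tmp, "rights.sdb")
    # Security templates marked Unicode=yes must be saved as UTF-16.
    with open(inf, "w", encoding="utf-16") as f:
        f.write(INF_TEMPLATE)
    # Apply only the USER_RIGHTS area so other local settings are untouched.
    subprocess.run(
        ["secedit", "/configure", "/db", sdb, "/cfg", inf,
         "/areas", "USER_RIGHTS"],
        check=True,
    )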
Simpler answer here:
Scripting Local Security Policy
Use ntrights.exe from the Windows 2003 Resource Kit.
However, this doesn't seem to help with the "adjust memory quotas for a process" right.
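For the rights ntrights does support, a hedged sketch of pushing one right to both example users across several servers (server names are placeholders) could look like:

import subprocess

# Grant "act as part of the operating system" (SeTcbPrivilege) to two
# example users on two hypothetical servers using ntrights.exe from the
# Windows 2003 Resource Kit.
for server in (r"\\SERVER1", r"\\SERVER2"):
    for user in ("TEST1", "TEST2"):
        subprocess.run(
            ["ntrights", "-m", server, "-u", user, "+r", "SeTcbPrivilege"],
            check=True,
        )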
