Deleted analysis in QuickSight - amazon-quicksight

Can I recover a deleted analysis in QuickSight, or can I copy it back from its dashboard?
Is there any way I can do this?
The QuickSight documentation describes how to recover an analysis that is scheduled for deletion, but says nothing about one deleted by a normal user.
I moved the analysis to a shared folder and then had to remove it from there, not realizing there was no other copy of that analysis. The analysis has a dashboard, but I can't find a way to reverse engineer it back into an analysis without starting from scratch.

I recall that once deleted, analyses are subject to a default 30-day recovery period via the API. See the following for details:
https://docs.aws.amazon.com/quicksight/latest/developerguide/restore-analysis.html
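For illustration, a minimal boto3 sketch of that restore call might look like the following; the account ID, region, and analysis ID are placeholders, and it only works while the deleted analysis is still inside its recovery window:

import boto3

# Placeholder identifiers: substitute your own account ID and the ID of the deleted analysis.
ACCOUNT_ID = "111122223333"
ANALYSIS_ID = "your-deleted-analysis-id"

quicksight = boto3.client("quicksight", region_name="us-east-1")

# RestoreAnalysis un-deletes an analysis that is still within its recovery window.
response = quicksight.restore_analysis(
    AwsAccountId=ACCOUNT_ID,
    AnalysisId=ANALYSIS_ID,
)
print(response["Status"], response.get("Arn"))

If you no longer have the analysis ID, the ListAnalyses API may still return the deleted analysis (with a DELETED status) during the recovery window, which is worth checking before rebuilding anything from scratch.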

Will reloading all TDE templates trigger reindexing and cause an ML performance issue?

I am currently using the Gradle mlReloadSchemas task to reload TDE templates.
I suspect that even if the change is to one TDE file only, the reload-schemas task deletes everything in the schemas database and loads all TDE templates into the MarkLogic database again.
I wonder whether this will cause a performance issue for MarkLogic. Will it trigger reindexing even for the TDE files that have not changed?
I am using a DevOps pipeline to trigger the schema reload from the Git repository, so I cannot load only the changed TDE file; I have to reload everything. If there is a performance issue, how can I load only the changed file with the pipeline?
Redeploying TDE templates can cause reindexing. How many records get reindexed depends on the context matching for those templates.
A properly resourced cluster should be able to handle the load of reindexing.
That being said, the resulting merge activity can compete with online traffic and query demands. You can help minimize the impact by setting the reindexer throttle to a lower level (1-5, with 1 being the lowest), and you can set a background-io-limit to restrict the amount of I/O any node will use for background activities such as merges and backups; a sketch of adjusting both settings follows the links below.
You can also choose when to enable or disable reindexing, and raise or lower the reindexer throttle during different periods.
https://help.marklogic.com/Knowledgebase/Article/View/how-reindexing-works-and-its-impact-on-performance
https://help.marklogic.com/Knowledgebase/Article/View/indexing-best-practices
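As a rough sketch only (host, port, credentials, database name, and group name are assumptions for a default single-host install), both settings mentioned above can be adjusted through the MarkLogic Management REST API, for example from Python:

import requests
from requests.auth import HTTPDigestAuth

# Assumed connection details for a default install; adjust for your cluster.
MANAGE = "http://localhost:8002"
AUTH = HTTPDigestAuth("admin", "admin")

# Lower the reindexer throttle on the content database (5 is the default, 1 is the lowest).
requests.put(
    f"{MANAGE}/manage/v2/databases/Documents/properties",
    json={"reindexer-throttle": 1},
    auth=AUTH,
).raise_for_status()

# Cap background I/O (merges, backups, reindexing) per host, in MB/sec, at the group level.
requests.put(
    f"{MANAGE}/manage/v2/groups/Default/properties",
    json={"background-io-limit": 100},
    auth=AUTH,
).raise_for_status()

The same properties can be raised again (or reindexer-enable toggled) once the bulk TDE reload has finished and the cluster is quiet.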

Azure Cognitive Search indexer blob storage

I am stuck in a complicated situation and would appreciate it if somebody could help.
I was testing indexing of blob storage (PDF files) and indexed a copy of my storage in the QA environment, which cost me some money.
My question is:
Is there any way to use this index in production without indexing everything again?
I found a way to copy the index, and that works fine, but when I add an indexer connected to the production blob storage it starts indexing from scratch again (as I expected). Is there any way to avoid this? Is there any way to ask the indexer to index only from now on?
I tried to reuse the index and the indexer I already have by changing the subscription to prod, but I have to change the indexer's data source to point at the production blob storage, and in that case I get this error:
Indexer 'filesIndexer' currently references data source 'qafilesds' and cannot be updated to reference a different datasource 'prodfilesds' because it has a non-empty change tracking state, or it is currently in progress. You can use Reset API to reset the indexer's change tracking state when it is no longer in progress, and retry this call.
A simple answer to your first question is to simply use the QA index you built.
A more complicated answer is to switch from the pull model you are using now to a push model. From your explanation above I assume all of your content comes from blob storage, and you have configured an indexer to do the indexing for you. This is known as the pull model.
The alternative is to use the Azure Cognitive Search SDK to write your own application that pushes content to the index instead. In this case you do not use the built-in indexer, only the index itself. You are then free to use whatever logic you want to determine what to index and what to skip. You can even have your storage accounts notify your application with events when content is updated.
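To make the push model concrete, here is a minimal sketch using the azure-search-documents Python SDK; the endpoint, key, index name, and field names are assumptions and must match the index you copied:

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Assumed service details: replace with your search endpoint, admin key, and index name.
endpoint = "https://<your-search-service>.search.windows.net"
credential = AzureKeyCredential("<admin-api-key>")
search_client = SearchClient(endpoint=endpoint, index_name="files-index", credential=credential)

# Your own code decides what to push and when; here one hypothetical document is uploaded.
documents = [
    {
        "id": "doc-1",                        # the key field defined in your index schema
        "fileName": "example.pdf",            # assumed field names
        "content": "Text extracted from the PDF goes here.",
    }
]

results = search_client.upload_documents(documents=documents)
for result in results:
    print(result.key, result.succeeded)

With this approach, content already indexed from QA never needs to be re-submitted; only new or changed blobs (for example, flagged via Blob Storage events) are pushed to the production index.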

Azure Blob Storage lifecycle management - send report or log after run

I am considering using Azure Blob Storage's built-in lifecycle management feature for deleting blobs of a certain age.
However, due to a business requirement, it must be possible to generate a report or log statement after each daily execution of the defined rule set. The report or log must state the number of blobs that were affected, e.g. deleted, during the run.
I have read through the documentation and Googled to see if others have had similar inquiries, but so far without any luck.
So my question: does anyone know if and how I can get the built-in lifecycle management system to do one of the following after each daily run:
Add a log statement to the storage account containing the blob storage.
Generate and send a report to an endpoint I define.
If the above can't be done, I will have to code the daily deletion job and report generation myself, which I surely can do, but I would like to use the built-in feature if possible.
I summarize the solution below.
If you want to know which blobs are deleted every day, you can configure diagnostic settings on the storage account. After doing that, you will get logs for the read, write, and delete requests made against the blobs. For more detail, please refer to here and here.
Regarding how to enable it, you can use the PowerShell command Set-AzStorageServiceLoggingProperty.
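As an alternative to the PowerShell command, here is a sketch of enabling the classic Storage Analytics logging from Python with the azure-storage-blob package; the connection string is a placeholder, the 30-day retention is an arbitrary choice, and it is worth verifying that lifecycle-management deletions actually appear as delete entries in the $logs container:

from azure.storage.blob import BlobAnalyticsLogging, BlobServiceClient, RetentionPolicy

# Placeholder connection string for the storage account that holds the blobs.
service = BlobServiceClient.from_connection_string("<storage-account-connection-string>")

# Log delete and write requests so daily deletions can be counted from the $logs container.
logging_settings = BlobAnalyticsLogging(
    version="1.0",
    read=False,
    write=True,
    delete=True,
    retention_policy=RetentionPolicy(enabled=True, days=30),
)
service.set_service_properties(analytics_logging=logging_settings)

A scheduled job (or the report generation you mentioned) can then parse the log entries for the lifecycle run's time window and count the delete operations.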

BulkDeleteFailureBase - lots of "Not Enough Privilege" records

I've taken over support of a CRM 2016 on-premise system. I don't know the history of this particular instance, but I suspect it has been copied and/or imported many times.
The BulkDeleteFailureBase table has just short of 2 million rows, almost all of which contain an error description like:
Not enough privilege to access the Microsoft Dynamics CRM object or
perform the requested operation. The current Organizationid '<GUID1>'
does not match with userOrTeam's organization id '<GUID2>'.
OrganizationBase has only one record, and it contains <GUID2>.
Has this happened because the instance has been copied/moved around incorrectly? If so, is this likely an indication that more problems are heading my way in the future?
How can I recover from this?
BulkDeleteFailureBase is one of the system async-job logging tables where the platform captures run/success/failure logs.
Probably someone tried to clean up data such as the plugin trace log that had been copied over from a different DB backup/restore or CRM org restoration. They used bulk delete, and everything that failed ended up here.
Microsoft Support can provide a script to clean those tables safely. Leaving them alone only gives you a performance headache.

Retention policy to TFS Code Search Server (Elastic Search)

We have TFS 2017.3 with a separate Code Search server.
We have a huge TFS DB (about 1.6 TB); the Code Search server has 700 GB of disk space.
After a few weeks the disk space ran out and code search stopped working in TFS.
After we increased the disk space, search started working again.
How can we set a retention policy to delete old code search data (the index)? We don't want to increase the disk space any further.
Search indexing (Code and Work Item) works in two phases:
Bulk Indexing (BI), where the entire set of code and work item artifacts in all projects/repositories under a collection is indexed. This is a time-consuming operation and depends on the size of the artifacts under the collection.
Continuous Indexing (CI), which handles all incremental updates to the artifacts (add/update/delete) and indexes them. This is a notification-based model where the indexer listens to TFS events and operates based on those event notifications. CI handles almost all update operations, including CRUD operations at the Project/Repository/Collection layer (such as repository renames, project adds/deletes, etc.). The operation time for CI depends on the size of the incremental update. BI always precedes CI, i.e. CI will never execute on a project/repository until BI has completed for it.
To clean up the index data and re-index, follow these steps:
1. Pause indexing for all collections by running the following script on the TFS configuration DB:
https://github.com/Microsoft/Code-Search/blob/master/PauseIndexing.ps1
2. Log in to the machine where Elasticsearch (ES) is running.
3. Stop the ES service.
4. Delete the entire search index folder (something like C:\TfsData\Search\IndexStore, or wherever you configured it to be).
5. Restart the TFS Job Agent service(s) on the AT machines.
6. Delete the following tables from each of the collection DBs:
DELETE FROM [Search].[tbl_IndexingUnit]
DELETE FROM [Search].[tbl_IndexingUnitChangeEvent]
DELETE FROM [Search].[tbl_IndexingUnitChangeEventArchive]
DELETE FROM [Search].[tbl_JobYield]
DELETE FROM [Search].[tbl_TreeStore]
DELETE FROM [Search].[tbl_DisabledFiles]
DELETE FROM [Search].[tbl_ResourceLockTable]
7. Restart the ES service.
8. Run this script on the TFS configuration DB:
https://github.com/Microsoft/Code-Search/blob/master/ResumeIndexing.ps1
9. Run this script (pick it from the correct TFS release folder) on each of the collections:
https://github.com/Microsoft/Code-Search/blob/master/TFS_2017Update2/MissingIndexFolderTriggerCollectionIndexing.ps1
Try the last script on a smaller collection first (one with fewer repositories) so that you can verify that indexing happened correctly and the results are queryable.
For more details, refer to this MSDN blog post: Resetting Search Index in Team Foundation Server
I was able to reduce the disk size after deleting the ES folders, reinstalling the Code Search extension, and sometimes running MissingIndexFolderTriggerCollectionIndexing.ps1.
But I came to the conclusion that it was not worth doing: the disk usage grew rapidly back to its original size, so I did not save anything.
Although Microsoft recommends providing disk space of about 35% of the DB size, that is not enough for us, and we increase the size whenever the disk fills up completely (currently about 45% of the DB size).
The conclusion: don't touch the ES; if the disk fills up, just increase the disk size.
