Is there any way to back up a TDE-enabled Azure DB (Managed Instance)? - database-backups

Our requirement is to take a manual backup of an Azure SQL Managed Instance database when an event is triggered. This database is TDE-enabled. I cannot decrypt the database and then take a manual backup, because decryption takes a long time. Is there any way to back up the database manually and export it to Azure Storage?

If you say that removing TDE takes too much time and it's not an option, then you can export the database to a BACPAC file instead.
If removing TDE takes too much time, I assume the database is of considerable size; in that case I advise running SqlPackage on an Azure VM in the same region as the managed instance.
Be aware that the BACPAC file is not encrypted.
But if you need a native backup file, then you need to remove TDE, or copy the database, remove TDE on the copy, and then take the backup.
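For illustration only, here is a minimal sketch of the SqlPackage export wrapped in Python's subprocess so it can be scripted from that VM. The server name, database, credentials, and output path are placeholder assumptions, not values from the question.

```python
# Minimal sketch: export the database to a BACPAC with SqlPackage from an Azure VM
# in the same region as the managed instance. All names, paths, and credentials
# below are placeholders; SqlPackage must already be installed on the VM.
import subprocess

cmd = [
    "SqlPackage",                     # or the full path to SqlPackage.exe
    "/Action:Export",
    "/SourceServerName:my-mi.public.abc123.database.windows.net,3342",  # assumed endpoint
    "/SourceDatabaseName:MyDb",
    "/SourceUser:export_user",
    "/SourcePassword:***",
    r"/TargetFile:D:\exports\MyDb.bacpac",
]
subprocess.run(cmd, check=True)       # raises if the export fails
```

The resulting .bacpac can then be copied up to Azure Storage (for example with azcopy), but remember that it is not encrypted.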

You could switch from service-managed TDE to customer-managed TDE, with the protector key kept in Azure Key Vault. You can then take COPY_ONLY backups of your database(s) as long as they are encrypted with that key. Please be aware that you need to keep the protector key(s) for as long as you keep the backup files. To restore such a backup to the same or another managed instance, you first need to give the destination instance access to that key.
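To make that concrete, below is a minimal sketch of an event-triggered COPY_ONLY backup to a URL, driven from Python with pyodbc. The instance endpoint, storage account, container, SAS token, login, and database name are all placeholders, not values from the question.

```python
# Minimal sketch: take a COPY_ONLY native backup of a TDE-protected database on a
# managed instance to Azure Blob Storage. This assumes customer-managed TDE (key
# in Azure Key Vault) as described above. Names, URLs, and secrets are placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=my-mi.public.abc123.database.windows.net,3342;"   # assumed instance endpoint
    "Database=master;UID=backup_operator;PWD=***;",
    autocommit=True,   # BACKUP cannot run inside an implicit transaction
)
cur = conn.cursor()

# One-time setup: a credential scoped to the target container (SAS token assumed).
cur.execute("""
IF NOT EXISTS (SELECT 1 FROM sys.credentials
               WHERE name = 'https://mystorage.blob.core.windows.net/backups')
    CREATE CREDENTIAL [https://mystorage.blob.core.windows.net/backups]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET   = '<sas-token-without-leading-question-mark>';
""")

# The actual event-triggered, manual backup.
cur.execute("""
BACKUP DATABASE [MyDb]
TO URL = 'https://mystorage.blob.core.windows.net/backups/MyDb_copyonly.bak'
WITH COPY_ONLY, COMPRESSION, CHECKSUM;
""")
while cur.nextset():   # drain informational result sets until the backup finishes
    pass
```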

How to use a recent Oracle backup file (from yesterday) and only online redo logs to recover the database in another location (disaster recovery)?

I would like to plan and test my database recovery at another site (another instance on another server at the disaster recovery site).
I take a monthly RMAN level 0 image copy and daily incremental level 1 backups.
The database is running in NOARCHIVELOG mode. The online redo logs are multiplexed to a disk at the disaster recovery site. We also have a recovery catalog on another server.
I want to test restoring the recent (yesterday's) backup to the database at the disaster recovery site and then recovering by applying only the online redo log files. How can I achieve that?
Side question: is it sufficient to recover if we only have yesterday's backup and the online redo logs contain all of today's transactions with none of them overwritten, given that the database is in NOARCHIVELOG mode?
What is the use of ARCHIVELOG mode if we have a daily backup and the redo logs are not overwritten during the day before the backup is taken?
And what is the use of backing up archived logs?
You are working with a dangerous setup, since you seem to be betting on redo log files that never fill up between your backups. If your data has no value, go ahead; otherwise, switch to ARCHIVELOG mode.
Archived logs are created when a redo log group fills up and is switched out. So, in your case, you need to copy the online redo log files manually to the remote site for recovery.
How sure are you that the redo log files are not being overwritten?
Be sensible: if this is production, switch to ARCHIVELOG mode. Otherwise, promise not to make promises about being able to perform point-in-time recoveries.
Another point: if your online redo log files are damaged, your database has a big problem, and in your case you might lose a day's worth of work. Is that acceptable? If not, reduce the size of the redo log files so that a log switch happens every now and then. I am sure your company has an idea of how much transaction loss it can accept; many companies allow less than one hour.
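As a quick way to answer the "are the redo logs really not being overwritten?" question for yourself, here is a minimal sketch (Python with the python-oracledb package; the connection details and the backup timestamp are placeholders) that compares the oldest online log group's FIRST_TIME with the time of your last backup.

```python
# Minimal sanity check: if the oldest FIRST_TIME in v$log is later than the last
# backup, redo generated between the backup and that point has already been
# overwritten, so full recovery from the online logs alone is no longer possible.
# Connection details and the backup timestamp are placeholders to adjust.
import datetime
import oracledb

LAST_BACKUP_TIME = datetime.datetime(2024, 1, 1, 2, 0)   # assumed: when yesterday's backup completed

conn = oracledb.connect(user="system", password="***", dsn="dr-host/ORCLPDB1")
with conn.cursor() as cur:
    cur.execute("""
        select group#, sequence#, status, first_time
        from v$log
        order by first_time
    """)
    rows = cur.fetchall()

for group, seq, status, first_time in rows:
    print(f"group {group}  seq {seq}  status {status}  first written {first_time}")

oldest_first_time = rows[0][3]
if oldest_first_time and oldest_first_time > LAST_BACKUP_TIME:
    print("WARNING: the online redo logs have wrapped since the last backup")
```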

Azure cognitive search indexer blob storage

I am stuck in a complicated situation and would appreciate it if somebody could help.
I was testing indexing of blob storage (PDF files) and indexed a copy of my storage in the QA environment, which cost me some money.
My question is:
Is there any way to use this index in production without indexing again?
I found a way to copy the index, and that works fine, but when I add an indexer connected to the production blob storage it starts indexing from scratch again (as I expected). Is there any way to avoid this? Is there any way to tell the indexer to index only from now on?
I tried to use the index and the indexer I already have by changing the subscription to production, but I have to change the indexer's data source to point at the production blob storage, and in that case I get this error:
Indexer 'filesIndexer' currently references data source 'qafilesds' and cannot be updated to reference a different datasource 'prodfilesds' because it has a non-empty change tracking state, or it is currently in progress. You can use Reset API to reset the indexer's change tracking state when it is no longer in progress, and retry this call.
A simple answer to your first question is to simply use the QA index you built.
A more complicated answer is to switch from the pull model you are using now to a push model. From your explanation above, I assume all of your content comes from blob storage and you have configured an indexer to do the indexing for you. This is known as the pull model.
The alternative, the push model, is to use the Azure Cognitive Search SDK to write your own application that submits content to the index instead. In this case you do not use the built-in indexer, only the index itself. You are then free to use whatever logic you want to determine what to index and what to skip. You can even have your storage accounts notify your application with events when content is updated.
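For illustration, here is a minimal sketch of the push model using the Python azure-search-documents package; the endpoint, admin key, index name, and field names are assumptions about your index schema, not values from your setup.

```python
# Minimal sketch of the push model: your own code decides what to submit to the
# existing index instead of letting an indexer crawl the blobs. Endpoint, key,
# index name, and field names are placeholders for your own schema.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://my-search-service.search.windows.net",
    index_name="files-index",
    credential=AzureKeyCredential("<admin-api-key>"),
)

# Only push the blobs you choose, e.g. PDFs uploaded from now on.
docs = [
    {
        "id": "blob-0001",                       # key field of the index (assumed)
        "fileName": "report.pdf",
        "content": "extracted text of the PDF",  # in the push model you extract text yourself
    },
]
result = client.upload_documents(documents=docs)
print([(r.key, r.succeeded) for r in result])
```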

How to use snapshot and caching functions without actually storing credentials in SSRS

I have developed a few test reports on my local machine and came across the mechanisms called snapshots and caching. I am trying to implement them in my reports, but every time I try to set up caching it throws the error "credentials need to be stored."
Can we use caching and snapshots with Windows credentials? If so, what is the approach?
My local machine details:
Server name: (local)
Authentication: Windows
User name and password: greyed out
Report server URL: satish-pc/reportserver
Database: Adventure Works
Scheduled snapshots or cache refresh plans mean the report is executed on an automated basis and the results are stored for easier/faster retrieval later. Because the executions are automated and unattended, they need connections with stored credentials; there is no user sitting at the computer at run time to punch in credentials. So, in order to use snapshots or scheduled caching, you will need to create a data source that has credentials stored in it. In Report Manager, you can edit the report's data source on the report properties page, or edit a shared data source's connection info on its own properties page.

WindowsAzure: Is it possible to set directory permissions within the web.config?

A PHP script of mine wants to write into a log folder; the resulting error is:
Unable to open the log file "E:\approot\framework\log/dev.log" for writing.
When I set the write permissions for the WebRole user RD001... manually, it works fine.
Now I want to set the folder permissions automatically. Is there an easy way to get this done?
Please note that I'm very new to IIS and everything around it, so I would appreciate precise answers. Thanks.
Short/Technical Response:
You could probably set permissions on a particular folder using full trust and a startup task. However, you'd need to account for a stateless OS and changing drive letters (possible, though not likely) in that script, which would make it difficult. Also, local storage is not persisted, so you'd have no way to ensure the data survives a reboot.
Recommendation: Don't write locally; read below...
EDIT: I got to thinking about this, and while I still recommend against it, there is a third option: you can allocate local storage in the service configuration and then access it from PHP using a DLL reference, which gives you access to that folder. Please remember that local storage is not persisted, so it's gone after a reboot.
Service Config for local:
http://blogs.mscommunity.net/blogs/dadamec/archive/2008/12/11/azure-reading-and-writing-with-localstorage.aspx
Accessing config from php:
http://phpazure.codeplex.com/discussions/64334?ProjectName=phpazure
Long / Detailed Response:
In Azure, you really are encouraged to approach things as a platform and not as "software on a server". What I mean is that ideas such as "write something to a local log file" are somewhat incompatible with the cloud mindset. Depending on your usage, you could (and should) convert this script to write that data to cloud-based or external storage rather than just placing it on the disk.
I would suggest modifying the script to use the PHP Azure SDK and write these log entries out to table or blob storage in Azure. If this sounds good, please provide the PHP and I can give an exact example.
The main reason for that (besides pushing the cloud idea) is that in Azure you cannot assume the host machine ("role instance") will maintain OS state, so while you can set some things such as folder permissions, you can't rely on them sticking. You have no real way to guarantee those permissions won't be reset when the fabric has to update your role or react to some lower-level problem. For example, a hard drive cage in the rack where your current instance lives could fail. If the failure were bad enough, the fabric controller would need to rebuild your instance, and when that happens your code is moved to an entirely different server, so you would need to set those permissions again. Also, depending on the changes, the E:\ drive could suddenly become the F:\ or X:\ drive and you wouldn't know.
It's much better to pretend (at some level) that your application is running "in Azure" and not "on a server in Azure", so that you make no assumptions about the hosting environment. Anything you need outside of your code (data, logs, audits, etc.) should be stored somewhere you control (Azure Storage, an external call-out, etc.).
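To make the "write the log somewhere durable" idea concrete, here is a minimal sketch that appends log lines to an append blob. It is shown with the Python azure-storage-blob package purely for illustration (for a PHP role you would use the PHP Azure SDK as mentioned above); the connection string, container, and blob name are placeholders.

```python
# Minimal sketch: append log lines to a blob instead of a local file, so the data
# survives role rebuilds and drive-letter changes. Connection string, container,
# and blob names are placeholders; the container is assumed to exist already.
import datetime
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("logs")
blob = container.get_blob_client("dev.log")

if not blob.exists():
    blob.create_append_blob()          # append blobs support efficient appends

line = f"{datetime.datetime.utcnow().isoformat()} something happened\n"
blob.append_block(line.encode("utf-8"))
```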

Good way to demo a classic ASP web site

What is the best way to save data in session variables in a classic ASP web site?
I am maintaining a classic ASP web site and want to allow my users to demo all of the site's functionality, which means allowing them to delete records.
The closest example I have seen so far is the Telerik controls demos, where the dataset is saved in the session on first load and the user is allowed to manipulate the data.
How can I achieve the same in ASP with an MS Access backend?
If you want to persist the state over multiple pages (e.g. to demo your complete application), then it's a bit tricky.
I would suggest copying the MDB file for each session and using the copied version. This ensures that every session uses its own data:
1. Create a version of your Access DB that will serve as a fresh template for each user.
2. On session start, copy the template and name the copy after the user's session ID.
3. Use that individual MDB for the session.
Note: the only drawback I can see here is that you need to remove the unused MDB files, as they can pile up after a while. You could do this with a scheduled task, or even on session start before you create a new copy.
I am not sure what you can use to check whether a copy is still in use, but the file's creation date may help, and so may the .ldb lock file (if it does not exist, the copy is unused).
You can store a connection or even an object in a session variable, as long as you remember what kind of variable you stored when you retrieve it. I have never stored a dataset in a session variable, but I have stored a lot of arrays in session variables, so you can use the ADO GetRows method to load a complete recordset into a session variable.
How big is the Access database? If your database is small enough (relative to the server capacity, expected number of users, and so forth) then I like the idea of using a fresh copy of the database for each user that runs the demo.
With this approach, you simplify your possible code paths. Otherwise this "are we in demo mode or not?" logic will permeate a heck of a lot of your code.
I'd do it like this...
1. When the user begins the demo, make a copy of the Access DB for that user to use. If your db is foo.mdb, copy it to /tempdb/foo_1234567890.mdb where 1234567890 is the user's session ID.
2. Alter the user's connection string to point to the fresh database copy. From this point on, your app can operate like "normal" with no further modifications.
3. Have a scheduled task that deletes all files in /tempdb with last-modified times more than __ hours in the past. If you don't have the ability to schedule tasks on the server (perhaps you're in a shared hosting environment, etc.), then you could do this at the same time you do step #1.
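For step 3, here is a minimal sketch of such a cleanup script, assuming the per-session copies follow the foo_<session id>.mdb pattern from step 1; the folder path and the age threshold are placeholders.

```python
# Minimal sketch of the step-3 cleanup: delete per-session database copies whose
# last-modified time is older than a threshold. The folder path, file pattern,
# and age limit are assumptions to adapt to your setup.
import os
import time

TEMPDB_DIR = r"C:\inetpub\wwwroot\tempdb"   # assumed location of /tempdb
MAX_AGE_HOURS = 4                           # assumed retention window

cutoff = time.time() - MAX_AGE_HOURS * 3600
for name in os.listdir(TEMPDB_DIR):
    if not (name.startswith("foo_") and name.endswith(".mdb")):
        continue                            # leave the template and other files alone
    path = os.path.join(TEMPDB_DIR, name)
    if os.path.getmtime(path) < cutoff:
        try:
            os.remove(path)                 # a copy still locked by a session will raise and be skipped
        except OSError:
            pass
```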
