I have some questions about Azure Recovery Services that I can't find on the Azure website:
If I have a Windows VM with SQL and IIS installed and a network drive (an Azure File service account), what will actually be backed up? Do all files from all drives get backed up?
Is it possible to download the backed-up files, or at least see where they live?
Can you set your own storage account for Azure Recovery Services?
Does Site Recovery have a purpose for Azure VMs, or only for on-premises servers? I can't really figure out what Site Recovery does.
How do I delete a backup after I have created it? The delete-backup button always seems disabled.
What happens when I do a restore? Does it basically just write back a copy of the VHD to my storage account and reboot the VM?
If I have a Windows VM with SQL and IIS installed and a network drive (an Azure File service account), what will actually be backed up? Do all files from all drives get backed up?
Based on my experience, Azure Recovery Services backs up the VM's disks (the OS disk plus data disks, no more than 16) and the system state of the VM at the scheduled time. The backup extension takes a point-in-time snapshot and transfers that data into the backup vault. A mounted Azure Files share is a network resource rather than a VM disk, so as far as I know it is not included in the VM backup. Please refer to this document for details.
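Conceptually, after the initial full copy only the data that changed since the last recovery point is transferred to the vault. Here is a toy sketch of that chunk-level diffing (an illustration of the idea only, not Azure's actual implementation):

```python
# Toy sketch of incremental backup: after the first full copy, only chunks
# that changed since the previous snapshot are sent to the vault.
# This is a conceptual illustration, not Azure's real algorithm.

CHUNK = 4  # chunk size in bytes (tiny, for illustration)

def chunks(data, size=CHUNK):
    """Split a disk image into fixed-size chunks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def changed_chunks(previous, current):
    """Return indices of chunks that differ between two disk images."""
    prev, curr = chunks(previous), chunks(current)
    return [i for i, c in enumerate(curr) if i >= len(prev) or prev[i] != c]

disk_v1 = b"AAAABBBBCCCCDDDD"      # state at the first (full) backup
disk_v2 = b"AAAAXXXXCCCCDDDDEEEE"  # one chunk modified, one chunk appended

print(changed_chunks(disk_v1, disk_v2))  # → [1, 4]: only those are transferred
```

The second backup only ships chunks 1 and 4, which is why incremental backups are much smaller than the first one.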
Is it possible to download the backed-up files, or at least see where they live?
You can check their status under Backup Items on the Azure portal.
Can you set your own storage account for Azure Recovery Services?
No. So far, the backup items are stored in storage managed by the vault, and you cannot choose the storage account yourself. The data is encrypted, though.
See the comments on this document.
Does Site Recovery have a purpose for Azure VMs, or only for on-premises servers? I can't really figure out what Site Recovery does.
How do I delete a backup after I have created it? The delete-backup button always seems disabled.
From your description, I think you may need to understand the difference between Azure Site Recovery and the Azure Backup service; please refer to these documents for more details: site-recovery-overview and azure backup. Then you can follow the documents to manage your backup items. The delete button is usually disabled because you first have to stop protection for the item (choosing to delete the backup data); after that you can delete it.
What happens when I do a restore? Does it basically just write back a copy of the VHD to my storage account and reboot the VM?
The data is retrieved from the Azure Recovery Services vault.
Please refer to this document about how to restore the backup item to the same server or to other servers.
Related
We have approximately 15 servers, and each server has different log files. Developers need to access those log files, and we want them to do so without logging in to the VMs. We looked at Azure Storage Accounts: our requirement is that every VM's logs sync to its respective Azure Blob container or file share, driven from Azure DevOps. We found AzCopy, but the drawback is that we would need roughly 15 pipelines running periodically, every 30 minutes, to fetch the latest logs. We also considered Windows Task Scheduler, which was not accepted by the client.
All servers are Windows.
Is there any other, better way to automatically send all log files to Blob storage or a file share and access those files using a CDN or Storage Explorer? Another method would also be fine.
If you want to save the log files of a VM to Azure Blobs and don't want to use pipelines for the process, you can use Azure Log Analytics instead.
The Azure Log Analytics agent collects telemetry from Windows virtual machines in any cloud, as well as on-premises machines, and sends the collected data to your Log Analytics workspace in Azure Monitor. From there you can route the custom logs of the VM to Azure Blob Storage or even to an event hub.
On each VM you need to check the diagnostic settings and enable logs against the Log Analytics workspace. Once configured, it sends logs directly to Azure Storage; no extra automation is required.
I would suggest reading the Azure Activity log and Log Analytics agent documents for more information.
I want to create a failover cluster for MSMQ across two VMs in Azure. I created two VMs in Azure and have them domain-joined, and I can create the failover cluster with both nodes. However, when I try to add a role for MSMQ, I need a cluster shared disk. I tried to create a new managed disk in Azure and attach it to the VMs, but the cluster still wasn't able to find the disk.
I also tried file-share sync, but that isn't working either.
I found out I need an iSCSI disk; there is this article: https://learn.microsoft.com/en-us/azure/storsimple/storsimple-virtual-array-deploy3-iscsi-setup . But that product reaches end of life next year.
So I am wondering whether it is possible to set up a failover cluster for MSMQ on Azure, and if so, how I can do it.
Kind regards,
You should be able to create a Cluster Shared Volume using Storage Spaces Direct across a cluster of Azure VMs. Here are instructions for a SQL Server failover cluster; I assume this should also work for MSMQ, but I haven't set up MSMQ in over 10 years and I don't know whether its requirements are different.
I'm trying to use AppAssure with Azure Blob storage; however, Blob storage does not work with AppAssure, and if I want ZRS replication, only Blob storage is available.
Is there a way to configure it to work ?
Thanks,
You probably need to check with Dell to confirm whether AppAssure supports Azure Blob Storage as a backup destination.
You can also look at the Azure Backup service to see if it fits your replication requirements:
https://azure.microsoft.com/en-us/services/backup/
I'm looking to set up a demo environment in Amazon that consists of a pre-configured EC2 image which resets itself back to a snapshot configuration every hour; this would be a Linux VM.
What would be the best way to go about doing this in EC2? Does Amazon offer any tools for scheduling and reverting to the snapshot, or would this need to be done from a third-party VM or software?
There is no VMware-style 'snapshot' functionality in Amazon EC2 that lets you roll back to a point in time.
The network-attached disk storage system used with Amazon EC2 is called Amazon Elastic Block Store (EBS). While EBS does have a 'snapshot' function, this actually takes a backup of an EBS Volume and stores it in Amazon S3. The snapshot can then be used to create a new EBS volume, which will contain the same contents as the original disk at the time the snapshot was created.
One option would be to launch a new Amazon EC2 instance, which will automatically create a new boot disk from the indicated Amazon Machine Image (AMI). This is the way to launch new machines with the same disk content. However, this might not lend itself well to your "reset every hour" requirement, since it requires a new machine to be started, which also triggers a new hourly billing cycle.
You might be able to script the deletion of files or the reload of some database tables, but this will depend upon your particular system and applications.
I have an Alfresco Community installation, hosted on Amazon Web Services, which I am using as a personal repository. I am starting to have quite important docs stored in it (roughly 2 GB), so I am thinking about how to implement a strong backup/restore strategy.
I have seen many tutorials and official docs showing how to back up Alfresco by backing up two directories: alf_data and the PostgreSQL (or whatever database is used) directory.
The question: in the case of a default Alfresco installation, i.e. with an embedded database, I wonder if the following scenario is enough to be considered a good cold-backup strategy. The starting point is of course stopping Alfresco, then one (or both) of the following:
Tar-gzip the whole Alfresco installation directory and store it in a safe place (at the moment Amazon S3).
Create an EBS snapshot with the Amazon EC2 console.
If both your alf_data and PostgreSQL directories are on the EBS volume, then a snapshot is sufficient.
You just need to know that a hot backup (done while Alfresco is running) could be inconsistent: the database and alf_data can be out of sync, or a transaction can be left incomplete.
A cold backup is best; take a look at the Alfresco Wiki for more info.
Still, doing a hot backup at night, when no jobs (LDAP sync, cleanup, etc.) are running, is doable.