I have a question about managing disks via PowerShell. Is it possible to, for example, change a disk's type to dynamic, create a mirrored disk, create a RAID-5 volume, or extend a volume via PowerShell? I need to do these things using PowerShell, NOT DISKPART, but I can't find a solution.
See these:
Replace Diskpart With Windows Powershell – Basic Storage Cmdlets
Converting a DiskPart script to PowerShell on Windows Server 2012
step by step how to create a two way mirrored storage space via Powershell
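To give a flavor of what those links cover, the basic Storage module cmdlets can replace most common DiskPart operations. A minimal sketch (the disk number and drive letter are assumptions for illustration; adjust them for your system):

```powershell
# Initialize a raw disk, create a partition, and format it -- replaces
# DiskPart's select/clean/create/format sequence. Disk number 1 and
# drive letter F are assumptions for illustration.
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter F |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"

# Extend an existing volume to the largest size the disk supports --
# the equivalent of DiskPart's "extend" command.
$max = (Get-PartitionSupportedSize -DriveLetter F).SizeMax
Resize-Partition -DriveLetter F -Size $max
```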
Update
As for ...
'any command to change storage type from basic to dynamic?'
Why are you trying to do this?
As per Microsoft, dynamic disks are deprecated, and we are to use the Windows Storage Management API instead:
For all usages except mirror boot volumes (using a mirror volume to host the operating system), dynamic disks are deprecated. For data that requires resiliency against drive failure, use Storage Spaces, a resilient storage virtualization solution. For more info, see Storage Spaces Technical Preview.
You can continue to use DiskPart, DiskRAID, and Disk Management during the deprecation period, but these tools will not work with Storage Spaces or with any other new Windows Management Instrumentation (WMI)-based Windows Storage Management APIs or in-box storage management utilities or clients.
... and that is the reason no cmdlets for them exist, or ever will.
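For the mirroring use case specifically, the supported replacement for a dynamic-disk mirror is a Storage Spaces mirror. A minimal sketch (the pool and friendly names are assumptions; requires Windows 8 / Server 2012 or later):

```powershell
# Pool all disks that are eligible, then carve out a two-way mirror space.
# "Pool1" and "Mirror1" are assumed names for illustration.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" `
                -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
                -PhysicalDisks $disks

New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Mirror1" `
                -ResiliencySettingName Mirror -UseMaximumSize
```

A parity space (`-ResiliencySettingName Parity`, three or more disks) is the Storage Spaces analogue of a RAID-5 volume.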
Related
I am creating a custom cloud storage provider for Windows, based on this repo. I have removed the root node in the Windows file manager as described in this post. Now I need to mount it as a drive.
How do I display my custom cloud storage provider as a drive?
I know this may be a little late, but have a look at the answers - they may help you decide which approach to take, as there's no native way to mount GCP buckets (or other storage) in Windows as a file system.
Have a look at this answer, as it presents some of the tools (free and proprietary) and workarounds you can use, such as mounting the filesystem in Linux and sharing it with Windows over Samba/NFS.
Although the general question of Hadoop/HDFS on Windows has been posed before, I haven't seen anyone present the use case I think is most important for Windows support: how can Windows end stations participate in an HDFS environment and consume files stored in HDFS?
In particular, let's say we have a nice Linux based HDFS environment with lots of nodes and analysis jobs being run, etc, and all is happy. How can Windows desktops also consume the files? Suppose our analytics find interesting files out of the millions of mostly uninteresting. Now we want to bring them into a desktop application to visualize, etc. The most natural way for the desktop to consume these is via a Windows share, hopefully via a Windows server.
Windows' implementation of CIFS is orders of magnitude better than Samba -- I'm stating that as a fact, not a point of debate. That isn't to say that Samba cannot be made to work, only that there are good reasons to have a very strong preference for essentially exporting this HDFS file system as CIFS.
It's possible to do this via some workflow where we have a back-end process take the interesting files and copy them. But this is cumbersome in many cases and does not give the Windows-shackled analyst the freedom to explore the files on his own as easily.
Hence, what I'm looking for really is:
Windows server
HDFS as a "mounted" file system; Windows is thought of as an HDFS "client"
Export this file system from Windows as a CIFS server
Consume files on Windows desktop
Have all the usual Windows group permissions work correctly (e.g. by mapping through to NFSv4 ACLs).
Btw, if we replace "HDFS" with "GPFS" in this question, it all does work. At the moment, this is a key differentiator between HDFS and GPFS in my environment. Yes, there are many many more points of comparison, but I'd like to not focus on GPFS vs HDFS in general at the moment.
In particular, let's say we have a nice Linux based HDFS environment with lots of nodes and analysis jobs being run, etc, and all is happy. How can Windows desktops also consume the files?
HDFS provides a REST API through WebHDFS and HttpFS for various operations. The REST API can be accessed programmatically from many languages, and most of those languages have libraries that make programming against a REST API easy.
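For example, a Windows machine can drive the WebHDFS endpoint directly from PowerShell. A minimal sketch (the host name, port, and paths are assumptions; the port shown is the WebHDFS default in recent Hadoop releases, while older releases use 50070):

```powershell
# List an HDFS directory and download one file over WebHDFS.
# namenode.example.com and the /data/results path are assumptions.
$base = "http://namenode.example.com:9870/webhdfs/v1"

# LISTSTATUS returns a JSON listing of the directory
$listing = Invoke-RestMethod -Uri "$base/data/results?op=LISTSTATUS"
$listing.FileStatuses.FileStatus | Select-Object pathSuffix, length, type

# OPEN streams a file's contents; save it to the local desktop
Invoke-WebRequest -Uri "$base/data/results/interesting.csv?op=OPEN" `
                  -OutFile "$env:USERPROFILE\Desktop\interesting.csv"
```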
Haven't tried it out, but according to the Hadoop documentation it should also be possible to mount HDFS on a Windows machine.
I'm experimenting with OnStart() in my Azure role using "small" instances. Turns out it takes about two minutes to unpack a 400 megabytes ZIP file that is located in "local storage" on drive D into a folder on drive E.
I thought maybe I should do it some other way around, but I can't find any data on how fast the local disks on Azure VMs typically are.
Are there any test results for how fast Azure VM local disks are?
I just ran a comparison of disk performance between Azure and Amazon EC2. You can read it here, although you will probably want to translate it from Norwegian :-)
The interesting parts, though, are the HD Tune screenshots in that post: the first from a small instance at Amazon EC2 running Windows Server 2008, and the second from a small instance on Azure running Windows Server 2012.
This isn't a fair comparison, as some of the differences may be due to missing Windows 2012 drivers, but you may still find it useful.
As pointed out by Sandrino, though, small instances at Azure only get "moderate" I/O performance, and this may be an argument in favor of Amazon.
It all depends on your VM size: https://www.windowsazure.com/en-us/pricing/details/#cloud-services. As you can see, a small instance will give you moderate I/O performance, and medium/large/xxl will give you high I/O performance.
If you want specifics, I suggest you read through this blog post: Microsoft SQL Server 2012 VM Performance on Windows Azure Virtual Machines – Part I: I/O Performance results. They talk about the SQLIO tool, which can help people decide on moving their SQL Server infrastructure to Windows Azure VMs.
This tool is interesting since it might give you exactly the info you need (read and write MB/s).
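If you just want a quick ballpark figure yourself, you can time a large sequential write from PowerShell. A rough sketch (the path and file size are assumptions; note that OS write caching can inflate the result, which is why dedicated tools like SQLIO are more rigorous):

```powershell
# Rough sequential-write throughput test: write a 256 MB file in 1 MB
# chunks, time it, and report MB/s. D:\ (Azure local storage) is an
# assumption -- point $path at the drive you want to measure.
$path   = "D:\disk-speed-test.bin"
$sizeMB = 256
$buffer = New-Object byte[] (1MB)

$elapsed = Measure-Command {
    $stream = [System.IO.File]::OpenWrite($path)
    for ($i = 0; $i -lt $sizeMB; $i++) {
        $stream.Write($buffer, 0, $buffer.Length)
    }
    $stream.Flush()
    $stream.Close()
}
Remove-Item $path

"{0:N1} MB/s" -f ($sizeMB / $elapsed.TotalSeconds)
```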
I'd like to create a source control system running on a NAS drive. As a Windows user, I've never been able to get Microsoft Visual SourceSafe to work on previous NAS drives, as internally most of them seem to use Linux rather than Windows: I always got a security denied error message.
Should most NAS drives be able to host a source control system? I'm presuming that possibly I should have tried to install Subversion rather than Visual SourceSafe?
Or ideally I'd like to use Microsoft Team Foundation Server.
Should I be looking for a NAS drive with NTFS? Is there one that someone can recommend?
Also, will there be a performance issue?
regards
Kojo
TFS uses SQL Server as its storage mechanism. It is not recommended to put SQL data files on NAS drives, due to the high latency.
I admit this is not strictly a programming question, although I do use my WHS as a source repository server for home projects, and I'm guessing many other coders here do as well.
Does anyone have a recommendation for a good backup solution for the non-fileshare portion of Windows Home Server? All the WHS backups I've seen handle the fileshares, but none of the system files or other administrative stuff on the box.
Thanks,
Andy
Windows Home Server is designed not to need a backup of the OS. If your system drive fails, install a new drive, then boot the WHS OS setup disc and install the OS. It will find the data on the other drives and recreate all the shared folders. You do need to do some configuring once it is back up, but that is a small effort compared to maintaining an OS backup.
One good solution for backing up the home server itself is to attach an external drive, say via USB 2.0 or eSATA. For this to work, though, you need the supporting software like Norton Ghost or something similar installed on your WHS server.
Windows Home Server Power Pack 1 (aka WHS PP1) added a feature to perform backups of the WHS shared folders to an external drive -- as you mention, this feature is only intended to do the data side and not the OS.
If you have an HP MediaSmart server, you could try the method mentioned in Quick & Easy Windows Home Server Backup and Restore. The author said it worked for him, but of course, caveat emptor. This technique has you creating a disk-image for your backup, and using that to restore from in the Recovery Disk / Restore disk process.
If you want a faster way to recover your OS and you do not have a MediaSmart server, you can also check out these instructions on how to use a USB flash drive for installing WHS, and merge in the instructions found above for restoring a disk image via the OS Recovery disk process.
I solved the WHS OS backup problem by running two copies of WHS, each in a virtual machine on its own computer, with each WHS backing up the other. Running in a VM makes the WHS installation a file, and a file can be backed up and restored by the other WHS.
iDrive is great, and free under 2 GB.