Unzip too slow for transfer of many files - Windows

We need to distribute lots of small JPG files to offline systems. Right now we send them as a 7-Zip (or plain zip) archive of about 800 MB (230K files) and use 7-Zip to extract it. Extraction takes about an hour on fairly capable 4-core machines.
Is there a way on Windows 7 (or Windows Server 2008) to create and unpack a package of files of this size in a more reasonable time frame?
(I will entertain even far-out answers, such as: put this all in a single CloudDB database as binary blobs and then ship that to the target machine, or create a VM, or a virtual disk image - but I will need some pointers to tips on doing that sort of thing.)

So then here's your far out answer: ;)
The problem probably doesn't lie in computing power; the filesystem and/or hard disk are most likely the bottleneck.
For Win7 (and afaik Server 2008 as well) you could use a Virtual Hard Disk (VHD) instead of zipping it. Win7 has native support for VHD files and can expose their content as a drive or mounted folder via Disk Management, so there would be no need to unzip the files at all.
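A rough sketch of the packaging side from an elevated command prompt, using diskpart (path, size, and drive letter are placeholders; on the receiving machine you only need the select/attach steps, optionally with "attach vdisk readonly"):

    diskpart
    create vdisk file="C:\images\photos.vhd" maximum=1024 type=expandable
    select vdisk file="C:\images\photos.vhd"
    attach vdisk
    create partition primary
    format fs=ntfs quick
    assign letter=V

After copying the JPGs to V:, run "detach vdisk" and distribute the .vhd file itself; the target machine just attaches it and reads the files in place.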

I had the same problem and solved it. The issue is likely the Windows Attachment Manager, which subjects downloaded or attached zip files to additional scrutiny for security reasons.
To bypass this:
Right-click the file
Choose Properties
Check Unblock
For more info, see: Why is WinZip slow?
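If you need to unblock many transferred files from a script rather than the Properties dialog, PowerShell 3.0 and later ships an Unblock-File cmdlet that clears the same download flag; a minimal sketch (paths are placeholders):

    Unblock-File -Path C:\transfers\photos.zip
    # or, for a whole folder of downloaded archives:
    Get-ChildItem C:\transfers -Recurse | Unblock-File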

I spoke to some colleagues, and they might have an easier solution. Since the size is under 4 GB and I only need READ-ONLY access, I can create an ISO image and then mount it on Win7 or Win Server 2008 using this Microsoft utility:
This utility enables users of Windows XP, Windows Vista, and Windows 7 to mount ISO disk image files as virtual CD-ROM drives.
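One way to build the ISO is with the mkisofs/genisoimage tool (available via cygwin or any Linux box); the Windows AIK's oscdimg can do the same natively. Names and paths below are placeholders:

    mkisofs -J -R -V PHOTOS -o photos.iso /cygdrive/c/images

or, with the Windows AIK installed:

    oscdimg -n -m C:\images C:\photos.iso

-J and -R add Joliet and Rock Ridge extensions so long file names survive; oscdimg's -n option allows long file names and -m lifts the image size check.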

Related

Migrate FreeNAS Data to Windows (over SMB)

My FreeNAS server is slowly dying, and before that happens I need to migrate all data in the NAS to a Windows server.
The FreeNAS box has ZFS snapshots, and I need to restore data from a few days ago to the Windows server.
I have done some research and I can't work out the best way to do this (I am not Linux/ZFS savvy).
So what I need to do is:
Restore a ZFS snapshot from a few days ago to a Windows server.
I mounted a Windows share on the FreeNAS using mount_smbfs //username:password@server.name/share_name share_name/
I can copy and create files on that share just fine, so I was wondering if it is possible to restore an entire dataset from a snapshot to the Windows share.
Any help or tips are much appreciated.
Note: I could easily copy all the data on a FreeNAS volume to the Windows share; what makes it complicated for me is restoring data from a snapshot, without overwriting the current data on the volume, and moving that data to the Windows share.
You have two sensible possibilities:
Access the ZFS dataset (shared over SMB) from your Windows Server, then right-click on it in Explorer and choose "Previous Versions". You will get (after a short time, depending on the number of snapshots) a list of all snapshots with their dates. You can then either explore them and copy some files over, or you can choose to copy everything to another location (e.g. your new share).
Mount the Windows share on FreeNAS like you did, then go to <pool>/<filesystem>/.zfs/snapshot/ (path completion on the shell might be turned off for the .zfs directory, so type it in manually). There you'll find all your snapshots (like you would have on Windows' Previous Versions) and you can copy some or all files over to the new directory.
I would suggest the first way, because you have the GUI and cannot do any harm to the FreeNAS system this way.
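If you do go the second route, here is a minimal sketch from the FreeNAS shell, assuming a pool called tank, a dataset called mydataset, and a snapshot named auto-20190101 (all placeholders; the snapshot directory is read-only, so nothing on the live volume can be overwritten):

    # mount the Windows share, as in the question
    mkdir -p /mnt/winshare
    mount_smbfs //username@server.name/share_name /mnt/winshare

    # copy the full contents of one snapshot to the share
    mkdir -p /mnt/winshare/restore
    cp -Rp /mnt/tank/mydataset/.zfs/snapshot/auto-20190101/. /mnt/winshare/restore/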
On the other hand, have you thought about the possibility of rescuing the system? You did not specify why it's dying, but things like hard drives or mainboards can be swapped quite easily without requiring setting up everything anew. Maybe this would help you more than moving the data off to another, unconfigured system?

How do I install Xcode 6 or 7 on an external drive?

The capacity of my SSD is just 60 GB, and I have just over 5 GB of free space at the moment. Is there a way to install Xcode directly on the external drive? Or would I first have to make that drive bootable and boot my system from it?
There are various possible solutions, including symlinks, dual-booting two copies of macOS (one on the external SSD), and more.
But the best way I found was to create a new macOS user and change its home directory to the external SSD (via the advanced user settings in the Users & Groups pane of System Preferences).
The exact steps I followed:
Create a new APFS partition on the external SSD with about 100 GB of storage (say, NewVol); see the sketch after these steps.
Create a new macOS user and change its home directory to /Volumes/NewVol/user.
Log into the new user with the external SSD connected and install Xcode in ~/Applications (i.e. the user's local Applications folder, not /Applications).
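A minimal sketch of the first step from Terminal, assuming the external SSD already holds an APFS container that shows up as disk2 in diskutil list (identifier, volume name, and quota are placeholders):

    diskutil list
    # add a new volume to the existing APFS container, capped at roughly 100 GB
    diskutil apfs addVolume disk2 APFS NewVol -quota 100g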
This works best because you don't need to manage symlinks manually, and symlinks can cause problems during builds. All the required files, including build products and temporary files, live in the user's home directory, so no space is taken on the internal drive. There is also no hassle of installing a complete separate OS and going through cycles of reboots to switch between them.
There are a couple of options you can consider.
Move some files to the external drive instead of installing applications on it. This would be your best bet, since applications have dependencies. Also, if you run them from your SSD, they will get better performance.
If you absolutely need your files on your SSD, and you can't move them, then I would suggest moving any third party applications to see if you can free up space for Xcode, and run it from your SSD.
If the two options above don't work for you, then you will have to work around Xcode: there is no easy way to change its install location. Your option here would be to free up some space temporarily by moving bigger files to an external drive, do the Xcode install into your Applications folder, then move Xcode to the external drive and bring your files back to your SSD. Here is another question that covers the same topic.
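A rough sketch of that last shuffle, assuming the external drive is mounted at /Volumes/External (paths are placeholders; the trailing symlink is optional and, as noted in the other answer, symlinks can occasionally confuse builds):

    # after installing Xcode into /Applications, relocate it
    sudo mv /Applications/Xcode.app /Volumes/External/Xcode.app

    # optional: leave a symlink so tools expecting /Applications/Xcode.app still find it
    sudo ln -s /Volumes/External/Xcode.app /Applications/Xcode.app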

Methods to transfer files from a Windows server to a Linux server

I need to periodically transfer webserver-log-like files from Windows production servers in the US to Linux servers here in India. The files are ~4 MB each and I get about one file per minute. I can tolerate about a 5-minute lag between a file being written on Windows and it becoming available on the Linux machines. I am a bit confused by the various options here, as I am quite inexperienced with this kind of design:
I am thinking of writing a C#.NET service which will periodically archive, compress, and send the files over to the Linux machines. These files are very compressible; WinRAR can turn 32 MB of them into a 1.2 MB archive, so that should take care of the network transfer speed. But then how exactly do I transfer the files to Linux? I could mount a Linux drive on the Windows server using Samba, set up an FTP server, or send each file serialized as a POST request. Which one would be best? I also have to minimize the load on the Windows server.
Alternatively, mount the Windows drive on Linux instead. I could use the mount command or Samba here (what are the pros and cons of the two?) and then write the compression and copying part on Linux itself.
I don't trust the internet connection to be very stable, so there should be a good retry mechanism and failure protection too. What are the potential gotchas in these situations, and what other points should I be worried about?
Thanks,
Hari
RAR is a poor choice here; stick to 7-Zip or bzip2. Transfer the files over SSH, probably with rsync, since it is tolerant of link failures.
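A minimal sketch of the rsync side, assuming rsync is available on the Windows machine (e.g. via cygwin or a wrapper like DeltaCopy) and the compressed logs land in C:\logs (host, user, and paths are placeholders):

    rsync -avz --partial --timeout=60 \
        /cygdrive/c/logs/ hari@linuxserver.example.in:/data/incoming-logs/

--partial keeps partially transferred files so an interrupted run can resume, and the command is safe to re-run from a scheduled task since rsync only sends what has changed.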
WinSCP can transfer files from Windows to Linux in batch mode with a script; configure the Windows Task Scheduler to run the script periodically.
I learned the steps from this post: https://techglimpse.com/batch-script-automate-file-transfer-winscp/
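A minimal sketch of such a script, assuming WinSCP is installed and the script is saved as C:\scripts\upload.txt (host name, credentials, and paths are placeholders; fill in the host key fingerprint with your server's real one):

    option batch abort
    option confirm off
    open sftp://hari:password@linuxserver.example.in/ -hostkey="ssh-rsa 2048 xx:xx:xx..."
    put C:\logs\*.7z /data/incoming-logs/
    exit

Task Scheduler then runs it with something like:

    winscp.com /script=C:\scripts\upload.txt /log=C:\scripts\upload.log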

Ideas on how to save space with Windows 2008 R2 server on Hyper-V?

I've had this question for a while now, and it still bothers me.
I work with a few virtual machines running Windows 2008 Server, mostly demo VMs and test machines. Since most devs use them, I prefer not to have individual setups here and there, and instead maintain a catalog of exported VMs and hard drive images.
Thanks to side-by-side assemblies and Windows updates, each server carries an overhead of about 6-12 GB in the side-by-side folder (WinSxS) and Windows Update files.
Suppose I have 50 exported VMs (with their images), each with about 3 GB of payload (OS, programs, data) and about 12 GB of shared overhead, which is mostly the same for all of them. Then I waste 2/3 of my storage space (about 600 GB in total), not to mention the network overhead of pushing this redundant data around whenever a dev wants to download a new VM snapshot.
So I am thinking of a way to consolidate the WinSxS folder across multiple VMs. Ideally, I'd like to come up with some kind of shared drive; I am even willing to designate a physical device for this.
I realize that Windows server has minimum requirements and these files cannot be deleted (http://social.technet.microsoft.com/Forums/en-US/itprovistasetup/thread/9411dbaa-69ac-43a1-8915-749670cec8c3).
I also found a post on moving the WinSxS folder, but it does not appear to be a reliable solution.
Does this sound even remotely feasible? What are the best practices for consolidating resources across VMs?
Thank you almighty stackoverflow gurus for your prompt attention ;-)
Don't touch the WinSxS folder.
It's not as big as it looks (a lot of it is hard links to duplicate files).
If you want to consolidate space, use differencing disks: create one VM with Windows on it, then use its disk as the parent for the rest. Each child disk only stores the delta between the parent and whatever that VM changes afterwards.
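A minimal sketch with diskpart (paths are placeholders; the same thing is available in Hyper-V Manager's New Virtual Hard Disk wizard by choosing the Differencing type). Note that the parent VHD must not be modified after children are created, or every child breaks:

    diskpart
    rem create a child disk that stores only the differences from the parent
    create vdisk file="D:\VMs\demo01.vhd" parent="D:\VMs\base-2008r2.vhd"

Attach demo01.vhd to the new VM instead of a full copy of the base image.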
It is not possible to share WinSxS folders across installations.
If you want to know more about how WinSxS works, check out my blog post: http://fearthecowboy.com/post/CoApp-FAQ-Can-you-explain-how-Side-by-side-%28WinSxS%29-works.aspx

The ideal background filesystem backup

I am thinking about a script/program that can run in the background and attempt to back up or synchronize a given filesystem path to a mirror location (probably on an external/separate storage device).
This should apply to Windows but it could as well be used under Linux.
Differential/incremental backups are a bonus.
Windows System State backups are a bonus too.
Keeping the origin free of metadata is essential (unlike version control).
Searching by file or activity date could be interesting (like version control)
Backup repositories should be easy to browse and take little space.
Deleted files should be available for recovery for a period of time.
Windows Backup is tedious, bloated, and limited.
Tar-gzip archives are not accessible enough.
User interaction during backup should be nonexistent.
Amanda is the ultimate full-featured open-source backup solution, and there's a (relatively) new Zmanda Windows Client.
Duplicity is free and creates encrypted, incremental, compressed offsite backups. It's a Linux app, but you could run it in cygwin or a small virtual machine.
I've written a Perl script that runs it from a cron job to back up several very big directories over DSL, and it works great.
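A minimal sketch of that kind of setup, assuming duplicity backs up /home/data to an external drive mounted at /mnt/backup and a GnuPG key is already configured (paths and key ID are placeholders):

    # one-off: full backup, encrypted with the given GnuPG key
    duplicity full --encrypt-key ABCD1234 /home/data file:///mnt/backup/data

    # cron entry (crontab -e): incremental backup every night at 02:00
    0 2 * * * duplicity --encrypt-key ABCD1234 /home/data file:///mnt/backup/data >> /var/log/duplicity.log 2>&1

    # restore a single file as it was three days ago
    duplicity restore --time 3D --file-to-restore docs/report.odt file:///mnt/backup/data /tmp/report.odt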
Check out AJCBackup. Does an excellent job at a good price.
Acronis True Image is great. It's not free, but the Home edition is pretty cheap for what it does and it works reliably. It does image- and file-based backups, scheduling, instant backup of chosen folders from the Explorer context menu, and incremental/differential backups; it can also mount the backup files as Windows volumes so you can browse them and copy files out. It has saved my ass a few times already.
