Downloading files as a single .zip on a Windows server - windows

A client has a download area where users can download or browse single files. The files are divided into folders (documents, catalogues, newsletters and so on), and their extensions vary: they can be .pdf, .ai or simple .jpeg files. He asked me if I can provide a link to download every item in a specific folder as one big compressed file. The problem is that I'm on a Windows server, so I'm a bit clueless as to whether there's a way. I can edit the pages of this area, so I can include jQuery and scripts with a little freedom. Any hint?

The archiver built into Windows is tar, and you need to build a tarball (historically, all related files in one Tape ARchive).
I have a file server mapped as S:\ (it does not have the tar command itself, and tar cannot use a URL but can use a device path).
For any folder's contents (including subfolders) it is easy to remotely save all current files in a zip with a single command (multiple root locations need a loop or a list).
It will build the archive as a Windows .zip when you use the -a (auto-detect format from the extension) switch, but you need to consider the desired level of nesting by collecting all contents at the desired root location.
tar -a [other options] -cf file.zip [folders / files]
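For example, a rough sketch (the S:\downloads\catalogues folder is only an illustration) that zips one folder from the mapped share:
REM remove any older archive first, then zip the "catalogues" folder as a .zip
del S:\downloads\all.zip 2>nul
tar -a -cf S:\downloads\all.zip -C S:\downloads catalogues
The resulting all.zip contains a catalogues\ folder with everything under it.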
Points to watch out for:
ensure there is not an older archive already at the target path
it may print errors/warnings during the run; however, it should complete without failing.
Once you have the zip file you can post it as a web asset, such as:
<a href="\\server\folder\all.zip" download="all.zip">Get All</a>
for other notes see https://stackoverflow.com/a/68728992/10802527

Related

Extracting contents of many zipped folders into a single directory

Kind of easy question, but I can't find the answer. I want to extract the contents of multiple zipped folders into a single directory. I am using the bash console, which is the only tool available on the particular website I am using.
For example, I have two folders: a.zip (which contains a1.txt and a2.txt) and b.zip (which contains b1.txt and b2.txt). I want to extract all four text files into a single directory.
I have tried
unzip \*.zip -d \newdirectory
But it creates two directories (a and b) with two text files in each.
I also tried concatenating the two zipped folders into one big folder and extracting it, but it still creates two directories, even when I specify a new directory.
I can't figure out what I am doing wrong. Any help?
Thanks in advance!
Use the -j parameter to ignore any directory structure.
unzip -j -d /path/to/your/directory '*.zip*'
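For the a.zip/b.zip example from the question, a quick sketch (run from the directory holding the archives) would be:
mkdir newdirectory
unzip -j -o '*.zip' -d newdirectory   # -j junks paths, -o overwrites without prompting
ls newdirectory                       # a1.txt  a2.txt  b1.txt  b2.txt
Note that with -j, files sharing the same name across archives will overwrite each other.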

How to restore a folder structure 7Zip'd with split volume option?

I used 7-Zip on a multi-gig folder which contained many folders, each with many files, using the split-to-volumes (9 MB) option. 7-Zip created files of type .zip.001,
.zip.002, etc. When I extract .001 it appears to work correctly, but I get an 'unexpected end of data' error. 7-Zip does not automatically go on to .002. When I extract .002, it gives the same error and it does not continue the original folder/file structure; instead it extracts a zip file into the same folder as the previously extracted files. How do I properly extract split files to obtain the original folder/file structure? Thank you.

How to extract a specific folder using IZARC (IZARCe)

I want to extract a specific directory from a huge zip file (>5 GB) that is somewhat corrupted because of an unavoidably badly maintained build system that creates the zip.
GUI tools such as WinRAR and 7-Zip have no issues extracting the files, but some command-line tools such as MKS unzip and 7za fail to extract from the corrupted archive.
After a lot of digging around and trying out many such command line utilities I found out that IZARC successfully extracts files from the archive.
I am running the following command:
IZARCe.exe -e -d -o D:\aHugeZipFile.zip -pD:\temp @"source.txt"
The listing file source.txt contains just one entry:
source/lib/*
which is the only directory in the archive, from where the contents are to be extracted.
But, it is resulting in:
IZArc Command Line Extraction Add-On Version 1.1 (Build: 130)
Copyright(c) 2007 Ivan Zahariev, All Rights Reserved.
http://www.izarc.org contact@izarc.org
Archive File: aHugeZipFile.zip
WARNING: Nothing to do!
I have tried specifying:
/source/lib/*
source/lib/*
source/lib/
source/lib
*source/lib/*
in the listing file, all to no avail! :(
Any pointers on where the error is occurring, and how to fix the issue will be of great help. Thank you in advance!
Using relative or absolute paths in listfiles doesn't appear to work with IZArc. Try using wildcards such as *.*, *.doc, etc. instead of paths in the listfile. Be aware that there appears to be a limit on the folder depth that IZArc will extract to, as well as a tendency to generate CRC errors when files with the same name are present in the same archive, even if they are in different directories.
I would suggest using 7-Zip command-line instead. It can recurse deeply through a file structure without error and can use relative directories and wildcards in its listfiles.
The following 7-Zip command was tested and worked perfectly.
7za x somearchive.zip -o"C:\Documents and Settings\me\desktop\temp_folder\test2" -ir@source.txt -aoa -scsWIN
The source.txt file may contain a combination of relative paths and/or wildcards on separate lines, such as:
Output/, Folder2/, *, or *.doc.
In the command above: x (extract with full paths), -ir (include filenames, recurse subdirectories), -aoa (overwrite existing files without prompting), -scsWIN (set the charset for list files). You may need to adjust these switches for your situation.
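Applied to the layout in the original question (paths are illustrative, not tested here), that would look something like:
REM source.txt contains the single line:  source/lib/*
7za x D:\aHugeZipFile.zip -oD:\temp -ir@source.txt -aoa -scsWIN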

s3cmd sync is remote copying the wrong files to the wrong locations

I've got the following as part of a shell script to copy site files up to a S3 CDN:
for i in "${S3_ASSET_FOLDERS[@]}"; do
s3cmd sync -c /path/to/.s3cfg --recursive --acl-public --no-check-md5 --guess-mime-type --verbose --exclude-from=sync_ignore.txt /path/to/local/${i} s3://my.cdn/path/to/remote/${i}
done
Say S3_ASSET_FOLDERS is:
("one/" "two/")
and say both of those folders contain a file called... "script.js"
and say I've made a change to two/script.js - but not touched one/script.js
Running the above command will first copy the file from /one/ to the correct location, although I've no idea why it thinks it needs to:
INFO: Sending file
'/path/to/local/one/script.js', please wait...
File
'/path/to/local/one/script.js'
stored as
's3://my.cdn/path/to/remote/one/script.js' (13551
bytes in 0.1 seconds, 168.22 kB/s) [1 of 0]
... and then a remote copy operation for the second folder:
remote copy: two/script.js -> script.js
What's it doing? Why?? Those files aren't even similar. Different modified times, different checksums. No relation.
And I end up with an s3 bucket with two incorrect files in. The file in /two/ that should have been updated, hasn't. And the file in /one/ that shouldn't have changed is now overwritten with the contents of /two/script.js
Clearly I'm doing something bizarrely stupid because I don't see anyone else having the same issue. But I've no idea what??
First of all, try to run it without the --no-check-md5 option.
Second, I suggest you pay attention to directory names, specifically trailing slashes.
s3cmd documentation says:
With directories there is one thing to watch out for – you can either upload the directory and its contents or just the contents. It all depends on how you specify the source.
To upload a directory and keep its name on the remote side, specify the source without the trailing slash.
On the other hand, to upload just the contents, specify the directory with a trailing slash.
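A short sketch of the difference, using the layout from the question:
# no trailing slash: uploads the directory itself -> s3://my.cdn/path/to/remote/one/script.js
s3cmd sync /path/to/local/one s3://my.cdn/path/to/remote/
# trailing slash: uploads only the contents -> s3://my.cdn/path/to/remote/script.js
s3cmd sync /path/to/local/one/ s3://my.cdn/path/to/remote/
In the loop above, make sure the trailing slashes on ${i} and on the destination line up so the contents land where you expect.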

Exporting and importing images in MediaWiki

How do I export and import images from and into a MediaWiki?
Terminal solutions
MediaWiki administrators can perform maintenance tasks at the server's terminal using the Maintenance scripts framework. Recent MediaWiki versions run all the standard scripts used in the tasks described below, but old versions have some bugs or lack some of the modern scripts: check the version number with grep wgVersion includes/DefaultSettings.php.
Note: all of the scripts cited below also have a --help option, for instance php maintenance/importImages.php --help
Original image folder
Users upload files through the Special:Upload page; administrators can configure the allowed file types through an extension whitelist. Once uploaded, files are stored in a folder on the file system, and thumbnails in a dedicated thumb directory.
MediaWiki's images folder can be zipped with the zip -r ~/Mediafiles.zip images command, but this zip is not so good:
there are a lot of spurious files: "deleted files" and "old files" (not the current ones) with filenames such as 20160627184943!MyFig.png, and thumbnails such as MyFig.png/120px-MyFig.jpg.
for data-interchange or long-term preservation purposes it is not valid... the ugly images/?/??/* folder layout is not suitable; the usual expectation is "all image files in only one folder".
Images export/import
For "Exporting and Importing" all current images in one folder at MediaWiki server's terminal, there are a step-by-step single procedure.
Step-1: generate the image dumps using dumpUploads (with --local or --shared options when preservation need), that creates a txt list of all image filenames in use.
mkdir /tmp/workingBackupMediaFiles
php maintenance/dumpUploads.php \
| sed 's~mwstore://local-backend/local-public~./images~' \
| xargs cp -t /tmp/workingBackupMediaFiles
zip -r ~/Mediafiles.zip /tmp/workingBackupMediaFiles
rm -r /tmp/workingBackupMediaFiles
The command results in a standard zip file of your image backup folder, Mediafiles.zip, in your home directory (~/).
NOTE: if you are not worried about the ugly folder structure, a more direct way is
php maintenance/dumpUploads.php \
| sed 's~mwstore://local-backend/local-public~./images~' \
| zip ~/Mediafiles.zip -@
Depending on the MediaWiki version, the --base=./ option will work fine and you can remove the sed command from the pipe.
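A rough sketch of that shorter pipe, assuming your version's dumpUploads.php supports --base (check php maintenance/dumpUploads.php --help; the exact value may need to point at your images directory, e.g. ./images):
php maintenance/dumpUploads.php --base=./images \
| zip ~/Mediafiles.zip -@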
Step-2: need a backup, or installing a copy of the images? ... you only need Mediafiles.zip and a MediaWiki installation with no contents... If the wiki has contents, check for filename conflicts (!). Another problem is the configuration of file formats and permissions, which must be the same or broader in the new wiki; see Manual:Configuring file uploads.
Step-3: restore the dump (into the new wiki) with the maintenance tools. Supposing that you used step-1 to export and preserve everything in a zip file:
unzip ~/Mediafiles.zip -d /tmp/workingBackupMediaFiles
php maintenance/importImages.php /tmp/workingBackupMediaFiles
rm -r /tmp/workingBackupMediaFiles
php maintenance/update.php
php maintenance/rebuildall.php
That is all. Check by navigating to your new wiki's Special:NewFiles.
The full export or preservation
For exporting "ALL images and ALL articles" of your old MediaWiki, for full backup or content preservation. Add some procedures at each step:
Step-1: ... see step-1 above... and, to generate the text-content dump from the old wiki:
php maintenance/dumpBackup.php --full | gzip > ~/dumpContent.xml.gz
Note: instead of --full you can use the --current option.
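For instance, the lighter dump keeping only the latest revision of each page would be:
php maintenance/dumpBackup.php --current | gzip > ~/dumpContent.xml.gz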
Step-2: ... you need dumpContent.xml.gz and Mediafiles.zip... from the old wiki. Suppose both files are in your ~ folder.
Step-3: run in your new Wiki
unzip ~/Mediafiles.zip -d /tmp/workingBackupMediaFiles
gunzip -c ~/dumpContent.xml.gz \
| php maintenance/importDump.php --no-updates \
--image-base-path=/tmp/workingBackupMediaFiles
rm -r /tmp/workingBackupMediaFiles
php maintenance/update.php
php maintenance/rebuildall.php
That is all. Check also Special:AllPages of the new Wiki.
There is no automatic way to export images like you export pages; you have to right-click on them and choose "save image". To get the history of the image page, use the Special:Export page.
To import images use the Special:Upload page on your wiki. If you have lots of them, you can use the Import Images script. Note: you generally have to be in the sysop group to upload images.
- Export ALL:
You can get all pages and all images from a MediaWiki site using the [API], even if you are not the owner of the site (provided, of course, that the owner hasn't disabled this function):
Step 1: Use the API to get all page titles and all image URLs. You can write some code to do it automatically.
Step 2: Next, use [Special:Export] to export all pages with the titles you got, and use wget to fetch all the image URLs you collected (like this: wget -i img-list.txt); a sketch of the image part follows these steps.
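A rough sketch of the image-URL part, assuming curl and a wiki at an illustrative https://example.org address (for more than 500 files, follow the aicontinue value returned by the API):
# list direct file URLs via the API, then fetch them with wget
curl 'https://example.org/w/api.php?action=query&list=allimages&aiprop=url&ailimit=500&format=json' \
  | grep -o '"url":"[^"]*"' | cut -d'"' -f4 > img-list.txt
wget -i img-list.txt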
- Import ALL:
Step 1: Import pages using [Special:Import]
Step 2: Import images using [Manual:ImportImages.php].
There are a few mass upload tools available.
Commonist - www.djini.de/software/commonist/
Commonplace - commons.wikimedia.org/wiki/Commons:Tools/Commonplace (used to be available, but it was deprecated as of Jan. 13, 2010)
Both run on the desktop and can be configured to upload to your local wiki (they are configured for Wikipedia and Wikimedia Commons by default). If you are afraid to edit the content of a .jar file, I suggest you start with Commonplace.
Another useful extension exists for MediaWiki itself.
MultiUpload - http://www.mediawiki.org/wiki/Extension:MultiUpload
This extension allows you to drop images in a folder and load them all at once. It supports annotations for each file if necessary and cleans up the folder once it is done. On the downside, it requires opening a shared folder on the server side.
Hope this helps a bit: http://www.mediawiki.org/wiki/Manual:ImportImages.php
As a committer of MediaWiki-Japi I'd like to point out:
For the use case of pushing pages including images from one wiki to another, MediaWiki-Japi now has a command-line mode; see
Issue 49 - Enable commandline interface with page transfer option
Otherwise you can use MediaWiki-Api with the language of your choice and use the functions as you find them in PushPages.java,
e.g. download and upload.
