Downloading Dropbox files one by one with wget - terminal

I have some huge images in a folder on the web version of Dropbox, and I need to write a shell script to download them one by one (there isn't enough room on my SSD to download the whole folder). I know that with wget I can download a single file:
wget link_to_the_file
However, since I have many images, it isn't feasible to get the download link for each of them manually. I'm looking for a way to obtain the download link for each file through the shell. Any suggestions?

Dropbox offers an API you can use to write a program to list and download multiple files.
For instance, you can use /2/files/list_folder (and /2/files/list_folder/continue) to list files, and then use /2/files/download to download them.
Those are links to the HTTPS endpoints, but there are corresponding native methods in the official SDKs, if you want to use one of those.
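A minimal sketch of driving those HTTP endpoints from the shell with curl. It assumes DROPBOX_TOKEN holds an access token and uses "/huge-images" as a placeholder folder path; pagination via /2/files/list_folder/continue is omitted, and the grep-based JSON parsing is for illustration only (a real script would use jq) and does not distinguish files from folders.

```shell
#!/bin/sh
# POST to the list_folder endpoint to enumerate the folder's entries.
# DROPBOX_TOKEN and "/huge-images" are placeholders to fill in.
list_entries() {
  curl -s -X POST https://api.dropboxapi.com/2/files/list_folder \
    -H "Authorization: Bearer $DROPBOX_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"path": "/huge-images"}'
}

# Pull the "path_lower" values out of the JSON response (naive parsing;
# does not filter out folders).
paths_from_json() {
  grep -o '"path_lower": *"[^"]*"' | sed 's/.*: *"//; s/"$//'
}

# Download each entry one at a time via the download endpoint, so only
# one file at a time needs to fit on disk.
if [ -n "${DROPBOX_TOKEN:-}" ]; then
  list_entries | paths_from_json | while read -r p; do
    curl -s -X POST https://content.dropboxapi.com/2/files/download \
      -H "Authorization: Bearer $DROPBOX_TOKEN" \
      -H "Dropbox-API-Arg: {\"path\": \"$p\"}" \
      -o "$(basename "$p")"
  done
fi
```

You could extend the loop to delete each file after processing it, which is the point of downloading one by one on a small drive.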

Related

Connect GoLang to GitHub

I have a fairly large project on GitHub that I would like not to download in its entirety. The repository is mostly Go based. Unfortunately, most of its packages call each other. I would like to use some of the files within this repo to test my code before I push it with the rest. Is there any way to import it into Go without downloading the whole repository (using go get github.com/foo)?
As stated above, I've tried using go get github.com/etc., but it's way too large for that.
There's no tool I know of that can do this. You could use go list -f '{{ .Imports }}' or similar to list the dependencies of any given Go package, and script a way to download only the files you need. But you also have to think about things like templates or config files that your programs might access.
Alternatively (and I have no idea if this will work), you could try mounting your Git repository as a FUSE mount. I found this in a quick Google search. That would let you download the files you need on demand, and of course when you compile your program, it will only include the code it needs.
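To illustrate the `go list` approach, here is a sketch that demonstrates it on a throwaway module so it is self-contained; in practice you would run the `go list` commands inside the repo's package directory. It assumes the Go toolchain is installed and skips quietly otherwise.

```shell
#!/bin/sh
# Demonstrate `go list` as a starting point for figuring out which files
# of a large repo you actually need. Skips if Go is not installed.
command -v go >/dev/null 2>&1 || { echo "go toolchain not installed"; exit 0; }
set -e

# Build a throwaway module with two imports to inspect.
dir=$(mktemp -d)
cd "$dir"
go mod init example.com/demo >/dev/null 2>&1
cat > main.go <<'EOF'
package main

import (
	"fmt"
	"strings"
)

func main() { fmt.Println(strings.ToUpper("hi")) }
EOF

# Direct imports of the package, one per line:
go list -f '{{ join .Imports "\n" }}' .

# Full transitive dependency list (standard library included):
go list -f '{{ join .Deps "\n" }}' .
```

The output of the first command here is just `fmt` and `strings`; on a real repo you would feed that list into a script that fetches only the matching directories.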

Wget download many files in the sublink of a webpage

I am trying to download many files (~30,000) using wget; all of them are listed on the following webpage:
http://galex.stsci.edu/gr6/?page=tilelist&survey=ais&showall=Y
However, the real data sits under a sublink that appears after I click "Fits", where the files are then displayed. For example, the sublink for the first file is:
http://galex.stsci.edu/gr6/?page=downloadlist&tilenum=50270&type=coaddI&subvis=28&img=1
I only want to download one file under this sublink: the Intensity Map of band NUV. In the case above, it is the second file listed.
All the files have the same structure. How can I use wget to download all of them from their sublinks?
The Intensity Map of band NUV files share a common ending, which should let you download only the files you want by running wget -r -A "*nd-int.fits.gz" against the target site. This uses wget's recursive option, -r, and its accept list, -A. The accept list, outlined here, restricts the download to files matching an extension, name, or naming convention. Whether wget's recursive crawl can successfully cover the entirety of your target site is something you'll have to test.
If the above doesn't work, the website seems to have handy tools for filtering the available files, such as a catalog search.

Appcelerator with expansion files

Has anyone successfully used expansion files with Appcelerator? I have found the module that is supposed to let us work with them; however, I'm running into the problem of the .obb file being downloaded directly from the Play Store and then being downloaded again by the module. Aside from that, I can't seem to get access to any of the files contained within the .obb using the module.
I have heard all of the woes of having a big app, so please don't just tell me to make a smaller app; my client has a large "library" that they want installed directly in the app. It consists of HTML files that call JavaScript files and images through relative paths.
Are expansion files even the way to go here? Should I simply zip up my files, download the archive afterwards, unpack it, and access the files through the filesystem? I am just looking for a way to get these large files onto the device and access them as if they were in the app's resources directory.
Any help would be appreciated. Thanks!
I have an app that needs over 300 PNG images and text files (to populate a database with), and I could not get the app small enough to put up on the Play Store. What I ended up doing was creating a barebones app (enough to get the user started) and then downloading the files on startup. I didn't bother zipping everything (the data is constantly being updated), but if your information is fairly static, you could zip it. Once the download successfully finishes and the data is installed, the app sets an app property (Ti.App.Properties.setInt): 0 means the download has never run, 1 means a partial download, and 2 means the download is installed (you can do this however you want, but that's what I did).

Downloading Steam OS Source

Like a lot of you guys out there, I'm pretty pumped for Steam OS. I have a link to the source code, which I want to download:
http://repo.steampowered.com/steamos/
Is there an easy way for me to download all of these files?
There's no download button, and right clicking doesn't give me anything useful.
You can use wget to recursively download the directories you want.
wget -r --include-directories=steamos/ --directory-prefix=steamos/ --wait=15 --reject=index.htm* "http://repo.steampowered.com/steamos/"
-r tells wget that we want to recursively download the given site.
--include-directories=steamos/ limits the download to just the steamos folder from the root of the site. Otherwise wget would try to download absolutely everything from http://repo.steampowered.com/
--directory-prefix=steamos/ specifies the folder the download will be placed in once it's finished. By default, it would be saved under 'repo.steampowered.com/steamos/'.
--reject=index.htm* junks the three index pages that would otherwise be saved to each sub-directory.
--wait=15 places a delay of 15 seconds between your downloads, for the sake of being kind to the servers.
My main reference for this was http://learningbitsandbytes.blogspot.ca/2013/07/downloading-source-code-from-svngit.html

Expose setup in what file format: zip, exe, ...?

This may be a stupid question... but I have created a setup.exe using InstallShield that installs my application. I cannot use an MSI because I want to include prerequisites, such as the .NET 4.0 framework web installer.
So I put this setup.exe somewhere on my web site, but when I try to download it I get 'The page you are requesting cannot be served because of the ISAPI and CGI Restriction list settings on the Web server.' Obviously some security setting is in effect that makes it impossible to download executables.
So I zipped it to setup.zip, and now it can be downloaded.
My question is: what is the best way to distribute such a setup via the web? Is it acceptable to make it a zip file for users to download (even though they then have to unzip it first), or should I just allow the executable to be downloaded directly?
I've seen this practice with some products and it's always annoying. Why would you give me a ZIP file that contains a setup file? There are no advantages, but a lot of disadvantages:
I need to perform an extra step to extract the ZIP
I need twice the disk space (the ZIP plus what's inside it)
Most users don't see a ZIP file as an installer, they are used to "setup.exe" or "setup.msi"
The correct approaches are:
configure your website to allow EXE file downloads
or
distribute a ZIP which contains your application files (instead of using a setup file)
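For the first approach, one commonly cited fix, assuming the site runs on IIS (where the default "CGI-exe" handler mapping is what produces that ISAPI/CGI restriction error), is to serve .exe files as plain static content via web.config. Treat this as a sketch to adapt to your server, not a drop-in answer:

```xml
<configuration>
  <system.webServer>
    <handlers>
      <!-- IIS maps *.exe to the CGI module by default; remove that mapping -->
      <remove name="CGI-exe" />
      <!-- ...and serve .exe files as plain static downloads instead -->
      <add name="exe-download" path="*.exe" verb="GET"
           modules="StaticFileModule" resourceType="File" />
    </handlers>
  </system.webServer>
</configuration>
```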