Like a lot of you guys out there, I'm pretty pumped for Steam OS. I have a link to the source code, which I want to download:
http://repo.steampowered.com/steamos/
Is there an easy way for me to download all of these files?
There's no download button, and right clicking doesn't give me anything useful.
You can use wget to recursively download the directories you want.
wget -r --include-directories=steamos/ --directory-prefix=steamos/ --wait=15 --reject="index.htm*" "http://repo.steampowered.com/steamos/"
-r tells wget that we want to recursively download the given site.
--include-directories=steamos/ limits our download to just the steamos folder, from the root of the site. Otherwise it would try to download absolutely everything from http://repo.steampowered.com/
--directory-prefix=steamos/ specifies the folder the download will be placed in. Without this option, the download would be placed in 'repo.steampowered.com/steamos/'.
--reject=index.htm* junks the three index pages that would otherwise be saved to each sub-directory.
--wait=15 places a delay of 15 seconds between your downloads, for the sake of being kind to the servers.
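If the transfer gets interrupted, re-running the same command with --continue added should let wget resume partially downloaded files instead of starting over (a sketch on my part, not something tested against this particular server):
wget -r -c --include-directories=steamos/ --directory-prefix=steamos/ \
     --wait=15 --reject="index.htm*" "http://repo.steampowered.com/steamos/"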
My main reference for this was http://learningbitsandbytes.blogspot.ca/2013/07/downloading-source-code-from-svngit.html
I am trying to download many files (~30,000) using wget, all files are in the following webpage:
http://galex.stsci.edu/gr6/?page=tilelist&survey=ais&showall=Y
However, the real data is under a sublink that appears after I click Fits, and the files under this sublink are then displayed. For example, the sublink of the first file is the following:
http://galex.stsci.edu/gr6/?page=downloadlist&tilenum=50270&type=coaddI&subvis=28&img=1
I only want to download one file in this sublink: the Intensity Map of band NUV. In the above case, it is the second file that I want to download.
All files have the same structure. How could I use wget to download all the files under each sublink?
The Intensity Map of band NUV files have a common ending, which should allow you to download only the files you want using wget -r -A "*nd-int.fits.gz" on the target site. This employs wget's recursive function, -r, and the Accept List function, -A. The Accept List function, outlined here, will only download the files you want according to extension, name, or naming convention. Whether the wget recursive function can successfully crawl the entirety of your target site is something you'll have to test.
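Putting that together, a full command might look like the following (a sketch: the -np/--no-parent flag to keep wget from climbing above the listing and the --wait delay are my own assumptions, not tested settings):
wget -r -np --wait=5 -A "*nd-int.fits.gz" "http://galex.stsci.edu/gr6/?page=tilelist&survey=ais&showall=Y"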
If the above doesn't work, the website seems to have handy tools for filtering available files, such as a catalog search.
Has anyone successfully used expansion files with Appcelerator? I have found the module that is supposed to let us work with them; however, I am running into the problem of the .obb file being downloaded directly from the Play Store and then being downloaded again by the module. Aside from that, I can't seem to get access to any of the files contained within the .obb using the module.
I have heard all of the woes of having a big app, so please don't just tell me to make a smaller app; my client has a large "library" that they want installed directly in the app. It consists of HTML files that call JavaScript files and images through relative paths.
Are expansion files even the way to go with this? Should I simply zip up my files, download them afterwards, unpack them, and access them using the file system? I am just looking for a way to get these large files onto the device and access them as if they were in the resources directory of the app.
Any help would be appreciated. Thanks!
I have an app that needs over 300 PNG images and text files (to populate a database with) and could not get the app small enough to put up on the Play Store. What I ended up doing was to create a barebones app (enough to get the user started) and then download the files on startup. I didn't mess with zipping everything (the data is constantly being updated), but if your information is pretty static, you could zip it. Once the download successfully finishes and installs the data, it sets an app property (Ti.App.Properties.setInt): 0 means the download never ran, 1 means a partial download, and 2 means the download is installed (you can do this however you want, but that's what I did).
HTTrack gives filter options but I cannot figure out how to download a certain subfolder level and ignore all other subfolders.
Example:
domain.com/
domain.com/pets/
domain.com/pets/elephant
domain.com/zoo/tiger
domain.com/pics/giraffe
domain.com/pics/giraffe/details
I would like to only download the subfolders elephant, tiger and giraffe as HTML including images linked from there.
Is HTTrack that powerful? (I am using the Windows GUI version "WinHTTrack".)
PS: It would be nice to have this as a program option, e.g. "Minimum mirroring depth".
I found a way to do it with these scan rules (exclude everything by default, then re-include paths exactly two levels deep):
-*
-domain.com/*[path]/*
-domain.com/*[path]
+domain.com/*[path]/*[path]/*
-domain.com/*/specialfolder*
+domain.com/*specialimages*.jpg
-mime:*/* +mime:text/html +mime:image/*
Only issue: to get all URLs it was not enough to specify the root domain as the start URL; I also had to add the first-level subfolders (for the example: domain.com/pets, domain.com/zoo, domain.com/pics).
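For reference, the same scan rules can be passed straight to the httrack command-line client as quoted arguments after the start URLs; a sketch with the example domain (WinHTTrack accepts the identical rules in its GUI filter field):
httrack "http://domain.com/pets/" "http://domain.com/zoo/" "http://domain.com/pics/" \
    -O ./mirror \
    "-*" "+domain.com/*[path]/*[path]/*" \
    "-mime:*/*" "+mime:text/html" "+mime:image/*"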
Developing an extension for Mozilla Firefox, I wonder if there is an easier way than what I do right now. Currently I do:
Create a folder - in which to develop - for example myextension
Inside this folder: create and edit the files (install.rdf, chrome.manifest, XUL files; basically all the usual structure of a Firefox extension, no problem here)
Zip-compress the content of the myextension to a ZIP-file (i.e. named myextension.zip)
Rename myextension.zip to myextension.xpi
Install the resulting XPI file in Firefox
Restart Firefox
Test the extension
After each edit to the codebase of the extension I need to repeat the process of 3. zip-compress, 4. rename, 5. install the XPI file in Firefox, 6. restart the browser.
Of course I could automate some of this, but I still wonder if there is another way to develop the Firefox extension directly in the running Firefox profile folder.
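(For instance, steps 3 and 4 collapse into a single command if the zip tool writes the .xpi name directly; a sketch assuming the Info-ZIP command-line client:
cd myextension && zip -r ../myextension.xpi .
But that still leaves the installing and restarting.)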
The extensions I know are stored in the Firefox profile folder as:
firefox/profile/extensions/nameofextension.xpi
I cannot remember well, but I think there was a way to have the extension stored unzipped as a folder there too. That way I would still need to restart after edits, but not do all the laborious zipping, renaming and installing.
Any ideas?
It is possible to set up a directory to "in-place edit" a Firefox extension. This reduces the effort between editing and testing the extension.
I found a good explanation on the blog https://blog.mozilla.org/addons/2009/01/28/how-to-develop-a-firefox-extension/
Here I want to give the principal steps necessary to achieve the "in-place edit".
Step 1: Find the profile directory of your Firefox.
For example, on Linux this would often be something like this:
~/.mozilla/firefox/#%#%.default/
Step 2: Go to this profile directory
Step 3: If you already have any extensions installed (like for example Adblock Plus or NoScript), then inside this profile directory you will find a folder named extensions. If you do not yet have any additional extension installed, it might be easiest to simply install one, just to have the extensions folder set up for you.
Step 4: In this extensions folder you can create a new directory (let us name it "myextensions_1"), which shall contain the contents of your plugin. This is the ordinary stuff like the install.rdf and chrome.manifest files and the content, skin and locale subdirectories. In effect, all the stuff you would normally zip up to become the XPI file.
Step 5: Create a file whose name is equal to the content of the <em:id> tag that you used in your install.rdf file. So if you used <em:id>myextensionname#author.org</em:id>, you need to create a file named myextensionname#author.org. Inside this file you write the location of the "in-place-edit" extension folder we created before. In our example we would have
the file myextensionname#author.org
which contains only the text ~/.mozilla/firefox/#%#%.default/extensions/myextensions_1
Of course the text depends on the location of the folder you use for your plugin.
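On a shell, creating that pointer file could look like this (a sketch using the example paths; I use an absolute path and printf without a trailing newline as precautions, since I am not certain Firefox tolerates ~ or trailing whitespace in this file):
cd ~/.mozilla/firefox/#%#%.default/extensions/
printf '%s' "$HOME/.mozilla/firefox/#%#%.default/extensions/myextensions_1" > 'myextensionname#author.org'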
If you did all things correctly (and maybe double-checked with the instructions of the link above), you can restart or freshly start Firefox. The browser will ask you whether you want to allow the usage of the plugin myextensionname#author.org, which you can accept.
Now you can edit in the folder ~/.mozilla/firefox/#%#%.default/extensions/myextensions_1 and no longer need to worry about zipping up -> renaming -> installing.
You simply restart Firefox and the edits to your extension's code become available.
This allows swifter, faster "in-place" development.
Note: this is a shameless self-plug - I am talking about an extension I created myself.
Developing an extension in place is possible but has so many issues (mostly caching of all kinds) that it really isn't a good option. Still, you can simplify your development cycle a lot. For that you need to install the Extension Auto-Installer add-on in your Firefox. Then you can put a batch file (assuming that you are developing on Windows) into your extension directory along the lines of:
zip -r test.xpi * -x "*.bat" "*.xpi"
wget --post-file=test.xpi http://localhost:8888/
del test.xpi
The required command-line tools are all preinstalled on Unix-based systems; for Windows you can easily download them: zip, wget.
Then you will only need to run that batch file to update your extension in Firefox. If your extension isn't restartless then Firefox will restart automatically. So this replaces your steps 3 to 6.
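On a Unix-based system, an equivalent shell script might look like this (a sketch; it assumes the same Auto-Installer default port as above and excludes the script itself instead of the batch file):
#!/bin/sh
# Package the extension, leaving out old builds and this script
zip -r test.xpi . -x "*.xpi" "*.sh"
# Push the build to the Extension Auto-Installer listening in Firefox
wget --post-file=test.xpi http://localhost:8888/
rm test.xpi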
I have some huge images in a folder on the web version of Dropbox, and I need to write a shell script to download them one by one (there isn't enough room on my SSD, so I can't download the whole folder). I know that using wget I can download a file:
wget link_to_the_file
However, since I have many images, it is not feasible to get the download link of each of them manually. I'm looking for a way of obtaining the download link for each of them through the shell. Any suggestions?
Dropbox offers an API you can use to write a program to list and download multiple files.
For instance, you can use /2/files/list_folder[/continue] to list files, and then use /2/files/download to download them.
Those are links to the HTTPS endpoints, but there are corresponding native methods in the official SDKs, if you want to use one of those.
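A minimal sketch with curl against those endpoints (the access token, folder path and file name below are placeholders to substitute):
# List the entries in a folder
curl -X POST https://api.dropboxapi.com/2/files/list_folder \
    --header "Authorization: Bearer <ACCESS_TOKEN>" \
    --header "Content-Type: application/json" \
    --data '{"path": "/huge-images"}'
# Download one file by its Dropbox path
curl -X POST https://content.dropboxapi.com/2/files/download \
    --header "Authorization: Bearer <ACCESS_TOKEN>" \
    --header "Dropbox-API-Arg: {\"path\": \"/huge-images/image1.png\"}" \
    --output image1.png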