Convert temporary download links (URLs) to permanent links - download

On sites like YouTube in particular, the download link is temporary and, for me, more often than not expires after roughly 5 hours. I like using YouTube to watch tutorials, and I prefer downloading them overnight, since watching them in HD is impossible on my slow internet connection.
Before I sleep I queue around 10 episodes of my tutorials in my download manager, only to find in the morning that a few completed and the rest failed because the links had expired.
My question is: how can I download seamlessly? For example:
copy the temporary link
paste it somewhere
wait a few seconds to minutes
get a permanent link
add it to my download manager
How can I pull this off using a virtual private server (mine runs Windows) and some download manager with a web GUI (please suggest one, as I have not found one that is easy to use)? Or is there another way to do this (since I don't have much storage on my VPS)?
I'm hoping for a method that works for all sites, not only YouTube.

After much scouring of the web, a good solution was to run Free Download Manager's remote control server or uGet on the VPS, then use an open-source file manager script to access the download directory, and finally copy the resulting permanent link into the local download manager.
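A minimal PowerShell sketch of the underlying idea, independent of any particular download manager: the VPS grabs the short-lived link immediately into a folder that the file-manager script (or any web server) already exposes, and prints a stable link to queue locally. The save folder and public base URL here are assumptions.

    # Minimal sketch, to be run on the VPS: fetch the short-lived link right away
    # into a folder that is exposed over HTTP (by the file-manager script or any
    # web server), so the local download manager can pull it later at its own pace.
    param(
        [Parameter(Mandatory = $true)][string]$TempUrl,        # temporary link copied from the site
        [string]$SaveDir = 'C:\Downloads\inbox',               # hypothetical served directory
        [string]$BaseUrl = 'http://my-vps.example.com/files'   # hypothetical public URL of that directory
    )

    if (-not (Test-Path $SaveDir)) { New-Item -ItemType Directory -Path $SaveDir | Out-Null }

    # Derive a file name from the URL; fall back to a timestamp if there is none
    $name = [System.IO.Path]::GetFileName(([Uri]$TempUrl).AbsolutePath)
    if ([string]::IsNullOrWhiteSpace($name)) { $name = "download_$(Get-Date -Format yyyyMMdd_HHmmss)" }

    $target = Join-Path $SaveDir $name
    Invoke-WebRequest -Uri $TempUrl -OutFile $target           # the VPS's fast connection finishes this quickly

    # Print the stable link to paste into the local download manager
    "$BaseUrl/$name"

This is essentially what a remote-controlled download manager does for you; the sketch just makes the moving parts explicit.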

Related

Google Drive issue when trying to download many times

I have a problem using Indy's TIdHTTP in my Windows app created with C++Builder XE7.
I am doing a simple task: when my app launches, it goes to a direct-download link and begins downloading.
All works great, but when I make many requests to download the file (a simple .txt file) from Google Drive, I get an "HTTP/1.1 403 Forbidden" error.
I have tried many things (changing the User-Agent, checking for memory leaks, etc.) without luck.
Can somebody give me an idea of what the problem may be?
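For reference, a minimal sketch of the same repeated direct-download loop, written in PowerShell rather than C++Builder/Indy; the Drive link pattern and FILE_ID are placeholders. Reproducing the loop outside the app can help show whether the 403 follows the client code or Google Drive itself.

    # Minimal sketch (PowerShell rather than C++Builder/Indy) of the repeated
    # direct-download loop against a Google Drive file; FILE_ID and the loop
    # count are placeholders. A 403 that appears only after several iterations
    # may point at Drive's own throttling rather than the client code.
    $url = 'https://drive.google.com/uc?export=download&id=FILE_ID'

    for ($i = 1; $i -le 20; $i++) {
        try {
            $resp = Invoke-WebRequest -Uri $url -UserAgent 'Mozilla/5.0' -UseBasicParsing
            '{0}: HTTP {1}, {2} bytes' -f $i, $resp.StatusCode, $resp.RawContentLength
        }
        catch [System.Net.WebException] {
            $status = [int]$_.Exception.Response.StatusCode
            '{0}: failed with HTTP {1}' -f $i, $status       # 403 means Drive refused this request
            if ($status -eq 403) { Start-Sleep -Seconds 30 } # back off before retrying
        }
        Start-Sleep -Seconds 2
    }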

When uploading new files to an FTP server, how to prevent re-upload of files that were deleted on the server in the meantime

I need to automate the upload of some files from client PCs to a central server. We're building central statistics for an online gaming community, processing game replay files.
The target is my own small VPS running Ubuntu.
Uploaded files are 2-3 MB each.
There are 20-40 different clients running Windows, spread around the globe.
I expect ~6 GB of wanted data to be uploaded over the course of 7 weeks (a season in our game) and 5-10x that amount of "unwanted" data.
The files are processed on the server; after that they are no longer required and ought to be deleted so the server doesn't eventually run out of disk space. I also only need some of the files, but because they require very complex processing, including decryption, I can only determine which ones after the server has processed them.
My initial idea was to use a scriptable client such as WinSCP and a Windows scheduler entry to automate it. The WinSCP documentation looks very nice. I am a bit hesitant because I see the following problems:
after deletion on the server, how to prevent re-upload?
ease of setup to technical novices
reliability of the solution
I was thinking maybe someone has done the same before and can give some advice.
There's an article on the WinSCP site that deals with all of this:
How do I transfer new/modified files only?
For advanced logic like yours, it uses a PowerShell script with the WinSCP .NET assembly.
In particular, there is a section you will be interested in: Remembering the last timestamp. It shows how to remember the timestamp of the last uploaded file, so that next time you transfer only newer files, even if the previously uploaded files are no longer on the server.
The example is for downloads with Session.GetFiles, but with small changes it will work for uploads with Session.PutFiles too.
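A minimal sketch of that adaptation, using the WinSCP .NET assembly from PowerShell; host name, credentials and paths are placeholders, and it assumes WinSCPnet.dll sits next to the script.

    # Minimal sketch of the "remember the last timestamp" approach, adapted for
    # uploads with Session.PutFiles. Host name, credentials and paths are
    # placeholders; assumes WinSCPnet.dll (the WinSCP .NET assembly) is next to
    # the script.
    Add-Type -Path (Join-Path $PSScriptRoot 'WinSCPnet.dll')

    $localDir  = 'C:\Games\Replays'                    # hypothetical replay folder
    $remoteDir = '/upload/'
    $stampFile = Join-Path $PSScriptRoot 'lastupload.txt'

    # Timestamp of the newest file uploaded on a previous run (start of time if none yet)
    $lastTimestamp = [DateTime]::MinValue
    if (Test-Path $stampFile) { $lastTimestamp = [DateTime](Get-Content $stampFile) }

    $sessionOptions = New-Object WinSCP.SessionOptions -Property @{
        Protocol              = [WinSCP.Protocol]::Sftp
        HostName              = 'vps.example.com'
        UserName              = 'replayuser'
        Password              = 'secret'
        SshHostKeyFingerprint = 'ssh-rsa 2048 xx:xx:xx:...'   # placeholder
    }

    $session = New-Object WinSCP.Session
    try {
        $session.Open($sessionOptions)

        # Only files newer than the remembered timestamp; files the server has
        # already processed and deleted are never picked up again.
        $newFiles = Get-ChildItem $localDir -File |
            Where-Object { $_.LastWriteTime -gt $lastTimestamp } |
            Sort-Object LastWriteTime

        foreach ($file in $newFiles) {
            $session.PutFiles($file.FullName, $remoteDir).Check()   # throw on any transfer error
            $lastTimestamp = $file.LastWriteTime                    # remember progress as we go
            Set-Content -Path $stampFile -Value $lastTimestamp.ToString('o')
        }
    }
    finally {
        $session.Dispose()
    }

Writing the timestamp after each successful transfer, rather than once at the end, means an interrupted run does not re-upload what already went through.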
It also points to another article, Remember already downloaded files so they are not downloaded again, which shows another method: storing the names of already transferred files in a list file and using it the next time to decide which files are new.
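And a minimal sketch of that alternative, reusing the $session, $localDir and $remoteDir from the sketch above; the list-file path is a placeholder.

    # Minimal sketch of the name-list method: keep a list of file names already
    # uploaded and skip them on later runs. Reuses $session, $localDir and
    # $remoteDir from the sketch above.
    $listFile = Join-Path $PSScriptRoot 'uploaded.txt'

    $alreadyUploaded = @()
    if (Test-Path $listFile) { $alreadyUploaded = Get-Content $listFile }

    foreach ($file in Get-ChildItem $localDir -File) {
        if ($alreadyUploaded -contains $file.Name) { continue }    # uploaded on a previous run

        $session.PutFiles($file.FullName, $remoteDir).Check()
        Add-Content -Path $listFile -Value $file.Name              # remember it for next time
    }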

File sharing over the internet - WebDAV / SMB / FTP

We are developing a web-based application that provides a repository of users' case files. We would like users to be able to access these from their web browser with full read/write capability.
For an earlier generation of our system, which was hosted on a local Linux server with Windows clients, we were able to share out a folder and access it with \\server\share_name\file.doc type links. If links of this type were included in web pages and clicked in Internet Explorer, the file opened in MS Word and could be saved directly back into the shared folder. These links, however, only worked in IE, not in Firefox or Chrome.
Moving now to an internet-based solution for the next generation of the system, we require similar functionality.
We are toying with the idea of having a WebDAV (or FTP/SFTP) share and mapping a local drive on each client machine to it to provide similar functionality. This, though, will probably not work well in Firefox or Chrome with \\server\share_name... type links. In brief testing, file:// links did not provide write capability once the file was opened.
As a last resort we will be able to use manual file upload dialogs, but this is not ideal and would entail additional end user training.
Does anyone have similar experience in this field, and any possible solutions or best practices?
When you map a remote resource as a local drive, it becomes a local drive as far as the browser is concerned, and browsers have only limited access to the local file system. When you provide a link to the browser, its default behavior is to download the resource behind the link and then let a local application process it. The browser simply doesn't know how to open the remote resource locally in any other manner.
The solution would be to let the browser download something (some kind of link file) and have a local helper module (an external application or browser plugin) open this link file and then open the location specified in it locally. As this would be a client-side helper module, it would be able to interact with the client system and would know how to open the provided link. Given that the virtual drive letter can differ on each system (if you mount the disk to a drive letter), the helper module would need to resolve the link to point to the correct local drive. If you create a hidden virtual drive (our virtual storage products let you do this), a link would look like "\SomeFancyNameUniqueToYourApp\Path\To\File.ext" and no resolving would be necessary. Most applications handle this type of path fine.
I don't know for sure, but it's possible that browsers will open Windows .lnk files without the need for a helper module; with a hidden virtual drive you could generate an LNK file on the server and have the browser open it locally. But this is just a guess. My bet is that you will need a helper module anyway.
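As an illustration of the drive-mapping idea from the question, a minimal sketch for the Windows clients; the WebDAV URL, drive letter and credentials are placeholders, and it assumes the Windows WebClient service is running so the WebDAV redirector is available.

    # Minimal sketch for the Windows clients: map the WebDAV share to a fixed
    # drive letter so documents can be opened and saved in place from local
    # applications. URL, drive letter and credentials are placeholders.
    $driveLetter = 'W:'
    $davUrl      = 'https://files.example.com/cases'   # hypothetical WebDAV endpoint

    # Persistent mapping that survives reboots; password and user name are passed explicitly
    net use $driveLetter $davUrl secret /user:caseuser /persistent:yes

    # Files can now be addressed with ordinary paths such as W:\client42\file.doc,
    # or with the WebDAV UNC form \\files.example.com@SSL\cases\client42\file.doc
    Get-ChildItem "$driveLetter\"

A fixed drive letter on every client also sidesteps the link-resolution problem the answer above describes.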
ftp://username:password@hostname/ type links should work, and MS apps are getting better at handling them. Still not 100%, though.
Try SMEStorage.com. They enable you to map local WebDAV and FTP servers and access files using a Cloud Drive on Linux, Mac or Windows, and also from mobile devices (iOS, Android, BlackBerry and Windows Phone 7). You can get unique file links for each file, and also secure file sharing in which the links expire.

Edit office document on server

We are going to develop a client-server application where all the office documents will be stored on the remote server.
The problem is that users need to edit these docs very often.
The standard solution is:
download
edit locally
upload
But this is very inconvenient and would cause heavy traffic, because the docs are very large.
Is there any solution for editing documents right on the server?
E.g. some remote OpenOffice installation to which we can connect somehow?
Thanks in advance!
Unless you can give your users RDP sessions on Windows, or VNC (or X Window?) sessions on Linux, you're going to be stuck with downloading the document to edit locally (in one form or another) and then uploading it again.
There may be some HTTP/browser-based solution, but because it's HTTP you're going to be pulling the whole document back to the browser to edit and then posting it back to the server, which pretty much defeats the purpose.
As pointed out by Kev, one solution would be some sort of remote access software to access a copy of OpenOffice.org running on the server. There is, for example, a VNC viewer that will run as a Java applet in a browser (http://www.realvnc.com/support/javavncviewer.html), which might do the trick.
Another option would be a server-based office package, à la Google Docs. There are some available, but none with the full feature set of OpenOffice.org, so this is probably only an option if you can restrict yourselves to that feature set. If you can, it could work quite well.

Mac OS help browser fails, requiring an internet connection

I am developing an application for Mac OS X (I am new to this kind of thing) and I want to include online help. The help is generated using Doxygen and the help index using Help Indexer. I changed the Info.plist to point to the documentation, but when I try to access it, I get the following error:
Internet connection required.
The help topic you’re opening requires an Internet connection.
Choose Apple > System Preferences, and then click Network to check your network settings and, if necessary, connect to the Internet.
Obviously, the computer I develop on has working internet access, but more importantly, I would like to know why I need the Internet at all when the help is on the drive (there are some links to the internet in the help, though). And also, why doesn't the help viewer see the existing internet connection?
I ran into this problem recently. I had some temporary links that went to pages I hadn't yet created. The problem was that Apple Help Viewer couldn't find a local copy of the linked pages. The error message went away after I created the pages. IIRC my actual problem was an img tag for an image I had not yet created.
