I have a simple question, but I couldn't find an easy solution for it. I have a rented FTP server that I can't administer, and I have a website with links to it. There are a few archive files that I want to be downloaded rather than opened directly in the browser. My link looks like this:
IMS 200 Client V1.29 (06.02.13)
I solved this problem by using a PHP page that sets the file's content type, so that the browser understands it is an archive and downloads it rather than trying to open it directly. Is there an easier way to achieve this?
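A minimal sketch of that kind of pass-through page (the FTP URL, credentials, and file name below are placeholders, not values from the original setup):
<?php
// Rough sketch: stream the archive through PHP so the browser downloads it.
// Reading ftp:// paths with readfile() requires allow_url_fopen to be enabled.
$file = 'ftp://user:password@ftp.example.com/archives/IMS200_Client.zip';
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="IMS200_Client.zip"');
readfile($file);
?>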
Thank you all for the help!
I hope that my case can help you in some way.
I have a website that allows users to view news and download files. One day, I discovered that if I exposed the download link to a .rar file directly, e.g. http://www.somenet.com/myfile.rar, it was opened automatically in the browser instead of asking users whether they want to save/open it. If I wrote some code to read and transfer the file to the browser, e.g. http://www.somenet.com/download?fileid=123, the browser did ask whether to save/open it.
After googling a while, I inserted a piece of configuration into my Apache Tomcat web.xml (often at CATALINA_HOME/conf/web.xml) as follows:
<mime-mapping>
    <extension>rar</extension>
    <mime-type>application/x-rar-compressed</mime-type>
</mime-mapping>
Then restart the Apache Tomcat server for the change to take effect.
Now I can click on the direct .rar link to download the file.
I also had to restart IE (Firefox picks up the change right away).
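To double-check that the new mapping is being served, you can inspect the response headers, for example with curl (using the example URL from above):
curl -I http://www.somenet.com/myfile.rar
The Content-Type line should now read application/x-rar-compressed.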
Good luck!
If you can use HTML5, you can try the download attribute:
<a href="myfile.rar" download>Download this file</a>
Extracted from: HTML5 link download
I need to download a PDF, using Ruby, from a website that does not provide a link ending in .pdf. When I click the link to download the PDF manually, it takes me to a new page, and the dialog box to save/open the file appears after some time.
Please help me download the file. (The link is the one used in the wget example further down.)
You can do this:
require 'open-uri'

# Fetch the URL and write the response body to disk in binary mode.
# (URI.open is the modern form; the bare open() hook for URLs was removed in Ruby 3.)
File.open('my_file_name.pdf', 'wb') do |file|
  file.write URI.open('http://someurl.com/2013-1-2/somefile/download').read
end
I have been doing this for my projects and it works.
If you just need a simple Ruby script to do it, I'd just shell out to wget, like this: system('wget "http://path.to.the.file/and/some/params"') (use system rather than exec here, since exec replaces the Ruby process and never returns). At that point, though, you might as well run wget directly.
The other way is to just run a GET against the URL that you know the PDF is at:
require 'net/http'
source = Net::HTTP.get('the.website.com', '/and/some/params')
There are a number of other HTTP clients that you could use, but as long as you make a GET request to the endpoint that the PDF is at, it should give you the raw data. Then you can just write that data to a file with a .pdf name, and you'll have the PDF.
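For instance, continuing from the snippet above:
# Save the raw response body under a .pdf name.
File.binwrite('thefile.pdf', source)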
In your case, I ran the following commands to get the PDF:
wget http://www.lawcommission.gov.np/en/documents/prevailing-laws/constitution/func-download/129/chk,d8c4644b0f086a04d8d363cb86fb1647/no_html,1/
mv index.html thefile.pdf
Then open the PDF. Note that these are Linux commands. If you want to get the file with a Ruby script, you could use something like what I mentioned previously.
Update:
There is an added complication that was not initially stated: the URL to the PDF changes every time the PDF is updated. To make this work, you probably want to do some web scraping; I suggest Nokogiri. That way you can look at the page where the download link is and then perform a GET request on the desired URL. Furthermore, the server that hosts the PDF is misconfigured and breaks Chrome within a few seconds of opening the page.
How to solve this: I went to the site and refreshed it, then broke the connection to the server (press the X where the refresh button would otherwise be). Then right-click next to the download link, select Inspect Element, and browse the DOM to find something that definitively identifies the link (like an id). Thankfully, I found something: <strong id="telecharger"> Download</strong>. This means you can use something like page.css('strong#telecharger')[0].parent['href'] to get the URL. Then you can perform a GET request as described above. I don't have time to write the whole script for you (too much work to do), but the sketch below should be enough to solve the problem.
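A minimal sketch of that approach, assuming the markup found above (the page URL is a placeholder, since the real one changes):
require 'open-uri'
require 'nokogiri'

# Placeholder for the page that contains the download link.
page_url = 'http://www.lawcommission.gov.np/en/some-page-with-the-link'
page = Nokogiri::HTML(URI.open(page_url))

# The download link is the parent <a> of the <strong id="telecharger"> element.
pdf_url = page.css('strong#telecharger')[0].parent['href']

# Resolve the href against the page URL in case it is relative.
pdf_url = URI.join(page_url, pdf_url).to_s

File.open('the_file.pdf', 'wb') do |file|
  file.write URI.open(pdf_url).read
end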
I'm having a problem with a client site. I'm not good with Joomla (we mostly do WordPress), but one of my long-time clients asked me to take over a site that another developer never finished, so I obliged. The problem is, everything is working great except for the Community page:
http://gettingripped.com/index.php/community
The only errors I'm finding are with the Facebook integration (which they told me the previous dev never finished/fixed). I'm really confused here... does anyone out there have any ideas? It seems that instead of showing the proper titles, raw language keys like Com_community_somethingElseHere are replacing everything.
Thank you guys in advance for your help!
Seems something is wrong with the en-GB.com_community.ini file.
Location: gettingripped.com/language/en-GB/en-GB.com_community.ini
I could not find the file in the above location!!!
Put this file in that folder and it will work!!!
If you can't find the file to put in the folder, create your own and place it there. How? Google for this string as it is (including the double quotes), "en-GB.com_community.ini", and open the first couple of results.
Then copy and paste the displayed file content into your own .ini file (name it en-GB.com_community.ini) and place it in your en-GB folder.
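For reference, Joomla language files are plain INI files made of KEY="Value" lines, so what you paste should look something like this (illustrative keys, not necessarily the real JomSocial strings):
COM_COMMUNITY_VIDEOS="Videos"
COM_COMMUNITY_GROUPS="Groups"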
Load the page and it will show up as it should!
I need to upload images to a page on my website.
I usually use the WinSCP FTP program because it gives me the option "Copy to Clipboard (Include paths)". I copy the images' URLs through this option, and the images usually upload and display successfully on the website.
I'm trying to do the same now for a new page, but it is not working. No option in WinSCP helps at all; all I get is a small icon instead of the image. When I use FileZilla to copy the URL instead, the images upload and display successfully, BUT the page then requests a username and password to display the images.
I've been googling about it, and I realise the problem could be that I need to change the FTP URL to an HTTP one. I tried to do it this way:
ftp://username@domain.org/domain_restore/pics/anton.jpg
to:
http://username.domain.org/anton.jpg
That is probably totally wrong? I tried some other ways too, but the problem is that I'm only a beginner and I don't have the knowledge to edit it or to find out what the problem is.
I followed the instructions of someone from my host's support team, who advised me to run a restore on all my directories in the FTP manager. I did that, but I feel like I messed it up, because now all the folders and directories are duplicated. Could that also be the problem?
I'm trying to see what a certain webpage would look like if I replaced a certain image with another. Rather than uploading the image, editing the site, etc., each time I tweak it, I'd like to know if there's a way to point the image on the page at my local version while viewing the remote page.
I usually use Firebug for web development debugging, but I'm open to any other tool that might do this.
(It is absolutely impossible to search for this and find anything but questions about dynamic image swapping on a deployed website, so sorry if this is a duplicate.)
Added: I just tried substituting a file:/// URI pointing to the image (copied and pasted from the address bar after manually opening the image), and alas, it did not work; the image fails to change.
It seems to work only with the http(s) protocols, likely for security reasons. You can store your images on a service like Dropbox, share the image or folder, then use the public URL.
Really, you can use any web-accessible image, so a local server would work too.
If your image is served by a localhost server (not opened as a file, mind you), I think you can still put that localhost URL into the src via Firebug's inspect element and it'll work.
I tried an absolute file path, but apparently it doesn't work. So I guess you just have to make do with an image served from localhost. That works for me.
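If you need a quick way to serve a folder of images over localhost, Ruby ships a one-liner HTTP server (on Ruby 3+ you'll need to gem install webrick first):
ruby -run -e httpd . -p 8000
Then use http://localhost:8000/your_image.jpg (a hypothetical file name) as the src in Firebug.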
Quick and low-tech answer: take a screenshot of the page, open it in Photoshop, and drop the local image on a layer above the webpage image.
If the page is served from a webserver, you probably can't point it to a file on your local drive. Even if it's localhost, you can't point to a local file such as c:/test.jpg, because the browser effectively sandboxes your page so that scripts can't access local files.
One way is to upload the new file (new_file.jpg) to the webserver and give the image link an id:
<img id="something1" src="test.jpg"/>
Then, using jQuery in the Firebug watch window, do:
$("#something1").attr("src","new_file.jpg");
You should see the image change. If you are not using jQuery, you can use document.getElementById("something1") to get the element and modify it.
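For example, the jQuery-free equivalent of the same swap:
document.getElementById("something1").src = "new_file.jpg";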
Another way is to use http://makiapp.com/
With it you can overlay an image from your computer onto any website you look at. It's a very cool tool for lining up a comp with your code.
You can:
Drag your test image into Google Drive
Open it in a browser
Go to the actual image path
Use this path as a substitute in Firebug
It's almost as fast as working from a local drive.
Back in the earlier days of the internet, I remember that in certain browsers, every time you downloaded an image or a file, the URL the file was downloaded from would be written into that file's properties (in the Summary tab, I guess?). I think Netscape v2 did this, if I remember correctly.
I really miss that kind of functionality, as every once in a while I'll run into a neat little program stored somewhere in the depths of my hard drive and wonder where I got it from originally.
I googled around, but I'm not quite sure what terms to use to describe what I'm looking for. So I'm wondering if anyone knows of a Firefox plug-in or something similar that would do this?
If you use the DownThemAll! extension for Firefox, you can tell it to prepend the URL of the site to the downloaded file name...
thus you end up with files like:
download.com_utils_compression_ABCD32.exe
It also works really well when you want to download/queue a bunch of files.
You download http://example.com/foo to ~/Desktop/foo, and you want to see the originating URL in the properties of the local file foo?
Back when I used OS X, I remember Safari used to record the original URL in the resource fork of the downloaded file. Can't remember what the named fork is, well, named, but it'll show up in the properties panel from Finder. Since it's there, Spotlight will probably index it, too, but I haven't used OS X since 10.3.
If you use Opera, and haven't cleared the file out from your download manager, select the download and it'll show the original URL that the file is from in the properties pane.
Is this what you want? If so... well, I don't know of a similar Firefox extension, but at least this should help clarify the question.
For the IE browser, I use the hell out of Fiddler to look at all the traffic going across the wire.
For Firefox, you can use the Firebug plugin. There is a "Net" tab that will show you the request information going across the wire.
Most of the time you can use one of these tools to see what URL was requested in order to start a download. You can also view all the GET and POST information that might need to be sent in order for your request to succeed.
Fiddler is here: http://www.fiddlertool.com/fiddler/
Firebug is here: https://addons.mozilla.org/en-US/firefox/addon/1843
Best of luck!