I have used this URL for Web API compression, but when I look at the output in Fiddler, the response header shows no compression. There are multiple compression options available, for example GZIP, BZIP2, and DEFLATE; I am not sure which one to use. Kindly help here.
I have tried the solutions at the link below and neither of them is working:
http://benfoster.io/blog/aspnet-web-api-compression
This list is sent to the server to let it know about the client-side compression preferences. It means: "I prefer GZIP first. If GZIP is not supported by the server, fall back to BZIP2 or DEFLATE compression. If neither BZIP2 nor DEFLATE is supported, the server will not apply any compression."
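For example, a request header expressing that preference order (quality values are optional) might look like this; note that gzip and deflate are what most servers actually implement, while bzip2 is rarely supported over HTTP:

Accept-Encoding: gzip, bzip2;q=0.8, deflate;q=0.5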
Someone has already created a NuGet package that uses the implementation you put in your question. The package name is Microsoft.AspNet.WebApi.MessageHandlers.Compression, and it installs the following two packages:
Microsoft.AspNet.WebApi.Extensions.Compression.Server
System.Net.Http.Extensions.Compression.Client
If you don't need the client-side library, then just install the server-side package in your Web API project.
To use it, add the following line at the end of your Application_Start method in Global.asax.cs:
GlobalConfiguration.Configuration.MessageHandlers.Insert(0, new ServerCompressionHandler(new GZipCompressor(), new DeflateCompressor()));
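For context, a minimal Global.asax.cs could look like the sketch below; the class name, the WebApiConfig registration, and the using namespaces are assumptions based on a typical Web API project template and the package documentation, so adjust them to your project:

using System.Net.Http.Extensions.Compression.Core.Compressors;
using System.Web.Http;
using Microsoft.AspNet.WebApi.Extensions.Compression.Server;

public class WebApiApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        GlobalConfiguration.Configure(WebApiConfig.Register);

        // Compress responses with GZip when the client accepts it, otherwise fall back to Deflate
        GlobalConfiguration.Configuration.MessageHandlers.Insert(
            0, new ServerCompressionHandler(new GZipCompressor(), new DeflateCompressor()));
    }
}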
To learn more about this package check this link.
I am wondering if it is possible to convert WebDAV to FTP.
I have already found that the POST and GET methods can be parsed into an FTP URL that sends or retrieves the files (in my case, using IBM DataPower). Although I managed to get both methods working, I seem to have problems getting a listing of the files in that FTP folder using WebDAV.
Could anyone give me a hint on what should travel on both request and response for the PROPFIND method? (DP v7 already supports non-standard HTTP methods)
From what I saw here: http://msdn.microsoft.com/en-us/library/aa142960(v=exchg.65).aspx it seems like XML travels in both directions, so I might be able to do something with it. Am I right?
Thanks in advance :)
Regards!
Yes indeed it's XML.
The format is defined in RFC 4918.
You have some help in this SO question, here.
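To give a rough idea of what RFC 4918 describes (the paths, property values, and Depth header below are just an illustration, not copied from a real server), a directory-listing exchange looks roughly like this. The client asks for the properties of a collection and its immediate children:

PROPFIND /folder/ HTTP/1.1
Host: example.com
Depth: 1
Content-Type: application/xml

<?xml version="1.0" encoding="utf-8"?>
<D:propfind xmlns:D="DAV:">
  <D:allprop/>
</D:propfind>

And the server answers with a 207 Multi-Status response containing one <D:response> element per file or subfolder:

HTTP/1.1 207 Multi-Status
Content-Type: application/xml; charset="utf-8"

<?xml version="1.0" encoding="utf-8"?>
<D:multistatus xmlns:D="DAV:">
  <D:response>
    <D:href>/folder/file.txt</D:href>
    <D:propstat>
      <D:prop>
        <D:getcontentlength>1234</D:getcontentlength>
        <D:getlastmodified>Tue, 15 Nov 1994 12:45:26 GMT</D:getlastmodified>
        <D:resourcetype/>
      </D:prop>
      <D:status>HTTP/1.1 200 OK</D:status>
    </D:propstat>
  </D:response>
</D:multistatus>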
There is a library in Python that I love called "Requests". Requests is an HTTP client built on urllib3 ("requests doc").
I am looking for something similar in Ruby. Basically what I need is:
Upload files support (multipart/form-data).
Easy get/post.
Cookies can be passed from a response object to a request object (to build a login script manually).
Stable and Flexible.
Sessions support (so we don't have to handle cookies manually if we don't have to).
I've looked at Typhoeus, but the code example on the home page doesn't work; the code has moved on and the get method is no longer directly accessible like that, so it's not off to a good start. Curb seems nice and I like cURL. There is also rest-client, which seems popular, and em-http seems pretty fast according to benchmarks. There are also Patron and curb-fu, which I haven't had the time to try. And, of course, Net::HTTP. But there doesn't seem to be a mainstream solution that everyone points to.
I think a lot of people have been in my situation, and I wonder what they have chosen and why.
The author of the comparison is the author of httpclient, but from the looks of it the comparison is fair.
For a more narrative style with some explanation of the matrix, see http://www.slideshare.net/HiroshiNakamura/rubyhttp-clients-comparison from the same author.
The comparison comes out partly in favor of httpclient, which I can also recommend: simple, featureful, compatible with all Ruby platforms, and performant. It has better cookie support than anything else out there, but the presentation mentions that cookies may leak from one (malevolent) site to another if you use the same client object. I don't know if this is still true.
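As a quick illustration (the URL and file name are made up), basic httpclient usage looks something like this:

require 'httpclient'

client = HTTPClient.new
response = client.get('https://example.com/')   # simple GET
puts response.status
puts response.body

# POST a form; passing a File value should make httpclient send multipart/form-data
client.post('https://example.com/upload', file: File.open('testfile.txt', 'rb'))

Cookies from responses are kept in the client's cookie manager and re-sent automatically on later requests, which covers the session requirement in the question.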
There is https://github.com/cyx/requests, which is exactly what the question is asking for: a port of the requests lib from Python.
The built-in OpenURI is the first place to look. It's simple and handles the basics nicely.
Typhoeus, which I've used several times for parallel processes, works nicely. Documentation and the codebase are available at Github.
irb(main):009:0> response = Typhoeus::Request.get("www.example.com")
=> #<Typhoeus::Response:0x007ffbcc067cf8 #code=302, #curl_return_code=0, #curl_error_message="No error", #status_message=nil, #http_version=nil, #headers="HTTP/1.0 302 Found\r\nLocation: http://www.iana.org/domains/example/\r\nServer: BigIP\r\nConnection: close\r\nContent-Length: 0\r\n\r\n", #body="", #time=0.035584, #requested_url=nil, #requested_http_method=nil, #start_time=nil, #start_transfer_time=0.035529, #app_connect_time=2.8e-05, #pretransfer_time=0.000429, #connect_time=2.8e-05, #name_lookup_time=2.8e-05, #request=:method => :get,
:url => www.example.com, #effective_url="HTTP://www.example.com", #primary_ip="192.0.43.10", #redirect_count=0, #mock=false>
irb(main):010:0> puts response.headers
HTTP/1.0 302 Found
Location: http://www.iana.org/domains/example/
Server: BigIP
Connection: close
Content-Length: 0
I use Net::HTTP occasionally too, but OpenURI and Typhoeus, with Hydra, have proven to be easy to use and integrate with my code.
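For what it's worth, the parallel usage with Hydra looks roughly like this (the URLs are placeholders, and the exact API differs a bit between Typhoeus versions):

require 'typhoeus'

hydra = Typhoeus::Hydra.new

%w[http://www.example.com/a http://www.example.com/b].each do |url|
  request = Typhoeus::Request.new(url)
  request.on_complete do |response|
    puts "#{url} -> #{response.code}"
  end
  hydra.queue(request)
end

hydra.run   # runs all queued requests in parallel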
I eventually found this HTTPClient:
https://github.com/nahi/httpclient
I've started using it, it matches the features I wanted, and moreover it's pretty fast according to some benchmarks. It also supports some advanced things like streaming and chunked responses. It's a shame, though, that it's not better known in the Ruby community. :)
Have you looked at the HTTParty gem?
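If it helps, a minimal HTTParty call (the URL is a placeholder) is about as short as it gets:

require 'httparty'

response = HTTParty.get('https://example.com/')
puts response.code
puts response.body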
If you need cookies and form handling, mechanize is the only way to go.
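A rough sketch of the kind of login flow Mechanize makes easy (the URL and field names are made up; adjust them to the form you are targeting):

require 'mechanize'

agent = Mechanize.new
page = agent.get('https://example.com/login')

form = page.forms.first
form['username'] = 'me'       # field names depend on the actual form
form['password'] = 'secret'
agent.submit(form)

# the agent keeps the session cookies, so later requests are authenticated
agent.get('https://example.com/private')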
I'm sorry to hear that Typhoeus didn't work out for you. The reason is that the README shows how to work with Typhoeus v0.5.0.rc, which can be installed with
gem install typhoeus --pre
or
gem "typhoeus", git: "git://github.com/typhoeus/typhoeus.git"
There is no session support in Typhoeus, but other than that it could be a good fit. At least it's stable as hell, since it is built on top of libcurl.
File sending example:
Typhoeus.post("www.example.com/file", body: { file: File.open("testfile.txt","r") })
There is unfortunately no shortcut to deal with cookies, you have to set them manually:
Typhoeus.get("www.example.com/needs_cookie", headers: { Cookie: "PRIVATE" })
TLDR: I would choose Typhoeus for its speed and libcurl if you're willing to set things up yourself. Otherwise I would look into Faraday and use it with the Typhoeus adapter.
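A sketch of the Faraday route (the URL is a placeholder; depending on your Faraday version the Typhoeus adapter may live in the separate faraday-typhoeus gem instead of shipping with Typhoeus):

require 'faraday'
require 'typhoeus'
require 'typhoeus/adapters/faraday'   # registers the adapter in older setups

conn = Faraday.new(url: 'http://www.example.com') do |builder|
  builder.adapter :typhoeus
end

response = conn.get('/needs_cookie') do |req|
  req.headers['Cookie'] = 'PRIVATE'
end
puts response.status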
Edit: I've added installation instructions to the README.
Edit: 0.5 is released.
This question seems to be lacking recent answers, so I am filling the void.
Coming from Python myself, and having loved the Requests library for what it does so easily, I recently discovered a very nice Ruby equivalent in rest_client.
It supports all the features mentioned in the question and seems very nice from a usability perspective, which is what the Requests library aimed to achieve.
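As a small illustration (the URLs and file name are made up; depending on the gem version you may need require 'rest_client' instead), the basics look like this:

require 'rest-client'

# simple GET
response = RestClient.get('https://example.com/')
puts response.code
puts response.body

# POST; including a File in the payload makes rest-client send multipart/form-data
RestClient.post('https://example.com/upload', file: File.new('testfile.txt', 'rb'))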
What is the best practice when serving files from the Zend Framework MVC? These files have to be served from the MVC as they are protected.
I know you can read the file in and place it into the Response object, but this seems like bad practice, as you would be reading the entire file into memory and then serving it. Right now I usually do:
header('Content-type: image/jpeg');
fpassthru(fopen($path, 'rb'));
exit;
But this also doesn't seem right as I'm stopping the execution of the script. Any suggestions?
I see nothing wrong with just exit(); what you will need to be careful of is any output buffering layers you may have on (gzip compression, etc.). Large files could blow up those buffers pretty quickly, so you'll want to close them out and potentially 'chunk' your output with a fopen/fread loop, along the lines of the sketch below.
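A rough sketch of that idea (the content type and $path are assumptions; adapt them to your controller):

// flush and close any open output buffering layers so the file is not held in memory
while (ob_get_level() > 0) {
    ob_end_flush();
}

header('Content-Type: image/jpeg');

$fh = fopen($path, 'rb');
while (!feof($fh)) {
    echo fread($fh, 8192); // stream the file out in 8 KB chunks
    flush();
}
fclose($fh);
exit;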
I would suggest building a super-simple script for retrieving files based on a ticket system: in your CMS you generate a ticket in the DB (filename plus a unique hash) and then redirect to the super-simple file-retrieving script (file.php?hash=asd52ad3as1g5). It gets the hash from the query string, fetches the real filename based on it, and pushes the file to the output as you have written, using fpassthru. The hash needs to be unique and hard to guess...
You could try using the X-Sendfile header. It is supported by lighttpd and newer versions of Apache. Basically, the webserver will replace the output of the script with the file you specified. The downside is that it is specific to the configuration of the webserver, so you may be on a host that doesn't support it.
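In PHP that typically amounts to something like the following (the header name depends on the server: Apache with mod_xsendfile uses X-Sendfile, while older lighttpd versions used X-LIGHTTPD-send-file):

header('Content-Type: image/jpeg');
header('X-Sendfile: ' . $path);   // the webserver streams $path itself; the script sends no body
exit;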
I'm attempting to port over an Internet Explorer plugin to Firefox, but I'm not sure where to look for what I need.
Basically I need to be able to filter all content that is received by the browser with a certain Content-Type header. I tried implementing a stream converter, and this works, but only for the top-level document in the page, frame, or iframe. I had the same problem with IE, and getting around it was really hacky; since I would ideally like this to be cross-platform, I would really like to be able to do this in Firefox without resorting to vtable hacks.
The content is served compressed with a proprietary compression format. So I need to receive the data, decompress it, and change the Content-Type back to what the original uncompressed file should have.
If there is a way to just filter all data received, that would probably be acceptable, I could handle parsing the header myself.
Thanks
I think I may have found what I needed. I came across this link which is used for tracing HTTP calls: http://blues.ath.cx/firekeeper/resources/http_tracer.html
There seems to be some problems with the JavaScript implementation for some reason, and I'm not a JavaScript guru to figure it out, but I've implemented it in C++ and initial results suggest that I should be able to modify it for my needs.
Basically we're replacing the nsIHttpProtocolHandler service with our own implementation, which keeps a reference to the initial implementation. When a call is made to the service, we just proxy it over to the saved original implementation. Then we provide our own implementation of nsIHttpChannel and nsIStreamListener which we use as proxies too.
Again we proxy most of the calls back off to the original handlers. But in OnDataAvailable, instead of passing the data on to the underlying nsIStreamListener, we save it using nsIStorageStream. Then in OnStopRequest, after we've gotten all of the data, we can decompress it and then call OnDataAvailable on the original handler, followed by OnStopRequest.
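A rough sketch of that buffering listener, assuming the classic pre-frozen Gecko SDK signatures of that era (DecompressProprietary() is a hypothetical stand-in for the proprietary decompressor, and the exact interface details may differ between SDK versions):

#include "nsIStreamListener.h"
#include "nsIStorageStream.h"
#include "nsStringStream.h"
#include "nsString.h"
#include "nsCOMPtr.h"

class ProxyListener : public nsIStreamListener
{
public:
  NS_DECL_ISUPPORTS   // NS_IMPL_ISUPPORTS2(ProxyListener, nsIStreamListener, nsIRequestObserver) goes in the .cpp

  ProxyListener(nsIStreamListener *aOriginal) : mOriginal(aOriginal) {}

  NS_IMETHODIMP OnStartRequest(nsIRequest *aRequest, nsISupports *aContext)
  {
    // Set up a storage stream to buffer the whole (compressed) body.
    nsresult rv = NS_NewStorageStream(4096, PR_UINT32_MAX, getter_AddRefs(mStorage));
    NS_ENSURE_SUCCESS(rv, rv);
    rv = mStorage->GetOutputStream(0, getter_AddRefs(mOutput));
    NS_ENSURE_SUCCESS(rv, rv);
    return mOriginal->OnStartRequest(aRequest, aContext);
  }

  NS_IMETHODIMP OnDataAvailable(nsIRequest *aRequest, nsISupports *aContext,
                                nsIInputStream *aStream,
                                PRUint32 aOffset, PRUint32 aCount)
  {
    // Copy the incoming bytes into our buffer instead of forwarding them.
    char buf[4096];
    while (aCount > 0) {
      PRUint32 toRead = aCount < sizeof(buf) ? aCount : (PRUint32)sizeof(buf);
      PRUint32 read = 0, written = 0;
      nsresult rv = aStream->Read(buf, toRead, &read);
      if (NS_FAILED(rv) || read == 0)
        return rv;
      mOutput->Write(buf, read, &written);
      aCount -= read;
    }
    return NS_OK;
  }

  NS_IMETHODIMP OnStopRequest(nsIRequest *aRequest, nsISupports *aContext,
                              nsresult aStatus)
  {
    mOutput->Close();

    // Hypothetical placeholder: decompress the buffered body.
    nsCString decompressed = DecompressProprietary(mStorage);

    // Replay the decompressed data to the original listener, then finish.
    nsCOMPtr<nsIInputStream> replay;
    NS_NewCStringInputStream(getter_AddRefs(replay), decompressed);
    mOriginal->OnDataAvailable(aRequest, aContext, replay, 0, decompressed.Length());
    return mOriginal->OnStopRequest(aRequest, aContext, aStatus);
  }

private:
  nsCOMPtr<nsIStreamListener> mOriginal;
  nsCOMPtr<nsIStorageStream>  mStorage;
  nsCOMPtr<nsIOutputStream>   mOutput;
};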
It has worked on some small simple tests so far, but I'll have to put it through some more rigorous tests... I'll also have to figure out if I can do the same thing with HTTPS.
The biggest problem I see at the moment is that it relies on some unfrozen interfaces such as nsIHttpChannelInternal. Can't be helped though as far as I can tell, and my version compatibility requirements are pretty small, so I can live with it if I have to.
In the meantime, if anybody has any other suggestions, I'm all ears :D