Convert WebDAV to FTP

I am wondering if it is possible to convert WebDAV to FTP.
I have already found that the POST and GET methods can be mapped to an FTP URL that sends or retrieves files (in my case, using IBM DataPower). Although I managed to get both methods working, I am having trouble getting a listing of the files in the FTP folder through WebDAV.
Could anyone give me a hint on what should travel in both the request and the response for the PROPFIND method? (DataPower v7 already supports non-standard HTTP methods.)
From what I saw here: http://msdn.microsoft.com/en-us/library/aa142960(v=exchg.65).aspx it seems like XML travels in both directions, so I might be able to do something with it. Am I right?
Thanks in advance :)
Regards!

Yes, indeed, it's XML.
The format is defined in RFC 4918.
There is also some help in this related SO question.
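To make that concrete, here is a minimal sketch (in Python, using the requests library; the URL and credentials are placeholders, not anything from the question) of a PROPFIND request asking a collection for its members. Depth: 1 asks for the collection plus its immediate children, and the server replies with a 207 Multi-Status XML document containing one <D:response> element per resource:

# Minimal sketch, not DataPower-specific: send a WebDAV PROPFIND and print the
# 207 Multi-Status response. The URL and credentials are placeholders.
import requests

PROPFIND_BODY = """<?xml version="1.0" encoding="utf-8"?>
<D:propfind xmlns:D="DAV:">
  <D:prop>
    <D:displayname/>
    <D:getcontentlength/>
    <D:getlastmodified/>
    <D:resourcetype/>
  </D:prop>
</D:propfind>"""

resp = requests.request(
    "PROPFIND",
    "https://webdav.example.com/folder/",
    data=PROPFIND_BODY,
    headers={"Depth": "1", "Content-Type": "application/xml"},
    auth=("user", "password"),
)
print(resp.status_code)  # expect 207 Multi-Status
print(resp.text)         # XML with one <D:response> per file or subfolder

In your case, the gateway would presumably build that Multi-Status XML response from whatever the FTP directory listing returns.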


Google Drive API v3: isn't there any way to get a download URL for a Google document?

The Google Drive API v2 to v3 migration guide says:
The exportLinks field has been removed from files. To export Google Documents, use the files.export method instead.
I don't want to export (download) the file right away; files.export will actually download the file. I want a link so I can download the file later. This was possible in v2 by means of exportLinks.
How can I accomplish the same in v3? If it is not possible, why was this useful feature removed?
Besides (a similar problem to the above), downloadUrl was also removed, and the suggested alternative (files.get with ?alt=media) downloads the file instead of providing a download link. Does this mean there is no way in v3 to get a public, short-lived URL for a file?
EDIT:
Is there no way in v3 to get a public, short-lived URL for a file?
For regular files, there apparently is.
This seems to work fine (a public, short-lived link to the file with its right name and contents):
https://www.googleapis.com/drive/v3/files/ID?alt=media&access_token=TOKEN
For Google Apps files, there is not (not even a private one, as the v2 exportLinks used to be).
https://www.googleapis.com/drive/v3/files/ID/export?mimeType=TYPE&access_token=TOKEN
Like the link for regular files, this URL is a short-lived link to the file contents, but it lacks the right file name.
By the way, I see the API is not behaving consistently: /drive/v3/files/FILEID delivers the right file name, but /drive/v3/files/FILEID/export does not.
I think the API itself should be setting the right Content-Disposition, as it apparently does when serving a /drive/v3/files/FILEID call.
This file-naming problem invalidates the workaround for the lack of exportLinks in v3.
The v2 exportLinks allowed me to link to a file (which is not the same as getting its content right away). Anyone logged in and with the proper permissions was able to access it, the link didn't need any access_token, and it wasn't short-lived. It was good and useful.
Building a link with a raw API call like /drive/v3/files/FILEID/export (with a mandatory access_token) would be a close enough workaround (it is temporary and public, not the same as before, but acceptable). However, the naming problem invalidates it.
In v2, regular files have a webContentLink and Google Apps files have exportLinks. In v3, exportLinks are gone, and I don't see any suitable alternative to them.
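For reference, here is a minimal sketch (Python with the requests library; FILE_ID, ACCESS_TOKEN, and the output file name are placeholders) of the alt=media call above for a regular file, sending the token in an Authorization header, which has the same effect as the ?access_token= query parameter:

# Sketch only: download a regular (non-Google-Apps) file via files.get with
# alt=media. FILE_ID and ACCESS_TOKEN are placeholders.
import requests

FILE_ID = "your-file-id"
ACCESS_TOKEN = "ya29.your-oauth2-access-token"

resp = requests.get(
    f"https://www.googleapis.com/drive/v3/files/{FILE_ID}",
    params={"alt": "media"},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
with open("downloaded_file.bin", "wb") as f:  # you choose the name client-side
    f.write(resp.content)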
Once you have queried for your file by ID, you can use the function getWebContentLink() to get the download link of the file (e.g. $file->getWebContentLink()).
I think you're placing too much emphasis on the word "method".
There is still a link to export a file: https://www.googleapis.com/drive/v3/files/fileIdxxxxx/export?mimeType=xxxxx/xxxxx. Make sure you URL-encode the MIME type.
E.g.
https://www.googleapis.com/drive/v3/files/1fGBQ81haNU_nEiC5GITZD3bxT0ppL2LHg-C0ubD4Q_s/export?mimeType=text/csv&access_token=ya29.Gmo0BMvO-pVEPKsiD9j4D-NZVGE91MChRvwOcBSg3cTHt5uAClf-jFxcovQScbO2QQhwHS95eSGW1eQQcK5G1UQ6oI4BFEJJkntEBkgriZ14GbHuvpDL7LT2pKA--WiPuNoDDIuZMm5lWtlr
These links form part of the API, so the expectation is that you've written a client that sends authenticated requests and deals with the response data. This explains why, if you simply paste the link into a browser without an access_token, it will fail. It also explains why the filename is export, i.e. it isn't intended that your client would ever use a filename; rather, it should receive the data as a stream. This SO answer discusses the situation in more detail: How to set name of file downloaded from browser?
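As a sketch of that export call (Python with the requests library; the file ID, MIME type, and token below are placeholders), note that requests URL-encodes the mimeType query parameter for you and that the client picks the output file name itself, which works around the naming issue discussed above:

# Sketch only: export a Google Document via files.export and name the result
# client-side. FILE_ID, ACCESS_TOKEN, and the MIME type are placeholders.
import requests

FILE_ID = "your-google-doc-id"
ACCESS_TOKEN = "ya29.your-oauth2-access-token"

resp = requests.get(
    f"https://www.googleapis.com/drive/v3/files/{FILE_ID}/export",
    params={"mimeType": "text/csv"},              # URL-encoded automatically
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
with open("report.csv", "wb") as f:               # the API won't suggest a name
    f.write(resp.content)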

PowerShell Exchange 2010: grab the body of an email and set it to a variable

I am trying to get the body of an email and set it to a variable with PowerShell by using Get-Mailbox. The reason I'm not doing it an easier way is that getting the body from Outlook is blocked on the network. I'm completely lost. I've tried export, but that goes to PST. I've tried doing a search query with logging, but that's a bust also. Anything to point me in the right direction would be great.
As far as I know, Get-Mailbox won't do that, unfortunately. It'll get you information about the mailbox, but not its contents. If you want to work with the contents of a mailbox and you can't use Outlook, your best bet is probably Exchange Web Services (EWS).
There is a way to do this, but it really depends on how much work you are willing to put in to make it possible.
The best way I could think of is using the EWS API. It's messy and it takes a while to learn, so you will probably need to put some time and effort into making the script (unless you can find someone else who already has).
Basically, I got all of these links by doing a Google search for "PowerShell EWS API".
Here is another similar question:
How to check an exchange mailbox via powershell?
Here is some more help with how to use the API (it's kinda tricky):
http://blogs.technet.com/b/heyscriptingguy/archive/2011/12/02/learn-to-use-the-exchange-web-services-with-powershell.aspx
http://www.xipher.dk/WordPress/?p=739
Here are some examples to work off of (the first one is closest to what you are looking for):
http://social.technet.microsoft.com/Forums/scriptcenter/en-US/335a888b-bf85-4a36-a555-71cc84608960/download-email-content-text-from-exchange-ews-with-powershell?forum=ITCG
http://social.technet.microsoft.com/Forums/exchange/en-US/0ad086bd-eb23-4ece-a362-696fa526a7e6/retrieve-messages-from-inbox-subfolder?forum=exchangesvrdevelopment
http://poshcode.org/2978
Hope that helps!
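The thread is PowerShell-centric, but as a rough illustration of the same EWS approach, here is a sketch in Python using the exchangelib package (a library not mentioned in the thread); the mailbox address and credentials are placeholders:

# Rough sketch of reading a message body over EWS with exchangelib, standing in
# for the EWS Managed API calls discussed above. Credentials are placeholders.
from exchangelib import Account, Credentials, DELEGATE

creds = Credentials(username="DOMAIN\\someuser", password="secret")
account = Account(
    primary_smtp_address="someuser@example.com",
    credentials=creds,
    autodiscover=True,
    access_type=DELEGATE,
)

# Take the newest message in the inbox and keep its body in a variable.
for item in account.inbox.all().order_by("-datetime_received")[:1]:
    subject = item.subject
    body = str(item.body)  # may be HTML depending on how the mail was sent
    print(subject)
    print(body)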

Handle concurrent file download with Flex/BlazeDS/Spring

I'm currently working on a Flex 3/BlazeDS/Spring/MySQL project.
In this project, some users need to download import logs. The problem is that, given Spring's singleton scope, if two users ask for a download at the same time, the servlet responsible for creating the export file may mix content between the two requested files.
I'm not that familiar with Spring, but from what I've been reading it seems the solution lies in declaring the bean with "request" scope, so a new one is created for each download request instead of a singleton.
Has anyone ever done something like this before? Every tutorial I've seen so far explains how to handle a file download request, but never talks about the fact that two users asking for a download at the same time may run into issues...
Thanks for any leads on how to fix this.
Each request is handled on its own thread, so you should not have any problems unless you are storing per-request state in member variables (which is bad practice anyway). If you are not, I do not see any problem, but it would help if you could post your code.
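To illustrate the point about member variables with a language-agnostic sketch (Python here, not Spring code; all names are made up): a single shared handler instance is safe under concurrent requests only if per-request data lives in local variables, not in fields of the shared object.

# Illustration only: per-request state in a shared, singleton-like object can be
# overwritten by a concurrent request; locals are private to each call.
import threading

class LogExporter:
    def __init__(self):
        self.current_user = None                # shared field: visible to all threads

    def export_unsafe(self, user, results):
        self.current_user = user                # another request may overwrite this
        results[user] = f"log for {self.current_user}"  # ...before this line runs

    def export_safe(self, user, results):
        current_user = user                     # local variable: per-call, per-thread
        results[user] = f"log for {current_user}"

exporter = LogExporter()                        # one instance, like a Spring singleton
results = {}
threads = [threading.Thread(target=exporter.export_safe, args=(u, results))
           for u in ("alice", "bob")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)                                  # each user gets their own content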

Ruby OAuth file upload/multipart POST request

I've been looking at this for a couple of days now and haven't found a solution. Is there a way to upload a file using oauth-ruby?
I am working with a REST system that protects its resources with OAuth. I am building a test tool in Ruby, using oauth-ruby, to make it easier to upload test data to the system, but I can't figure out how to upload files to the resources.
When I send a normal request everything works, but adding a file as a parameter makes the signature invalid.
Example:
#access_token.post("http://.../imageresource", {:name=>"awesome cat"}, {'Content-Type' => 'multipart/form-data'})
works but gives me:
<error>
<message>images/POST: Request has no file data</message>
</error>
I am not sure how to add a file to the post.
Any thoughts on this?
Thanks,
I know this is old, but I'm looking to do this too; this looks like it could do the trick.
Actually, there's a question, ruby-how-to-post-a-file-via-http-as-multipart-form-data, that has an example.
This is either impossible to do with the oauth gem or exceedingly difficult. Either way, I don't know of any way to do it using that gem.
It can be done trivially with my signet gem, as long as you have a handy way to construct a valid multipart request body. The construction of such a request body is out of scope for an OAuth gem, but should be pretty easy to do with most HTTP clients. The httpadapter gem can then translate the request into a form that signet can sign. Let me know if your preferred HTTP client isn't supported by httpadapter and I'll get that resolved immediately.
See the second example on the fetch_protected_resource method to get an idea of how this might be done.
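For comparison only (Python rather than Ruby, using the requests-oauthlib package, which is not mentioned in the thread): OAuth 1.0 leaves a non-form-encoded body out of the signature base string, so a library that signs only the OAuth parameters can upload multipart data without invalidating the signature. A minimal sketch, with all credentials and the endpoint as placeholders:

# Hypothetical illustration: sign an OAuth 1.0 request whose body is
# multipart/form-data. Only the OAuth parameters are signed; the file is not.
from requests_oauthlib import OAuth1Session

oauth = OAuth1Session(
    "CONSUMER_KEY",
    client_secret="CONSUMER_SECRET",
    resource_owner_key="ACCESS_TOKEN",
    resource_owner_secret="ACCESS_TOKEN_SECRET",
)
with open("cat.jpg", "rb") as f:
    resp = oauth.post(
        "http://example.com/imageresource",       # placeholder endpoint
        data={"name": "awesome cat"},
        files={"file": ("cat.jpg", f, "image/jpeg")},
    )
print(resp.status_code, resp.text)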

How can I implement a content converter in Firefox for all page elements?

I'm attempting to port over an Internet Explorer plugin to Firefox, but I'm not sure where to look for what I need.
Basically I need to be able to filter all content that is received by the browser with a certain Content-Type header. I tried implementing a stream converter, and this works, but only for the top-level document in the page, frame, or iframe. I had the same problem with IE, and getting around it was really hacky, and since I would ideally like this to be cross platform I would really like to be able to do this in Firefox without resorting to vtable hacks.
The content is served compressed with a proprietary compression format. So I need to receive the data, decompress it, and change the Content-Type back to what the original uncompressed file should have.
If there is a way to just filter all data received, that would probably be acceptable, I could handle parsing the header myself.
Thanks
I think I may have found what I needed. I came across this link which is used for tracing HTTP calls: http://blues.ath.cx/firekeeper/resources/http_tracer.html
There seem to be some problems with the JavaScript implementation for some reason, and I'm not enough of a JavaScript guru to figure them out, but I've implemented it in C++ and initial results suggest that I should be able to modify it for my needs.
Basically we're replacing the nsIHttpProtocolHandler service with our own implementation, which keeps a reference to the initial implementation. When a call is made to the service, we just proxy it over to the saved original implementation. Then we provide our own implementation of nsIHttpChannel and nsIStreamListener which we use as proxies too.
Again we proxy most of the calls back off to the original handlers. But in OnDataAvailable, instead of passing the data on to the underlying nsIStreamListener, we save it using nsIStorageStream. Then in OnStopRequest, after we've gotten all of the data, we can decompress it and then call OnDataAvailable on the original handler, followed by OnStopRequest.
It has worked on some small simple tests so far, but I'll have to put it through some more rigorous tests... I'll also have to figure out if I can do the same thing with HTTPS.
The biggest problem I see at the moment is that it relies on some unfrozen interfaces such as nsIHttpChannelInternal. Can't be helped though as far as I can tell, and my version compatibility requirements are pretty small, so I can live with it if I have to.
In the meantime, if anybody has any other suggestions, I'm all ears :D
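For what it's worth, here is a language-agnostic sketch (plain Python, not XPCOM) of the buffering flow described above; the class and method names are illustrative only, and zlib stands in for the proprietary decompressor:

# Structural sketch of the proxy listener described above: buffer every chunk,
# then decompress once and replay the data to the original listener. The real
# code implements nsIStreamListener in C++ and buffers into an nsIStorageStream.
import zlib

class BufferingListenerProxy:
    def __init__(self, original_listener, decompress=zlib.decompress):
        self.original = original_listener
        self.decompress = decompress
        self.buffer = bytearray()

    def on_data_available(self, chunk):
        self.buffer.extend(chunk)               # do not forward compressed data yet

    def on_stop_request(self):
        data = self.decompress(bytes(self.buffer))
        self.original.on_data_available(data)   # replay decompressed content
        self.original.on_stop_request()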
