Apache Abdera Client - Posting/Putting

Looking for help using the Apache Abdera Atom client. I am trying to POST and PUT files to a feed, but I am getting a 400 error saying that the content type must be application/x-www-form-urlencoded or multipart/form-data.

You need to check where you are posting to. Abdera posts Atom XML. It looks like you are posting to the same URL your web form posts to, which expects an HTML form submission, i.e. x-www-form-urlencoded. Check whether and where your blog server software supports the Atom Publishing Protocol.
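For reference, a minimal sketch of what posting an entry with the Abdera client looks like, assuming the target actually exposes an AtomPub collection (the collection URL below is a made-up placeholder):

import org.apache.abdera.Abdera;
import org.apache.abdera.model.Entry;
import org.apache.abdera.protocol.client.AbderaClient;
import org.apache.abdera.protocol.client.ClientResponse;

public class PostAtomEntry {
    public static void main(String[] args) throws Exception {
        Abdera abdera = new Abdera();
        Entry entry = abdera.newEntry();
        entry.setTitle("Test post");
        entry.setContent("Hello from Abdera");
        entry.setUpdated(new java.util.Date());
        entry.addAuthor("someone");

        AbderaClient client = new AbderaClient(abdera);
        // Abdera serializes the entry as Atom XML (application/atom+xml),
        // which is what an AtomPub collection endpoint expects -- not the
        // form-encoded body an HTML form submission would send.
        ClientResponse resp = client.post("http://example.com/blog/atom/collection", entry);
        System.out.println(resp.getStatus() + " " + resp.getStatusText());
        resp.release();
    }
}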

Related

413 request entity too large jetty server

I am trying to make a POST request to an endpoint served by a Jetty server. The request errors out with 413 Request Entity Too Large, but the Content-Length is only 70 KB, which is well below the default limit of 200 KB.
I have tried serving via an nginx server and setting client_max_body_size to the desired level, but that didn't work. I have set setMaxFormContentSize on the WebContext, and that didn't help either. I also followed https://wiki.eclipse.org/Jetty/Howto/Configure_Form_Size, and that didn't help me either.
Does anyone have any solution to offer?
wiki.eclipse.org is OLD and only covers Jetty 7 and Jetty 8 (both long since EOL / End of Life). The giant red box at the top of the page you linked even tells you this, and gives you a link to the up-to-date documentation.
If you see a "413 Request entity too large" from Jetty, then it refers to the Request URI and Request Headers.
Note: some 3rd-party libraries outside of Jetty's control can also call HttpServletResponse.sendError(413), which would result in the same response status message as you reported.
Judging by your screenshot, which does not include all of the details (it's really better to copy/paste the text when asking questions on Stack Overflow; screenshots often hide details that are critical to getting a direct answer), your Cookie header is massive and is causing the 413 by pushing the Request Headers over 8 KB in size.
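If you control the Jetty setup and the oversized headers are legitimate (for example, you cannot shrink that cookie), the relevant knob is the request header buffer on the connector, not setMaxFormContentSize. A minimal sketch, assuming embedded Jetty 9.x or later (adjust to however your server is actually wired up):

import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class LargeHeaderServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server();

        // setMaxFormContentSize only limits form POST bodies; the 413 on the
        // request line and headers is governed by the request header buffer.
        HttpConfiguration config = new HttpConfiguration();
        config.setRequestHeaderSize(16 * 1024); // default is 8 KiB

        ServerConnector connector = new ServerConnector(server, new HttpConnectionFactory(config));
        connector.setPort(8080);
        server.addConnector(connector);

        server.start();
        server.join();
    }
}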

How do I Create a Custom Connector in Microsoft Flow with the correct request URL?

I am attempting to create a custom connector for the Clio API (https://app.clio.com/api/v4/documentation). I was able to successfully authenticate and access the API in Postman, testing out quite a few different types of requests with good results.
Then I exported the collection to a Postman file and imported it into a new custom connector in my MS Flow account as instructed at https://learn.microsoft.com/en-us/connectors/custom-connectors/define-postman-collection. As part of that process, I entered the following settings:
Scheme: HTTPS
Host: app.clio.com
Base URL: /
Within the custom connector requests, all the definitions looked acceptable, except that the request URLs were not fully qualified: they did not include https://app.clio.com.
For example, one request should use the following address:
https://app.clio.com/api/v4/contacts.json
The field in MS Flow where the URL should be entered is grayed out and contains only /api/v4/contacts.json.
The grayed-out field cannot be typed in. Instead, I clicked "Import from sample," which leads to a window where I can type in the fully qualified URL. After I do that and click the "Import" button, the window still lists only the partial URL as shown above.
At first I thought that was intentional, since I had entered the host elsewhere for the connector, and I thought that Flow would put them together to send the request to the right URL. But it did not: when I tested the operation, I got a 404 error:
{
"error": "{\r\n \"code\": 404,\r\n \"message\": \"Unable to match incoming request to an operation.\",\r\n \"source\": \"msmanaged-na.azure-apim.net\",\r\n \"path\": \"\",\r\n \"clientRequestId\": \"500779d5-356d-4c79-bf96-caf2-f5bc2919\"\r\n}"
}
When I looked at the request, this is the URL:
https://msmanaged-na.azure-apim.net/apim/clio2.5fb03ce8462066f352.5fdeb6bc35b813689d/92053762-68ce-4c1d-9085-0785-0fd98c3b/api/v4/contacts.json?type=Person
So obviously Flow is not using the correct request URL, and I cannot figure out how to enter the fully qualified request URL. Can anybody tell me what I am doing wrong?
I found another comment where someone else is having the same problem: https://stackoverflow.com/a/48813209/7191369 so I'm not the only one. Thanks in advance for your help.
Edit:
After some additional searching, the address in the request (with https://msmanaged-na.azure-apim.net) is the required redirect URL for the proxy per this post: https://powerapps.microsoft.com/en-us/blog/custom-api-with-authentication/, and is used when processing OAuth. But the crappy part of this is that I can't see the request URL so I can't troubleshoot. Is there any way to see what request the proxy server is sending out to the Clio API?
It's been a while since this question was posted, but let me offer a suggestion: include the /api/v4 part of the URL in the connector's Base URL property. That way all of your endpoints will use the specified version and you will not have to define it one by one in each request.
Unless, of course, you intentionally want to use different versions across requests :) Anyway, I'm glad you've been able to resolve the issue.

ServiceStack on Heroku with PostgreSQL database JSON return format error

I set up ServiceStack on Heroku with PostgreSQL (following http://friism.com/running-net-on-heroku), but the JSON comes back in the wrong format. Here is the JSON:
344
[{"Id":3,"Code":"EUR","Name":"Đồng EUR","RecVersion":3,"RecId":3,"RecCreated":"2013-09-13T07:29:30.7228990","RecCreatedBy":1,"Status":"1","RecModified":"2013-09-13T07:29:30.7228990","RecModifiedBy":1},{"Id":4,"Code":"JPY","Name":"Đồng Yên Nhật","RecVersion":4,"RecId":4,"RecCreated":"2013-09-13T07:29:30.7275190","RecCreatedBy":1,"Status":"1","RecModified":"2013-09-13T07:29:30.7275190","RecModifiedBy":1},{"Id":2,"Code":"USD","Name":"Đồng Đôla Mỹ","RecVersion":2,"RecId":2,"RecCreated":"2013-09-13T07:29:30.7183870","RecCreatedBy":1,"Status":"1","RecModified":"2013-09-13T07:29:30.7183870","RecModifiedBy":1},{"Id":1,"Code":"VND","Name":"Đồng Việt Nam","RecVersion":1,"RecId":1,"RecCreated":"2013-09-13T07:29:30.7121270","RecCreatedBy":1,"Status":"1","RecModified":"2013-09-13T07:29:30.7121270","RecModifiedBy":1}]
0
It includes the extra 344 and 0.
But if I follow https://github.com/kunjee17/ServiceStackHeroku, the returned JSON is OK (http://thawing-shelf-3867.herokuapp.com/Rockstars?format=json).
How do I fix this?
Thank you very much.
The JSON itself doesn't include the extra numbers: if you click 'view json datasource' you'll see the JSON you expect. So it's not the JSON format that's wrong.
If you look at the response (using the Chrome Dev Tools Network tab, for example), you will see that the extra numbers are outside the HTML tags.
Something is adding extra characters to the ServiceStack response before it's delivered to the client. I'd focus on your config settings. Perhaps it's related to a character encoding not being set correctly in nginx on Heroku?
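If the browser tools are inconclusive, one way to see exactly which bytes arrive on the wire, including anything a proxy in front of the app adds, is to issue a raw HTTP request and dump the reply untouched. A rough sketch (Java is used here purely as a generic HTTP probe; the host and path are placeholders for your Heroku app and endpoint):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class RawResponseDump {
    public static void main(String[] args) throws Exception {
        // Placeholders: replace with your Heroku app and JSON endpoint.
        String host = "your-app.herokuapp.com";
        String path = "/currencies?format=json";

        try (Socket socket = new Socket(host, 80);
             PrintWriter out = new PrintWriter(socket.getOutputStream());
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))) {

            // A minimal HTTP/1.1 request; the raw reply shows every byte the
            // server (or anything in front of it) sends, before a browser or
            // client library strips transfer framing.
            out.print("GET " + path + " HTTP/1.1\r\n");
            out.print("Host: " + host + "\r\n");
            out.print("Connection: close\r\n\r\n");
            out.flush();

            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}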

Downloading file with Express

I'm using Express and I need to download a file from the server. I can download it easily enough with a direct link, but there are some query parameters that I want to hide from the user.
So now I'm trying to use jQuery.ajax to send a request to Express (srv1), and then Express sends a request with my parameters to another server (srv2). That server responds with a 'Content-Disposition' header and the file data, which is fine. The question is: can I take that file and respond with it to my initial ajax request?
The problem is that even res.download() with files that are already on srv1 doesn't work. Express sets the headers correctly, but the browser never prompts to download the file. Maybe the problem is in the ajax?
You cannot cause the browser to perform a file download with a JavaScript ajax request (this is a security limitation). See https://stackoverflow.com/a/9970672/266795 for details. You'll need a normal browser GET or POST request to get a proper file-save dialog.

How can I scrape an image that doesn't have an extension?

Sometimes I come across an image that I can't scrape so that it can be saved. An example of this is:
https://s3.amazonaws.com/plumdistrict.com-production/perks/12321/image/original.?1325898487
When I hit the URL from Internet Explorer I see the image, but when I try to get it with the code below, GetResponse throws "System.Net.WebException: The remote server returned an error: (403) Forbidden":
string url = "https://s3.amazonaws.com/plumdistrict.com-production/perks/12321/image/original.?1325898487";
WebRequest request = WebRequest.Create(url);
WebResponse response = request.GetResponse();
Any ideas on how to get this image?
Edit:
I am able to save images that do have extensions. For example, I can scrape the following image just fine:
https://s3.amazonaws.com/plumdistrict.com-production/perks/12659/image/original.jpg?1326828951
Although HTTP was originally supposed to be stateless, a lot of implementations rely on state anyway. I could configure my web server to only accept requests for "http://mydomain.com/sexy_avatar.jpg" if you provide a cookie proving you were logged in, and if not, send you a 303 redirect to "http://mydomain.com/avatar_for_public_use.jpg".
Amazon could be doing the same. Try loading the page in Chrome and looking at the Network view in the developer tools (Ctrl+Shift+J) to see all the headers supplied to the website. Maybe you even need to do a full navigation in the same session before you are allowed to see the image. This is certainly the case in many web applications I have developed :-)
Well, it looks like it's being generated from a script (possibly retrieved from a database). The server should be sending a file/content type to go along with that... but it doesn't seem to be, which I believe is a violation of the standards.
My Linux box knows full well that it's a JPEG image once it's on my hard drive, because it examines file headers rather than relying on extensions. Perhaps there is a tool to do the same in Windows?
Edit: Actually, on further contemplation, it seems odd that you'd get a 403 for that. Perhaps the server is actually blocking you from retrieving the file in that manner.
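On the file-header point: once you have the bytes, the leading "magic" bytes identify the format without any extension. A small illustrative sketch (written in Java here; the same check can be ported to any language):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class SniffImageType {
    // Identify a JPEG/PNG/GIF by its leading "magic" bytes instead of its extension.
    static String sniff(InputStream in) throws IOException {
        byte[] head = new byte[8];
        int n = in.read(head);
        if (n >= 3 && (head[0] & 0xFF) == 0xFF && (head[1] & 0xFF) == 0xD8 && (head[2] & 0xFF) == 0xFF) {
            return "image/jpeg";
        }
        if (n >= 4 && (head[0] & 0xFF) == 0x89 && head[1] == 'P' && head[2] == 'N' && head[3] == 'G') {
            return "image/png";
        }
        if (n >= 3 && head[0] == 'G' && head[1] == 'I' && head[2] == 'F') {
            return "image/gif";
        }
        return "unknown";
    }

    public static void main(String[] args) throws IOException {
        try (InputStream in = new FileInputStream(args[0])) {
            System.out.println(sniff(in));
        }
    }
}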
