Trying to update a URL using curl on Linux - shell

I am trying to update a ticket, and I am trying to do that through Linux. I am not sure whether it can be done, but while searching I found a blog post where the author was also trying to update something.
The author used the command below to update the URL.
curl -i -u "script-user:password" -X PUT -d "short_description=Update+me" https:/0005000972
I suspect the author is trying to update the URL with "Update me".
Is that right? Or can someone explain what the blogger was trying to do?

This is a RESTful request, that is, a request using the standard HTTP methods (GET, POST, PUT, DELETE) to query a server.
-X PUT specifies the method used
-d "...." specifies the data sent to the server
https://... (malformed in your example) is of course the URL of the target server
Usually, the PUT method is used to replace an existing attribute/value on the server. As the concrete parameters and/or methods available are service dependent, I can only guess here that the intent is to update some attribute named short_description to store the value Update me (URL encoded -- or more formally x-www-form-urlencoded).
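For reference, here is a sketch of what a well-formed version of that request could look like; the host and ticket path are hypothetical, since the URL in the original command is incomplete:
curl -i -u "script-user:password" -X PUT -d "short_description=Update+me" "https://example.com/api/tickets/0005000972"
This would ask the server to replace the short_description attribute of that ticket with the value "Update me".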
Maybe you should first read a little more about those topics, and then, if necessary, post another question describing in more detail both the target server and the goal you're trying to achieve on it.

Related

Uploading PDF to AWS Lambda via API Gateway mangles the bits...why?

I have deployed an AWS Lambda function, written in Python, and an AWS API Gateway structure to cause POST requests to an API endpoint to be routed to my function. I want to upload a PDF document to my function and have it store the document in an S3 bucket. The problem I have is that the payload of any POST request to my API is being UTF-8 encoded. I don't want that but can't figure out the magic mojo to disable encoding of the request payload.
I am testing using curl, with the following command line:
curl -XPOST https://xxxxxxxxxx.execute-api.us-west-1.amazonaws.com/test -H 'content-type: application/pdf' --data-binary @document.pdf
UPDATE: I just found the following article describing how API Gateway and Lambda support uploading binary data:
https://aws.amazon.com/blogs/compute/handling-binary-data-using-amazon-api-gateway-http-apis/
This article suggests that all of the complexities that I discussed in the initial formation of my question (still provided below) should not be necessary. All I should need to do to upload binary content to my Lambda function is ensure that my request includes an appropriate Content-Type header. I was already doing that, but I massaged my curl command a bit (modified above) to define my request exactly the way it is done in the article. I still get UTF-8 encoded data and NOT base64-encoded data. I tried uploading a JPEG file rather than a PDF so I was doing exactly what was done in the article. Still no luck: the article demonstrates exactly what I'm doing, but I don't get the result it suggests I should.
ORIGINAL POST:
I am using Terraform to define my deployment. I want to cause the PDF to not be encoded/mangled at all. This is my first time using API Gateway, and I'm obviously missing some bit of config. The one thing I'm doing specifically right now to say that I want incoming payloads to be treated as binary is via the binary_media_types argument to my API definition in Terraform:
resource "aws_api_gateway_rest_api" "proxy" {
  ...
  binary_media_types = [
    "application/pdf",
    "application/octet-stream",
    "*/*"
  ]
}
This sets the Binary Media Types configuration associated with the API I've defined. I've confirmed via the AWS Console that this setting is having the desired effect...I can see these types in the console. I should need just the first item in the list, but I've added the others while I try to figure out the problem here. By adding that wildcard item, I believe it shouldn't matter what the incoming Content-Type is...all payloads should be treated as binary.
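If it helps to double-check outside the console, the binary media types actually configured on the deployed API can also be read back with the AWS CLI (the API ID below is a placeholder):
aws apigateway get-rest-api --rest-api-id abc123xyz
The output includes the binaryMediaTypes list as currently applied.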
The other bit of config that I know about that might be important is the integration's contentHandling property. The key bit of the AWS docs that seems to explain all this is the contentHandling conversion table (attached as a screenshot in the original post).
I think the case that applies to me here is the one I've highlighted, per what I say above. This says to me that I shouldn't need to do anything else, per the "unspecified" value in the contentHandling column of the table. I've tried setting the content_handling argument on the integration record of my Terraform config, like this:
resource "aws_api_gateway_integration" "proxy" {
  ...
  passthrough_behavior = "WHEN_NO_MATCH"
  content_handling     = "CONVERT_TO_BINARY"
}
I first tried only specifying the content_handling value. I've also tried setting that value to "CONVERT_TO_TEXT", hoping to then get base64-encoded data. Neither of these has any effect. I've tried adding the passthrough_behavior value as shown. I've also tried replacing "WHEN_NO_MATCH" with "WHEN_NO_TEMPLATES". Nothing I do changes the behavior. I haven't been able to figure out where these settings would show up in the AWS console. If I knew they were necessary, I'd explore this further. But I don't think I need to set these.
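As a side note, the integration settings can be read back with the AWS CLI, which may help confirm whether Terraform actually applied them (the IDs below are placeholders):
aws apigateway get-integration --rest-api-id abc123xyz --resource-id a1b2c3 --http-method POST
The output includes the contentHandling and passthroughBehavior values currently in effect.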
What am I missing? How can I POST a PDF document to my AWS Lambda function through API Gateway and have the payload of the request not be converted in any way? TIA!
NOTE: I am aware of this Q/A: PDF Uploaded via AWS API Gateway getting corrupted. The answer there doesn't apply to me, as I need to avoid having to form-encode the upload. The client code that will eventually be doing the upload is set in stone and sends a POST request with a payload that is just the bytes of the PDF.

Use GET params to provide the Web interface with a specific text to annotate

I would like to link to my instance of the CoreNLP server, with a specified text (and possibly a specified set of annotators), i.e. without having to paste the text and then click on Submit.
Is there a way to do this?
(I know and use the API version, but I'm looking for the Web visualisation)
No, the current visualization doesn't let you specify text in the URL (though pull requests are always welcome; the source code lives here).
The server does respond to regular POST requests, e.g., if you want to call CoreNLP from your own webpage through JavaScript. For example, here is the curl command given on the documentation page:
curl --data 'The quick brown fox jumped over the lazy dog.' 'http://localhost:9000/?properties={%22annotators%22%3A%22tokenize%2Cssplit%2Cpos%22%2C%22outputFormat%22%3A%22json%22}' -o -
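Decoded, the URL-encoded properties parameter in that command is just this JSON:
{"annotators":"tokenize,ssplit,pos","outputFormat":"json"}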
The visualization is done with more or less vanilla brat.
So, answering my own question: this is now possible (I believe from CoreNLP 3.8), after a successfully merged pull request from yours truly.
This is the relevant pull request: https://github.com/stanfordnlp/CoreNLP/pull/423

Ueberauth plug fails to assign auth and failure to conn

I'm utilizing Ueberauth in a library I'm building to simplify the addition of authentication in Phoenix.
I've set up an example repo and you can see the code in the v2 branch.
I’m attempting to implement the Ueberauth Identity strategy right now. It appears that although I’ve added the Ueberauth plug to the controller, the ueberauth_failure and ueberauth_auth are not being set in the conn’s assigns. Instead, conn.assigns is %{}.
I’m expecting that with this example curl request:
curl -i -X POST localhost:4000/identity/callback -d '{"user": {"email": "test@example.com", "password": "password"}}'
at least one of those (ueberauth_failure or ueberauth_auth) would be set.
Ueberauth is configured here: https://github.com/britton-jb/test_sentinel_example/blob/master/config/config.exs#L58
The controller implementing the strategy is https://github.com/britton-jb/sentinel/blob/v2/lib/sentinel/controllers/json/session_controller.ex
The controller and route are exposed in https://github.com/britton-jb/test_sentinel_example/blob/master/web/router.ex#L26
by this macro: https://github.com/britton-jb/sentinel/blob/v2/lib/sentinel.ex#L50
@hassox suggested the correct solution: Ueberauth Identity's plug must be routed to /auth/identity and /auth/identity/callback. As soon as I changed the routing to reflect this, everything worked.
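For anyone hitting the same issue, here is a sketch of the example request against the corrected route (the explicit JSON Content-Type header is my addition, so that Phoenix parses the body as JSON):
curl -i -X POST localhost:4000/auth/identity/callback -H 'Content-Type: application/json' -d '{"user": {"email": "test@example.com", "password": "password"}}'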

Dredd can't find my API documentation, how do I tell it where it is if it's not on my local drive (it's on the apiary.io server)

I am using the Dredd tool to test my API (which resides on apiary.io).
Question
I would like to provide Dredd with a path to my documentation (it even asks for it); however, my API doc is on apiary.io and I don't know the exact URL that points to it. What would be the correct way to provide Dredd with the API path?
What did work (but not what I'm looking for)
Note: I tried downloading the API to my local drive and providing Dredd with a local path to the file (yml or apib), which works fine (yay!), but I would like to avoid keeping a local copy and simply provide Dredd with the location of my real API doc, which is maintained on the Apiary server.
How do I do this (without first fetching the file to local drive)?
Attempts to solve this that failed
I also read about (and tried) the following topics; they may be relevant, but I wasn't successful in resolving the issue:
- Using authentication token as environment variable
- Providing the domain provided by apiary.io//settings to dredd
- Providing the in the dredd command
All of these attempts produce the same result: Dredd has no idea where to find the API document unless I provide a path to a file on my local computer (which I have to download or create manually first).
Any help is appreciated, Thanks!
If I understand correctly, you would like to use Dredd and feed it the API description document residing on the Apiary.io platform, right?
If so, you should be able to do that by simply calling the init command with the right options:
dredd init -r apiary -j apiaryApiKey:privateToken -j apiaryApiName:sasdasdasd
You can find the private token going into the Test section of the target API (you'll find the button on the application header).
Let me know if this solves the problem for you - I'll make sure to propagate this and document it accordingly on our help page
P.S: You can also use your own reporter - in that case, simply omit -r apiary when writing the command line parameters.
You can feed Dredd not only with a path to a file on your disk, but also with a URL.
If your API in Apiary is public, the API description document (in this case, API Blueprint) should have a public URL. For example, if you go to http://docs.apiblueprintapi.apiary.io/, you can see there is a Download link on the left. Unfortunately, the link is visible only for users who do not have access to the editor of the API, so you can't see the link if you're the owner of the API. Try to log out from Apiary and the link should appear.
Then you can feed Dredd with the link:
$ dredd 'http://docs.apiblueprintapi.apiary.io/api-description-document' 'http://example.com:8080/api'
I agree this isn't very intuitive, and since you're not the first one to come up with this, I think we'll look into ways to make it easier.
If your API isn't public then unfortunately there's no way to get the URL as of now. However, you can either use GitHub Sync or Apiary CLI to get the file on your disk in an automated manner.
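As a sketch of the Apiary CLI route (the exact fetch invocation here is an assumption on my part; check apiary help for the real flags), assuming an API named yourapiname:
# fetch the blueprint for the API, then run Dredd against it
apiary fetch --api-name=yourapiname > apiary.apib
dredd apiary.apib http://example.com:8080/api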

Can I declare credentials only once for a REST API?

I am using Power Query within Power BI Designer to query a REST API. The first request is to:
http://domain/httpAuth/app/rest/server
which returns:
<server>
  <builds href="/httpAuth/app/rest/builds"/>
</server>
From there I use Power Query to query http://domain/httpAuth/app/rest/builds in order to get a list of builds and then iterate over the list of builds, calling each one in turn. The format of the URL for each build is:
http://domain/httpAuth/app/rest/builds/id:buildId
The problem is I'm getting prompted to enter credentials for every single request. This is tedious and unworkable (we have a lot of builds).
Is there a way to define the credentials once for (say) the stub http://domain/httpAuth/app/rest and have every resource under that stub use the same credentials?
At the moment there is no direct way to do this for HTTP sources. A workaround for now is to connect to the root source first (http://domain/httpAuth/app/rest/builds or just http://domain/) and set the credentials there.
If you trust all of the data sources you are connecting to, you can also disable the firewall by going to the Workbook Settings dialog and selecting the Ignore option for Fast Combine.
EDIT: Sorry, I misread the question. In the case of credentials, connect to the root source first and set the credential there. This credential should be used for the remaining URLs.
I believe you can set an Authorization header and send it with your request.
(Apologies for the Wiki link - http://en.wikipedia.org/wiki/Basic_access_authentication)
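To illustrate, the Basic scheme described there is just "user:password" base64-encoded into an Authorization header, which can then be sent with every request, e.g. with curl (credentials are placeholders):
# the same Authorization header goes out on each request, so nothing prompts for credentials
curl -H "Authorization: Basic $(printf 'user:password' | base64)" http://domain/httpAuth/app/rest/builds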
