How can I query SPARQL endpoints in a Spring web app?

I'm trying to build a web application that will allow a user to write a SPARQL query and select an endpoint; the app will then send the query to the selected endpoint. Is there a simple way to do this?
As far as I understand, you can send GET and POST requests to an endpoint. A way to send a query to https://dbpedia.org/ for example, is with the following URL: https://dbpedia.org/sparql?default-graph-uri=http%3A%2F%2Fdbpedia.org&query=select+distinct+%3FConcept+where+{[]+a+%3FConcept}+LIMIT+100&format=text%2Fhtml&timeout=0 - where the first part of the URL is the selected endpoint and the rest is the entered query. My initial idea was to construct URLs this way and then fetch the results. However, I'm sure there has to be a simpler, more efficient way to do this. I looked for solutions online, but found very little information on this.
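One common approach is to let a SPARQL library build and send the request for you instead of assembling URLs by hand. Below is a minimal sketch using Apache Jena's ARQ module (an assumption on my part: it requires the org.apache.jena:jena-arq dependency), running the same query against DBpedia:

```java
import org.apache.jena.query.Query;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;

public class SparqlEndpointClient {
    public static void main(String[] args) {
        // Endpoint and query as the user would enter them in the web form.
        String endpoint = "https://dbpedia.org/sparql";
        String queryString = "SELECT DISTINCT ?Concept WHERE { [] a ?Concept } LIMIT 100";

        Query query = QueryFactory.create(queryString);
        // sparqlService builds and sends the HTTP request (encoding included).
        try (QueryExecution qexec = QueryExecutionFactory.sparqlService(endpoint, query)) {
            ResultSet results = qexec.execSelect();
            while (results.hasNext()) {
                QuerySolution solution = results.next();
                System.out.println(solution.get("Concept"));
            }
        }
    }
}
```

Since the library handles URL encoding, content negotiation, and result parsing, a Spring controller could simply accept the endpoint URL and query string as request parameters and pass them straight through.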

Related

API Gateway: can I POST to a method/resource with an API key, but by providing the key in the URL params instead of a header?

So, I set up a couple of Lambdas and the API Gateway, and got it all working! Cool, so the next step was to require an API key. OK, cool; there are plenty of resources out there on how to set that up.
I got that working as well and could POST using Postman and Python (requests). I can provide the 'x-api-key' in the headers of the POST and it works, no issues.
HOWEVER, and here's the problem: the program I'm ultimately going to be using to POST to my gateway API doesn't allow you to edit the details of your POST. The program is called Splunk. Basically, it posts some payload for you; the headers/auth/body can't be edited, it just sends some pre-configured thing. You just provide the endpoint and it does the rest. This works if I do not require an API key.
So I started thinking: OK, no huge problem, I have seen APIs before where you provide the API key in the URL and it authenticates you fine. So this would be something like:
https://exampleAPI/sendmydata?x-api-key=12345
However, I cannot get this to work in AWS for the life of me, and I haven't found anything by googling. Is this even possible?
Thank you!
If you must use API key usage plans, you could consider having the Lambda behind API Gateway endpoint A (LambdaA) read the API key posted as a URL parameter and proxy the request, with the key moved into the relevant header, to API Gateway endpoint B.
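A rough sketch of what such a LambdaA could look like in Java (assumptions: the aws-lambda-java-events types are on the classpath, and ENDPOINT_B is a placeholder for your key-protected endpoint):

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Proxy handler: endpoint A (no key required) -> endpoint B (key required).
public class KeyProxyHandler
        implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    // Placeholder URL for the key-protected API Gateway endpoint B.
    private static final String ENDPOINT_B =
            "https://example.execute-api.us-east-1.amazonaws.com/prod/sendmydata";

    private final HttpClient client = HttpClient.newHttpClient();

    @Override
    public APIGatewayProxyResponseEvent handleRequest(
            APIGatewayProxyRequestEvent event, Context context) {
        try {
            // Read the key from the query string, e.g. ...?x-api-key=12345
            String apiKey = event.getQueryStringParameters().get("x-api-key");

            // Forward the original body, moving the key into the header
            // where API Gateway usage plans expect it.
            HttpRequest request = HttpRequest.newBuilder(URI.create(ENDPOINT_B))
                    .header("x-api-key", apiKey)
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(event.getBody()))
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            return new APIGatewayProxyResponseEvent()
                    .withStatusCode(response.statusCode())
                    .withBody(response.body());
        } catch (Exception e) {
            return new APIGatewayProxyResponseEvent()
                    .withStatusCode(502)
                    .withBody(e.getMessage());
        }
    }
}
```

Note the trade-off: endpoint A itself is then effectively unauthenticated, so you are relying on the key check happening at endpoint B.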

ServiceNow Scripted REST API GET with Body

I set up a GET scripted REST API. However, when I try to send a GET request with a body, ServiceNow complains (before the request even hits my code) that a GET is not allowed to have a body.
Is there a way to disable this restriction? For now, as a temporary workaround, I've converted the request into a POST. However, this request does not change any state, so I believe it should be a GET; it only searches for existing items.
GET is used without a body; any configuration of a GET goes in the URL and headers. A query URL looks like this:
https://instance.service-now.com/api/now/table/problem?sysparm_query=active=true^ORDERBYnumber^ORDERBYDESCcategory&sysparm_limit=1
See the documentation here:
https://developer.servicenow.com/app.do#!/rest_api_doc?v=madrid&id=r_TableAPI-GET
Generally it's OK to use a POST to fetch data (GraphQL does this, for example), but I think ServiceNow is locked to body-less GETs.
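To make that concrete, here is a hedged Java sketch of moving the search criteria out of the body and into an encoded sysparm_query parameter ("instance" is a placeholder for your ServiceNow instance name, and authentication is omitted for brevity):

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class SnowTableQuery {
    public static void main(String[] args) throws Exception {
        // The search criteria that would have gone into the GET body
        // instead become a URL-encoded sysparm_query parameter.
        String query = URLEncoder.encode(
                "active=true^ORDERBYnumber^ORDERBYDESCcategory",
                StandardCharsets.UTF_8);
        String url = "https://instance.service-now.com/api/now/table/problem"
                + "?sysparm_query=" + query + "&sysparm_limit=1";

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Accept", "application/json")
                .GET()  // no body; everything rides in the URL and headers
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```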

api.ai Fulfillment POST requests don't append the action to the POST URL

Currently, all the fulfillment requests originating from api.ai are POST requests to the base URL configured in the api.ai Fulfillment section. But to have proper routing (microservice-style) set up on the server side, it would be far more useful to append the action to the POST URL.
For a substantially large project there can be hundreds of fulfillment actions, and managing all of them in a single monolithic project is cumbersome. If the action came in the URL, we could configure and organise the actions into multiple cloud functions (in the case of Firebase hosting) or server-side microservices.
Edit:
As answered by matthewayne, I can use my own proxy set-up to route the requests and achieve the goal. But I don't want to introduce any additional delay into request processing, because I'm expecting a huge number of webhooks to be fired. This would be a very easy feature for the Google api.ai team to incorporate and would allow for much greater flexibility! Hence I'm expecting an answer from the Google team!
Currently this isn't possible with API.AI's webhook design. I'd recommend setting up a proxy service that unpacks the webhook requests from API.AI, inspects the action, sends the proper request to the proper microservice endpoint, and then forwards the response back to API.AI once the microservice has returned its result.
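A minimal sketch of such a proxy, written here as a Spring Boot controller (assumptions: Jackson handles the JSON, the action sits at result.action as in the v1 webhook payload, and the route URLs are placeholders):

```java
import com.fasterxml.jackson.databind.JsonNode;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

import java.util.Map;

// Receives every API.AI webhook at one URL, reads the action from the
// payload, and forwards the request to the matching microservice.
@RestController
public class WebhookProxyController {

    // Placeholder action -> microservice mapping.
    private static final Map<String, String> ROUTES = Map.of(
            "order.create", "https://example.com/services/order",
            "weather.get",  "https://example.com/services/weather");

    private final RestTemplate rest = new RestTemplate();

    @PostMapping("/webhook")
    public String route(@RequestBody JsonNode payload) {
        // v1 webhook payloads carry the action under result.action.
        String action = payload.path("result").path("action").asText();
        String target = ROUTES.getOrDefault(action,
                "https://example.com/services/fallback");
        // Forward the untouched payload and hand the service's
        // response straight back to API.AI.
        return rest.postForObject(target, payload, String.class);
    }
}
```

The extra network hop is exactly the added latency the edit above worries about, so the proxy should live as close to the microservices as possible.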

Save responses from certain web resources while recording a scenario

I need to create a scenario for user interaction with a single-page web application. The application does lots of AJAX calls to authenticate the user and fetch user data.
So I created a simple scenario with the HTTP(S) Test Script Recorder and tried to record my script.
Everything went well; however, I noticed that while request data is recorded properly, the response data is not recorded at all. I tried enabling Add assertions and Regex matching, but that didn't work either.
Can you please advise how I can record response texts as well?
A View Results Tree listener added under the proxy will record requests and responses during recording.
This is useful for understanding where a dynamic field comes from: it helps you find from which response X you need to extract data to inject into request X+N.
I think you may find this option useful to add in user.properties:
proxy.number.requests=true
This gives each request and its corresponding sampler a number, so you will be able to match each response to its request.
Once you have done this, you can start adding Post-Processors (Regex, CSS/JQuery, XPath, ...) to sampler X to extract data from its response.
This will create variables you can then use as ${varName} in request X+N.
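For illustration, a Regular Expression Extractor added as a child of sampler X might be configured like this (the variable name and pattern are hypothetical, for a CSRF-style token in an HTML response):

```
Regular Expression Extractor (child of sampler X)
  Name of created variable:  authToken                      (hypothetical)
  Regular Expression:        name="token" value="(.+?)"
  Template:                  $1$
  Match No.:                 1
```

Request X+N then references the extracted value as ${authToken} in its parameters or body.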

Sending POST data via JavaScript to PHP safely

I'm trying to work around the fact that Twitter enforces a call limit, by letting the client make the call to Twitter (with parameters given by my script, like last_id, username, etc.) and hand me the newly found tweets by posting them via an AJAX request, which I then store in my database.
However, if someone figures out the parameters being sent from my getTweets JavaScript function to my save_tweets.php as a JSON array, it's not that hard to post arbitrary data with an extension like the REST Console in Chrome.
Obviously I want the tweets to be legitimate and not manipulable (or postable) by just anyone. I understand that JavaScript is client-side and therefore offers little control over the content being grabbed and passed along, but is there a way to be sure that the POST data you receive comes from a user/webpage that is allowed to send it?
I've thought about a PHP session token or something similar, but that doesn't fly, since you have to cross-check that token and therefore have to send it along with the JSON array to my PHP.
Thanks in advance,
P.S. If you know a different way to avoid being obstructed by Twitter's call limit, I'd be happy with that too. But 150 calls from a server isn't much if you get 1,000+ users an hour on your page.