I received an email saying that the JSON-RPC and Global HTTP Batch serving endpoints are being discontinued, and that my project on Google Cloud Platform is calling the Global HTTP Batch endpoint.
When I check the API dashboard of the project, however, "Google Cloud Storage JSON API" shows no usage for the last 30 days.
Does that mean the project no longer calls this endpoint?
If not (i.e., if there is still a chance that we call this endpoint), how can I verify that a change I make to eliminate the call actually eliminates it?
If you are using a Google-provided client to make your requests, you are no longer calling the Global HTTP Batch endpoint, as those clients have been updated to use the new API-specific batch endpoints.
You can also use your browser's dev tools to check the request URLs your app sends.
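For reference, the global and API-specific batch endpoints differ only in the request path, so that is the thing to look for in the network tab. For the Cloud Storage JSON API, for example:

```
POST https://www.googleapis.com/batch              <- deprecated global batch endpoint
POST https://www.googleapis.com/batch/storage/v1   <- API-specific batch endpoint
```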
I am currently trying to build a Go API using Gin for a web and mobile application. I am new to the world of WebSockets and Go, so I was wondering how I would go about triggering a GET request from the client after a relevant POST request was made, i.e. the POST request contained the user's ID, so the clients who need information about that user are properly updated. Currently, I have the POST and GET requests doing what I need them to, but I'm a little lost about how to make the entire flow realtime using WebSockets.
I believe server-sent events should address the question; see the sketch below. Once the POST handler has been called, send a flag to the GET endpoint via a channel, and then send an event through there.
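Here is a minimal sketch of that idea in Go with Gin (the routes and event name are just placeholders). The POST handler pushes the user's ID into a channel, and the GET endpoint holds an SSE stream open and emits an event whenever an ID arrives, telling clients to re-fetch that user:

```go
package main

import (
	"io"
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	// Buffered channel carrying user IDs from the POST handler to the SSE stream.
	updates := make(chan string, 8)

	r := gin.Default()

	// POST handler: do the normal work, then signal listeners.
	r.POST("/users/:id", func(c *gin.Context) {
		id := c.Param("id")
		// ... persist the update here ...
		select {
		case updates <- id: // notify the SSE stream
		default: // drop the signal if no one is listening
		}
		c.JSON(http.StatusOK, gin.H{"updated": id})
	})

	// GET handler: long-lived SSE stream the client subscribes to.
	r.GET("/events", func(c *gin.Context) {
		c.Stream(func(w io.Writer) bool {
			if id, ok := <-updates; ok {
				c.SSEvent("user-updated", id) // client re-fetches this user
				return true                   // keep the connection open
			}
			return false
		})
	})

	r.Run(":8080")
}
```

Note that a single channel only reaches one connected client; for many simultaneous clients you would fan the IDs out through a small broker that keeps a channel per subscriber.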
I want to send SMS from an AngularJS web application using the Ozeki SMS gateway. Can anyone tell me how to do this, or suggest a reference link or code sample?
Plain sending
Setting aside the other protocols available inside the Ozeki SMS NG product (SMPP, Email, DB, etc.) and focusing on the HTTP protocol only, you can go this way:
Prerequisites:
Figure out the best way for you to make the HTTP request that sends the SMS.
(I'm not an AngularJS guy, so there may already be several ways to make an HTTP request from Angular, but at minimum any AJAX method that passes the parameters to a PHP script making the HTTP request (with curl or file_get_contents) will be totally OK.)
Make sure your Ozeki SMS server is reachable (by IP address or domain name) from your script, so your code can reach its endpoint.
Doing it:
Inside Ozeki, install a service provider connection such as HTTP Client
http://www.ozekisms.com/index.php?owpn=195&info=service-provider-connections/http-client-connection
or HTTP Server (a more powerful version of HTTP Client that allows callback URLs)
http://www.ozekisms.com/index.php?owpn=197&info=service-provider-connections/http-server-connection
Then, according to the docs, execute a request like:
http://server_ip:9501/api?action=sendmessage&username=________&password=________&originator=__________________&recipient=__________________&messagetype=SMS:TEXT&messagedata=______________
* Some fields are optional; this may vary depending on the Ozeki version you use.
** Port 9501 is the default Ozeki HTTP port, which can be changed in the general settings; there is an HTTPS port as well. Basically, the correct port is the same one you already use to access the Ozeki web GUI.
After executing the sending request (try it from a browser or from something like Postman first), you should get a response in XML format informing you of the result of your transaction.
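If you end up making the request from a back end rather than directly from Angular, a sketch like the following (shown in Go here; the host, port, credentials, and numbers are placeholders) builds the same sendmessage request and prints the XML response:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Hypothetical values: replace host, port, and credentials with your own.
	base := "http://192.168.0.10:9501/api"

	params := url.Values{}
	params.Set("action", "sendmessage")
	params.Set("username", "admin")
	params.Set("password", "secret")
	params.Set("originator", "+15550000000") // optional, depending on setup
	params.Set("recipient", "+15551234567")
	params.Set("messagetype", "SMS:TEXT")
	params.Set("messagedata", "Hello from Ozeki")

	resp, err := http.Get(base + "?" + params.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Ozeki answers with an XML document describing the transaction result.
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```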
Possible next step... DLRs
Getting delivery reports (if supported by your operator) is a common "I want it too" question.
If you need them, there is a great feature embedded in the "HTTP Server" connector (mentioned above).
Here you can see more details
http://www.ozekisms.com/index.php?owpn=431
"reporturl" - is a field you may use to set kind of "call back url". In other words in this optional field you may specify full URL and list fields to be passed along. So you only have to create your own endpoint to catch them (as GET request from Ozeki server) and use inside your software.
Currently, all the fulfilment requests originating from api.ai are POST requests to the base URL configured in the api.ai Fulfilment section. But to be able to set up proper routing (microservice style) on the server side, it would be more worthwhile to append the action to the POST URL.
For a substantially large project there can be hundreds of fulfilment actions, and managing all of them in a single monolithic project is cumbersome. If the action came in the URL, we could configure and organise the actions into multiple Cloud Functions (in the case of Firebase hosting) or server-side microservices.
Edit:
As answered by matthewayne, I can use my own proxy setup to route the requests and achieve the goal. But I don't want to introduce any additional delay into request processing, because I am expecting a huge number of webhooks to be fired. This would be a very easy feature for the Google api.ai team to incorporate, and it would allow much greater flexibility! Hence I am expecting an answer from the Google team!
Currently this isn't possible with API.AI's webhook design. I'd recommend setting up a proxy service that unpacks the webhook requests from API.AI, inspects the action, sends the proper request to the proper microservice endpoint, and then forwards the response back to API.AI once the microservice has returned its result.
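A rough sketch of such a proxy in Go (the route table is hypothetical, and the payload shape assumes the API.AI v1 webhook format, where the action arrives under result.action):

```go
package main

import (
	"bytes"
	"encoding/json"
	"io"
	"log"
	"net/http"
)

// Hypothetical mapping from fulfilment action to microservice endpoint.
var routes = map[string]string{
	"order.status": "https://orders.example.com/webhook",
	"user.profile": "https://users.example.com/webhook",
}

func handler(w http.ResponseWriter, r *http.Request) {
	body, _ := io.ReadAll(r.Body)

	// Unpack just enough of the webhook payload to read the action.
	var payload struct {
		Result struct {
			Action string `json:"action"`
		} `json:"result"`
	}
	if err := json.Unmarshal(body, &payload); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}

	target, ok := routes[payload.Result.Action]
	if !ok {
		http.Error(w, "unknown action", http.StatusNotFound)
		return
	}

	// Forward the original body and relay the microservice's response.
	resp, err := http.Post(target, "application/json", bytes.NewReader(body))
	if err != nil {
		http.Error(w, "upstream error", http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()
	w.WriteHeader(resp.StatusCode)
	io.Copy(w, resp.Body)
}

func main() {
	http.HandleFunc("/webhook", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```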
I am currently trying to set up AWS API Gateway to proxy to another API that has fully functional methods, response content, status codes, etc. This is fairly simple to set up, but I have noticed that API Gateway always returns 200 OK no matter what the origin API responds with.
For example, if there was a bad request (in the origin API) that resulted in a JSON error message and a 400 Bad Request, API Gateway will respond with the exact same error message but a status code of 200 OK.
If I remove all settings from the Method Response in the API Gateway web interface, I get an internal error in API Gateway. Can it be true that I have to map all the different status codes from the origin API manually in API Gateway?
I would prefer it if it were possible to just let the status code (as well as the response, which currently works great) pass through, and not have API Gateway touch it in any way.
Proxy integration can be used to achieve this; in this case, an HTTP proxy integration. Lambda proxy integration can also be used, but it needs some code logic in the Lambda. API Gateway will then return the result as-is.
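For the Lambda option, a minimal sketch of that code logic in Go (the parameter name and error shape are illustrative): with Lambda proxy integration, the function itself decides the status code and body, and API Gateway passes them through unchanged.

```go
package main

import (
	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// handler sets the status code itself; API Gateway returns exactly
// what is placed in APIGatewayProxyResponse.
func handler(req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	if req.QueryStringParameters["id"] == "" {
		// The client receives this 400 untouched.
		return events.APIGatewayProxyResponse{
			StatusCode: 400,
			Body:       `{"error": "missing id"}`,
		}, nil
	}
	return events.APIGatewayProxyResponse{
		StatusCode: 200,
		Body:       `{"ok": true}`,
	}, nil
}

func main() {
	lambda.Start(handler)
}
```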
You are correct that currently, when using API Gateway, you are required to map all response codes in your integration responses. We have heard this "pass-through" request from other customers, and we may consider including it in future updates to the service.
I am using NGINX as my web server for HTML/JS/CSS files and my web app UI. It is a single-page app that makes AJAX requests to a back-end Jetty server. Previously I deployed everything in Jetty and the AJAX calls worked fine. In separating the back end from the web UI tier, I am now trying to figure out how to configure NGINX to pass AJAX requests through to Jetty. But I also want to prevent someone from watching network traffic, seeing the AJAX calls my app makes, and then scripting those calls themselves. To do this, I believe that if I could configure NGINX to add a custom header to the requests as they pass through (is this even possible?), I could then accept only requests carrying that header at my Jetty API level.
If that is possible, is it the right way to handle this so that outsiders can't get into my back-end API? Is there a way they could figure out that my NGINX server is adding a header, short of breaking into my server and reading the configuration?
If your application calls your API via AJAX on the client, there's nothing you can do to stop someone from calling it directly (assuming they otherwise have access to the page). At the end of the day, an AJAX request is just a request made from the client in JS. Now, there are lots of stupid ways to make it more difficult, but if anyone really wants to call your API directly, they can.
If you're just talking about only allowing access through nginx (or specifically your /api location block), just bind jetty to localhost only.
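If you do go the custom-header route, a minimal NGINX sketch (the header name and value are placeholders you would replace with your own secret, and this assumes Jetty listens on localhost:8080) looks like:

```
location /api/ {
    proxy_pass http://127.0.0.1:8080;
    # Added on the way through; your Jetty-side code would then
    # reject requests missing this header.
    proxy_set_header X-Internal-Token "replace-with-a-long-random-value";
}
```

Bear in mind this only proves the request came through NGINX; as noted above, it does not stop a user of the page from driving the API through the browser.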