Egress via Static IP for HTTP requests from Google Cloud Workflows - NAT/Proxy?

Is there any way to egress via a static IP for HTTP requests from GCP Workflows?
I know this is possible for Cloud Functions (as per https://cloud.google.com/functions/docs/networking/network-settings#associate-static-ip), but can't find any documentation about equivalent functionality in Workflows.
Thanks in advance :-)
Tom

Related

Is it possible to set up the NATS protocol on Heroku?

I've seen plenty of documentation and tutorials on how to set up HTTPS and WebSockets on Heroku, but is it possible to set up another protocol, such as raw TLS or NATS?
If it is possible, how can I do that?
Unfortunately, no.
Inbound requests are received by a load balancer that offers SSL termination. From here they are passed directly to a set of routers. The routers are responsible for determining the location of your application’s web dynos and forwarding the HTTP request to one of these dynos.
https://devcenter.heroku.com/articles/http-routing#routing
TCP routing is listed explicitly under “Not supported”:
https://devcenter.heroku.com/articles/http-routing#not-supported
Heroku offers only HTTP/HTTPS routing for applications hosted on it.

How can I use static IP addresses for outbound https(s) without Fixie?

Fixie is an add-on that provides static IP addresses for outbound HTTP and HTTPS requests. However, their pricing is quite harsh: $1,999/mo per app for my use case, since I make a lot of outbound HTTPS requests.
How do these outbound proxies work? Is there an open-source or commercial alternative that I can self-host on, for example, AWS?
For a fixed static IP address you can use an EIP (Elastic IP address).
It not only gives you what you are looking for, i.e. a fixed client/host address, but also provides a lot of flexibility. Some key points:
Once provisioned, the address is associated with your account, so you effectively own it and can use it with any instance you like.
You can associate and disassociate it from an EC2 instance, which is a cheap way to implement fault tolerance: if one instance fails, associate the address with a different instance.
You can associate it with a NAT instance to provide proxy functionality.
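As a self-hosted, open-source alternative to Fixie, a common approach is to run a forward proxy such as Squid on an EC2 instance with an EIP attached; all outbound traffic routed through it then egresses from that fixed address. A minimal squid.conf sketch (the port and network range are illustrative assumptions, not values from the question):

```
# Listen for proxy requests from your application servers
http_port 3128

# Only allow clients from your own network (adjust to your VPC CIDR)
acl internal_net src 10.0.0.0/16
http_access allow internal_net
http_access deny all
```

Your applications would then send their outbound HTTPS requests through this proxy instead of connecting directly.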

Spring - Make microservice only accessible internally

How can I set up a microservice that can only be called by other internal services? The microservice should not be publicly accessible, because it has access to databases containing secret information. I already tried to secure the microservice with Spring Security, but then I ran into authorization problems with the FeignClient.
Assuming that you are unable to solve this problem with infrastructure
(which is the only correct way to solve this problem),
there are several (bad) techniques you can use:
IP Address White List - Create a list of good IP addresses; reject any request from an address not on the list.
IP Address Zone - Variation of the white list. Create a list of partial IP addresses and reject any request that does not match one of the partial addresses.
Non-Routing IP Only - Variation of IP Address Zone. Only accept requests from non-routable (private) IP addresses, which can only originate on your local network. See the Wikipedia page on non-routable IP address ranges.
Magic Token - Only accept requests that include a specific token. This is especially terrible, because anybody watching your traffic can discover the token.
There are probably other options, but go with Infrastructure.
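For what it's worth, the first and third techniques above can be sketched with Python's standard `ipaddress` module (the addresses and ranges here are illustrative, not tied to any particular framework):

```python
import ipaddress

# Explicit allow-list of trusted caller addresses (illustrative values)
ALLOWED = {
    ipaddress.ip_address("10.0.1.5"),
    ipaddress.ip_address("10.0.1.6"),
}

# RFC 1918 private (non-routable) ranges
PRIVATE_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_allowed(remote_addr: str) -> bool:
    """White list: accept a caller only if it is on the allow-list."""
    return ipaddress.ip_address(remote_addr) in ALLOWED

def is_private(remote_addr: str) -> bool:
    """Non-routing only: accept a caller only if its address is private."""
    addr = ipaddress.ip_address(remote_addr)
    return any(addr in net for net in PRIVATE_NETS)
```

In a real Spring service the equivalent check would sit in a filter or interceptor examining the remote address, with all the caveats the answer already gives.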
This is really an infrastructure question. Typically you want to keep all your resources on a private internal network that is not reachable from the internet, and expose a separate network segment - the so-called De-Militarized Zone, or DMZ - that provides external access. The endpoint in the DMZ could be a single server or an array of servers, implemented as a bastion host, that authenticates and authorizes callers and forwards legitimate calls on to the private network.
The API gateway (or edge-server) pattern is often used to realize this. A good configuration of the gateway is important.
Here is an article on how to do it with the Amazon cloud.
And here is a link to Kong, a general-purpose API gateway that you can deploy yourself.

One Web API calls the other Web APIs

I have 3 Web API servers with the same functionality. I am going to add another Web API server that will act purely as a proxy: all clients, from anywhere and on any device, will call the proxy server, and the proxy will randomly forward each client request to one of the 3 Web API servers.
I am doing it this way because:
There are too many client requests per minute for a single Web API server to handle.
If one server dies, clients can still be served by the other servers. (I need at least one web server responding to clients.)
The Question is:
What is the best way to implement the Web API Proxy server?
Is there a better way to handle high volume client requests?
I need at least one web server responding to clients, even if 2 of my 3 servers are down.
Please give me some links or documents that can help me.
Thanks
Sounds like you need a reverse proxy. Both Apache HTTP Server and NGINX can be configured to act as a load-balancing reverse proxy.
NGINX documentation: http://nginx.com/resources/admin-guide/reverse-proxy/
Apache HTTP Server documentation: http://httpd.apache.org/docs/2.2/mod/mod_proxy.html
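As a rough sketch, the setup described in the question maps onto an NGINX `upstream` block like the following (hostnames are placeholders). Note that NGINX distributes requests round-robin by default rather than randomly, and it automatically skips a backend that is down, which covers the "at least one server must respond" requirement:

```
# Pool of backend Web API servers; requests are distributed across them
upstream api_backend {
    server api1.example.com;
    server api2.example.com;
    server api3.example.com;
}

server {
    listen 80;
    location / {
        # Forward every client request to one of the backends;
        # unreachable servers are skipped automatically
        proxy_pass http://api_backend;
    }
}
```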
What you are describing is called Load Balancing, and Azure (which it seems you are using, judging by your comments) provides it out of the box for both Cloud Services and Websites. You should create as many instances as you like under the same cloud service and open a specific port (which will be load balanced) under the cloud service's endpoints.

Server to Server Communication using HTTPS/SSL

I have two servers which exchange information using an HTTP/REST protocol. Now I want to secure the communication between these two servers, so I plan to implement the information exchange over HTTPS/REST.
Is this possible? If so, please provide some examples.
Info: I am using Apache httpd on CentOS.
Thank you
Regards,
Dinesh
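Yes, this is possible: since the REST exchange already runs over HTTP, it is mostly a matter of enabling TLS in Apache httpd on the receiving side and having the calling server use `https://` URLs and verify the certificate. A minimal mod_ssl virtual host sketch (hostname and certificate paths are placeholders, not taken from the question):

```
<VirtualHost *:443>
    ServerName api.example.com
    SSLEngine on
    # Server certificate and private key (placeholder paths)
    SSLCertificateFile    /etc/pki/tls/certs/server.crt
    SSLCertificateKeyFile /etc/pki/tls/private/server.key
</VirtualHost>
```

If you also want the receiving server to authenticate the caller (mutual TLS), you can additionally set `SSLVerifyClient require` and have the calling server present a client certificate.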
