Spring - Make microservice only accessible internally

How can I set up a microservice that can only be called by other internal services? The microservice should not be publicly accessible, because it has access to databases with secret information. I already tried to secure the microservice with Spring Security, but in that case I ran into problems with the FeignClient concerning authorization.

Assuming that you are unable to solve this problem with infrastructure
(which is the only correct way to solve this problem),
there are several (bad) techniques you can use:
IP address white list - Create a list of trusted IP addresses and reject any request from an address not on the list (a sketch follows below).
IP address zone - A variation of the white list. Create a list of partial IP addresses (prefixes) and reject any request that does not match one of them.
Non-routable IPs only - A variation of the IP address zone. Only accept requests from non-routable (private) IP addresses, which can only originate on your local network. The private address ranges are listed on Wikipedia.
Magic token - Only accept requests that include a specific token. This is especially terrible, because somebody can watch your traffic and discover the token.
There are probably other options, but go with infrastructure.
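To illustrate the white-list idea (and why it is fragile), here is a minimal sketch of a servlet filter for a Spring Boot service, assuming a recent Spring Boot version with the Jakarta servlet API; the allowed prefixes are made-up examples, not a recommendation:

```java
import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import jakarta.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Component;

import java.io.IOException;
import java.util.List;

// Sketch only: rejects any request whose source address does not start with an
// allowed prefix. The prefixes below are hypothetical examples.
@Component
public class InternalOnlyFilter implements Filter {

    private static final List<String> ALLOWED_PREFIXES = List.of("10.", "192.168.");

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        String remoteAddr = request.getRemoteAddr();
        boolean allowed = ALLOWED_PREFIXES.stream().anyMatch(remoteAddr::startsWith);
        if (allowed) {
            chain.doFilter(request, response);
        } else {
            ((HttpServletResponse) response).sendError(HttpServletResponse.SC_FORBIDDEN);
        }
    }
}
```

Note that if a load balancer or reverse proxy sits in front of the service, getRemoteAddr() returns the proxy's address rather than the caller's, which is one more reason this approach is weaker than solving the problem at the network level.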

This is really an infrastructure question. Typically you want a private network with all your resources internally, which is not reachable from the internet, and then a second network segment or endpoint bridge that provides external access - the so-called demilitarized zone, or DMZ. The endpoint could be a single server or an array of servers, implemented as bastion hosts, that authenticate and authorize callers and forward legitimate calls to the private network.
The API gateway (or edge-server) pattern is often used to realize this. A good configuration of the gateway is important.
Here is an article on how to do it with the Amazon cloud.
And here is a link to Kong, a general API gateway that you can deploy yourself.
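To make the edge-server idea concrete, here is a minimal sketch of a route definition using Spring Cloud Gateway; the path and the internal host name are hypothetical. Only the gateway is exposed publicly, while the service it forwards to stays on the private network:

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch only: the gateway is the single publicly reachable endpoint and forwards
// selected paths to services that live on the private network.
@Configuration
public class GatewayRoutes {

    @Bean
    public RouteLocator internalRoutes(RouteLocatorBuilder builder) {
        return builder.routes()
                // Public path exposed by the gateway -> internal service address (hypothetical)
                .route("orders", r -> r.path("/api/orders/**")
                        .uri("http://orders.internal:8080"))
                .build();
    }
}
```

Authentication, authorization, and rate limiting would also be enforced at this gateway, so the internal service never has to be reachable from outside.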

Related

Laravel Request IP Address: will Requests coming from VPNs show the same IP address or not?

Currently I am developing an HTTP server and I am using the throttle (rate limiting per minute) functionality of Laravel, keyed by IP address.
However, I am afraid that when a VPN and/or proxy server is used by different people, the incoming requests will show the same IP address. The rate limitation is included only to prevent deliberate DoS attacks, and I don't want users of my website to be blocked by the rate limit if they are using a VPN.
First of all, I don't have a solid understanding of how IP addresses are obtained and stored in the Request object. I assumed the address was included in the HTTP request headers, but I wasn't able to find it in Google Chrome's developer tools, "Network" tab. The developer tools only show the destination address, not the source IP address, in the "Request Headers" section.
Next, I don't have a testing environment where I can test whether the IP address will be the same when requests are sent by different machines using the same VPN, hence I have to ask the question here.
Any help would be appreciated.
will Requests coming from VPNs show the same IP address or not?
Yes, requests coming through the same VPN exit node will show up with the same IP address; changing the user's apparent IP address is the whole purpose of using a VPN service.
However, if you want to detect whether a user is using a VPN, there are third-party services that can help with that, e.g. https://ipinfo.io/
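On the question of where the address comes from: the client IP is not something the browser puts in the request headers; the server reads it from the TCP connection, and a reverse proxy in front of the app typically appends the original address to an X-Forwarded-For header. The question is about Laravel, but the mechanics are the same everywhere; here is a minimal illustration using the Java Servlet API (the header handling is deliberately simplified and trusts the proxy blindly, which you should not do in production):

```java
import jakarta.servlet.http.HttpServletRequest;

// Sketch only: how a server-side framework typically resolves the caller's IP.
public final class ClientIp {

    private ClientIp() { }

    public static String resolve(HttpServletRequest request) {
        // Added by reverse proxies / load balancers; may contain a comma-separated
        // chain of addresses, the first one being the original client.
        String forwardedFor = request.getHeader("X-Forwarded-For");
        if (forwardedFor != null && !forwardedFor.isBlank()) {
            return forwardedFor.split(",")[0].trim();
        }
        // Falls back to the address of the TCP connection itself. For users behind
        // the same VPN or proxy exit, this will be the same address for all of them.
        return request.getRemoteAddr();
    }
}
```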

IIS specify outbound IP address for site/handler

I have a handler deployed on IIS which proxies communication to specific URLs. I need outbound requests made by this handler to use a different source IP address than the one used for the server's general outbound traffic.
I can isolate this handler into a separate IIS site if needed.
Currently, I'm redirecting requests for this handler to a different server via ARR and URL Rewrite, but I'd like to avoid this.
On Linux, there is a solution using an SRC-NAT rule for a specific user, applied when the process is owned by that user (https://serverfault.com/questions/236721/bind-process-or-user-to-specific-ip-linux).
EDIT: If the handler were isolated into a separate site, that site could also be run in a different application pool and/or under a different identity.
Thanks for any advice.

Client-side load balancing in practice seems to be almost the same as server-side load balancing. Is that so?

In server-side load balancing, the clients call an intermediate server, which then decides which instance of the actual server (or microservice) to call.
In client-side load balancing, too, the clients call an intermediate server (the API gateway - Zuul, for instance - configured with a load balancer such as Ribbon and a naming server such as Eureka), which then decides which instance of the microservice to call.
Unless we include the API gateway as part of the client, the client still doesn't know the IP address of the exact server to which it should send the request. This seems to me to be a lot like server-side load balancing. Is there something I'm missing?
(Including the API gateway as part of the client seems weird, since it's usually deployed on a different server from the client.)
In client-side load balancing, the client does the heavy lifting of discovering and connecting to the origin server. The client may query a registry (Eureka, Consul, maybe DDNS) to discover the end destination, and the registry will dole out a valid origin. The communication is direct, client to server, without a middleman.
In server-side load balancing, the client is dumb and makes a call to a predetermined address (usually a DNS name or static IP). That device then proxies the connection (at the TCP or protocol level) to an origin server chosen based on a lookup, heartbeats, etc.
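To make the difference concrete, here is a minimal sketch of the client-side variant using Spring Cloud's DiscoveryClient abstraction (backed by Eureka, Consul, etc.); the service id "inventory-service" and the path are made up for the example:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

// Sketch only: the client asks the registry for instances and picks one itself,
// then talks to that instance directly -- no load balancer in the middle.
@Component
public class InventoryClient {

    private final DiscoveryClient discoveryClient;
    private final RestTemplate rest = new RestTemplate();
    private final AtomicInteger counter = new AtomicInteger();

    public InventoryClient(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    public String getItem(String id) {
        // 1. Look up all registered instances of the service.
        List<ServiceInstance> instances = discoveryClient.getInstances("inventory-service");
        // 2. Client-side decision: simple round-robin over the instances.
        ServiceInstance chosen = instances.get(
                Math.floorMod(counter.getAndIncrement(), instances.size()));
        // 3. Call the chosen instance directly.
        return rest.getForObject(chosen.getUri() + "/items/" + id, String.class);
    }
}
```

In a real Spring Cloud setup the selection step is normally handled for you by Ribbon or Spring Cloud LoadBalancer behind a @LoadBalanced RestTemplate or Feign client, but the decision is still made inside the client process, which is exactly what distinguishes it from server-side load balancing.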
I've seen benefits in client-side routing in that, as long as you have IP connectivity between client and server, adding new services, locations, products, apps, etc. requires trivial infrastructure work. As long as the new server can register with the registry and the client has IP access to the server, it just works, and IT does not have to be involved in rolling out your new service.
The drawback is that it makes the client a little heavier, it requires direct IP access from client to server, and it may be confusing for traditional IT folks and auditors. Each client needs to be aware of the registry and have code to make the calls (or use a sidecar/sidekick).
I've seen it in practice where a group started to transition their apps to a Docker environment, and they were able to run their Docker-based apps alongside their non-Docker versions at the same time without having to get IT involved, and to do a lot of experimentation and testing quickly and autonomously.
If you have autonomous teams, are highly advanced on the devops spectrum, and have a lot of trust with your teams, Client Side routing and load balancing may be a good experience for you.

How can I use static IP addresses for outbound HTTP(S) without Fixie?

Fixie is an add-on that provides static IP addresses for outbound HTTP and HTTPS requests. However, their pricing is quite harsh: $1,999/mo per app for my use case. I have a lot of outbound HTTPS requests.
How do these outbound proxies work? Is there an open-source alternative or commercial alternative that I can self-host on for example AWS?
For a fixed static IP address you can use an EIP (Elastic IP address).
It not only fulfills what you are seeking, i.e. a fixed client/host address, but also provides you with a lot of flexibility. Some key points:
Once provisioned, the address is associated with your account, so you basically own it and can use it with any instance you like.
You can associate and disassociate it with an EC2 instance, which is a cheap way to implement fault tolerance: associate it with a different instance if one instance fails.
You can associate it with a NAT instance to provide proxy functionality.
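As a rough sketch of how the allocate and associate steps look when scripted with the AWS SDK for Java v2 rather than clicked through the console; the instance ID below is a placeholder:

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.AllocateAddressRequest;
import software.amazon.awssdk.services.ec2.model.AllocateAddressResponse;
import software.amazon.awssdk.services.ec2.model.AssociateAddressRequest;
import software.amazon.awssdk.services.ec2.model.DomainType;

// Sketch only: allocate an Elastic IP and attach it to an instance (for example a
// NAT/proxy instance that all outbound requests are routed through).
public class AttachElasticIp {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            // 1. Allocate an Elastic IP in the VPC scope; it stays with your account.
            AllocateAddressResponse allocation = ec2.allocateAddress(
                    AllocateAddressRequest.builder().domain(DomainType.VPC).build());
            System.out.println("Allocated " + allocation.publicIp());

            // 2. Associate it with the instance that should present the fixed address.
            //    "i-0123456789abcdef0" is a placeholder instance id.
            ec2.associateAddress(AssociateAddressRequest.builder()
                    .allocationId(allocation.allocationId())
                    .instanceId("i-0123456789abcdef0")
                    .build());
        }
    }
}
```

If you later move the proxy role to another instance, you can disassociate the address and re-associate it there, which is the fault-tolerance trick mentioned above.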

How to integrate internal APIs (Not accessible outside office network) to slack slash commands

I am trying to add a slash command to one of my Slack channels. I did a POC using the Git API first, and it worked fine.
I created a slash command from this link:
https://api.slack.com/censored/slash-commands
Command: /poc
Request URL: http://jsonplaceholder.typicode.com/posts
This worked fine when I typed /poc in the Slack chat box of my channel. It returned some data.
But when I change the Request URL to an internal API, which is accessible only from the office domain, I get an error:
Darn – that slash command didn't work (error message: Failure when
receiving data from the peer). Manage the command at .
I believe Slack is not able to access my internal URL in this case. Is it possible to see the Slack logs?
Can anyone please help me here?
This cannot work, since the request URL needs to be accessible from the public Internet in order to work with Slack.
In general, most of Slack's interactive features (slash commands, interactive messages, modals, the Events API, ...) require your app to provide a public endpoint that can be called by Slack via HTTP.
In order to access internal APIs from Slack you will need some kind of gateway or tunnel through the firewall of your company that exposes the request URL to Slack. There are many ways to do that, and the solution needs to be designed according to the security policy of your company.
Here are a couple of suggestions:
VPN tunnel
One approach would be to run your script for the slash command on an internal web server (one that has access to the internal API) and use a VPN tunnel to expose that web server to the Internet, e.g. with a tool like ngrok.
DMZ
Another approach would be to run your app in the DMZ of your company's network and configure the firewalls on both sides to allow access to your app from the public Internet (i.e. from Slack) and from your app to your internal network.
Bridge
Another approach is to host and run that part of your app that interacts with Slack on the public Internet and the part that interacts with your internal network on your internal company network. Then add a secure connection that allows the public part to communicate with the part running on the internal company network.
If opening a connection into the internal network is not an option, there is another way that can allow communication with internal services by inverting the communication direction with a queue.
To do this, you need to deploy a public endpoint that accepts the requests from Slack and puts them onto a queue (e.g. AWS Lambda + SQS, Flask + RabbitMQ), and then poll the queue from the internal network. The polling needs to happen fairly often (at least once a second) to keep the interaction quick enough that users don't notice the lag too much. By doing this you avoid exposing any endpoint of the internal network.
The drawbacks of this approach are more infrastructure complexity and slower response times, but it can be a good alternative in some corporate environments.
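For illustration, here is a rough sketch of the internal-network side of that pattern using the AWS SDK for Java v2 and SQS; the queue URL is a placeholder, and the public endpoint that receives Slack's request and enqueues it is omitted:

```java
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.DeleteMessageRequest;
import software.amazon.awssdk.services.sqs.model.Message;
import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;

// Sketch only: a worker running inside the corporate network that polls the queue
// filled by the public endpoint, calls the internal API, and acknowledges the message.
public class SlackCommandWorker {
    private static final String QUEUE_URL =
            "https://sqs.eu-central-1.amazonaws.com/123456789012/slack-commands"; // placeholder

    public static void main(String[] args) {
        SqsClient sqs = SqsClient.create();
        while (true) {
            // Short long-polling wait keeps latency low without hammering the API.
            ReceiveMessageRequest receive = ReceiveMessageRequest.builder()
                    .queueUrl(QUEUE_URL)
                    .waitTimeSeconds(1)
                    .maxNumberOfMessages(10)
                    .build();
            for (Message message : sqs.receiveMessage(receive).messages()) {
                handleSlackCommand(message.body());
                sqs.deleteMessage(DeleteMessageRequest.builder()
                        .queueUrl(QUEUE_URL)
                        .receiptHandle(message.receiptHandle())
                        .build());
            }
        }
    }

    private static void handleSlackCommand(String payload) {
        // Hypothetical: parse the slash-command payload, call the internal API,
        // then post the result back to Slack via the response_url it contains.
        System.out.println("Received: " + payload);
    }
}
```

Since only the worker opens outbound connections to the queue, nothing inside the corporate network has to accept inbound traffic from Slack.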
