Introduction
I am running multiple of what I call consul-stacks. They always look like this:
- 1 consul server
- 9 consul nodes
Each node offers some services: a classic web stack, among other things (not relevant to this question).
Gossip encryption is used to prevent the server from being queried by arbitrary nodes and revealing data.
Several consul-template / tiller "watchers" are waiting to dynamically configure the nodes/services on KV changes.
Goal
Let's say I have 10 of those stacks (the number is dynamic) and I want to build a web app that controls the Consul KV store of each stack using some specific logic.
What i have right now
I have created a thor+diplomat tool to wrap the logic I need to create specific KV entries. I implemented it while running it on the "controller" container inside the stack, talking to localhost:8500, which then authenticates via gossip and writes to the server.
Question
What concept would I now use to move this tool to a remote server (not part of the consul-stack), while still being able to write into each consul-stack's KV store?
Sure, I can use diplomat to connect to stack1.tld:8500, but this would mean opening the HTTP port, which I would need to secure somehow (it is not protected by gossip, is it? only RPC?), and I would also have to protect the /ui.
Is there a better way to connect to each of those stacks?
- use an nginx proxy server with basic auth in front of 8500 to protect the access?
- additionally use SSL termination on this port and keep using 8500, or rather use a configured HTTPS port (Consul's HTTPS API)?
- use ACLs to protect the access? (a lot of setup to allow access for the stack members; does it require TLS?)
In general, without using TLS (which is too much setup work for the clients), what concepts would fit the need of communicating with the stack server to write into its KV store securely?
If I missed something, I'm happy to add whatever you ask for.
The answer to this is:
Enable ACLs on the Consul server:
{
  "acl_datacenter": "stable",
  "acl_default_policy": "deny",
  "acl_down_policy": "deny"
}
Create a general ACL token with write access to keys, events, and services:
consul-cli acl create --management=false --name="general_node" --rule "key::write" --rule "event::write" --rule "service::write" --token=<master-token>
Make sure to use your master token here, which was created during server startup.
Optionally, also configure gossip encryption so your clients communicate encrypted (otherwise ACLs hardly make sense).
Add the general token to the Consul client you use remotely so it can talk to the remote Consul, since that Consul will no longer respond to anything publicly (without a token).
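Once the token exists, a remote client can also talk to the stack directly over Consul's HTTP API by sending the token with each request. A minimal sketch, where stack1.tld and the token value are placeholder assumptions; the helper only prints the request so you can review it before actually sending it:

```shell
# Sketch: writing to a remote stack's KV store over Consul's HTTP API.
# stack1.tld and the token value below are placeholders (assumptions).
CONSUL_ADDR="http://stack1.tld:8500"
CONSUL_TOKEN="general_node-token"

# Compose the KV write request; `echo` prints it for review.
# Drop the `echo` (or pipe the output to `sh`) to actually send it.
kv_put() {
  echo curl -s -X PUT -H "X-Consul-Token: ${CONSUL_TOKEN}" \
       -d "$2" "${CONSUL_ADDR}/v1/kv/$1"
}

kv_put app/config/feature enabled
```

The `X-Consul-Token` header is how the HTTP API carries the ACL token; with `acl_default_policy = "deny"`, the same request without the header is rejected.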
Related
I need to connect and send WebSocket requests from different IPs in JMeter to my SignalR server. How can I do it? I know that in the case of HTTP requests we can do this in JMeter by creating multiple IP address aliases on the machine, as described in https://www.blazemeter.com/blog/how-to-send-jmeter-requests-from-different-ips.
How does this process work for WebSockets?
Thanks.
It will not work, as the ability to set the outgoing IP address would need to be present in the WebSocket plugin you're using.
The currently available solution is to allocate as many machines as the number of IP addresses you need and run JMeter in distributed mode. If a single machine is powerful enough, you can kick off several JMeter slave processes on it; keep in mind that:
you need to have these IP addresses (or aliases) defined at OS level
you need to bind the slaves to different ports
If you can do Java programming, you can add it yourself; the project lives at https://github.com/ptrd/jmeter-websocket-samplers.
If you cannot, you can ask the plugin developer to add this feature via GitHub, or try reaching out to him via the JMeter Plugins Support Forum.
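The two bullet points above can be sketched as a small launcher. The IP aliases and port numbers are assumptions for illustration, and the script only prints the commands so you can review them before running anything:

```shell
# Sketch: one JMeter slave per IP alias, each on its own RMI port.
# The aliases 192.168.0.101-103 must already exist at OS level.
i=0
for ip in 192.168.0.101 192.168.0.102 192.168.0.103; do
  port=$((1099 + i))
  # java.rmi.server.hostname binds the slave to one alias,
  # server_port gives each slave its own RMI port.
  echo "jmeter-server -Djava.rmi.server.hostname=${ip} -Dserver_port=${port}"
  i=$((i + 1))
done
```

The printed hostnames then go into the master's `remote_hosts` list as `host:port` pairs.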
I have a .NET Kafka client (using librdkafka via Confluent's .NET client) running on a physical server with two active network interfaces. One is 10G and the other is 1G; both have static IP addresses assigned. Our networking team handles the configurations and is unlikely to change their practices for one application, so I'd like to handle this client-side. I should also mention that the 1G and 10G interfaces are on the same network.
Since my Kafka cluster (3-node) is all 10G, I would like to require my application's consumer to bind to the 10G IP address. Looking through all of the documentation, I can't find anything about defining this on the client.
I would like to avoid any "hacky" solutions like setting Kafka to deny any non-whitelisted IP addresses or DNS tomfoolery.
Thanks in advance!
Just to be sure: do you know whether your server is doing interface bonding (meaning traffic is load-balanced across the interfaces; that said, it's unlikely to bond interfaces of different speeds)?
If not, since your two interfaces are on the same network, you will only use one interface to reach the network (unless you have an exotic routing config). This interface is determined by your default route.
If it's a Linux server, you can check as follows:
ip route
default via X.X.X.X dev YOURDEFAULTINTERFACE
If it's the 10G interface, you have nothing to do and you can be sure it will use this interface.
If not, you cannot do anything on the Kafka side, as this is purely an OS-level setting; your kernel will forward all traffic through this default interface.
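Since it is an OS-level setting, it can also be adjusted at the OS level: a /32 host route per broker will pin broker traffic to the 10G device without touching the default route. A sketch; the device name and broker addresses are assumptions, and the commands are only printed so you can review them and run them as root:

```shell
# Sketch: pin each broker behind the 10G device with a /32 host route.
# ens2f0 and the broker IPs are placeholders for your actual values.
DEV_10G="ens2f0"
for broker in 10.0.0.5 10.0.0.6 10.0.0.7; do
  echo "ip route add ${broker}/32 dev ${DEV_10G}"
done
```

A host route is more specific than the default route, so the kernel prefers it for exactly those destinations.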
Again, I insist on the fact that this is because both of your interfaces are on the same network.
If you have any doubts about this, please share your network configuration in detail (the output of ip addr and ip route).
Yannick
Both a proxy server and an anonymizer work as a mediator between your IP and the target website.
As per my understanding, an anonymizer, especially a "network anonymizer", passes the request through all the computers connected to the same network, so it becomes complex to trace your IP.
Is my understanding correct?
What is the main difference, or are they the same?
I found this article:
So, while the proxy server is the actual server that passes your information, anonymizer is just the name given to the online service per se. An anonymizer uses underlying proxy servers to be able to give you anonymity.
I am working on some legacy code on Windows for a desktop app in "C".
The client needs to know the geo-location of the user who is running the application.
I have the geo-location code all working (using MaxMind: http://dev.maxmind.com/).
But now I'm looking for help in getting their external IP.
From all the discussions on this topic throughout SO and elsewhere, it seems there is a way to do this by connecting to a "reliable" host (server) and then doing some kind of lookup. I'm not too savvy with WinSock, but that may be the simplest technology to use.
Another option is to use the WinHTTP API (e.g. WinHttpConnect).
Both have "C" interfaces.
Thank you for your support and suggestions.
You can write a simple web service that checks the IP address(es) that the program presents when connecting to that web service.
Look at http://whatismyip.com for an example.
Note that multiple addresses can be presented via HTTP headers (such as X-Forwarded-For) if there are proxy servers along the route.
You can design your simple web service to get the IP of the client. See
How do I get the caller's IP address in a WebMethod?
and then return that address back to the caller.
Note that in about 15% of cases (my experience metric) the geo location will be way off. The classic example is that most AOL users are routed through a small number of proxy servers. However, there are many other cases where the public IP does not match the user's actual location. Additionally, Geo IP databases are sometimes just wrong.
Edit
It is not possible to detect your external IP address using only in-browser code.
The WebSocket protocol has no provision to expose your external IP address.
https://www.rfc-editor.org/rfc/rfc6455
You need an outside server to tell you what IP it sees.
I'm running into some deployment issues using Akka remoting to implement a small search application.
I want to deploy my ActorSystem on a set of local cluster machines to use them as workers, but I'm a bit confused for what to put into my application.conf to make this happen. For example, I can use:
akka.remote {
  transport = "akka.remote.netty.NettyRemoteTransport"
  netty {
    hostname = "0.0.0.0"
    port = 2552
  }
}
Each worker just runs the ActorSystem at startup.
This allows my worker machines to bind to their address when they start up, but then they refuse to listen to messages:
beaker-24: [ERROR] ... dropping message DaemonMsgWatch for non-local recipient akka://SearchService#beaker-24:2552/remote at akka://SearchService#0.0.0.0:2552
The documentation I've found for this so far only discusses deployment on localhost, which is not so useful :). I'm hoping there is a way to do this without generating a separate configuration for each host.
Update:
Using an empty string as the hostname allows for contacting the host via the normal IP address. Addressing using the hostname itself doesn't work at the moment.
Setting “0.0.0.0” as the host name will currently basically disable remoting, because that is not a legal IP to send to. Background: actor references get the configured IP (or host name) inserted into their address part when they leave the local system, and that is exactly their “pointer home” for other systems to send messages back.
There has been an effort by Scott which would enable a system to receive replies at a different address, but that is not yet included, and we may well choose a different solution to this problem.
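One way to keep a single application.conf and still give every worker its real, routable hostname is to inject it per machine, for example via an environment variable using the config library's optional substitution, or via a -D system property at startup. A sketch; the HOST variable name is an assumption:

```hocon
akka.remote {
  transport = "akka.remote.netty.NettyRemoteTransport"
  netty {
    # Each worker exports HOST=<its routable name or IP> before starting.
    # Can also be overridden per host with -Dakka.remote.netty.hostname=<host>.
    hostname = ${?HOST}
    port = 2552
  }
}
```

With this, remote actor references carry the machine's real address instead of 0.0.0.0, so other systems can send messages back.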