Akka.Net Clustering Simple Explanation - cluster-computing

I am trying to set up a simple cluster using Akka.NET.
The goal is to have a server receive requests and have Akka.NET process them through its cluster.
For testing and learning I created a simple WCF service that receives a math equation, and I want to send this equation to the cluster to be solved.
I have one project for the server and another for the client.
The configuration on the server side is:
akka {
  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
    debug {
      receive = on
      autoreceive = on
      lifecycle = on
      event-stream = on
      unhandled = on
    }
    deployment {
      /math {
        router = consistent-hashing-group #round-robin-pool # routing strategy
        routees.paths = [ "/user/math" ]
        virtual-nodes-factor = 8
        #nr-of-instances = 10 # max number of total routees
        cluster {
          enabled = on
          max-nr-of-instances-per-node = 2
          allow-local-routees = off
          use-role = math
        }
      }
    }
  }
  remote {
    helios.tcp {
      transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
      applied-adapters = []
      transport-protocol = tcp
      port = 8081
      hostname = "127.0.0.1"
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://ClusterSystem@127.0.0.1:8081"] # address of seed node
  }
}
On the client side the configuration is like this:
akka {
  actor.provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
  remote {
    log-remote-lifecycle-events = DEBUG
    log-received-messages = on
    helios.tcp {
      transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
      applied-adapters = []
      transport-protocol = tcp
      port = 0
      hostname = "127.0.0.1"
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://ClusterSystem@127.0.0.1:8081"] # address of the seed node
    roles = ["math"] # roles this member is in
  }
  actor.deployment {
    /math {
      router = round-robin-pool # routing strategy
      routees.paths = ["/user/math"]
      nr-of-instances = 10 # max number of total routees
      cluster {
        enabled = on
        allow-local-routees = on
        use-role = math
        max-nr-of-instances-per-node = 10
      }
    }
  }
}
The cluster connection seems to be made correctly: I see the status [UP] and the association with the role "math" appear on the server side.
Even following the WebCrawler example, I haven't managed to get a message delivered; I always get dead letters.
I tried like this:
actor = sys.ActorOf(Props.Empty.WithRouter(FromConfig.Instance), "math");
or
var actor = sys.ActorSelection("/user/math");
Does someone know a good tutorial, or could someone help me?
Thanks

Some remarks:
First: assuming you're sending work from the server to the client, you are effectively remote-deploying actors on your client.
That means only the server node needs the actor.deployment config section.
The client only needs the default cluster config (and your role setting, of course).
Second: try to make it simpler first. Use a round-robin-pool instead; it's much simpler. Get that working, and work your way up from there.
This way it's easier to eliminate configuration/network/other issues.
Your usage actor = sys.ActorOf(Props.Empty.WithRouter(FromConfig.Instance), "math"); is correct.
A sample of how your round-robin-pool config could look:
deployment {
  /math {
    router = round-robin-pool # routing strategy
    nr-of-instances = 10 # max number of total routees
    cluster {
      enabled = on
      max-nr-of-instances-per-node = 2
      allow-local-routees = off
      use-role = math
    }
  }
}
Try this out. And let me know if that helps.
Edit:
OK, after looking at your sample, here are some things I changed:
ActorManager->Process: you're creating a new router actor per request. Don't do that. Create the router actor once and reuse the IActorRef.
I got rid of the minimal cluster size settings in the MathAgentWorker project.
Since you're not using remote actor deployment, I changed the round-robin-pool to a round-robin-group.
After that it worked.
Also remember that if you're using the consistent-hashing-group router you need to specify the hashing key. There are various ways to do that; in your sample I think the easiest way would be to wrap the message you're sending to your router in a ConsistentHashableEnvelope. Check the docs for more information.
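For illustration, a minimal sketch of the envelope approach, assuming a hypothetical SolveEquation message type with an Expression property (the router itself is created from config exactly as above, and system is the ActorSystem created elsewhere):
using Akka.Actor;
using Akka.Routing;

// Router created once from the HOCON deployment config; system is the existing ActorSystem.
var router = system.ActorOf(Props.Empty.WithRouter(FromConfig.Instance), "math");

// Wrap the message so the consistent-hashing router knows which key to hash on.
var message = new SolveEquation("2 + 2");   // hypothetical message type
router.Tell(new ConsistentHashableEnvelope(message, message.Expression));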
Finally, the akka deployment section looked like this:
deployment {
  /math {
    router = round-robin-group # routing strategy
    routees.paths = ["/user/math"]
    cluster {
      enabled = on
      allow-local-routees = off
      use-role = math
    }
  }
}
On the MathAgentWorker I only changed the cluster section, which now looks like this:
cluster {
  seed-nodes = ["akka.tcp://ClusterSystem@127.0.0.1:8081"] # address of the seed node
  roles = ["math"] # roles this member is in
}
And the only thing that the ActorManager.Process does is:
return await Program.Instance.RouterInstance.Ask<TResult>(msg, TimeSpan.FromSeconds(10));
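Putting it together, a rough sketch of creating the router once and reusing it. The SolveEquation type and the double result type are assumptions, and the HOCON above is assumed to be wired into the application config:
using System;
using Akka.Actor;
using Akka.Routing;

// Create the system and the router once (e.g. at startup) and keep the IActorRef around.
var system = ActorSystem.Create("ClusterSystem");   // assumes the akka section is registered in the app config
var router = system.ActorOf(Props.Empty.WithRouter(FromConfig.Instance), "math");

// Reuse the same IActorRef for every request (inside an async method) instead of creating a new router per call.
var result = await router.Ask<double>(new SolveEquation("2 + 2"), TimeSpan.FromSeconds(10));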

Related

How to get newly created instance id using Terraform

I am creating AWS EC2 instance(s) using an Auto Scaling group and a launch template. I would like to get the instance IDs of the newly launched instances. Is this possible?
For brevity I have removed some code.
resource "aws_launch_template" "service_launch_template" {
name_prefix = "${var.name_prefix}-lt"
image_id = var.ami_image_id
iam_instance_profile {
name = var.instance_profile
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_lb_target_group" "service_target_group" {
name = "${var.name_prefix}-tg"
target_type = "instance"
vpc_id = var.vpc_id
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "service_autoscaling_group" {
name = "${var.name_prefix}-asg"
max_size = var.max_instances
min_size = var.min_instances
desired_capacity = var.desired_instances
target_group_arns = [aws_lb_target_group.service_target_group.arn]
health_check_type = "ELB"
launch_template {
id = aws_launch_template.service_launch_template.id
version = aws_launch_template.service_launch_template.latest_version
}
depends_on = [aws_alb_listener.service_frontend_https]
lifecycle {
create_before_destroy = true
}
}
resource "aws_alb" "service_frontend" {
name = "${var.name_prefix}-alb"
load_balancer_type = "application"
lifecycle {
create_before_destroy = true
}
}
resource "aws_alb_listener" "service_frontend_https" {
load_balancer_arn = aws_alb.service_frontend.arn
protocol = "HTTPS"
port = "443"
}
This is working, but I would like to output the instance IDs of the newly launched instances. From the Terraform documentation it looks like aws_launch_template and aws_autoscaling_group do not export the instance IDs. What are my options here?
Terraform is probably completing and exiting before the Auto Scaling group has even triggered a scale-up event and created the instances. There's no way for Terraform to know about the individual instances, since Terraform isn't managing those instances; the Auto Scaling group is managing them. You would need to use another tool, like the AWS CLI, to get the instance IDs.
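For example (the ASG name below is just a placeholder), the AWS CLI can list the group's current instance IDs once the scale-up has happened:
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names my-service-asg \
  --query 'AutoScalingGroups[0].Instances[*].InstanceId' \
  --output text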

grpc custom load balancer not detecting new server addition in cluster

I am building a distributed workflow orchestrator; workers use gRPC to communicate with the server cluster. If a new server is added to the cluster, the gRPC client is not able to detect this change. However, I have a workaround: adding a max connection age to the server options.
grpc.KeepaliveParams(keepalive.ServerParameters{
  MaxConnectionAge: time.Minute * 1,
})
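That option is applied when the gRPC server is constructed; for context, a minimal self-contained sketch (package and function names are assumptions):
package server

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

// newServer builds a gRPC server that recycles connections every minute,
// forcing clients to reconnect and re-resolve the current server list.
func newServer() *grpc.Server {
	return grpc.NewServer(
		grpc.KeepaliveParams(keepalive.ServerParameters{
			MaxConnectionAge: time.Minute,
		}),
	)
}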
We have two implementations of workers, one in Go and the other in Java. This workaround works perfectly with the Go client: every minute the client makes a new connection and is able to detect new servers in the cluster. But it is not working with the Java client.
public CustomNameResolverFactory(String host, int port) {
  ManagedChannel managedChannel = NettyChannelBuilder
      .forAddress(host, port)
      .withOption(ChannelOption.CONNECT_TIMEOUT_MILLIS, 10000)
      .usePlaintext().build();
  GetServersRequest request = GetServersRequest.newBuilder().build();
  GetServersResponse servers = TaskServiceGrpc.newBlockingStub(managedChannel).getServers(request);
  List<Server> serversList = servers.getServersList();
  System.out.println(servers);
  LOGGER.info("found servers {}", servers);
  for (Server server : serversList) {
    String rpcAddr = server.getRpcAddr();
    String[] split = rpcAddr.split(":");
    String hostName = split[0];
    int portN = Integer.parseInt(split[1]);
    addresses.add(new EquivalentAddressGroup(new InetSocketAddress(hostName, portN)));
  }
}
Java client code- https://github.com/Mohitkumar/orchy-worker-java/blob/master/src/main/java/com/orchy/client/CustomNameResolverFactory.java
Golang client code- https://github.com/Mohitkumar/orchy/blob/main/worker/lb/resolver.go

Creating backend service with multiple umigs

I need some help creating a backend service with multiple UMIGs (unmanaged instance groups) using Terraform. I am using the code below but am not able to assign multiple UMIGs.
Any help will be highly appreciated.
resource "google_compute_backend_service" “test-bs" {
project = module.test.project_id
name = "test-bs"
provider = google
protocol = "HTTPS"
port_name = "services"
load_balancing_scheme = "EXTERNAL"
timeout_sec = 30
enable_cdn = false
health_checks = [google_compute_health_check.test-hc.id]
backend {
group =“https://www.googleapis.com/compute/v1/projects/test/zones/us-east4-c/instanceGroups/umig-01"]
, "https://www.googleapis.com/compute/v1/projects/test/zones/us-east4-c/instanceGroups/umig-02"}
balancing_mode = "UTILIZATION"
capacity_scaler = 1.0
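A hedged sketch of how several UMIGs could be attached, assuming their self links are supplied as a variable: the backend block can be repeated, one per instance group, for example generated with a dynamic block.
variable "umig_self_links" {
  description = "Self links of the unmanaged instance groups to attach"
  type        = list(string)
}

resource "google_compute_backend_service" "test_bs_multi" {
  project       = module.test.project_id
  name          = "test-bs"
  protocol      = "HTTPS"
  port_name     = "services"
  timeout_sec   = 30
  health_checks = [google_compute_health_check.test-hc.id]

  # One backend block is generated per UMIG self link.
  dynamic "backend" {
    for_each = var.umig_self_links
    content {
      group           = backend.value
      balancing_mode  = "UTILIZATION"
      capacity_scaler = 1.0
    }
  }
}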

How to whitelist Atlassian/Bitbucket IPs in AWS EC2 security group?

We want Bitbucket webhooks to trigger our CI tool which runs on an AWS EC2 instance, protected with ingress rules from general access.
Bitbucket provides a page listing their IP addresses at https://support.atlassian.com/bitbucket-cloud/docs/what-are-the-bitbucket-cloud-ip-addresses-i-should-use-to-configure-my-corporate-firewall/
They also have a machine-consumable version at https://ip-ranges.atlassian.com/ for Atlassian IPs in general.
I wonder what an efficient approach would be to add and maintain this list in AWS EC2 security groups, e.g. via Terraform.
I ended up scraping the machine-consumable JSON from their page and letting Terraform manage the rest. The step of getting the JSON is left as a manual task.
resource "aws_security_group_rule" "bitbucket-ips-sgr" {
security_group_id = "your-security-group-id"
type = "ingress"
from_port = 443
to_port = 443
protocol = "TCP"
cidr_blocks = local.bitbucket_cidrs_ipv4
ipv6_cidr_blocks = local.bitbucket_cidrs_ipv6
}
locals {
bitbucket_cidrs_ipv4 = [for item in local.bitbucket_ip_ranges_source.items:
# see https://stackoverflow.com/q/47243474/1242922
item.cidr if length(regexall(":", item.cidr)) == 0
]
bitbucket_cidrs_ipv6 = [for item in local.bitbucket_ip_ranges_source.items:
# see https://stackoverflow.com/q/47243474/1242922
item.cidr if length(regexall(":", item.cidr)) > 0
]
# the list originates from https://ip-ranges.atlassian.com/
bitbucket_ip_ranges_source = jsondecode(
<<JSON
the json output from the above URL
JSON
)
}
I improved on Richard's answer and wanted to add that TF's http provider can fetch the JSON for you, and, with a slight tweak to the jsondecode() call, that same for loop still plays.
provider "http" {}
data "http" "bitbucket_ips" {
url = "https://ip-ranges.atlassian.com/"
request_headers = {
Accept = "application/json"
}
}
locals {
bitbucket_ipv4_cidrs = [for c in jsondecode(data.http.bitbucket_ips.body).items : c.cidr if length(regexall(":", c.cidr)) == 0]
bitbucket_ipv6_cidrs = [for c in jsondecode(data.http.bitbucket_ips.body).items : c.cidr if length(regexall(":", c.cidr)) > 0]
}
output "ipv4_cidrs" {
value = local.bitbucket_ipv4_cidrs
}
output "ipv6_cidrs" {
value = local.bitbucket_ipv6_cidrs
}

Icinga2 notification just once on state change

I have set up Icinga2 to monitor a few services with different intervals, so one service might be checked every 10 seconds. If it gives a critical error I will receive a notification, but I will receive it every 10 seconds while the error persists, or until I acknowledge it. I just want to receive it once for each state change. Then maybe again after a specified time, but that is not as important.
Here is my config:
This is more or less the standard template.conf, but I have added "interval = 0s", because I read that it should prevent notifications from being sent multiple times.
template Notification "mail-service-notification" {
  command = "mail-service-notification"
  interval = 0s
  states = [ OK, Critical ]
  types = [ Problem, Acknowledgement, Recovery, Custom,
            FlappingStart, FlappingEnd,
            DowntimeStart, DowntimeEnd, DowntimeRemoved ]
  vars += {
    notification_logtosyslog = false
  }
  period = "24x7"
}
And here is the part of the notification.conf that includes the template:
object NotificationCommand "telegram-service-notification" {
  import "plugin-notification-command"
  command = [ SysconfDir + "/icinga2/scripts/telegram-service-notification.sh" ]
  env = {
    NOTIFICATIONTYPE = "$notification.type$"
    SERVICEDESC = "$service.name$"
    HOSTNAME = "$host.name$"
    HOSTALIAS = "$host.display_name$"
    HOSTADDRESS = "$address$"
    SERVICESTATE = "$service.state$"
    LONGDATETIME = "$icinga.long_date_time$"
    SERVICEOUTPUT = "$service.output$"
    NOTIFICATIONAUTHORNAME = "$notification.author$"
    NOTIFICATIONCOMMENT = "$notification.comment$"
    HOSTDISPLAYNAME = "$host.display_name$"
    SERVICEDISPLAYNAME = "$service.display_name$"
    TELEGRAM_BOT_TOKEN = TelegramBotToken
    TELEGRAM_CHAT_ID = "$user.vars.telegram_chat_id$"
  }
}

apply Notification "telegram-icingaadmin" to Service {
  import "mail-service-notification"
  command = "telegram-service-notification"
  user_groups = [ "icingaadmins" ]
  assign where host.name
}
I think you had a typo: it should work if you set interval = 0 (not interval = 0s).
After that change you must restart the Icinga 2 service.
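In other words, a sketch of the one line that changes in your notification template (the rest stays as above):
template Notification "mail-service-notification" {
  // ... keep the rest of the template unchanged ...
  interval = 0   // instead of 0s: re-notifications disabled, one notification per state change
}
Then restart Icinga 2, for example with systemctl restart icinga2, so the change takes effect.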
