Prometheus settings for Tarantool

Please share an example of valid Prometheus settings to use with Tarantool.
https://github.com/tarantool/metrics/tree/master/metrics/plugins/prometheus
This seems to work, but shows nothing:
prometheus = require('metrics.plugins.prometheus')
metrics = require('http.server').new('0.0.0.0', 8080)
router = require('http.router').new({charset = "utf8"})
metrics:set_router(router)
router:route( { path = '/metrics' }, prometheus.collect_http)
metrics:start()

Try this:
metrics = require('metrics')
-- Enable the built-in collectors; without this the exporter has nothing to report.
metrics.enable_default_metrics()
prometheus = require('metrics.plugins.prometheus')
-- Use a separate variable for the HTTP server so it does not shadow the metrics module.
httpd = require('http.server').new('0.0.0.0', 8080)
router = require('http.router').new({charset = "utf8"})
httpd:set_router(router)
router:route( { path = '/metrics' }, prometheus.collect_http)
httpd:start()

Related

How do I connect redis and redis-insight containers on my network

I've written a .tf file that spins up redis and redis-insight containers in their own private Docker network (on an OpenStack instance), but when I ngrok to redis-insight I get an error:
[screenshot: RedisInsight error shown in the browser]
I can't seem to get the environment variables on the redis-insight resource right.
I've tried many combinations of the env vars in the redis-insight resource.
Since I'm using ngrok for tunneling, I set the RITRUSTEDORIGINS var to its port (http://localhost:4040), following the example on this page of the Redis documentation that uses nginx as a proxy, but with no luck.
What environment variables should I be using on my redis-insight resource?
This is what I have written so far:
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "2.23.1"
    }
  }
}

provider "docker" {}

resource "docker_network" "redis_network" {
  name = "redis_network"
}

resource "docker_image" "redis" {
  name         = "redis:latest"
  keep_locally = false
}

resource "docker_container" "redis" {
  image = docker_image.redis.image_id
  name  = "redis"
  ports {
    internal = 6379
    external = 6379
  }
  network_mode = docker_network.redis_network.name
}

resource "docker_image" "redis-insight" {
  name         = "redislabs/redisinsight:latest"
  keep_locally = false
}

resource "docker_container" "redis-insight" {
  image = docker_image.redis-insight.image_id
  name  = "redis-insight"
  ports {
    internal = 8001
    external = 8001
  }
  network_mode = docker_network.redis_network.name
  depends_on   = [docker_container.redis]
  env = [
    "REDIS_URL=redis://redis:6379",
    "REDIS_PASSWORD=password",
    # "REDIS_DATABASE=1",
    # "REDIS_TLS=true",
    # "INSIGHT_DEBUG=true",
    # "RIPORT=8001",
    # "RIPROXYENABLE=t",
    "RITRUSTEDORIGINS=http://localhost:4040"
  ]
}
What's the hostname and port of RedisInsight that you are accessing from your browser? If it's not localhost:4040, set that in RITRUSTEDORIGINS instead.
If it is localhost:4040, set RITRUSTEDORIGINS to http://localhost:4040.
Use the right protocol (http or https), hostname, and port: RITRUSTEDORIGINS should match the URL you use in the browser.
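For illustration, a hedged sketch of what the env list might look like if the browser reaches RedisInsight through an ngrok tunnel. The https://example.ngrok.io origin is a placeholder for whatever actually appears in your address bar; note that localhost:4040 is ngrok's local inspection UI, not the tunnel itself.
# Excerpt of the redis-insight container resource from the question; only env changes.
resource "docker_container" "redis-insight" {
  image        = docker_image.redis-insight.image_id
  name         = "redis-insight"
  network_mode = docker_network.redis_network.name
  ports {
    internal = 8001
    external = 8001
  }
  env = [
    # Placeholder origin: replace with the exact scheme, host, and port shown in the browser.
    "RITRUSTEDORIGINS=https://example.ngrok.io"
  ]
}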

OCI: How to get in terraform OKE prepared images?

Hi all,
I want to automatically select the image for the nodes in a Kubernetes node pool when I select the shape, operating system, and version. For this, I have this data source:
data "oci_core_images" "images" {
#Required
compartment_id = var.cluster_compartment
#Optional
# display_name = var.image_display_name
operating_system = var.cluster_node_image_operating_system
operating_system_version = var.cluster_node_image_operating_system_version
shape = var.cluster_node_shape
state = "AVAILABLE"
# sort_by = var.image_sort_by
# sort_order = var.image_sort_order
}
and I select the image in oci_containerengine_node_pool as:
resource "oci_containerengine_node_pool" "node_pool01" {
# ...
node_shape = var.cluster_node_shape
node_shape_config {
memory_in_gbs = "16"
ocpus = "1"
}
node_source_details {
image_id = data.oci_core_images.images.images[0].id
source_type = "IMAGE"
}
}
But my problem seems to be that not all of these images are prepared for OKE (with the OKE software installed via cloud-init).
So the documentation suggests using the OCI CLI command:
oci ce node-pool-options get --node-pool-option-id all
And my question is: how can I do this with a Terraform data source (i.e., retrieve only the OKE-ready images)?
You can use the oci_containerengine_node_pool_option data source:
data "oci_containerengine_node_pool_option" "test_node_pool_option" {
  #Required: the cluster OCID, or "all" (mirroring the CLI call above)
  node_pool_option_id = "all"

  #Optional
  compartment_id = var.compartment_id
}
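From there you can pick an OKE-ready image for the node pool. A minimal sketch, assuming the data source exposes a sources list with source_name and image_id fields as described in the provider docs linked below; the regex filter on the OS version is only an illustration:
locals {
  # Keep only OKE image sources whose display name mentions the desired OS version.
  oke_image_ids = [
    for s in data.oci_containerengine_node_pool_option.test_node_pool_option.sources :
    s.image_id
    if length(regexall(var.cluster_node_image_operating_system_version, s.source_name)) > 0
  ]
}

resource "oci_containerengine_node_pool" "node_pool01" {
  # ...
  node_source_details {
    image_id    = local.oke_image_ids[0]
    source_type = "IMAGE"
  }
}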
Ref doc : https://registry.terraform.io/providers/oracle/oci/latest/docs/data-sources/containerengine_node_pool_option
Github issue : https://github.com/oracle-terraform-modules/terraform-oci-oke/issues/263
Change log release details : https://github.com/oracle-terraform-modules/terraform-oci-oke/blob/main/CHANGELOG.adoc#310-april-6-2021

Creating backend service with multiple umigs

I need some help creating a backend service with multiple unmanaged instance groups (UMIGs) using Terraform. I'm using the code below but am not able to assign multiple UMIGs.
Any help will be highly appreciated.
resource "google_compute_backend_service" “test-bs" {
project = module.test.project_id
name = "test-bs"
provider = google
protocol = "HTTPS"
port_name = "services"
load_balancing_scheme = "EXTERNAL"
timeout_sec = 30
enable_cdn = false
health_checks = [google_compute_health_check.test-hc.id]
backend {
group =“https://www.googleapis.com/compute/v1/projects/test/zones/us-east4-c/instanceGroups/umig-01"]
, "https://www.googleapis.com/compute/v1/projects/test/zones/us-east4-c/instanceGroups/umig-02"}
balancing_mode = "UTILIZATION"
capacity_scaler = 1.0
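In google_compute_backend_service, each backend block accepts a single group self-link, so attaching multiple UMIGs means repeating the backend block, one per group. A minimal sketch using a dynamic block over a hypothetical var.umig_self_links list holding the two instance-group URLs:
resource "google_compute_backend_service" "test-bs" {
  project               = module.test.project_id
  name                  = "test-bs"
  protocol              = "HTTPS"
  port_name             = "services"
  load_balancing_scheme = "EXTERNAL"
  timeout_sec           = 30
  enable_cdn            = false
  health_checks         = [google_compute_health_check.test-hc.id]

  # One backend block per unmanaged instance group.
  dynamic "backend" {
    for_each = var.umig_self_links # hypothetical list of the two instance-group self-links
    content {
      group           = backend.value
      balancing_mode  = "UTILIZATION"
      capacity_scaler = 1.0
    }
  }
}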

How to whitelist Atlassian/Bitbucket IPs in AWS EC2 security group?

We want Bitbucket webhooks to trigger our CI tool, which runs on an AWS EC2 instance protected from general access by ingress rules.
Bitbucket provides a page listing their IP addresses at https://support.atlassian.com/bitbucket-cloud/docs/what-are-the-bitbucket-cloud-ip-addresses-i-should-use-to-configure-my-corporate-firewall/
They also have a machine-consumable version at https://ip-ranges.atlassian.com/ for Atlassian IPs in general.
I wonder what an efficient approach would be to add and maintain this list in AWS EC2 security groups, e.g. via Terraform.
I ended up scraping the machine-consumable JSON from their page and letting Terraform manage the rest. Getting the JSON is left as a manual step.
resource "aws_security_group_rule" "bitbucket-ips-sgr" {
security_group_id = "your-security-group-id"
type = "ingress"
from_port = 443
to_port = 443
protocol = "TCP"
cidr_blocks = local.bitbucket_cidrs_ipv4
ipv6_cidr_blocks = local.bitbucket_cidrs_ipv6
}
locals {
bitbucket_cidrs_ipv4 = [for item in local.bitbucket_ip_ranges_source.items:
# see https://stackoverflow.com/q/47243474/1242922
item.cidr if length(regexall(":", item.cidr)) == 0
]
bitbucket_cidrs_ipv6 = [for item in local.bitbucket_ip_ranges_source.items:
# see https://stackoverflow.com/q/47243474/1242922
item.cidr if length(regexall(":", item.cidr)) > 0
]
# the list originates from https://ip-ranges.atlassian.com/
bitbucket_ip_ranges_source = jsondecode(
<<JSON
the json output from the above URL
JSON
)
}
I improved on Richard's answer and wanted to add that TF's http provider can fetch the JSON for you, and, with a slight tweak to the jsondecode() call, that same for loop still plays.
provider "http" {}
data "http" "bitbucket_ips" {
url = "https://ip-ranges.atlassian.com/"
request_headers = {
Accept = "application/json"
}
}
locals {
bitbucket_ipv4_cidrs = [for c in jsondecode(data.http.bitbucket_ips.body).items : c.cidr if length(regexall(":", c.cidr)) == 0]
bitbucket_ipv6_cidrs = [for c in jsondecode(data.http.bitbucket_ips.body).items : c.cidr if length(regexall(":", c.cidr)) > 0]
}
output "ipv4_cidrs" {
value = local.bitbucket_ipv4_cidrs
}
output "ipv6_cidrs" {
value = local.bitbucket_ipv6_cidrs
}
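Putting the two answers together, the rule from the first answer can point at these locals directly; note the local names differ between the answers (bitbucket_ipv4_cidrs here vs. bitbucket_cidrs_ipv4 above), and your-security-group-id remains a placeholder:
resource "aws_security_group_rule" "bitbucket-ips-sgr" {
  security_group_id = "your-security-group-id" # placeholder
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = local.bitbucket_ipv4_cidrs
  ipv6_cidr_blocks  = local.bitbucket_ipv6_cidrs
}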

Akka.Net Clustering Simple Explanation

I'm trying to set up a simple cluster using Akka.NET.
The goal is to have a server receive requests, with Akka.NET processing them through its cluster.
For testing and learning I created a simple WCF service that receives a math equation, and I want to send that equation off to be solved.
I have one server project and another client project.
The configuration on the server side is:
<![CDATA[
akka {
  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
    debug {
      receive = on
      autoreceive = on
      lifecycle = on
      event-stream = on
      unhandled = on
    }
    deployment {
      /math {
        router = consistent-hashing-group #round-robin-pool # routing strategy
        routees.paths = [ "/user/math" ]
        virtual-nodes-factor = 8
        #nr-of-instances = 10 # max number of total routees
        cluster {
          enabled = on
          max-nr-of-instances-per-node = 2
          allow-local-routees = off
          use-role = math
        }
      }
    }
  }
  remote {
    helios.tcp {
      transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
      applied-adapters = []
      transport-protocol = tcp
      port = 8081
      hostname = "127.0.0.1"
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://ClusterSystem#127.0.0.1:8081"] # address of seed node
  }
}
]]>
On the client side the configuration is like this:
<![CDATA[
akka {
  actor.provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
  remote {
    log-remote-lifecycle-events = DEBUG
    log-received-messages = on
    helios.tcp {
      transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
      applied-adapters = []
      transport-protocol = tcp
      port = 0
      hostname = 127.0.0.1
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://ClusterSystem#127.0.0.1:8081"] # address of the seed node
    roles = ["math"] # roles this member is in
  }
  actor.deployment {
    /math {
      router = round-robin-pool # routing strategy
      routees.paths = ["/user/math"]
      nr-of-instances = 10 # max number of total routees
      cluster {
        enabled = on
        allow-local-routees = on
        use-role = math
        max-nr-of-instances-per-node = 10
      }
    }
  }
}
]]>
The cluster connection seems to be made correctly: I see the status [UP] and the association with the role "math" appear on the server side.
Even following the WebCrawler example, I haven't managed to get a message delivered; I always end up with dead letters.
I try it like this:
actor = sys.ActorOf(Props.Empty.WithRouter(FromConfig.Instance), "math");
or
var actor = sys.ActorSelection("/user/math");
Does someone know a good tutorial, or could someone help me?
Thanks
Some remarks:
First: assuming you're sending work from the server to the client, you are effectively remote-deploying actors on your client.
That means only the server node needs the actor.deployment config section.
The client only needs the default cluster config (and your role setting, of course).
Second: try to make it simpler first. Use a round-robin-pool instead; it's much simpler. Get that working and work your way up from there.
This way it's easier to eliminate configuration/network/other issues.
Your usage, actor = sys.ActorOf(Props.Empty.WithRouter(FromConfig.Instance), "math");, is correct.
A sample of how your round-robin-pool config could look:
deployment {
  /math {
    router = round-robin-pool # routing strategy
    nr-of-instances = 10 # max number of total routees
    cluster {
      enabled = on
      max-nr-of-instances-per-node = 2
      allow-local-routees = off
      use-role = math
    }
  }
}
Try this out and let me know if it helps.
Edit:
OK, after looking at your sample, here are some things I changed:
ActorManager->Process: you're creating a new router actor per request. Don't do that. Create the router actor once and reuse the IActorRef.
I got rid of the minimal cluster size settings in the MathAgentWorker project.
Since you're not using remote actor deployment, I changed the round-robin-pool to a round-robin-group.
After that it worked.
Also remember that if you're using the consistent-hashing-group router you need to specify the hashing key. There are various ways to do that; in your sample I think the easiest way would be to wrap the message you're sending to your router in a ConsistentHashableEnvelope. Check the docs for more information.
Finally, the akka deployment section looked like this:
deployment {
  /math {
    router = round-robin-group # routing strategy
    routees.paths = ["/user/math"]
    cluster {
      enabled = on
      allow-local-routees = off
      use-role = math
    }
  }
}
On the MathAgentWorker I only changed the cluster section, which now looks like this:
cluster {
  seed-nodes = ["akka.tcp://ClusterSystem#127.0.0.1:8081"] # address of the seed node
  roles = ["math"] # roles this member is in
}
And the only thing that the ActorManager.Process does is:
return await Program.Instance.RouterInstance.Ask<TResult>(msg, TimeSpan.FromSeconds(10));
