HashiCorp Nomad: basic networking between tasks

I'm starting to do some tests with Nomad and I could use a bit of help on the easiest way to add networking to a task group. Basically my questions are:
Which is the easiest way to add internal networking between tasks?
and
Shouldn't tasks in the same group have access to each other by default? Or is there something I'm doing wrong?
I have this configuration:
job "job" {
datacenters = [ "dc1" ]
type = "service"
group "group" {
count = 1
task "db" {
kill_timeout = "120s"
driver = "docker"
config {
image = "dbimage"
port_map {
db = 3306
}
}
env {
MYSQL_DATABASE = "db"
MYSQL_ROOT_PASSWORD = "pass"
}
service {
name = "db"
port = "db"
}
resources {
memory = 256
network {
mode = "host"
port "db" {}
}
}
}
task "app1" {
driver = "docker"
kill_timeout = "120s"
config {
image = "app1"
port_map {
app1 = 5000
}
}
service {
name = "app1"
port = "app1"
}
resources {
memory = 128
network {
mode = "host"
port "app1" {}
}
}
}
task "app2" {
driver = "docker"
kill_timeout = "120s"
config {
image = "app2:image"
port_map {
app2 = 4000
}
}
env {
.....
}
service {
name = "app2"
port = "app2"
}
resources {
memory = 256
network {
mode = "host"
port "app2" {}
}
}
}
}
}
and I would like app1 and app2 to be able to talk internally to each other and to the db. I have read about the Nomad ADDRESS variables that are passed to each container, and I tried using them to reach the other tasks, but I get connection refused.
Is Consul Connect the only way to accomplish this behaviour, or is there a simpler way? Appreciate the help :)

You can use the NOMAD_ADDR_<task>_<port> variable to connect to another task in the job group.
For example, use the $NOMAD_ADDR_db_db environment variable in the app1 and app2 tasks to get the ip:port pair of the db task.
For more info, see https://www.nomadproject.io/docs/runtime/environment
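For instance (a minimal sketch, not part of the original job; the DB_ADDR name is just an illustration), app1 could map that variable to whatever name its code expects via the env stanza, since Nomad interpolates its runtime variables there:

task "app1" {
  driver = "docker"

  config {
    image = "app1"
  }

  env {
    # Interpolated at task start: the ip:port pair advertised for the
    # "db" port of the sibling db task in the same group.
    DB_ADDR = "${NOMAD_ADDR_db_db}"
  }
}

The variable is also injected directly into the environment of app1 and app2, so the application can read NOMAD_ADDR_db_db itself without any mapping.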

Related

Get aws_lambda_event_source_mapping from sqs.tf and use it in lambda.tf

I have my Terraform set up across two files, lambda.tf and sqs.tf.
In lambda.tf:
locals {
  my_lambda = aws_lambda_function.vdf_lambda["my-lambda"]
}
...
environment {
  variables = {
    "MY_QUEUE_URL"  = local.my_queue.id
    "MY_TRIGGER_ID" = local.my_queue_trigger.uuid
  }
}
In sqs.tf:
locals {
  my_queue         = aws_sqs_queue.fifo_queue["my-queue"]
  my_queue_trigger = aws_lambda_event_source_mapping.my_lambda_trigger
}
...
resource "aws_lambda_event_source_mapping" "my_lambda_trigger" {
  batch_size       = 1
  event_source_arn = aws_sqs_queue.fifo_queue["my-queue"].arn
  function_name    = local.my_lambda.function_name
}
When I run terraform plan, I get this error:
Error: Cycle: local.my_lambda (expand), aws_lambda_event_source_mapping.my_lambda_trigger,
local.my_queue_trigger (expand), aws_lambda_function.my_lambdas
My guess is that the trigger has not been created yet when I try to get its UUID in lambda.tf. So how do I fix this? How do I get the trigger UUID in lambda.tf?
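The cycle here is structural rather than an ordering problem: the lambda's environment references local.my_queue_trigger (the mapping), while the mapping's function_name references local.my_lambda (the function), so each resource depends on the other and neither can be created first. One way to break the loop (a hedged sketch, not from the original thread; the SSM parameter name is hypothetical) is to keep the UUID out of the function's environment entirely and publish it somewhere the function can read at runtime:

# sqs.tf -- nothing else depends on this resource, so adding it creates
# no cycle; the lambda drops MY_TRIGGER_ID from its environment block
# and reads this parameter at runtime instead
resource "aws_ssm_parameter" "my_trigger_id" {
  name  = "/my-app/my-trigger-id" # hypothetical parameter name
  type  = "String"
  value = aws_lambda_event_source_mapping.my_lambda_trigger.uuid
}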

Writing acceptance test to terraform module and transfer it to resource acceptance test

I was wondering if I can take a module, let's say the ec2_instances module below, and write an acceptance test for it in Golang.
I have also attached a function that tests a resource of a custom provider I wrote, although I have no clue how to make the connection between the two.
locals {
  nodes = toset(["a", "b", "c"])
}

module "ec2_instances" {
  source   = "terraform-aws-modules/ec2-instance/aws"
  version  = "~> 3.0"
  for_each = local.nodes

  name                        = "${each.key}.ravendb"
  ami                         = "ami-09e67e426f25ce0d7"
  instance_type               = "t2.micro"
  key_name                    = "omer-tf"
  monitoring                  = false
  vpc_security_group_ids      = ["sg-03525214aee516d50"]
  subnet_id                   = "subnet-0ee70783a7dd19aa5"
  associate_public_ip_address = true

  # ebs {
  #   # avoid data loss if changing the configuration makes TF decide to terminate the instance
  #   # a real-world scenario should have dedicated EBS volume resources managed separately from instances
  #   delete_on_termination = false
  # }
}
func TestAccResourceRavenDbServer(t *testing.T) {
    var (
        resourceName = "resource_ravendb_server.test"
    )
    resource.ParallelTest(t, resource.TestCase{
        PreCheck: func() {
            testAccPreCheck(t)
            checkAwsEnv(t)
        },
        ProviderFactories: testAccProviderFactories,
        CheckDestroy:      testAccRavenDbServerDestroy,
        Steps: []resource.TestStep{
            {
                Config: testAccRavenDbServerConfig(),
                Check: resource.ComposeTestCheckFunc(
                    resource.TestCheckResourceAttr(resourceName, "healthcheck_database", "firewire"),
                ),
            },
        },
    })
}

Jenkins pipeline RejectedAccessException error

I am trying to set up a Jenkins pipeline script that sends out an email when there is a job that has been running for more than 24 hours.
// Long running jobs
pipeline {
    agent any

    environment {
        EMAIL_ALERT_TO = "address"
        EMAIL_ALERT_CC = "address"
    }

    stages {
        stage('def methods') {
            steps {
                script {
                    Jenkins.instance.getAllItems(Job).each() { job ->
                        job.isBuildable()
                        if (job.isBuilding()) {
                            def myBuild = job.getLastBuild()
                            def runningSince = groovy.time.TimeCategory.minus(new Date(), myBuild.getTime())
                            echo "myBuild = ${myBuild}"
                            echo "runningSince = ${runningSince}"
                            env.myBuild = myBuild
                            env.runningSince = runningSince
                        }
                    }
                }
            }
        }
    }

    post {
        // Email out the results
        always {
            script {
                if (runningSince.hours >= 1) {
                    mail to: "${env.EMAIL_ALERT_CC}",
                         cc: "${env.EMAIL_ALERT_CC}",
                         subject: "Long Running Jobs",
                         body: "Build: ${myBuild} ---- Has Been Running for ${runningSince.hours} hours:${runningSince.minutes} minutes"
                }
            }
        }
    }
}
I am seeing a RejectedAccessException, which appears to be related to arrays/lists.
This is what I believe you are looking for:
https://issues.jenkins-ci.org/browse/JENKINS-54952?page=com.atlassian.jira.plugin.system.issuetabpanels%3Achangehistory-tabpanel
In short, the Groovy sandbox rejects signatures such as Jenkins.instance.getAllItems until an administrator approves them (Manage Jenkins → In-process Script Approval) or the script is run outside the sandbox.

error in module ipv4NetworkConfigurator, configurator module 'ipv4NetworkConfigurator' not found

I am working in OMNeT++ on network simulation. I want to build a client-server network with two routers as the topology. I used the IPv4NetworkConfigurator module to assign IP addresses automatically and fill in the routing tables, but I can't understand the error 'ipv4NetworkConfigurator' not found.
// file ClientServeur.ned
package networkclientserver.simulations;

import inet.networklayer.configurator.ipv4.IPv4NetworkConfigurator;
import inet.node.inet.Router;
import inet.node.inet.StandardHost;
import ned.DatarateChannel;

network ClientServer
{
    submodules:
        Client: StandardHost {
            @display("p=56,154");
        }
        Server: StandardHost {
            @display("p=501,154;i=device/server");
        }
        R1: Router {
            @display("p=201,154");
        }
        R2: Router {
            @display("p=342,154");
        }
        Configurator: IPv4NetworkConfigurator {
            @display("p=251,62");
        }
    connections:
        Client.pppg++ <--> DatarateChannel { delay = 100ms; datarate = 64kbps; } <--> R1.pppg++;
        R1.pppg++ <--> DatarateChannel { delay = 100ms; datarate = 64kbps; } <--> R2.pppg++;
        R2.pppg++ <--> DatarateChannel { delay = 100ms; datarate = 64kbps; } <--> Server.pppg++;
}
# file omnetpp.ini
network = ClientServer
description = "Fully automatic IP address assignment"

# Configurator settings
tkenv-plugin-path = ../../../etc/plugins
record-eventlog = true
**.networkLayer.configurator.networkConfiguratorModule = "Ipv4networkconfigurator"
**.channel.throughput.result-recording-modes = all
*.Configurator.dumpAddresses = true
*.Configurator.dumpTopology = true
*.Configurator.dumpLinks = true
*.Configurator.dumpRoutes = true

# Routing settings
*.*.ipv4.arp.typename = "GlobalArp"
#*.*.ipv4.routingTable.netmaskRoutes = ""
sim-time-limit = 100s

**.tcpType = "TCP"
**.Client.numTcpApps = 1
**.Client.tcpApp[*].typename = "TCPSessionApp"
**.Client.tcpApp[*].connectAddress = "server"
**.Client.tcpApp[*].connectPort = 80
**.Client.tcpApp[*].sendBytes = 10MiB
**.Server.numTcpApps = 1
**.Server.tcpApp[*].typename = "TCPSinkApp"
**.Server.tcpApp[*].localAddress = ""
**.Server.tcpApp[*].localPort = 80
**.tcpApp[*].dataTransferMode = "object"
**.R1.ppp[*].queueType = "DropTailQueue"
**.R1.ppp[*].queue.frameCapacity = 10
**.ppp[*].numOutputHooks = 1
**.ppp[*].outputHook[*].typename = "ThruputMeter"
error in module (inet::IPv4NodeConfigurator), Configurator module 'ipv4NetworkConfigurator' not found
By default, all nodes expect the configurator submodule to be called configurator (starting with a lowercase c), while you have

Configurator: IPv4NetworkConfigurator

which starts with an uppercase C, so renaming the submodule to configurator (keeping the type name IPv4NetworkConfigurator as it is) fixes the lookup. As a convention, it is recommended to start module names, types, and interface names with uppercase, while using lowercase names for parameters, gates, and submodule names.

Juniper SRX 220

I'm a newbie to Juniper and the SRX. We have just set up a cluster with two Juniper SRX 220 devices, and I'm struggling to set up the reth interfaces. The Junipers have two uplinks to a Cisco ASA. At the moment interfaces ge-0/0/0, ge-3/0/0 and ge-0/0/1, ge-3/0/1 are connected to the ASA. I have set up VLAN 192 and added the reth1 interface to it. I can ping the reth1 interface but cannot ping the ASA interface at the other end. Can someone please advise what I have done wrong? Config below.
chassis {
    cluster {
        reth-count 2;
        redundancy-group 0 {
            node 0 priority 100;
            node 1 priority 1;
        }
        redundancy-group 1 {
            node 0 priority 100;
            node 1 priority 1;
            preempt;
            interface-monitor {
                ge-3/0/1 weight 255;
                ge-0/0/1 weight 255;
            }
        }
    }
}
interfaces {
    interface-range interfaces-fwtransit {
        member ge-0/0/0;
        member ge-3/0/0;
        unit 0 {
            family ethernet-switching {
                vlan {
                    members fwtransit;
                }
            }
        }
    }
    ge-0/0/1 {
        gigether-options {
            redundant-parent reth1;
        }
    }
    ge-0/0/3 {
        unit 0 {
            family inet {
                address 10.100.0.252/24;
            }
        }
    }
    ge-3/0/1 {
        gigether-options {
            redundant-parent reth1;
        }
    }
    fab0 {
        fabric-options {
            member-interfaces {
                ge-0/0/5;
            }
        }
    }
    fab1 {
        fabric-options {
            member-interfaces {
                ge-3/0/5;
            }
        }
    }
    reth0 {
        vlan-tagging;
        redundant-ether-options {
            redundancy-group 1;
        }
    }
    reth1 {
        vlan-tagging;
        redundant-ether-options {
            redundancy-group 1;
        }
        unit 192 {
            description untrust;
            vlan-id 192;
            family inet {
                address 192.168.2.252/24;
            }
        }
    }
    vlan {
        unit 0 {
            family inet {
                address 192.168.1.1/24;
            }
        }
        unit 162 {
            family inet {
                address 172.31.254.3/24;
            }
        }
        unit 192 {
            family inet {
                address 192.168.2.3/24;
            }
        }
    }
}
routing-options {
    static {
        route 10.100.0.0/24 next-hop 10.100.0.1;
    }
}
protocols {
    stp;
}
security {
    zones {
        security-zone trust {
            interfaces {
                ge-0/0/3.0 {
                    host-inbound-traffic {
                        system-services {
                            ping;
                            https;
                            ssh;
                        }
                    }
                }
            }
        }
        security-zone untrust {
            host-inbound-traffic {
                system-services {
                    ping;
                }
            }
            interfaces {
                vlan.162;
                vlan.192;
            }
        }
    }
}
vlans {
    fwtransit {
        vlan-id 162;
        l3-interface vlan.162;
    }
    web_dmz {
        vlan-id 192;
        l3-interface vlan.192;
    }
}
My understanding is that you have something like the following topology (the original diagram is not included here).
As you already have ICMP under host-inbound-traffic, you could check:
1. As an initial quick-and-dirty test, a security policy permitting everything. A premise for this: "The Junos OS examines security policies if the traffic destination is any interface other than the incoming interface."
2. Monitor traffic on the interface and make sure the ICMP echoes are leaving the wire; if there is no reply, the problem could be on the ASA side.
Have you checked the interface statistics for drops or errors?
Please check that you have configured the correct policies with:
- show configuration security policies
You can configure a policy with:
set security policies from-zone xxx to-zone xxx policy my-policy match source-address any destination-address any application any
set security policies from-zone xxx to-zone xxx policy my-policy then permit
and try to ping the ASA interface while specifying the source interface:
- ping x.x.x.x interface ge-0/0/0
Maybe you also want to define a loopback interface and add it to your "trust" security zone.
