OMNeT++ doesn't recognize nodes created by VACaMobil

I am starting to use VACaMobil in a project. However, I am having a problem.
Correct me if I am wrong, but my understanding is that TraCIScenarioManagerLaunchd
(and therefore VACaMobil) dynamically creates the nodes representing each vehicle, i.e.
the car module isn't defined in the network file, but is declared in the omnetpp.ini file via
the parameters "*.manager.moduleType" and "*.manager.moduleName". In my case, the module
type is a custom car module based on the Car module from inet/examples/VACaMobil/Cars, and
the module name is "coche".
Here is the omnetpp.ini file:
[General]
network = Cars
cmdenv-express-mode = true
cmdenv-autoflush = true
cmdenv-status-frequency = 10000000s
repeat = 10
tkenv-plugin-path = ../../../etc/plugins
tkenv-image-path = bitmaps
#sim-time-limit = 6000s
check-signals = true
**.manager.**.scalar-recording = true
**.manager.**.vector-recording = true
**.manetrouting.**.scalar-recording = true
**.movStats.**.scalar-recording = true
**.movStats.**.vector-recording = true
**.mac.**.scalar-recording = true
**.mac.**.vector-recording = true
**.scalar-recording = true
**.vector-recording = true
*.channelControl.carrierFrequency = 2.4GHz
*.channelControl.pMax = 2mW
*.channelControl.sat = -110dBm
*.channelControl.alpha = 2
*.channelControl.numChannels = 1
# TraCIScenarioManagerLaunchd
*.manager.updateInterval = 1s
*.manager.host = "localhost"
*.manager.port = 9999
*.manager.moduleType = "rcdp9.TAdhocHost"
*.manager.moduleName = "coche"
*.manager.moduleDisplayString = ""
*.manager.autoShutdown = true
*.manager.margin = 25
*.manager.doNothing = false
# nic settings
**.wlan.bitrate = 24Mbps
**.wlan.opMode = "g"
**.wlan.mgmt.frameCapacity = 10
**.wlan.mgmtType = "Ieee80211MgmtAdhoc"
**.wlan.mac.basicBitrate = 24Mbps
**.wlan.mac.controlBitrate = 24Mbps
**.wlan.mac.address = "auto"
**.wlan.mac.maxQueueSize = 14
**.wlan.mac.rtsThresholdBytes = 3000B
**.wlan.mac.retryLimit = 7
**.wlan.mac.cwMinData = 7
**.wlan.radio.transmitterPower = 2mW
**.wlan.radio.thermalNoise = -110dBm
**.wlan.radio.sensitivity = -85dBm
**.wlan.radio.pathLossAlpha = 2
**.wlan.radio.snirThreshold = 4dB
**.getStatistics = true
**.statFiles = "${resultdir}/${configname}-${runnumber}-"
**.channelNumber = 0
[Config RCDP]
**.coche[0..49].app1.localPort = 1000
**.coche[0..49].app1.destPort = 1000
**.coche[0..49].app1.messageLength = ${50000000B ! length}
**.coche[0..49].app1.burstSize = 10
**.coche[0..49].app1.bandWidth = 147200bps
**.coche[0..49].app1.alpha = 0.5
**.coche[0..49].app1.beta = 0.8
**.coche[0..49].app1.destAddresses = moduleListByPath("**.coche[50..99]")
**.coche[0..49].networkLayer.configurator.networkConfiguratorModule = "configurator"
**.coche[50..99].app2.localPort = 1000
**.coche[50..99].app2.messageLength = ${length = 50000000B}
**.coche[50..99].app2.bandWidth = 147200bps
**.coche[50..99].app2.maxBandWidth = 24Mbps
#**.meanNumberOfCars = ${100, 200, 300, 400}
**.meanNumberOfCars = 100
**.warmUpSeconds = 0s
**.autoShutdown = false
*.manager.launchConfig = xmldoc("VACaMobil/Milan/downtown.launch.xml")
# manet routing
**.routingProtocol = ${"AODVUU", "DYMO", "OLSR"}
[Config TCP]
**.coche[0..49].app3.localPort = 1000
**.coche[0..49].app3.startTime = 0s
**.coche[0..49].app3.stopTime = 100s
**.coche[0..49].app3.thinkTime = 1s
**.coche[0..49].app3.idleInterval = 3s
**.coche[0..49].app3.requestLength = 50000000B
**.coche[0..49].app3.numRequestsPerSession = 100
**.coche[0..49].app3.connectAddress = moduleListByPath("**.coche[50..99]")
**.coche[0..49].networkLayer.configurator.networkConfiguratorModule = "configurator"
**.coche[50..99].app4.dataTransferMode = "object"
#carGRCnator
**.getStatistics = true
**.statFiles = "${resultdir}/${configname}-${runnumber}-"
#**.meanNumberOfCars = ${100, 200, 300, 400}
**.meanNumberOfCars = 100
**.warmUpSeconds = 0s
**.autoShutdown = false
*.manager.launchConfig = xmldoc("VACaMobil/Milan/downtown.launch.xml")
# manet routing
**.routingProtocol = ${"AODVUU", "DYMO", "OLSR"}
Here is the car module (TAdhocHost.ned):
package rcdp9;
import inet.base.NotificationBoard;
import inet.networklayer.autorouting.ipv4.HostAutoConfigurator;
import inet.networklayer.common.InterfaceTable;
import inet.mobility.single.TraCIMobility;
import inet.networklayer.ipv4.RoutingTable;
import inet.transport.tcp.TCP;
import inet.transport.udp.UDP;
import inet.nodes.inet.NetworkLayer;
import inet.linklayer.ieee80211.Ieee80211Nic;
import inet.networklayer.IManetRouting;
module TAdhocHost
{
    parameters:
        @node();
        string routingProtocol @enum("AODVUU","DYMOUM","DYMO","DSRUU","OLSR","OLSR_ETX","DSDV_2","Batman") = default("");
    gates:
        input radioIn @directIn;
    submodules:
        notificationBoard: NotificationBoard {
            parameters:
                @display("p=60,160");
        }
        interfaceTable: InterfaceTable {
            parameters:
                @display("p=60,240");
        }
        app1: RCDPClient {
            parameters:
                @display("p=304,56");
        }
        app2: RCDPServer {
            parameters:
                @display("p=210,56");
        }
        app3: TCPClient {
            parameters:
                @display("p=378,56");
        }
        app4: TCPServer {
            parameters:
                @display("p=147,56");
        }
        mobility: TraCIMobility {
            parameters:
                @display("p=60,459");
        }
        routingTable: RoutingTable {
            parameters:
                IPForward = true;
                routerId = "";
                routingFile = "";
                @display("p=60,326");
        }
        udp: UDP {
            parameters:
                @display("p=304,192");
        }
        tcp: TCP {
            parameters:
                @display("p=219,192");
        }
        networkLayer: NetworkLayer {
            parameters:
                proxyARP = false;
                @display("p=304,327;q=queue");
            gates:
                ifIn[1];
                ifOut[1];
        }
        manetrouting: <routingProtocol> like IManetRouting if routingProtocol != "" {
            @display("p=522,307");
        }
        wlan: Ieee80211Nic {
            parameters:
                @display("p=304,461;q=queue");
        }
        ac_wlan: HostAutoConfigurator {
            @display("p=60,401");
        }
    connections allowunconnected:
        udp.appOut++ --> app1.udpIn;
        udp.appIn++ <-- app1.udpOut;
        udp.appOut++ --> app2.udpIn;
        udp.appIn++ <-- app2.udpOut;
        udp.ipOut --> networkLayer.transportIn++;
        udp.ipIn <-- networkLayer.transportOut++;
        tcp.appOut++ --> app3.tcpIn;
        tcp.appIn++ <-- app3.tcpOut;
        tcp.appOut++ --> app4.tcpIn;
        tcp.appIn++ <-- app4.tcpOut;
        tcp.ipOut --> networkLayer.transportIn++;
        tcp.ipIn <-- networkLayer.transportOut++;
        wlan.upperLayerOut --> networkLayer.ifIn[0];
        wlan.upperLayerIn <-- networkLayer.ifOut[0];
        networkLayer.transportOut++ --> manetrouting.from_ip if routingProtocol != "";
        networkLayer.transportIn++ <-- manetrouting.to_ip if routingProtocol != "";
        radioIn --> wlan.radioIn;
}
Here is the network file (Cars.ned):
package rcdp9;
import inet.world.VACaMobil.VACaMobil;
import inet.networklayer.autorouting.ipv4.IPv4NetworkConfigurator;
import inet.nodes.inet.AdhocHost;
import inet.world.radio.ChannelControl;
import inet.world.traci.TraCIScenarioManagerLaunchd;
network Cars
{
    submodules:
        configurator: IPv4NetworkConfigurator {
            @display("p=396,221");
        }
        channelControl: ChannelControl {
            @display("p=396,310");
        }
        manager: VACaMobil {
            @display("p=322,405");
        }
    connections allowunconnected:
}
The problem is that, for the omnetpp.ini lines beginning with "**.channelNumber", "**.wlan.", "**.coche", and "**.routingProtocol", OMNeT++ gives the following warning:
"Warning: Unused entry (does not match any parameters)."
Apparently, OMNeT++ complains because the module TAdhocHost, whose instances are named "coche",
isn't defined in Cars.ned. But isn't OMNeT++ supposed to automatically recognize that "coche"
is created by VACaMobil? Am I doing something wrong? I would appreciate any help.

In the simplest case (all modules are created via appropriate statements in a simulation's .ned files), the IDE can simply parse all .ned files to know which modules exist.
However, the Veins TraCIScenarioManager uses the OMNeT++ C++ API directly to create additional modules at runtime. Before the simulation is run, the OMNeT++ IDE has no way of knowing that this will happen.
Without understanding what your C++ code does, it cannot know that the module parameters in your .ini file will (not at the start, but at a later point in the simulation) refer to a module that will exist.
There might be ways around this (declaring in a .ned file that a module should not be created automatically, but can be assumed to exist anyway), but I don't recall any that is not a hack.
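For what it's worth, one hack along those lines (used in some newer Veins setups; whether VACaMobil tolerates it is something you would have to test) is to declare a zero-size submodule vector with the manager's moduleName and moduleType in the network, purely so that the NED tooling learns what type hides behind the name "coche":

```
// Sketch for Cars.ned (assumption: the manager accepts a pre-declared vector).
// A vector of size 0 instantiates nothing at network setup, but tells the
// IDE/ini checker that "coche" refers to TAdhocHost instances.
network Cars
{
    submodules:
        coche[0]: TAdhocHost;
        // ... existing submodules: configurator, channelControl, manager ...
    connections allowunconnected:
}
```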

Related

LTE and Wi-Fi in vehicular network

I am trying to build a Wi-Fi offloading scenario in a vehicular network using the OMNeT++ simulator.
As a first step, I want to run the simulation with both modules added to the car, before adding the offloading algorithm. I have defined the car module as an extension of StandardHost and added the LteNic module to it. However, when I try to run the initialization file, I get the message "finish with errors".
The car module:
package lte_opf_wifi.cars;
import inet.applications.contract.ITCPApp;
import inet.applications.contract.IUDPApp;
import inet.mobility.contract.IMobility;
import inet.networklayer.common.InterfaceTable;
import inet.networklayer.contract.IRoutingTable;
import inet.networklayer.contract.INetworkLayer;
import inet.networklayer.configurator.ipv4.HostAutoConfigurator;
import inet.transportlayer.tcp.TCP;
import inet.transportlayer.udp.UDP;
import lte.stack.ILteNic;
import inet.node.inet.StandardHost;
//
// Car Module
//
module Car extends StandardHost
{
    parameters:
        @networkNode();
        @display("i=device/car;is=vs;bgb=400,518");
        //# Node specs
        string nodeType = "UE"; // DO NOT CHANGE
        int masterId;
        int macNodeId = default(0); // TODO: this is not a real parameter
        int macCellId = default(0); // TODO: this is not a real parameter
        //# D2D capability
        string nicType = default("LteNicUe");
        //# Network Layer specs
        *.interfaceTableModule = default(absPath(".interfaceTable"));
        *.routingTableModule = default(absPath(".routingTable"));
    gates:
        input radioInlte @directIn;
    submodules:
        // NOTE: instance must be named "lteNic"
        lteNic: <nicType> like ILteNic {
            nodeType = nodeType;
            @display("p=250,407");
        }
        // network layer
        configurator: HostAutoConfigurator {
            @display("p=49.068,22.968");
        }
    connections allowunconnected:
        lteNic.radioIn <-- radioInlte;
        networkLayer.ifOut++ --> lteNic.upperLayerIn;
        networkLayer.ifIn++ <-- lteNic.upperLayerOut;
}
The network .ned file:
// SimuLTE
//
// This file is part of a software released under the license included in file
// "license.pdf". This license can be also found at http://www.ltesimulator.com/
// The above file and the present reference are part of the software itself,
// and cannot be removed from it.
//
package lte_opf_wifi.simulations.cars;
import inet.networklayer.configurator.ipv4.IPv4NetworkConfigurator;
import inet.networklayer.ipv4.RoutingTableRecorder;
import inet.node.inet.AdhocHost;
import inet.node.inet.Router;
import inet.node.inet.WirelessHost;
import inet.node.wireless.AccessPoint;
import inet.physicallayer.ieee80211.packetlevel.Ieee80211ScalarRadioMedium;
import inet.node.inet.StandardHost;
import inet.node.ethernet.Eth10G;
import inet.common.misc.ThruputMeteringChannel;
import openflow.nodes.Domain_wController;
import openflow.nodes.OpenFlow_Domain_fixed;
import openflow.nodes.Open_Flow_Controller;
import openflow.nodes.Open_Flow_Switch;
import lte.world.radio.LteChannelControl;
import lte.epc.PgwStandardSimplified;
import lte.corenetwork.binder.LteBinder;
import lte.corenetwork.nodes.eNodeB;
import lte_opf_wifi.cars.Car;
import lte_opf_wifi.veins_inet.VeinsInetManager;
network highway2
{
    parameters:
        **.mgmt.numChannels = 2;
        double playgroundSizeX @unit(m); // x size of the area the nodes are in (in meters)
        double playgroundSizeY @unit(m); // y size of the area the nodes are in (in meters)
        double playgroundSizeZ @unit(m); // z size of the area the nodes are in (in meters)
        @display("bgb=732,713");
    types:
        channel ethernetline extends ThruputMeteringChannel
        {
            delay = 1us;
            datarate = 100Mbps;
            thruputDisplayFormat = "u";
        }
    submodules:
        routingRecorder: RoutingTableRecorder {
            @display("p=50,75;is=s");
        }
        configurator: IPv4NetworkConfigurator {
            @display("p=50,125");
            config = xmldoc("demo.xml");
        }
        //# Veins manager module
        veinsManager: VeinsInetManager {
            @display("p=50,227;is=s");
        }
        //# RSU
        radioMedium: Ieee80211ScalarRadioMedium {
            parameters:
                @display("p=100,250");
        }
        rsu[2]: AccessPoint {
            @display("p=559.26,451.71;i=veins/sign/yellowdiamond;is=vs");
            wlan[*].mgmtType = "Ieee80211MgmtAPSimplified"; // wireless access point
        }
        epdg: Router {
            @display("p=481.82397,355.632;i=device/smallrouter");
        }
        //# LTE modules
        channelControl: LteChannelControl {
            @display("p=50,25;is=s");
        }
        binder: LteBinder {
            @display("p=50,175;is=s");
        }
        server1: StandardHost {
            @display("p=652.47,240.91199;is=n;i=device/server");
        }
        pgw: PgwStandardSimplified {
            nodeType = "PGW";
            @display("p=212.232,348.462;is=l");
        }
        eNodeB1: eNodeB {
            @display("p=94.644,507.636;is=vl");
        }
        eNodeB2: eNodeB {
            @display("p=309.744,507.636;is=vl");
        }
        //# OpenFlow
        open_Flow_Switch0: Open_Flow_Switch {
            @display("p=354.198,240.91199;b=66,64");
        }
        open_Flow_Switch1: Open_Flow_Switch {
            @display("p=516.24,240.91199;b=66,64");
        }
        SDN_Controller: Open_Flow_Controller {
            @display("p=427.332,124.757996;b=85,66");
        }
        //# car
        node[10]: Car {
            parameters:
                @display("p=250.95,629.526;is=vs;i=block/process");
        }
    connections allowunconnected:
        SDN_Controller.ethg++ <--> ethernetline <--> open_Flow_Switch0.gate_controller++;
        SDN_Controller.ethg++ <--> ethernetline <--> open_Flow_Switch1.gate_controller++;
        open_Flow_Switch0.ethg++ <--> ethernetline <--> open_Flow_Switch1.ethg++;
        open_Flow_Switch0.ethg++ <--> ethernetline <--> pgw.filterGate;
        open_Flow_Switch1.ethg++ <--> ethernetline <--> server1.pppg++;
        open_Flow_Switch0.ethg++ <--> ethernetline <--> epdg.pppg++;
        pgw.pppg++ <--> Eth10G <--> eNodeB1.ppp;
        pgw.pppg++ <--> Eth10G <--> eNodeB2.ppp;
        //# X2 connections
        eNodeB1.x2++ <--> Eth10G <--> eNodeB2.x2++;
        //# RSU connections
        rsu[0].ethg++ <--> Eth10G <--> epdg.ethg++;
        rsu[1].ethg++ <--> Eth10G <--> epdg.ethg++;
}
The .ini file:
[General]
cmdenv-express-mode = true
cmdenv-autoflush = true
cmdenv-status-frequency = 1s
**.cmdenv-log-level = info
image-path = ../../images
network = highway2
##########################################################
# Simulation parameters #
##########################################################
debug-on-errors = true
print-undisposed = true
sim-time-limit = 200s
**.sctp.**.scalar-recording = true
**.sctp.**.vector-recording = true
**.coreDebug = false
**.routingRecorder.enabled = false
*.playgroundSizeX = 20000m
*.playgroundSizeY = 20000m
*.playgroundSizeZ = 50m
##########################################################
# Openflow #
#########################################################
**.SDN_Controller.ofa_controller.port = 6633
**.open_Flow_Switch*.sendCompletePacket = false
**.SDN_Controller.behavior = "Forwarding"
**.ofa_switch.connectPort = 6633
**.ofa_switch.connectAddress = "controller"
**.buffer.capacity = 10
**.ofa_switch.flow_timeout = 5s
**.open_Flow_Switch*.etherMAC[*].promiscuous = true
#**.controller.ofa_controller.address =
# NIC configuration
**.ppp[*].queueType = "DropTailQueue" # in routers
**.ppp[*].queue.frameCapacity = 10 # in routers
*.configurator.networkAddress = "192.168.1.0"
#**.tcp.sendQueueClass="TCPMsgBasedSendQueue" //obsolote since version 2.0
#**.tcp.receiveQueueClass="TCPMsgBasedRcvQueue" //obsolote since version 2.0
#**.open_Flow_Switch*.tcp.mss = 800
#**.SDN_Controller.tcp.mss = 800
#**.open_flow_switch*.tcp.nagleEnabled = false
##########################################################
# VeinsManager parameters #
##########################################################
*.veinsManager.updateInterval = 0.1s
*.veinsManager.launchConfig = xmldoc("heterogeneous.launchd.xml")
##########################################################
# RSU SETTINGS #
# #
# #
##########################################################
**.rsu[1].wlan[*].mac.address = "10:00:00:00:00:00"
**.rsu[2].wlan[*].mac.address = "10:00:00:00:00:10"
**.node[*].**.mgmt.accessPointAddress = "10:00:00:00:00:00"
**.node[*].**.mgmt.accessPointAddress = "10:00:00:00:00:10"
**.mgmt.frameCapacity = 100
#########################################################
# 11p specific parameters #
# #
# NIC-Settings #
##########################################################
**.wlan*.bitrate = 2Mbps
**.mac.address = "auto"
**.mac.maxQueueSize = 14
**.mac.rtsThresholdBytes = 3000B
**.wlan[*].mac.retryLimit = 7
**.wlan[*].mac.cwMinData = 7
**.wlan[*].mac.cwMinBroadcast = 31
**.wlan[*].radio.transmitter.power = 20mW
**.wlan[*].radio.transmitter.bitrate = 2Mbps
**.wlan[*].radio.transmitter.headerBitLength = 100b
**.wlan[*].radio.transmitter.carrierFrequency = 2.4GHz
**.wlan[*].radio.transmitter.bandwidth = 2MHz
**.wlan[*].radio.receiver.sensitivity = -85dBm
**.wlan[*].radio.receiver.snirThreshold = 4dB
# relay unit configuration
**.relayUnitType = "MACRelayUnit"
**.relayUnit.addressTableSize = 100
**.relayUnit.agingTime = 120s
**.relayUnit.bufferSize = 1MiB
**.relayUnit.highWatermark = 512KiB
**.relayUnit.pauseUnits = 300 # pause for 300*512 bit (19200 byte) time
**.relayUnit.addressTableFile = ""
**.relayUnit.numCPUs = 2
**.relayUnit.processingTime = 2us
##########################################################
# channel parameters #
##########################################################
**.channelControl.pMax = 10W
**.channelControl.alpha = 1.0
**.channelControl.carrierFrequency = 2100e+6Hz
##########################################################
# LTE specific parameters #
##########################################################
# Enable dynamic association of UEs (based on best SINR)
*.node[*].lteNic.phy.dynamicCellAssociation = true
**.node[*].masterId = 1 # useless if dynamic association is disabled
**.node[*].macCellId = 1 # useless if dynamic association is disabled
**.eNodeB1.macCellId = 1
**.eNodeB1.macNodeId = 1
**.eNodeB2.macCellId = 2
**.eNodeB2.macNodeId = 2
**.eNodeBCount = 2
# AMC module parameters
**.rbAllocationType = "localized"
**.feedbackType = "ALLBANDS"
**.feedbackGeneratorType = "IDEAL"
**.maxHarqRtx = 3
**.numUe = ${numUEs=10}
# RUs
**.cellInfo.ruRange = 50
**.cellInfo.ruTxPower = "50,50,50;"
**.cellInfo.antennaCws = "2;" # !!MACRO + RUS (numRus + 1)
**.cellInfo.numRbDl = 25
**.cellInfo.numRbUl = 25
**.numBands = 25
**.fbDelay = 1
# Enable handover
*.node[*].lteNic.phy.enableHandover = true
*.eNodeB*.lteNic.phy.enableHandover = true
*.eNodeB*.lteNic.phy.broadcastMessageInterval = 0.5s
# X2 and SCTP configuration
*.eNodeB*.numX2Apps = 1 # one x2App per peering eNodeB
*.eNodeB*.x2App[*].server.localPort = 5000 + ancestorIndex(1) # Server ports (x2App[0]=5000, x2App[1]=5001, ...)
*.eNodeB1.x2App[0].client.connectAddress = "eNodeB2%x2ppp0"
*.eNodeB2.x2App[0].client.connectAddress = "eNodeB1%x2ppp0"
**.sctp.nagleEnabled = false # if true, transmission of small packets will be delayed on the X2
**.sctp.enableHeartbeats = false

Snowplow Enrich Setup Issue

collector.conf
collector {
interface = "0.0.0.0"
interface = ${?COLLECTOR_INTERFACE}
port = 8181
port = ${?COLLECTOR_PORT}
# optional SSL/TLS configuration
ssl {
enable = false
enable = ${?COLLECTOR_SSL}
# whether to redirect HTTP to HTTPS
redirect = false
redirect = ${?COLLECTOR_SSL_REDIRECT}
port = 9543
port = ${?COLLECTOR_SSL_PORT}
}
paths {
# "/com.acme/track" = "/com.snowplowanalytics.snowplow/tp2"
# "/com.acme/redirect" = "/r/tp2"
# "/com.acme/iglu" = "/com.snowplowanalytics.iglu/v1"
}
# Configure the P3P policy header.
p3p {
policyRef = "/w3c/p3p.xml"
CP = "NOI DSP COR NID PSA OUR IND COM NAV STA"
}
crossDomain {
enabled = false
# Domains that are granted access, *.acme.com will match http://acme.com and http://sub.acme.com
enabled = ${?COLLECTOR_CROSS_DOMAIN_ENABLED}
domains = [ "*" ]
domains = [ ${?COLLECTOR_CROSS_DOMAIN_DOMAIN} ]
# Whether to only grant access to HTTPS or both HTTPS and HTTP sources
secure = true
secure = ${?COLLECTOR_CROSS_DOMAIN_SECURE}
}
cookie {
enabled = true
enabled = ${?COLLECTOR_COOKIE_ENABLED}
expiration = "365 days"
expiration = ${?COLLECTOR_COOKIE_EXPIRATION}
# Network cookie name
name = zanui_collector_cookie
name = ${?COLLECTOR_COOKIE_NAME}
domains = [
"{{cookieDomain1}}" # e.g. "domain.com" -> any origin domain ending with this will be matched and domain.com will be returned
"{{cookieDomain2}}" # e.g. "secure.anotherdomain.com" -> any origin domain ending with this will be matched and secure.anotherdomain.com will be returned
# ... more domains
]
domains += ${?COLLECTOR_COOKIE_DOMAIN_1}
domains += ${?COLLECTOR_COOKIE_DOMAIN_2}
fallbackDomain = ""
fallbackDomain = ${?FALLBACK_DOMAIN}
secure = false
secure = ${?COLLECTOR_COOKIE_SECURE}
httpOnly = false
httpOnly = ${?COLLECTOR_COOKIE_HTTP_ONLY}
sameSite = "{{cookieSameSite}}"
sameSite = ${?COLLECTOR_COOKIE_SAME_SITE}
}
doNotTrackCookie {
enabled = false
enabled = ${?COLLECTOR_DO_NOT_TRACK_COOKIE_ENABLED}
# name = {{doNotTrackCookieName}}
name = zanui-collector-do-not-track-cookie
# value = {{doNotTrackCookieValue}}
value = zanui-collector-do-not-track-cookie-value
}
cookieBounce {
enabled = false
enabled = ${?COLLECTOR_COOKIE_BOUNCE_ENABLED}
name = "n3pc"
name = ${?COLLECTOR_COOKIE_BOUNCE_NAME}
fallbackNetworkUserId = "00000000-0000-4000-A000-000000000000"
fallbackNetworkUserId = ${?COLLECTOR_COOKIE_BOUNCE_FALLBACK_NETWORK_USER_ID}
forwardedProtocolHeader = "X-Forwarded-Proto"
forwardedProtocolHeader = ${?COLLECTOR_COOKIE_BOUNCE_FORWARDED_PROTOCOL_HEADER}
}
enableDefaultRedirect = true
enableDefaultRedirect = ${?COLLECTOR_ALLOW_REDIRECTS}
redirectMacro {
enabled = false
enabled = ${?COLLECTOR_REDIRECT_MACRO_ENABLED}
# Optional custom placeholder token (defaults to the literal `${SP_NUID}`)
placeholder = "[TOKEN]"
placeholder = ${?COLLECTOR_REDIRECT_REDIRECT_MACRO_PLACEHOLDER}
}
rootResponse {
enabled = false
enabled = ${?COLLECTOR_ROOT_RESPONSE_ENABLED}
statusCode = 302
statusCode = ${?COLLECTOR_ROOT_RESPONSE_STATUS_CODE}
# Optional, defaults to empty map
headers = {
Location = "https://127.0.0.1/",
Location = ${?COLLECTOR_ROOT_RESPONSE_HEADERS_LOCATION},
X-Custom = "something"
}
# Optional, defaults to empty string
body = "302, redirecting"
body = ${?COLLECTOR_ROOT_RESPONSE_BODY}
}
cors {
accessControlMaxAge = 5 seconds
accessControlMaxAge = ${?COLLECTOR_CORS_ACCESS_CONTROL_MAX_AGE}
}
# Configuration of prometheus http metrics
prometheusMetrics {
enabled = false
}
streams {
# Events which have successfully been collected will be stored in the good stream/topic
good = snowplow-collected-good-events-stream
good = ${?COLLECTOR_STREAMS_GOOD}
# Events that are too big (w.r.t Kinesis 1MB limit) will be stored in the bad stream/topic
bad = snowplow-collected-bad-events-stream
bad = ${?COLLECTOR_STREAMS_BAD}
useIpAddressAsPartitionKey = false
useIpAddressAsPartitionKey = ${?COLLECTOR_STREAMS_USE_IP_ADDRESS_AS_PARTITION_KEY}
sink {
enabled = kinesis
enabled = ${?COLLECTOR_STREAMS_SINK_ENABLED}
# Region where the streams are located
region = ap-southeast-2
region = ${?COLLECTOR_STREAMS_SINK_REGION}
threadPoolSize = 10
threadPoolSize = ${?COLLECTOR_STREAMS_SINK_THREAD_POOL_SIZE}
aws {
accessKey = env
accessKey = ${?COLLECTOR_STREAMS_SINK_AWS_ACCESS_KEY}
secretKey = env
secretKey = ${?COLLECTOR_STREAMS_SINK_AWS_SECRET_KEY}
}
# Minimum and maximum backoff periods, in milliseconds
backoffPolicy {
#minBackoff = {{minBackoffMillis}}
minBackoff = 10
#maxBackoff = {{maxBackoffMillis}}
maxBackoff = 10
}
}
buffer {
byteLimit = 4500000
byteLimit = ${?COLLECTOR_STREAMS_BUFFER_BYTE_LIMIT}
recordLimit = 500 # Not supported by Kafka; will be ignored
recordLimit = ${?COLLECTOR_STREAMS_BUFFER_RECORD_LIMIT}
timeLimit = 5000
timeLimit = ${?COLLECTOR_STREAMS_BUFFER_TIME_LIMIT}
}
}
}
akka {
loglevel = DEBUG # 'OFF' for no logging, 'DEBUG' for all logging.
loglevel = ${?AKKA_LOGLEVEL}
loggers = ["akka.event.slf4j.Slf4jLogger"]
loggers = [${?AKKA_LOGGERS}]
http.server {
remote-address-header = on
remote-address-header = ${?AKKA_HTTP_SERVER_REMOTE_ADDRESS_HEADER}
raw-request-uri-header = on
raw-request-uri-header = ${?AKKA_HTTP_SERVER_RAW_REQUEST_URI_HEADER}
# Define the maximum request length (the default is 2048)
parsing {
max-uri-length = 32768
max-uri-length = ${?AKKA_HTTP_SERVER_PARSING_MAX_URI_LENGTH}
uri-parsing-mode = relaxed
uri-parsing-mode = ${?AKKA_HTTP_SERVER_PARSING_URI_PARSING_MODE}
}
}
}
Run Command:
java -Dcom.amazonaws.sdk.disableCbor -jar snowplow-stream-collector-kinesis-1.0.0.jar --config collector.conf
enricher.conf
enrich {
streams {
in {
# Stream/topic where the raw events to be enriched are located
raw = snowplow-collected-good-events-stream
raw = ${?ENRICH_STREAMS_IN_RAW}
}
out {
# Stream/topic where the events that were successfully enriched will end up
enriched = snowplow-collected-good-events-stream
# Stream/topic where the event that failed enrichment will be stored
bad = snowplow-collected-bad-events-stream
bad = ${?ENRICH_STREAMS_OUT_BAD}
# Stream/topic where the pii tranformation events will end up
# pii = {{outPii}}
# pii = ${?ENRICH_STREAMS_OUT_PII}
partitionKey = event_id
partitionKey = ${?ENRICH_STREAMS_OUT_PARTITION_KEY}
}
sourceSink {
enabled = kinesis
enabled = ${?ENRICH_STREAMS_SOURCE_SINK_ENABLED}
region = ap-southeast-2
aws {
accessKey = env
accessKey = ${?ENRICH_STREAMS_SOURCE_SINK_AWS_ACCESS_KEY}
secretKey = env
secretKey = ${?ENRICH_STREAMS_SOURCE_SINK_AWS_SECRET_KEY}
}
maxRecords = 10000
initialPosition = TRIM_HORIZON
initialTimestamp = "2020-09-10T10:00:00Z"
backoffPolicy {
minBackoff = 1000
minBackoff = ${?ENRICH_STREAMS_SOURCE_SINK_BACKOFF_POLICY_MIN_BACKOFF}
maxBackoff = 5000
maxBackoff = ${?ENRICH_STREAMS_SOURCE_SINK_BACKOFF_POLICY_MAX_BACKOFF}
}
}
buffer {
byteLimit = 1000000000
byteLimit = ${?ENRICH_STREAMS_BUFFER_BYTE_LIMIT}
recordLimit = 10 # Not supported by Kafka; will be ignored
recordLimit = ${?ENRICH_STREAMS_BUFFER_RECORD_LIMIT}
timeLimit = 5000
timeLimit = ${?ENRICH_STREAMS_BUFFER_TIME_LIMIT}
}
appName = "zanui-enricher-app"
appName = ${?ENRICH_STREAMS_APP_NAME}
}
}
Run Command:
java -jar snowplow-stream-enrich-kinesis-1.0.0.jar --config enricher.conf --resolver file:resolver.json
S3 Loader Config
source = "kinesis"
sink = "kinesis"
aws {
accessKey = "env"
secretKey = "env"
}
# Config for NSQ
nsq {
channelName = "nsqSourceChannelName"
# Host name for NSQ tools
host = "127.0.0.1"
# HTTP port for nsqd
port = 4150
# HTTP port for nsqlookupd
lookupPort = 4161
}
kinesis {
initialPosition = "TRIM_HORIZON"
initialTimestamp = "2017-05-17T10:00:00Z"
maxRecords = 10000
region = "ap-southeast-2"
appName = "zanui-enricher-app"
}
streams {
inStreamName = "snowplow-collected-good-events-stream"
outStreamName = "snowplow-collected-bad-events-stream"
buffer {
byteLimit = 1000000000 # Not supported by NSQ; will be ignored
recordLimit = 10
timeLimit = 5000 # Not supported by NSQ; will be ignored
}
}
s3 {
region = "ap-southeast-2"
bucket = "snowplow-enriched-good-events"
partitionedBucket = "snowplow-enriched-good-events/partitioned"
dateFormat = "{YYYY}/{MM}/{dd}/{HH}"
outputDirectory = "zanui-enriched/good"
filenamePrefix = "zanui-output"
format = "gzip"
# Maximum Timeout that the application is allowed to fail for (in milliseconds)
maxTimeout = 300000 # 5 minutes
}
Run Command:
java -jar snowplow-s3-loader-0.6.0.jar --config my.conf
But this Snowplow S3 Loader was not doing anything, so I used Data Firehose to transfer the stream to an S3 bucket.
When I try to use an AWS Lambda in Data Firehose, it gives the error:
{"attemptsMade":4,"arrivalTimestamp":1600154159619,"errorCode":"Lambda.FunctionError","errorMessage":"The Lambda function was successfully invoked but it returned an error result.","attemptEndingTimestamp":1600154235846,"rawData":"****","lambdaArn":"arn:aws:lambda:ap-southeast-2:573188294151:function:snowplow-json-transformer-lambda:$LATEST"}
{"attemptsMade":4,"arrivalTimestamp":1600154161523,"errorCode":"Lambda.FunctionError","errorMessage":"The Lambda function was successfully invoked but it returned an error result.","attemptEndingTimestamp":1600154235846,"rawData":"*****=","lambdaArn":"arn:aws:lambda:ap-southeast-2:573188294151:function:snowplow-json-transformer-lambda:$LATEST"}
If I don't use the Lambda, a log is created in the good enriched S3 bucket for the page view event, but at the same time a log is created in the bad enriched S3 bucket for the same page view event, saying:
{"schema":"iglu:com.snowplowanalytics.snowplow.badrows/collector_payload_format_violation/jsonschema/1-0-0","data":{"processor":{"artifact":"snowplow-stream-enrich","version":"1.0.0"},"failure":{"timestamp":"2020-09-15T07:16:02.488Z","loader":"thrift","message":{"error":"error deserializing raw event: Cannot read. Remote side has closed. Tried to read 2 bytes, but only got 1 bytes. (This is often indicative of an internal error on the server side. Please check your server logs.)"}},"payload":"****="}}
I have followed the documentation repeatedly, but I am confused by the setup for Stream Enrich. What I did not understand is: do we need to set up a database for Stream Enrich if we are not using a custom schema? Since I am only testing with the Page View event from the JavaScript Tracker, I have not set up any database, but I have granted the IAM role DynamoDB create/edit access.
Please help me to set up Snowplow if anyone has done this before. Please :(
I wrote a blog post on how to set up Snowplow Analytics on AWS.
Here is the link; I hope it helps you.
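Regarding the resolver file passed via --resolver file:resolver.json: if you are only tracking standard events such as page views, you do not need your own schema registry; a resolver that points at the public Iglu Central repository is enough. A minimal sketch (the cacheSize value is arbitrary):

```json
{
  "schema": "iglu:com.snowplowanalytics.iglu/resolver-config/jsonschema/1-0-1",
  "data": {
    "cacheSize": 500,
    "repositories": [
      {
        "name": "Iglu Central",
        "priority": 0,
        "vendorPrefixes": [ "com.snowplowanalytics" ],
        "connection": {
          "http": { "uri": "http://iglucentral.com" }
        }
      }
    ]
  }
}
```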

Wireless communication between host and server using an AccessPoint as an intermediate node

I'm doing a very basic simulation in which I want a wireless host to communicate with a server (StandardHost) using an access point as an intermediate node. Unfortunately, only the beacon message reaches the access point, and the server doesn't receive any messages. My code is below.
I'll be very thankful for your help.
######--------VFsim.ned-------##########
package vf.simulations.simulation;
import inet.linklayer.contract.IWirelessNic;
import inet.networklayer.configurator.ipv4.IPv4NetworkConfigurator;
import inet.node.inet.StandardHost;
import inet.node.inet.WirelessHost;
import inet.node.wireless.AccessPoint;
import inet.physicallayer.common.packetlevel.RadioMedium;
import inet.physicallayer.ieee80211.packetlevel.Ieee80211ScalarRadioMedium;
//
// TODO documentation
//
network VFsim
{
    @display("bgb=377,307");
    types:
        channel C extends ned.DatarateChannel
        {
            datarate = 100Mbps;
            delay = 0.1us;
        }
    submodules:
        server: StandardHost {
            @display("p=81,85");
        }
        Configurator: IPv4NetworkConfigurator {
            @display("p=283,78");
        }
        host: WirelessHost {
            @display("p=276,183");
        }
        accessPoint: AccessPoint {
            @display("p=78,155");
        }
        radioMedium: Ieee80211ScalarRadioMedium {
            @display("p=282,26");
        }
    connections:
        accessPoint.ethg++ <--> C <--> server.ethg++;
}
################---------VFsim.ini--------##############
[General]
network = VFsim
**.arp.typename = "GlobalArp"
**.wlan[*].radio.typename = Ieee80211Interface
*.accessPoint.numWlanInterfaces = 1
*.accessPoint.wlan[0].mgmt.ssid = "wlan1"
*.accessPoint.wlan[0].radio.bandName = "5 GHz"
*.accessPoint.app[0].destAddresses = "server"
*.accessPoint.app[0].destPort = 5000
#*.Configurator.config = xmldoc("config.xml")
**.networkLayer.configurator.networkConfiguratorModule = ""
*.host.wlan[*].agent.defaultSsid = "wlan1"
*.host.wlan[*].radio.bandName = "5 GHz"
*.host.numApps = 1
*.host.app[0].typename = "UdpBasicApp"
*.host.app[0].destAddresses = "accessPoint"
*.host.app[0].destPort = 5000
*.host.app[0].messageLength = 1000B
*.host.app[0].sendInterval = exponential(12ms)
*.host.app[0].packetName = "UDPData"
*.server.numApps = 1
*.server.app[0].typename = "UdpSink"
*.server.app[0].localPort = 5000
*.server.app[0].destAddresses = "accessPoint"
**.wlan[*].mac.useAck = false
**.wlan[*].mac.fullDuplex = false
**.radio.transmitter.power = 3.5mW
**.wlan[*].radio.transmitter.communicationRange = 500m
**.wlan[*].radio.receiver.ignoreInterference = true
**.wlan[*].mac.headerLength = 23B
**.bitrate = 1Mbps
For host, set server as the destination of the UDP traffic. In your ini file set:
*.host.app[0].destAddresses = "server"
EDIT
The instance of IPv4NetworkConfigurator must be named configurator, not Configurator. Therefore, in VFsim.ned change it to:
configurator: IPv4NetworkConfigurator
Moreover, remove this line from the ini file:
**.networkLayer.configurator.networkConfiguratorModule = ""
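Taken together, the corrected fragments would look roughly like this (a sketch of just the changed lines; everything else stays as in the original files):

```
// VFsim.ned -- submodule renamed to lowercase "configurator"
configurator: IPv4NetworkConfigurator {
    @display("p=283,78");
}

# VFsim.ini -- host sends to server; the line overriding
# networkConfiguratorModule is removed entirely
*.host.app[0].destAddresses = "server"
```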

Set up step_adjustment in aws_autoscaling_policy from variable in terraform

I am setting up a module to configure autoscaling for ASGs in Terraform. Ideally, I'd like to pass a list of maps into my module and have it loop through them, adding a step_adjustment to the policy for each map in the list; however, this doesn't seem to work.
Current setup:
resource "aws_autoscaling_policy" "example" {
  name                      = "Example Auto-Scale Up Policy"
  policy_type               = "StepScaling"
  autoscaling_group_name    = aws_autoscaling_group.example_asg.name
  adjustment_type           = "PercentChangeInCapacity"
  estimated_instance_warmup = 300

  step_adjustment {
    scaling_adjustment          = 20
    metric_interval_lower_bound = 0
    metric_interval_upper_bound = 5
  }

  step_adjustment {
    scaling_adjustment          = 25
    metric_interval_lower_bound = 5
    metric_interval_upper_bound = 15
  }

  step_adjustment {
    scaling_adjustment          = 50
    metric_interval_lower_bound = 15
  }

  min_adjustment_magnitude = 4
}
I just want to provide the three step_adjustments as variables into my module.
So the way you can do it is as follows:
variable "step_adjustments" {
  type = list(object({
    metric_interval_lower_bound = string
    metric_interval_upper_bound = string
    scaling_adjustment          = string
  }))
  default = []
}

# inside your resource
resource "aws_autoscaling_policy" "scale_up" {
  name                      = "Example Auto-Scale Up Policy"
  policy_type               = "StepScaling"
  autoscaling_group_name    = aws_autoscaling_group.example_asg.name
  adjustment_type           = "PercentChangeInCapacity"
  estimated_instance_warmup = 300

  dynamic "step_adjustment" {
    for_each = var.step_adjustments
    content {
      metric_interval_lower_bound = lookup(step_adjustment.value, "metric_interval_lower_bound")
      metric_interval_upper_bound = lookup(step_adjustment.value, "metric_interval_upper_bound")
      scaling_adjustment          = lookup(step_adjustment.value, "scaling_adjustment")
    }
  }
}
# example input into your module
step_adjustments = [
  {
    scaling_adjustment          = 2
    metric_interval_lower_bound = 0
    metric_interval_upper_bound = 5
  },
  {
    scaling_adjustment          = 1
    metric_interval_lower_bound = 5
    metric_interval_upper_bound = "" # indicates infinity
  },
]
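Calling the module could then look something like this (a sketch; the module path and module name are assumptions for illustration, not part of the answer above):

```hcl
module "asg_scaling" {
  source = "./modules/asg-scaling" # hypothetical module path

  step_adjustments = [
    {
      scaling_adjustment          = 20
      metric_interval_lower_bound = 0
      metric_interval_upper_bound = 5
    },
    {
      scaling_adjustment          = 50
      metric_interval_lower_bound = 5
      metric_interval_upper_bound = "" # empty string means infinity
    },
  ]
}
```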

akka.cluster with double asp.net webapi on IIS

In our cluster we have five nodes, composed of:
2 seed nodes (backend)
1 worker
2 ASP.NET Web API nodes on IIS
The cluster is joined, up and running; but when the second IIS node sends its first message to the cluster via the router, the whole cluster becomes unreachable and disassociated.
In addition, the second IIS node can't deliver any message.
Here is my IIS config:
<hocon>
<![CDATA[
akka.loglevel = INFO
akka.log-config-on-start = off
akka.stdout-loglevel = INFO
akka.actor {
provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
deployment {
/Process {
router = round-robin-group
routees.paths = ["/user/Process"] # path of routee on each node
# nr-of-instances = 3 # max number of total routees
cluster {
enabled = on
allow-local-routees = off
use-role = Process
}
}
}
debug {
receive = on
autoreceive = on
lifecycle = on
event-stream = on
unhandled = on
}
}
akka.remote {
helios.tcp {
# transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
# applied-adapters = []
# transport-protocol = tcp
port = 0
hostname = 172.16.1.8
}
log-remote-lifecycle-events = DEBUG
}
akka.cluster {
seed-nodes = [
"akka.tcp://ClusterActorSystem#172.16.1.8:2551",
"akka.tcp://ClusterActorSystem#172.16.1.8:2552"
]
roles = [Send]
auto-down-unreachable-after = 10s
# how often should the node send out gossip information?
gossip-interval = 1s
# discard incoming gossip messages if not handled within this duration
gossip-time-to-live = 2s
}
# http://getakka.net/docs/persistence/at-least-once-delivery
akka.persistence.at-least-once-delivery.redeliver-interval = 300s
# akka.persistence.at-least-once-delivery.redelivery-burst-limit =
# akka.persistence.at-least-once-delivery.warn-after-number-of-unconfirmed-attempts =
akka.persistence.at-least-once-delivery.max-unconfirmed-messages = 1000000
akka.persistence.journal.plugin = "akka.persistence.journal.sql-server"
akka.persistence.journal.publish-plugin-commands = on
akka.persistence.journal.sql-server {
class = "Akka.Persistence.SqlServer.Journal.SqlServerJournal, Akka.Persistence.SqlServer"
plugin-dispatcher = "akka.actor.default-dispatcher"
table-name = EventJournal
schema-name = dbo
auto-initialize = on
connection-string-name = "HubAkkaPersistence"
refresh-interval = 1s
connection-timeout = 30s
timestamp-provider = "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
metadata-table-name = Metadata
}
akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot-store.sql-server"
akka.persistence.snapshot-store.sql-server {
class = "Akka.Persistence.SqlServer.Snapshot.SqlServerSnapshotStore, Akka.Persistence.SqlServer"
plugin-dispatcher = "akka.actor.default-dispatcher"
connection-string-name = "HubAkkaPersistence"
schema-name = dbo
table-name = SnapshotStore
auto-initialize = on
}
]]>
</hocon>
Inside Global.asax we create a new router to the cluster:
ClusterActorSystem = ActorSystem.Create("ClusterActorSystem");
var backendRouter =
    ClusterActorSystem.ActorOf(
        Props.Empty.WithRouter(FromConfig.Instance), "Process");
Send = ClusterActorSystem.ActorOf(
    Props.Create(() => new Common.Actors.Send(backendRouter)),
    "Send");
and here is our backend config:
<hocon><![CDATA[
akka.loglevel = INFO
akka.log-config-on-start = on
akka.stdout-loglevel = INFO
akka.actor {
provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
debug {
receive = on
autoreceive = on
lifecycle = on
event-stream = on
unhandled = on
}
}
akka.remote {
helios.tcp {
# transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
# applied-adapters = []
# transport-protocol = tcp
#
# seed-node ports 2551 and 2552
# non-seed-node port 0
port = 2551
hostname = 172.16.1.8
}
log-remote-lifecycle-events = INFO
}
akka.cluster {
seed-nodes = [
"akka.tcp://ClusterActorSystem#172.16.1.8:2551",
"akka.tcp://ClusterActorSystem#172.16.1.8:2552"
]
roles = [Process]
auto-down-unreachable-after = 10s
}
]]></hocon>
The issue is present using both Akka 1.1 and Akka 1.2.
UPDATE
I have found that the issue is related to our load balancer (NetScaler): if I call each IIS node directly, everything works fine; when they are called through the balancer, I get the reported issue. The balancer is transparent (it only adds some headers to the request). What can I check to solve this issue?
Finally I found the problem: we are using Akka.Persistence, which requires a distinct PersistenceId value for each IIS node.
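A minimal sketch of what such a per-node PersistenceId might look like (the actor name and the use of Environment.MachineName are assumptions for illustration, not the poster's actual code):

```csharp
// Hypothetical sketch: each IIS node derives its own PersistenceId,
// so two web nodes never write to the same journal stream.
public class SendJournalActor : ReceivePersistentActor
{
    // Environment.MachineName is one possible per-node discriminator;
    // any value unique to the IIS instance would do.
    public override string PersistenceId =>
        $"send-{Environment.MachineName}";

    public SendJournalActor()
    {
        Command<string>(msg => Persist(msg, _ => { /* handle event */ }));
    }
}
```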