OPNET: creating two simple wireless nodes to test path loss

I am new to OPNET. I am trying to test the path loss between two points using the TIREM propagation model on a specific map.
I use OPNET 15.0 and I have all the needed models.
I want to know how to create two simple nodes, make them communicate, and see the path loss in that communication.
Any ideas?

There are a lot of ways to define application traffic between nodes. Normally I would just open the object palette, drop two 10BaseT_LAN models and an Application Config object onto the workspace, and connect them with a 10BaseT link. But since you're using TIREM, you will need to select wireless node and link types instead. In the Application Config you define a traffic type (e.g. a constant-bit-rate FTP flow), then edit the attributes of a node model and assign that application definition to the node.
In order to plot path loss, select DES / Choose Individual Statistics... (or right-click on a device or the workspace and select Choose Individual DES Statistics), then select the loss statistic under the IP category.
After you run a DES simulation, you should be able to plot the loss statistics under the DES / Results menu.


Multiple sources for dimensions in Data Warehouse

I am currently working on a financial risk data warehouse. For my collateral dimension, I am sourcing the data from one source system. However, after further research by the business analyst, we found a legacy application that also holds collateral information which the bank needs in the data warehouse. Apart from a few common attributes that both source systems share, the legacy application contains many more attributes than are currently defined in my collateral dimension. What is the best way to onboard this new information into the warehouse? I was thinking of extending the current collateral dimension, but then I would need to do this every time I find a new source, which is very likely given the size of the bank. Alternatively, is it better to create a new dimension called dimCollateralAdditionalInfo and add the extra attributes there?
As we always say, a DWH model evolves over time as new business requirements appear. The most important thing is to check whether the new attributes are worth adding and whether they represent an analytic axis.
You can store all the information in dimCollateral, in which case you need to manage this dimension properly in terms of optimization (indexes, data types, ...).
Or you can create an extension dimension, dimCollateralExtension, containing the additional info; it will have a one-to-one relationship with the master dimension dimCollateral.
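The extension-dimension option can be sketched with an in-memory SQLite database. Table and column names follow the answer (dimCollateral, dimCollateralExtension); the specific attributes and keys are made-up examples, not the bank's actual schema:

```python
# Sketch of the one-to-one extension dimension, using in-memory SQLite.
# The legacy_* attributes and key names are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dimCollateral (
    collateral_key  INTEGER PRIMARY KEY,  -- surrogate key
    collateral_id   TEXT NOT NULL,        -- business key from source 1
    collateral_type TEXT
);
-- One-to-one extension: same surrogate key, legacy-only attributes.
CREATE TABLE dimCollateralExtension (
    collateral_key INTEGER PRIMARY KEY
        REFERENCES dimCollateral(collateral_key),
    legacy_rating  TEXT,
    legacy_branch  TEXT
);
""")
conn.execute("INSERT INTO dimCollateral VALUES (1, 'C-001', 'Bond')")
conn.execute("INSERT INTO dimCollateralExtension VALUES (1, 'AA', 'London')")

# Queries needing only core attributes keep reading dimCollateral;
# queries needing legacy attributes join the extension one-to-one.
row = conn.execute("""
    SELECT d.collateral_id, d.collateral_type, e.legacy_rating
    FROM dimCollateral d
    LEFT JOIN dimCollateralExtension e USING (collateral_key)
""").fetchone()
print(row)  # ('C-001', 'Bond', 'AA')
```

With this split, adding a new source later means adding another extension table (or widening the existing one) without touching the master dimension or the facts that reference it.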

Need some advice about a Microsoft Flow leaderboard

I am setting up a flow to display a leaderboard in an Excel file with a list of KPIs.
In my Performance tab I want to show the detailed KPIs as percentages, but the value only shows as a plain number (e.g. 0.1 rather than a percentage).
In my flow I use the concat function to display a variable that I initialized in the flow.
How can I show the list of KPIs in my Excel file as percentages using concat? I can't initialize each KPI separately in the flow because that makes my flow run slowly.
Any ideas?
Thank you all so much.
I used the following expression to show the value as a percentage:
concat(string(mul(float(body('Parse_JSON')?['Score']),100)),'%')
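As a possible alternative to multiplying and concatenating, Power Automate also has a formatNumber function that accepts .NET format strings; an untested sketch using the same Parse_JSON body as above would be:

```
formatNumber(float(body('Parse_JSON')?['Score']), 'P0')
```

The 'P' format renders the fraction as a percentage (e.g. 0.1 as 10%, with locale-dependent spacing), which keeps the conversion in a single function call.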

Log analytics using Elasticsearch & Kibana - Few queries

I have just started playing around with ELK to develop our log analytics solution.
I had a few questions regarding the best practices so that I don't make any bad choice to begin with.
This tool will analyze various types of logs to find out and correlate any issue. It will run on multiple 'devices' and each device will be uniquely identifiable with a serial number.
Question 1) Is it possible to create a dashboard where the serial number is taken as a user input?
Details: I would like to have one dashboard to analyze various fields, and I should be able to specify the serial number of the device as an input. From what I see, I could use a filter, but then the visualization would need to be 'edited'. So it appears to me that, right now, if I need to analyze multiple devices I have to create a dashboard for each device. The problem is that if I then need to modify the dashboard, I have to make the change in every copy. This can be mitigated by importing additional dashboards as a JSON file, but it is still inconvenient.
Is there a better way that I am not aware of?
Question 2) On the main dashboard, I want to show a heatmap of various 'services' and their status as a time series. For example, say I am monitoring CPU, memory, network, and our own service; then I want to see a grid of services versus time, colour-coded by status.
The heatmap visualization doesn't provide a way to specify the condition directly. I generated such an image by populating dummy data where the values were one of 0, 1, 2, 3. This means I need to create such data periodically so the visualization can use it. Is there any built-in mechanism (e.g. scheduled jobs) provided by ELK to do this processing? One option would be to run an external program which queries Elasticsearch, fetches all the relevant information, analyzes it, and puts the results back into Elasticsearch. Is that the only way?
If there are any other suggestions, please feel free to share. Thanks.
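The "external program" option from Question 2 could be sketched roughly as follows. The thresholds, status codes, index name (device-status), and field names are all assumptions for illustration, not part of the original setup, and the Elasticsearch call is only indicative:

```python
# Hypothetical external job: derive a 0-3 status code per service reading
# and index it back into Elasticsearch for the heatmap to colour by.
import json
import urllib.request

def derive_status(value, warn=0.70, high=0.85, crit=0.95):
    """Map a 0..1 utilisation reading to a status code: 0=ok .. 3=critical."""
    if value >= crit:
        return 3
    if value >= high:
        return 2
    if value >= warn:
        return 1
    return 0

def push_status(es_url, serial, service, status):
    """Index one derived-status document (needs a reachable cluster)."""
    doc = json.dumps({"serial": serial, "service": service,
                      "status": status}).encode()
    req = urllib.request.Request(f"{es_url}/device-status/_doc",
                                 data=doc, method="POST",
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

# The derivation step on its own:
print(derive_status(0.50), derive_status(0.90), derive_status(0.99))  # 0 2 3
```

Run on a schedule (cron, for instance), such a job is exactly the query-analyze-reindex loop described above; the heatmap then reads the pre-computed status documents instead of raw metrics.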

Gephi: display modularity in data laboratory

I use Gephi 0.9.1 and 0.9.2. I run modularity and get a number of classes. When right-clicking on a node, I choose Select in data laboratory. In a YouTube tutorial, the presenter got an extra column with the modularity class, which allowed identifying the modularity and exporting to CSV files for further processing, lists, etc.
Is this possible?
Just found out that the number of displayed columns is limited, so one has to deselect some of the default columns if one wants additional columns (such as Modularity Class) to show up.

InfluxDB: creating a new measurement from existing data

I'm new to InfluxDB but liking it a lot.
I've configured it to gather metrics from SNMP-polled devices, primarily network nodes.
I can happily graph the polled statistics using derived values, but what I want to know is: is it possible to create a new measurement in InfluxDB from data already stored?
The use case: we poll network traffic and graph it by taking the derived difference between the current and last reading (in Grafana).
What I want to do is create a measurement that does that inside InfluxDB and stores the result. This is primarily so I can set up monitoring of the new derived value with a simple query, and alert if it drops below x.
I have measurements snmp_rx / snmp_tx, tagged with host and port name, holding the polled ifHCInOctets and ifHCOutOctets.
So can I set up a process that continuously creates a new measurement for each, showing the difference between the current and last readings?
Thanks
Thanks
Apparently the InfluxDB feature you are looking for is called continuous queries:
A CQ is an InfluxQL query that the system runs automatically and periodically within a database. InfluxDB stores the results of the CQ in a specified measurement.
It will allow you to automatically create and fill new octet-rate measurements from the raw ifHCInOctets/ifHCOutOctets counters you have, using the derivative function in the SELECT statement and a configured GROUP BY time() interval. You can also do some scaling in the SELECT expression (bytes to bits, for example).
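A continuous query for the receive side might look roughly like this. The database name (telegraf), target measurement name, rate field name, and the 1-minute interval are assumptions to adapt to your setup; the source measurement, field, and tags come from your description:

```sql
-- Hypothetical CQ: turn the raw ifHCInOctets counter into a bits/sec rate.
CREATE CONTINUOUS QUERY "cq_rx_rate" ON "telegraf"
BEGIN
  SELECT non_negative_derivative(last("ifHCInOctets"), 1s) * 8 AS "rx_bps"
  INTO "snmp_rx_rate"
  FROM "snmp_rx"
  GROUP BY time(1m), "host", "port"
END
```

non_negative_derivative avoids large negative spikes when a counter wraps or resets, and the `* 8` does the bytes-to-bits scaling mentioned above. A mirror-image CQ on snmp_tx / ifHCOutOctets covers the transmit side, and your alert then becomes a simple query against snmp_rx_rate.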
