What is the difference between Operational and Config in YANG? - opendaylight

What is the difference between Operational and Config in the YANG model? Is it correct to support the GET, PUT, POST and DELETE interfaces on both Operational and Config data?

Config represents configuration data: usually what is writable via the northbound agents (CLI, NETCONF, Web, etc.), and also what is retrieved by a NETCONF get-config operation.
Operational data is status data. It is not writable via the northbound agents; it comes from a data provider application.
A web client should only be able to do a GET operation on operational data, because it doesn't make sense to allow a client to change status information.
For config data it makes sense to support all the operations.
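For example, here is a minimal sketch (Python with the requests library) of reading the same node from both datastores; it assumes an OpenDaylight controller on localhost:8181 with the classic /restconf/config and /restconf/operational endpoints and the default admin credentials:

```python
import requests

# Assumptions: OpenDaylight on localhost:8181, Bierman-style RESTCONF endpoints,
# default admin/admin credentials, and the standard network-topology model.
BASE = "http://localhost:8181/restconf"
AUTH = ("admin", "admin")
HEADERS = {"Accept": "application/json"}

# Config datastore: what was written through the northbound agents.
config = requests.get(f"{BASE}/config/network-topology:network-topology",
                      auth=AUTH, headers=HEADERS)

# Operational datastore: status data reported by the data providers (read-only).
operational = requests.get(f"{BASE}/operational/network-topology:network-topology",
                           auth=AUTH, headers=HEADERS)

print(config.status_code, operational.status_code)
```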

NETCONF separates configuration and state (or operational) data:
The information that can be retrieved from a running system is separated into two classes, configuration data and state data. Configuration data is the set of writable data that is required to transform a system from its initial default state into its current state. State data is the additional data on a system that is not configuration data such as read-only status information and collected statistics.
RESTCONF works like NETCONF, but over HTTP: it maps CRUD verbs onto NETCONF operations:
| RESTCONF | NETCONF                                               |
|----------|-------------------------------------------------------|
| OPTIONS  | none                                                  |
| HEAD     | <get-config>, <get>                                   |
| GET      | <get-config>, <get>                                   |
| POST     | <edit-config> (nc:operation="create")                 |
| POST     | invoke an RPC operation                               |
| PUT      | <copy-config> (PUT on datastore)                      |
| PUT      | <edit-config> (nc:operation="create/replace")         |
| PATCH    | <edit-config> (nc:operation depends on PATCH content) |
| DELETE   | <edit-config> (nc:operation="delete")                 |

On supporting GET, PUT, POST and DELETE: if you are referring to HTTP methods here, you should probably follow RESTCONF.
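As an illustration of that mapping, here is a minimal sketch (Python with the requests library) of writing and deleting a config resource; the module, list and node names, port and credentials are made up purely for illustration and are not from the question:

```python
import requests

# Assumptions: same controller and default credentials as in the sketch above;
# "example-module:devices/device" is a made-up YANG list, not a real model.
BASE = "http://localhost:8181/restconf/config"   # writes go to the config datastore only
AUTH = ("admin", "admin")

url = f"{BASE}/example-module:devices/device/device-1"
body = {"device": [{"name": "device-1", "enabled": True}]}

requests.put(url, json=body, auth=AUTH)   # <edit-config> nc:operation="create/replace"
requests.get(url, auth=AUTH)              # <get-config>
requests.delete(url, auth=AUTH)           # <edit-config> nc:operation="delete"
```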

Related

How to get members of a Windows (security) group?

Short Version
What is the Windows function to list all members of a Windows (security) group?
Long Version
The gotcha is that we can't just go straight to Active Directory, because the group may not be a group in Active Directory at all: some groups are local to the machine, and some machines may not even be joined to a domain.
For example:
STACKOVERFLOW\ITOps: a group in the STACKOVERFLOW domain
ITOps#stackoverflow.com: a group in the stackoverflow.com domain
HOTH\docker-users: a group on the local machine
Reminder that STACKOVERFLOW and stackoverflow.com happen to be two names for the same domain.
This will then remind you that STACKOVERFLOW\ITOps and ITOps#stackoverflow.com are two names for the same group.
The first is the Windows name.
The second is the matching name in Active Directory.
You can convince yourself of this by calling LookupAccountSid, which returns:
Name: "ITOps"
Domain: "STACKOVERFLOW"
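A small sketch of that call, assuming the pywin32 package and using one of the SIDs from the whoami /groups output below (purely for illustration):

```python
import win32security  # pywin32

# SID of STACKOVERFLOW\Project Managers, taken from the whoami /groups output below;
# substitute whatever group SID you are trying to resolve.
sid = win32security.ConvertStringSidToSid(
    "S-1-5-21-1701128619-854245398-2146844275-3626")

# LookupAccountSid gives back the Windows (DOMAIN\name) form of the account.
name, domain, account_type = win32security.LookupAccountSid(None, sid)
print(f"{domain}\\{name}")   # STACKOVERFLOW\Project Managers
```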
And you can see it when running whoami /groups:
| Group Name | Type | SID |
|------------------------------------------------------|------------------|------------------------------------------------|
| HOTH\PoleDancers | Alias | S-1-5-21-508675309-3072756349-3142140079-1006 |
| HOTH\docker-users | Alias | S-1-5-21-508675309-3072756349-3142140079-1014 |
| STACKOVERFLOW\Stackoverflow Users | Group | S-1-5-21-1701128619-854245398-2146844275-1132 |
| STACKOVERFLOW\Project Managers | Group | S-1-5-21-1701128619-854245398-2146844275-3626 |
| STACKOVERFLOW\MSFT Developers | Group | S-1-5-21-1701128619-854245398-2146844275-3608 |
| STACKOVERFLOW\Stackoverflow Admins | Group | S-1-5-21-1701128619-854245398-2146844275-1136 |
| STACKOVERFLOW\Audit | Group | S-1-5-21-1701128619-854245398-2146844275-3627 |
| STACKOVERFLOW\Domain Admins | Group | S-1-5-21-1701128619-854245398-2146844275-512 |
| STACKOVERFLOW\Enterprise Admins | Group | S-1-5-21-1701128619-854245398-2146844275-519 |
| STACKOVERFLOW\DnsAdmins | Alias | S-1-5-21-1701128619-854245398-2146844275-1107 |
| STACKOVERFLOW\Denied RODC Password Replication Group | Alias | S-1-5-21-1701128619-854245398-2146844275-572 |
| STACKOVERFLOW\DHCP Administrators | Alias | S-1-5-21-1701128619-854245398-2146844275-1004 |
| BUILTIN\Performance Log Users | Alias | S-1-5-32-559 |
| BUILTIN\Performance Monitor Users | Alias | S-1-5-32-558 |
| BUILTIN\Users | Alias | S-1-5-32-545 |
| BUILTIN\Administrators | Alias | S-1-5-32-544 |
| NT AUTHORITY\REMOTE INTERACTIVE LOGON | Well-known group | S-1-5-14 |
| NT AUTHORITY\INTERACTIVE | Well-known group | S-1-5-4 |
| NT AUTHORITY\Authenticated Users | Well-known group | S-1-5-11 |
| NT AUTHORITY\This Organization | Well-known group | S-1-5-15 |
| Everyone | Well-known group | S-1-1-0 |
| LOCAL | Well-known group | S-1-2-0 |
| Authentication authority asserted identity | Well-known group | S-1-18-1 |
| Mandatory Label\Medium Mandatory Level | Label | S-1-16-8192 |
So: given a group, I need the members.
the group could be local
the group could be on the domain
I don't know if the group is local or part of a domain
I don't care if the group is local or part of a domain
I need Windows to tell me the members.
And remember:
a local group
can contain local users
can contain local groups
can contain domain users
can contain domain groups
So, by definition, trying to query Active Directory is simply wrong.
Short Version
Given a group security identifier (SID), how do I get the members of that group?
I know it's possible, because Windows does it right there in netplwiz:
Showing a mix of local and domain users at the same time.
Bonus Reading
Active Directory: Get all group members
How to get members in a local windows group using LDAP and c++
Get members of Windows Group with Python
Get windows group members along with their domain names
How to determine membership of user in well-known-groups as Everyone

Converting Raw Data to Event Log

I do research in the field of Health-PM and am facing unstructured big data that needs a preprocessing phase to convert it into a suitable event log.
From what I could find, no ProM plug-in, stand-alone tool, or script has been developed specifically for this task, except Celonis, which claims to have developed an event log converter. I'm also writing an event log generator for my specific case study.
I just want to know: is there any business solution, case study, or article that has investigated this issue?
Thanks.
Soureh
What exactly do you mean by unstructured? Is it a badly structured table like the example you provided, or data that is not structured at all (e.g. a hard disk full of files)?
In the first situation, Celonis indeed provides an option to extract events from tables using Vertica SQL; in their free SNAP environment you can learn how to do that.
In the latter, I guess that at least semi-structured data is needed to extract events on a large scale, otherwise your script has no clue where to look.
Good question! Many process mining papers mention that most existing information systems are PAISs (process-aware information systems) and hence qualify for process mining. This is true, BUT it does not mean you can get the data out of the box!
What's the solution? You may transform the existing data (typically from a relational database of your business solution, e.g., an ERP or HIS system) into an event log that process mining can understand.
It works like this: you look into the table containing, e.g., patient registration data. You need the patient ID of this table and the timestamp of registration for each ID. You create an empty table for your event log, typically called "Activity_Table". You consider giving a name to each activity depending on the business context. In our example "Patient Registration" would be a sound name. You insert all the patient IDs with their respective timestamp into the Activity_Table followed by the same activity name for all rows, i.e., "Patient Registration". The result looks like this:
|Patient-ID | Activity | timestamp |
|:----------|:--------------------:| -------------------:|
| 111 |"Patient Registration"| 2021.06.01 14:33:49 |
| 112 |"Patient Registration"| 2021.06.18 10:03:21 |
| 113 |"Patient Registration"| 2021.07.01 01:20:00 |
| ... | | |
Congrats! You have an event log with one activity. The rest works just the same: you create the same kind of table for every important action that has a timestamp in your database, e.g., "Diagnose finished", "lab test requested", "treatment A finished".
|Patient-ID | Activity | timestamp |
|:----------|:-----------------:| -------------------:|
| 111 |"Diagnose finished"| 2021.06.21 18:03:19 |
| 112 |"Diagnose finished"| 2021.07.02 01:22:00 |
| 113 |"Diagnose finished"| 2021.07.01 01:20:00 |
| ... | | |
Then you UNION all these mini tables and sort the result by Patient-ID and then by timestamp:
|Patient-ID | Activity | timestamp |
|:----------|:--------------------:| -------------------:|
| 111 |"Patient Registration"| 2021.06.01 14:33:49 |
| 111 |"Diagnose finished" | 2021.06.21 18:03:19 |
| 112 |"Patient Registration"| 2021.06.18 10:03:21 |
| 112 |"Diagnose finished" | 2021.07.02 01:22:00 |
| 113 |"Patient Registration"| 2021.07.01 01:20:00 |
| 113 |"Diagnose finished" | 2021.07.01 01:20:00 |
| ... | | |
Notice that the last two rows have the same timestamp. This is very common when working with real data. To handle it, we need an extra column (called "Order" below) which helps the process mining algorithm understand the "normal" order of activities that share a timestamp, according to the nature of the underlying business. In this case we know that registration happens before diagnosis, so we assign a low value (e.g., 1) to all "Patient Registration" activities. The table might look like this:
|Patient-ID | Activity | timestamp |Order |
|:----------|:--------------------:|:-------------------:| ----:|
| 111 |"Patient Registration"| 2021.06.01 14:33:49 | 1 |
| 111 |"Diagnose finished" | 2021.06.21 18:03:19 | 2 |
| 112 |"Patient Registration"| 2021.06.18 10:03:21 | 1 |
| 112 |"Diagnose finished" | 2021.07.02 01:22:00 | 2 |
| 113 |"Patient Registration"| 2021.07.01 01:20:00 | 1 |
| 113 |"Diagnose finished" | 2021.07.01 01:20:00 | 2 |
| ... | | | |
Now you have an event log that process mining algorithms understand!
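A minimal pandas sketch of the same transformation, assuming the registration and diagnosis records have already been pulled out of the source database (table and column names simply follow the example above):

```python
import pandas as pd

# Hypothetical source tables, as they might come out of the hospital database.
registrations = pd.DataFrame({
    "Patient-ID": [111, 112, 113],
    "timestamp": ["2021.06.01 14:33:49", "2021.06.18 10:03:21", "2021.07.01 01:20:00"],
})
diagnoses = pd.DataFrame({
    "Patient-ID": [111, 112, 113],
    "timestamp": ["2021.06.21 18:03:19", "2021.07.02 01:22:00", "2021.07.01 01:20:00"],
})

# One mini activity table per action, with a fixed activity name and an Order value.
registrations = registrations.assign(Activity="Patient Registration", Order=1)
diagnoses = diagnoses.assign(Activity="Diagnose finished", Order=2)

# UNION the mini tables, then sort by case, timestamp and the tie-breaking Order column.
event_log = pd.concat([registrations, diagnoses], ignore_index=True)
event_log["timestamp"] = pd.to_datetime(event_log["timestamp"], format="%Y.%m.%d %H:%M:%S")
event_log = event_log.sort_values(["Patient-ID", "timestamp", "Order"])

print(event_log[["Patient-ID", "Activity", "timestamp", "Order"]])
```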
Side note:
There have been many attempts to automate the event log extraction process. The work of Eduardo González López de Murillas is really interesting if you want to follow this topic. I can also recommend this open-access paper by González López de Murillas et al. (2018):
"Connecting databases with process mining: a meta model and toolset" (https://link.springer.com/article/10.1007/s10270-018-0664-7)

How to use a union operator in SonarQube web services?

From all the issues I have, I would like to select all the blocker issues and all the vulnerability issues that are Blocker, Critical or Major.
How can I do that in one request for SonarQube 6.4?
If I do
http://localhost:9000/api/issues/search?severities=BLOCKER,CRITICAL,MAJOR&types=VULNERABILITY&additionalFields=comments
I will get the vulnerability issues only.
And if I do two requests, one for the blocker issues and one for the vulnerabilities, the blocking vulnerabilities will appear in both results, which is redundant.
api/issues/search does not allow combining filters with an OR: it will "AND" all conditions together.
I assume you are asking how to query for these issues:
|          | CODE_SMELL | BUG | VULNERABILITY |
|----------|------------|-----|---------------|
| BLOCKER  | YES        | YES | YES           |
| CRITICAL | no         | no  | YES           |
| MAJOR    | no         | no  | YES           |
| MINOR    | no         | no  | YES           |
| INFO     | no         | no  | YES           |
So I suggest:
api/issues/search?severities=BLOCKER&types=CODE_SMELL,BUG
(to get all BLOCKER issues of type CODE_SMELL and BUG)
|          | CODE_SMELL | BUG | VULNERABILITY |
|----------|------------|-----|---------------|
| BLOCKER  | YES        | YES | no            |
| CRITICAL | no         | no  | no            |
| MAJOR    | no         | no  | no            |
| MINOR    | no         | no  | no            |
| INFO     | no         | no  | no            |
api/issues/search?types=VULNERABILITY
(to get all issues of type VULNERABILITY)
|          | CODE_SMELL | BUG | VULNERABILITY |
|----------|------------|-----|---------------|
| BLOCKER  | no         | no  | YES           |
| CRITICAL | no         | no  | YES           |
| MAJOR    | no         | no  | YES           |
| MINOR    | no         | no  | YES           |
| INFO     | no         | no  | YES           |
This way you will not get duplicated issues, but you have to make two requests.
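A small sketch of those two requests in Python (requests library), assuming a SonarQube server on localhost:9000 and admin credentials or a user token; deduplicating by issue key is just a safety net, since the two result sets should not overlap:

```python
import requests

BASE = "http://localhost:9000/api/issues/search"
AUTH = ("admin", "admin")   # assumed credentials; a user token as the username also works

def search(params):
    # additionalFields=comments as in the question; ps is the page size (max 500).
    params = dict(params, additionalFields="comments", ps=500)
    return requests.get(BASE, params=params, auth=AUTH).json().get("issues", [])

# Request 1: BLOCKER issues of type CODE_SMELL and BUG.
blockers = search({"severities": "BLOCKER", "types": "CODE_SMELL,BUG"})

# Request 2: all VULNERABILITY issues.
vulns = search({"types": "VULNERABILITY"})

# Merge the two result sets, deduplicating by issue key.
issues = {issue["key"]: issue for issue in blockers + vulns}.values()
print(len(issues))
```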
There are three types of issues:
BUG
CODE_SMELL
VULNERABILITY
All of these issue types can have any severity. So, if you want all issues (of any type) with Blocker, Critical or Major severity, your request should contain these parameters:
severities=BLOCKER,CRITICAL,MAJOR&types=CODE_SMELL,BUG,VULNERABILITY&additionalFields=comments

How to do First Pass Yield analysis using Elasticsearch?

I'm starting to explore using Elasticsearch to help analyze engineering data produced in a manufacturing facility. One of the key metrics we analyze is the First Pass Yield (FPY) of any given process. So imagine I had some test data like the following:
| Item | Process | Pass/Fail | Timestamp | Note                   |
|------|---------|-----------|-----------|------------------------|
| A    | 1       | Fail      | 1         | <-- First pass failure |
| A    | 1       | Pass      | 2         |                        |
| A    | 2       | Pass      | 3         |                        |
| A    | 3       | Fail      | 4         | <-- First pass failure |
| A    | 3       | Fail      | 5         |                        |
| A    | 3       | Pass      | 6         |                        |
| A    | 4       | Pass      | 7         |                        |
What I'd like to get out of this is the ability to query this index/type and determine what the first pass yield is by process. So conceptually I want to count the following in some time period using a set of filters:
How many unique items went through a given process step
How many of those items passed on their first attempt at a process
With a traditional RDBMS I can do this easily with subqueries to pull out and combine these counts. I'm very new to ES, so I'm not sure how to query the process data to count how many failures occurred the first time an item went through a given process.
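Just to pin down the metric itself (this is plain Python, not an Elasticsearch query), here is a small sketch computing FPY per process from the sample rows above, keeping only the earliest record per item/process pair:

```python
from collections import defaultdict

# (item, process, result, timestamp) rows from the example table above.
rows = [
    ("A", 1, "Fail", 1), ("A", 1, "Pass", 2),
    ("A", 2, "Pass", 3),
    ("A", 3, "Fail", 4), ("A", 3, "Fail", 5), ("A", 3, "Pass", 6),
    ("A", 4, "Pass", 7),
]

# Keep only the earliest record per (item, process): that is the "first pass".
first_attempt = {}
for item, process, result, ts in sorted(rows, key=lambda r: r[3]):
    first_attempt.setdefault((item, process), result)

# FPY per process = items passing on the first attempt / unique items seen.
passed, total = defaultdict(int), defaultdict(int)
for (item, process), result in first_attempt.items():
    total[process] += 1
    if result == "Pass":
        passed[process] += 1

for process in sorted(total):
    print(process, passed[process] / total[process])
```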
My real end goal is to include this on a Kibana dashboard so my customers can quickly analyze the FPY data for different processes over various time periods. I'm not there yet, but I think Kibana will let me use a raw JSON query if that's what this requires today.
Is this possible with Elasticsearch, or am I trying to use the wrong tool for the job here?

Google Compute Engine snapshots not displaying actual space used

If I take a snapshot of a persistent disk and then try to get information about the snapshot in gcutil, the data is always incomplete. I need to see this data since snapshots are differential:
server$ gcutil getsnapshot snapshot-3
+----------------------+-----------------------------------+
| name | snapshot-3 |
| description | |
| creation-time | 2014-07-30T06:52:56.223-07:00 |
| status | READY |
| disk-size-gb | 200 |
| storage-bytes | |
| storage-bytes-status | |
| source-disk | us-central1-a/disks/app-db-1-data |
+----------------------+-----------------------------------+
Is there a way to determine how much space this snapshot is actually occupying? gcutil and the web UI are the only resources I know of, and neither displays this information.
Unfortunately it's a bug, known to the Google developers. They are working on it.
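For what it's worth, the same fields can be polled through the Compute Engine API as well; a minimal sketch with the google-api-python-client (project and snapshot names are placeholders, and Application Default Credentials are assumed), though if the bug applies the fields will simply come back empty here too:

```python
from googleapiclient import discovery

# Assumes google-api-python-client is installed and Application Default Credentials are set up.
compute = discovery.build("compute", "v1")

# Placeholder project and snapshot names; use your own.
snap = compute.snapshots().get(project="my-project", snapshot="snapshot-3").execute()

# storageBytes / storageBytesStatus are the API counterparts of the gcutil columns above.
print(snap.get("storageBytes"), snap.get("storageBytesStatus"))
```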
