Traps for table elements - SNMP

In my MIB I have a table which contains properties for some entity. Since it is a table, there can be several entities of the same type. I also have a list of available traps, and a subset of those traps relates to this entity type. But I don't see how I can determine which entity a given trap refers to (the first, the second, ...). Is there a standard way to determine this, or is my MIB incomplete?

When creating a slot in Rasa, is it also important to declare the slot as an entity?

Let's say I create a slot in Rasa called "yearlybill".
I will have to write:
slots:
  yearlybill:
    type: float
    min_value: 0
So my question is: when I want to use this slot in my intents, will I have to explicitly mention it as an entity as well? Or is that optional?
Let's start with a bit of background.
A slot should be seen as a long-term memory slot. You can store information in there manually, via a custom action, without having an entity around.
An entity is a substring of the user message that you'd like to extract for later use. Common entities are names, dates, and product-ids. It's very common to store the entity in a slot, but you don't have to. You can also detect an entity and have a custom action retrieve that information from the tracker.
You could define a slot without defining an entity. If you're planning to use a custom action to fetch the slot value from a user's text, you technically don't need an entity. This isn't a common pattern, though. Typically you'd want a dedicated entity detection model to extract the entity so that it can be stored in the slot afterwards. That's why it's common to see domain.yml files that contain both a slot and an entity definition.
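A minimal sketch of such a domain.yml (the exact schema depends on your Rasa version; the mapping shown in the comment applies to Rasa 3.x):

entities:
  - yearlybill

slots:
  yearlybill:
    type: float
    min_value: 0
    # In Rasa 3.x the slot also needs an explicit mapping, e.g.:
    # mappings:
    #   - type: from_entity
    #     entity: yearlybill

With the entity declared like this and annotated in your NLU training data, the extracted value ends up in the slot without any custom action.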

Information recording for Charging

In TS-0001 chapter 12, "Information Element Recording", triggers (e.g. a request on Mcc/Mca or any other interface) are described.
Clause 12.2.2, "Filtering of Recorded Information for Offline Charging", describes how to derive charging information from recorded information, which means that charging data is derived from IERs.
Clause 10.2.11.14 describes the "Service Statistics Collection Record".
There are 3 questions:
First, is there any correlation between the Service Statistics Collection Record and the IER? It looks like the Service Statistics Collection Record is a subset of the IER, derived on the basis of the eventConfig and statsCollect resources. If it is a subset, then there is no field in the IER which maps to "collectingEntityID", even though Service Statistics Collection Records are derived per "collectingEntityID".
Second, there is no description of charging data records (CDRs); they are only described as a subset of the IER. As a result of statsCollect, Service Statistics Collection Records are generated. When are the CDRs generated?
Third, there is no link between the Service Statistics Collection Record and the CDR, yet both need to be transferred over the Mch interface.
For your first and third questions, I understand the confusion. The Service Statistics Collection Record and an M2M Event Record probably should be combined or consolidated. In fact, based on your question we will shortly bring contributions into the oneM2M standard to make this change.
For the second question, TS-0001 clause 12.2.4 describes CDRs. This clause defines Accounting-Request and Accounting-Answer messages that flow between an IN and a billing system over Mch. Within the Accounting-Request there is an M2M Information element defined in which M2M Event Record information is stored. This is effectively the CDR. Depending on the requirements of the billing system, the charging function of the IN will filter the required information from the M2M Event Record and store this information in the M2M Information element of the Accounting-Request message for transfer to the billing system.
In addition, TS-0004 A.2 "Diameter Commands on Mch" defines how to bind the Mch Accounting-Request and Accounting-Answer messages to the Diameter protocol for deployments which use Diameter.

Google Datastore bulk retrieve data using urlsafe

Is there a way in Google DataStore to bulk fetch entities using their urlsafe key values?
I know about ndb.get_multi([list]), which takes a list of keys and retrieves the entities in bulk, which is more efficient. But in our case we have a webpage with a few hundred entities, embedded with the entities' urlsafe key values. At first we were only doing operations on single entities, so we were able to use the urlsafe value to retrieve the entity and do the operation without much trouble. Now we need to change multiple entities at once, and looping over them one by one does not sound like an efficient approach. Any thoughts?
Is there any advantage to using the entity's key ID directly (versus the key's urlsafe value)? get_by_id() in the documentation does not imply being able to get entities in bulk (it takes only one ID).
If the only way to retrieve entities in bulk is by using the entities' keys, yet exposing the key on the webpage is not a recommended approach, does that mean we're stuck when it comes to bulk operations on a page with a few hundred entities?
The keys and the urlsafe strings are exactly in a 1:1 relationship. When you have one you can obtain the other:
urlsafe_string = entity_key.urlsafe()
entity_key = ndb.Key(urlsafe=urlsafe_string)
So if you have a bunch of urlsafe strings you can obtain the corresponding keys and then use ndb.get_multi() with those keys to get all entities, modify them as needed then use ndb.put_multi() to save them back into the datastore.
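A minimal sketch of that flow (the model and the processed property are just hypothetical examples):

from google.appengine.ext import ndb

def bulk_update(urlsafe_strings):
    # Rebuild the keys from the urlsafe strings embedded in the page
    keys = [ndb.Key(urlsafe=s) for s in urlsafe_strings]
    # One batched read instead of a separate get() per entity
    entities = ndb.get_multi(keys)
    # Apply whatever change is needed (hypothetical 'processed' property)
    for entity in entities:
        if entity is not None:
            entity.processed = True
    # One batched write to persist everything
    ndb.put_multi([e for e in entities if e is not None])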
As for using IDs - that only works (in a convenient manner) if you do not use entity ancestry. Otherwise, to obtain a key you need both the ID and the entity's parent key (or its entire ancestry) - it's not convenient; better to use urlsafe strings in that case.
But for entities with no parents (aka root entities in the respective entity groups) the entity keys and their IDs are always in a 1:1 relationship and again you can obtain one if you have the other:
entity_key_id = entity_key.id()
entity_key = ndb.Key(MyModel, entity_key_id)
So again from a bunch of IDs you can obtain keys to use with ndb.get_multi() and/or ndb.put_multi().
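For example (assuming root entities of a hypothetical MyModel kind, with integer IDs coming back from the page as strings):

def bulk_update_by_id(ids):
    # Kind + ID is enough to rebuild the key for root entities
    keys = [ndb.Key(MyModel, int(i)) for i in ids]
    entities = ndb.get_multi(keys)
    # ... modify the entities as needed ...
    ndb.put_multi([e for e in entities if e is not None])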
Using IDs can have a cosmetic advantage over the urlsafe strings - they are typically shorter and easier on the eyes when they appear in URLs or in the page HTML code :)
Another advantage of using IDs is the ability to split large entities or to deal in a simpler manner with entities in a 1:1 relationship. See re-using an entity's ID for other entities of different kinds - sane idea?
For more info on keys and IDs see Creating and Using Entity Keys.

Advantage of splitting a table

My question may seem rather general, but the only answer I got so far is from SO itself. I have a customer information table with 47 fields in it. Some of the fields are optional. I would like to split that table into two: customer_info and customer_additional_info. One of its columns stores a file in byte format. Is there any advantage to splitting the table? I saw that the JOIN will slow down query execution. Can I have more pros and cons of splitting a table into two?
I don't see much advantage in splitting the table unless some of the columns are very infrequently accessed and fairly large. There's a theoretical advantage to keeping rows small as you're going to get more of them in a cached block, and you improve the efficiency of a full table scan and of the buffer cache. Based on that I'd be wary of storing this file column in the customer table if it was more than a very small size.
Other than that, I'd keep it in a single table.
I can think of only 2 arguments in favor of splitting the table:
If all the columns in customer_additional_info are related, you could potentially get the benefit of additional declarative data integrity that you couldn't get with a single table. For instance, let's say your additional table was CustomerAddress. Your business logic may dictate that a customer address is optional, but that once you have a customer's Zip code, the AddressL1, City and State become required fields. You could make these columns non-null if they live in a CustomerAddress table. You couldn't do that if they existed directly in the customer table.
If you were doing some object-relational mapping and you had a customer class with many subclasses, and you didn't want to use Single Table Inheritance. Sometimes STI creates problems when similar properties of various subclasses require different storage layouts. Since all subclasses have to use the same table, you might have name clashes. The alternative is Class Table Inheritance, where you have a table for the superclass and an additional table for each subclass. This is a similar scenario to the one you described in your question.
As for cons, the join makes things harder and slower. You also run the risk of accidentally creating a 1-to-many relationship, i.e. you create 2 addresses in the CustomerAddress table and now you don't know which one is valid.
EDIT:
Let me explain the declarative data integrity point further.
If your business rules are such that a customer address is optional, and you embed AddressL1, AddressL2, City, State, and Zip in your customer table, you would need to make each of these fields nullable. That would allow someone to insert a customer with a City but no State. You could write a table-level check constraint to cover this situation, but that isn't as easy as simply making the AddressL1, City, State and Zip columns in the CustomerAddress table not nullable. To be clear, I am NOT advocating the multi-table approach. However, you asked for pros and cons, and I'm just pointing out that this aspect falls on the pro side of the ledger.
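As a rough sketch (Oracle-style syntax; the table and column names are only examples), the separate table lets you state those rules declaratively:

CREATE TABLE CustomerAddress (
    customer_id NUMBER        NOT NULL REFERENCES Customer (customer_id),
    AddressL1   VARCHAR2(100) NOT NULL,
    AddressL2   VARCHAR2(100),
    City        VARCHAR2(50)  NOT NULL,
    State       VARCHAR2(2)   NOT NULL,
    Zip         VARCHAR2(10)  NOT NULL,
    CONSTRAINT pk_customer_address PRIMARY KEY (customer_id)
);

Note that the primary key on customer_id also guards against the accidental 1-to-many relationship mentioned under the cons.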
I second what David Aldridge said; I'd just like to add a point about the file column (presumably a BLOB)...
BLOBs are stored in-line up to approx. 4000 bytes [1]. If a BLOB is used rarely, you can specify DISABLE STORAGE IN ROW to store it out-of-line, removing the "cache pollution" without the need to split the table.
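In Oracle that is just a LOB storage clause on the table definition, for example (hypothetical names):

CREATE TABLE customer_info (
    customer_id NUMBER PRIMARY KEY,
    -- ... the other customer columns ...
    attachment  BLOB
)
LOB (attachment) STORE AS (DISABLE STORAGE IN ROW);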
But whatever you do, measure the effects on realistic amounts of data before you make the final decision.
[1] That is, in the row itself.

Best approach for populating model object(s) from a joined query?

I'm building a small financial system. Because of double-entry accounting, transactions always come in batches of two or more, so I've got a batch table and a transaction table. (The transaction table has batch_id, account_id, and amount fields, and shared data like date and description are relegated to the batch table).
I've been using basic vo-type models for each table so far. Because of this table structure, though, transactions will almost always be selected with a join on the batch table.
So should I take the selected records and splice them into two separate vo objects, or should I create a "shared" vo that contains both batch and transaction data?
There are a few cases in which batch records and/or transaction records are loaded individually, so they will each also have their associated vo class. Are there possible pitfalls down the road if I have "overlapping" vo classes like this?
The best approach is to tie models not to database tables but to your views. E.g. if a view has a date field, then use a "shared" view object (ideally even a view-specific object); if a view has only transaction info, use another object, etc. It can be tedious, but the separation of concerns will be worth it. Too much duplication can be remedied by reusing/inheriting when appropriate.
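A rough sketch of that idea in Python (field names taken from the question; the class names are made up):

from dataclasses import dataclass
from datetime import date
from decimal import Decimal

# View object for screens that show transactions joined with their batch;
# it mirrors the view, not either table.
@dataclass
class TransactionLineView:
    transaction_id: int
    batch_id: int
    account_id: int
    amount: Decimal
    batch_date: date     # shared field from the batch table
    description: str     # shared field from the batch table

# Screens that only deal with the batch header get their own smaller object.
@dataclass
class BatchSummaryView:
    batch_id: int
    batch_date: date
    description: str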
