Dynamic Achievement System algorithm / design

I'm developing an Achievement System that needs a CRUD interface, which admins use to create new achievements and their rules. I need some help with the design and algorithm so it can easily evolve as admins ask for new rules.
Sample rules:
Medal one: must complete any 5 courses with a score of at least 90
Medal two: must complete two specific courses with a score of at least 85
Medal three: must be top 5 in general ranking at least once
Medal four: must have more than 5000 points
I'll basically store that as metadata in a relational database, probably with these columns below:
action
action quantity
course quantity
score
id course
ranking
position
points
Is there any known algorithm or design for this kind of problem? Or should I store the rules differently to make this easier? I'm not sure, so I'd like suggestions.

Your doubts may be right. In my opinion, a fixed set of relational columns is the wrong way to organize this data. Every new kind of achievement you want to create would add extra columns, and most achievements wouldn't use most of the columns. A more flexible data structure, one that doesn't expect every entry to use all of the possible achievement criteria at once, would probably be more useful. Most languages support JSON, so I suggest you use that. The structure could be something like this:
[
  {
    "name": "Medal One",
    "requirements": {
      "coursesCompleted": 5,
      "scoreMin": 90
    }
  },
  {
    "name": "Medal Two",
    "requirements": {
      "specificCoursesCompleted": [
        "Course 1",
        "Course 2"
      ],
      "scoreMin": 85
    }
  },
  {
    "name": "Medal Three",
    "requirements": {
      "generalRankingMin": 5
    }
  },
  {
    "name": "Medal Four",
    "requirements": {
      "pointsMin": 5000
    }
  }
]
You can see here how criteria types are reused where appropriate, omitted when not needed, and how new ones can be added to a few achievements without bloating the rest of the dataset.
PS: I made the criteria names very verbose for demonstration purposes; shortening them or not in actual use is up to preference.
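To show how such metadata can stay easy to extend, here is a minimal sketch (in Python; the user_stats shape and the earned_medals helper are hypothetical, not part of the original design) of an evaluator that maps each requirement key to a check:

def meets_requirements(requirements, user_stats):
    # user_stats is assumed to look like:
    # {"best_scores": {"Course 1": 92, ...}, "best_ranking": 3, "points": 6200}
    checks = {
        # at least N courses completed with the minimum score
        "coursesCompleted": lambda n: sum(
            1 for s in user_stats["best_scores"].values()
            if s >= requirements.get("scoreMin", 0)) >= n,
        # every listed course completed with the minimum score
        "specificCoursesCompleted": lambda courses: all(
            user_stats["best_scores"].get(c, 0) >= requirements.get("scoreMin", 0)
            for c in courses),
        # best ranking position reached at least once (1 = first place)
        "generalRankingMin": lambda pos: user_stats["best_ranking"] <= pos,
        # accumulated points
        "pointsMin": lambda pts: user_stats["points"] >= pts,
        # scoreMin is consumed by the course checks above
        "scoreMin": lambda _value: True,
    }
    return all(checks[key](value) for key, value in requirements.items())

def earned_medals(achievements, user_stats):
    return [a["name"] for a in achievements
            if meets_requirements(a["requirements"], user_stats)]

With that shape, adding a new rule type means adding one key to the JSON and one entry to the check table, rather than a new column that every other achievement has to carry.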

Related

SSAS: How to hide certain fields in a table from certain users

For a Microsoft Analysis Services Tabular (1500) data cube, given a Sales table:
CREATE TABLE SalesActual (
    Id Int,
    InvoiceNumber Char(10),
    InvoiceLineNumber Char(3),
    DateKey Date,
    SalesAmount money,
    CostAmount money
)
Where the GP Calculation in DAX would be
GP := SUM('SalesActual'[SalesAmount]) - SUM('SalesActual'[CostAmount])
I want to limit some users from accessing cost / GP data. Which approach would you recommend?
I can think of the following:
Split all the Sales and Cost into separate rows and create a MetricType flag 'C', 'S', etc. and set Row-Level Security so that some people won't be able to see lines with costs.
Separate them into two different tables and handle it through OLS.
Any other recommendations?
I am leaning towards approach 1 as I have some other RLS set-up and OLS doesn't mix well with RLS, but I also want to hear from the experts what other approach could fulfill such requirements.
Thanks!
UPDATE: I ended up going with the first approach.
Tabular DB is fast for this kind of split.
OLS renders the field invalid, and I'd have to create and maintain two reports, which is undesirable.
RLS is easier to control, and cost / GP is the only thing I need to exclude for now, but it also gives me some flexibility in the filter if I need to restrict other fields. My data will grow vertically, but I can also add additional data types such as sales budget, sales forecast, expenses and other costs into the model in the future, all easily controlled by RLS.
The accepted answer works and would work for many scenarios. I appreciate the answerer's sharing; it just doesn't solve my particular situation.
You can create a role where CLS (column-level security) does the job. There is no GUI for CLS, but we can use a script. (You can script your current role from SSMS via "Script Role As" and modify it, but it's better to test this on a new role.)
{
  "createOrReplace": {
    "object": {
      "database": "YourDatabase",
      "role": "CLS1"
    },
    "role": {
      "name": "CLS1",
      "modelPermission": "read",
      "members": [
        {
          "memberName": "YourOrganization\\userName"
        }
      ],
      "tablePermissions": [
        {
          "name": "Sales",
          "columnPermissions": [
            {
              "name": "SalesBonus",
              "metadataPermission": "none"
            },
            {
              "name": "CostAmount",
              "metadataPermission": "none"
            }
          ]
        }
      ]
    }
  }
}
The key elements are tablePermissions and columnPermissions, in which we define which column or columns the user cannot use.

How to transform nested JSON-payloads with Kiba-ETL?

I want to transform nested JSON-payloads into relational tables with Kiba-ETL. Here's a simplified pseudo-JSON-payload:
{
  "bookings": [
    {
      "bookingNumber": "1111",
      "name": "Booking 1111",
      "services": [
        {
          "serviceNumber": "45",
          "serviceName": "Extra Service"
        }
      ]
    },
    {
      "bookingNumber": "2222",
      "name": "Booking 2222",
      "services": [
        {
          "serviceNumber": "1",
          "serviceName": "Super Service"
        },
        {
          "serviceNumber": "2",
          "serviceName": "Bonus Service"
        }
      ]
    }
  ]
}
How can I transform this payload into two tables:
bookings
services (every service belongsTo a booking)
I read about yielding multiple rows with the help of Kiba::Common::Transforms::EnumerableExploder in the wiki, blog posts, etc.
Would you solve my use-case by yielding multiple rows (the booking and multiple services), or would you implement a Destination which receives a whole booking and calls some Sub-Destinations (i.e. to create or update a service)?
Author of Kiba here!
This is a common requirement, but it can (and this is not specific to Kiba) be more or less complex to handle. Here are a few points you'll need to think about.
Handling of foreign keys
The main problem here is that you'll want to keep the relationships between services and bookings, once they are inserted.
Foreign keys using business keys
A first (and easiest) way to handle this is to use a foreign-key constraint on "booking number", and make sure to insert that booking number in each service row, so that you can leverage it later in your queries. If you do this (see https://stackoverflow.com/a/18435114/20302) you'll have to set a unique constraint on "booking number" in the bookings target table.
Foreign keys using primary keys
If you instead prefer to have a booking_id which points to the bookings table id key, things are a bit more complicated.
If this is a one-off import targeting an empty table, I recommend that you arbitrarily force the primary key using something like:
transform do |r|
  @row_index ||= 0
  @row_index += 1
  r.merge(id: @row_index)
end
If this not a one-off import, you will have to:
* Upsert bookings in a first pass
* In a second pass, look-up (via SQL queries) "bookings" to figure out what is the id to store in booking_id, then upsert the services
As you can see, it's a bit more work, so stick with option 1 if you don't have strong requirements around this (although option 2 is more solid in the long run).
Example implementation (using Kiba Pro & business keys)
The simplest way to achieve this (assuming your target is Postgres) is to use Kiba Pro's SQL Bulk Insert/Upsert destination.
It would go this way (in a single pass):
extend Kiba::DSLExtensions::Config
config :kiba, runner: Kiba::StreamingRunner

source Kiba::Common::Sources::Enumerable, -> { Dir["input/*.json"] }

transform { |r| JSON.parse(IO.read(r)).fetch('bookings') }

transform Kiba::Common::Transforms::EnumerableExploder

# SNIP (remapping / renaming of fields etc)

first_destination = nil

destination Kiba::Pro::Destinations::SQLBulkInsert,
  row_pre_processor: -> (row) { row.except("services") },
  dataset: -> (dataset) {
    dataset.insert_conflict(target: :booking_number)
  },
  after_read: -> (d) { first_destination = d }

destination Kiba::Pro::Destinations::SQLBulkInsert,
  row_pre_processor: -> (row) { row.fetch("services") },
  dataset: -> (dataset) {
    dataset.insert_conflict(target: :service_number)
  },
  before_flush: -> { first_destination.flush }
Here we iterate over each input file, parsing it and grabbing the "bookings", then generating one row per element of "bookings".
We have 2 destinations doing an "upsert" (insert or update), plus one trick to ensure we save the parent rows before we insert the children, to avoid a failure due to a missing referenced record.
You can of course implement this yourself, but it is a bit of work!
If you need to use primary-key based foreign keys, you'll likely have to split this into 2 passes (one for each destination), then add some form of lookup in the middle.
Conclusion
I know that this is not trivial (depending on what you'll need, and whether you'll use Kiba Pro or not), but at least I'm sharing the patterns that I'm using in such situations.
Hope it helps a bit!

Is it possible to use different locations with Schema.org JobPosting?

I would like to use Schema.org for JobPosting, but the offer is for different cities (jobLocation).
Can I mark 2-3 cities in this schema (with JSON-LD)? In that case, how?
According to Google: https://developers.google.com/search/docs/data-types/job-postings#definitions
If the job has multiple locations, add multiple jobLocation properties in an array. Google will choose the best location to display based on the job seeker's query.
In JSON-LD it would look something like:
"jobLocation":[
{
"#type":"Place",
"address":{
"#type":"PostalAddress",
"streetAddress": "555 Clancy St",
"addressLocality":"Chicago",
"addressRegion":"IL",
"postalCode": "48201",
}
},
{
"#type":"Place",
"address":{
"#type":"PostalAddress",
"streetAddress": "5 Main St",
"addressLocality":"San Francisco",
"addressRegion":"CA",
"postalCode": "48212",
}
}
]
The jobLocation property, like any property, can have multiple values. In JSON-LD, you have to use an array (see example).
But the question is what multiple values mean for this jobLocation property: do these represent all locations the person has to work in (AND), or do these represent alternatives and the person can choose (OR)?
Neither Schema.org nor JSON-LD offer a way for the author to disambiguate which one is meant.
In my opinion, multiple values should convey that the person has to work in all these places (AND). Why? Because otherwise there would be no way to convey this. If multiple locations represented alternatives (OR), you could simply provide multiple JobPosting items (one for each location).

Add custom comparatorClass class in Solr

I am a newbie in Solr. I want to add a custom comparatorClass in Solr. I also need to use the fields term and count, which I have defined in my schema.xml, in my custom class.
Structure of the indexed documents:
"docs": [
{
"count": 98,
"term": "age",
},
{
"count": 6,
"term": "age assan",
},
{
"count": 5,
"term": "age but",
},
{
"count": 10,
"term": "age salman",
}]
I have stored ngrams with a term and its count, but Solr ranks by its own frequency, which I don't need; I want to use the count I have defined for each term. I want to sort by that frequency (count) and then by edit distance. Do I need to implement this by creating my own comparator class, or is there something else that would help? Please share how I can do this. Thanks.
You should be able to do this without implementing a custom similarity class. The first requirement is (from your description) a straightforward sort on the count value, while the latter can be implemented by sorting on the value from the strdist() function. You can also multiply or weight these values against each other in a single sort statement by using several functions.
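For example (a sketch only, assuming the term and count fields from the question are single-valued and sortable, and using a made-up core name and query term), the sort can be expressed directly in the query parameters; here it is built from Python:

from urllib.parse import urlencode

user_input = "age"
params = {
    "q": f'term:"{user_input}"',
    # primary sort: your stored count; secondary: string similarity of the
    # stored term to the user's input (strdist returns 1.0 for identical
    # strings, so "desc" puts the closest matches first)
    "sort": f'count desc, strdist("{user_input}", term, edit) desc',
    "fl": "term,count",
}
print("http://localhost:8983/solr/mycore/select?" + urlencode(params))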
If you really, really need to build your own scorer (which I don't think you need to do from your description): these are usually written to explore ranking algorithms other than tf/idf, bm25, etc. for larger corpora, and a search on Google gives you many resources with pre-made, easy-to-adopt solutions. I particularly want to point out "This is the Nuclear Option" in Build Your Own Custom Lucene Query and Scorer:
Unless you just want the educational experience, building a custom Lucene Query should be the “nuclear option” for search relevancy. It’s very fiddly and there are many ins-and-outs. If you’re actually considering this to solve a real problem, you’ve already gone down the following paths [...]

How to remove luis entity marker from utterance

I am using LUIS to determine which state a customer lives in. I have set up a list entity called "state" that has the 50 states with their two-letter abbreviations as synonyms as described in the documentation. LUIS is returning certain two letter words, such as "hi" or "in" as state entities.
I have set up an intent with phrases such as "My state is Oregon", "I am from WA", etc. Inside the intent, if the word "in" is included in the utterance, for example in the utterance "I live in Kentucky", the word "in" is marked automatically by LUIS as a state entity and I am unable to remove that marker.
Below is a snippet of the LUIS JSON response to the utterance "I live in Kentucky". As you can see, the response includes both Indiana and Kentucky as entities when there should only be Kentucky.
"query": "I live in Kentucky",
"topScoringIntent": {
"intent": "STATE_INQUIRY",
"score": 0.9338141
},
....
"entities": [
....
{
"entity": "in",
"type": "state",
"startIndex": 7,
"endIndex": 8,
"resolution": {
"values": [
"indiana"
]
}
},
{
"entity": "kentucky",
"type": "state",
"startIndex": 10,
"endIndex": 17,
"resolution": {
"values": [
"kentucky"
]
}
}
], ....
How do I train LUIS not to mark the words "in" and "hi" in this context as states if I can't remove the entity marker from the utterance?
In this particular case (populating a list entity with state abbreviations/names), you would be better served using the geographyV2 prebuilt entity or the Places.AbsoluteLocation prebuilt domain entity. (Please note that at the time of this writing, the geographyV2 prebuilt entity has a slight bug, so using the prebuilt domain entity would be the better option.)
The reason for this is two-fold:
One, geographic locations are already baked into LUIS and they don't collide with regular syntactic words like "in", "hi", or "me". I tested this in reverse by creating a [Medical] list that contained "ct" as the normalized value and "ct scan" as a synonym. When I typed "get me a ct in CT" it resulted in "get me a [Medical] in [Medical]". To fix, I selected the second "CT" value and re-assigned it to the Places.AbsoluteLocation entity. After retraining, I tested "when in CT show me ct options" which correctly resulted in "when in [Places.AbsoluteLocation] show me [Medical] options". Further examples and training will refine the results.
Two, lists work well when disparate words all reference one canonical item. This tutorial shows a simple example where loosely associated words are assigned as synonyms to a canonical name (normalized value).
Hope this helps!
@StevenKanberg's answer was very helpful but unfortunately not complete for my situation. I tried to implement both geographyV2 and Places.AbsoluteLocation (separately). Neither one works entirely in the way I need it to (recognizing states and their two-letter abbreviations in a way that can be queried from the entities in the response).
So my choices are:
Create my own list of states, using the state name and the two-letter abbrev as synonyms, as described in the list description itself. This works except for two letter abbrevs that are also words, such as "in", "hi" and "me".
Use geographyV2 prebuilt which does not allow synonyms and does not recognize two-letter abbrevs at all, or
Use Places.AbsoluteLocation which does recognize two-letter abbrevs for states, does not confuse them with words, but also grabs all locations including cities, countries and addresses and does not differentiate between them so I have no way of parsing which entity is the state in an utterance like "I live in Lake Stevens, Snohomish County, WA".
Solution: If I combine 1 with 3, I can query for entities that have both of those types. If LUIS marks the word "in" as a state (Indiana), I can then check to see if that word has also been flagged as an AbsoluteLocation. If it has not, then I can safely discard that entity. It's not ideal but is a workaround that solves the problem.
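For illustration, here is a rough sketch of that check (Python, assuming the entities array has the shape shown in the response snippet above and that the prebuilt entity appears with type "Places.AbsoluteLocation"; the function names are illustrative, not from LUIS):

def spans_overlap(a, b):
    return a["startIndex"] <= b["endIndex"] and b["startIndex"] <= a["endIndex"]

def resolve_states(entities):
    # Spans that the prebuilt domain entity also recognized as locations.
    locations = [e for e in entities if e["type"] == "Places.AbsoluteLocation"]
    states = []
    for e in entities:
        if e["type"] != "state":
            continue
        # Keep a "state" list match only if it overlaps a location span;
        # stray words like "in" or "hi" get discarded here.
        if any(spans_overlap(e, loc) for loc in locations):
            states.extend(e["resolution"]["values"])
    return states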
