Google Classroom Gradebook API: overall grade (percentage)

In the Google Classroom API, how do I find the overall grade (percentage) for a student when the grade is computed from weighted grade categories?

Grade categories:
Unfortunately, the API doesn't currently support grade categories. This functionality has already been requested in Issue Tracker:
Feature Request: Include Grade Category field in Coursework Resource
Anyone interested in this functionality can click the star on the top-left of the referenced page in order to keep track of its development and to prioritize its implementation.
Non-weighted:
Overall grades can currently be computed as long as there are no weighted grade categories: sum assignedGrade over each studentSubmission for the specific student (that is, the total number of points the student has earned), and divide that value by the sum of maxPoints over each courseWork in the course (that is, the maximum possible number of points in the class).
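As a minimal sketch of that calculation with google-api-python-client, assuming `service` is an already-authorized Classroom API client, that pagination can be ignored, and that the course has no weighted grade categories (the helper function below is my own, not part of the API):

def overall_grade(service, course_id, user_id):
    """Percentage = sum of assignedGrade / sum of maxPoints for one student.
    `service` is a Classroom API client from googleapiclient.discovery.build().
    Pagination (pageToken/nextPageToken) is omitted for brevity."""
    earned, possible = 0.0, 0.0
    coursework = service.courses().courseWork().list(
        courseId=course_id).execute().get('courseWork', [])
    for work in coursework:
        if not work.get('maxPoints'):
            continue  # skip ungraded coursework
        possible += work['maxPoints']
        subs = service.courses().courseWork().studentSubmissions().list(
            courseId=course_id, courseWorkId=work['id'],
            userId=user_id).execute().get('studentSubmissions', [])
        for sub in subs:
            earned += sub.get('assignedGrade', 0)
    return 100.0 * earned / possible if possible else None
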

Related

Algorithms to process a list of transactions

This is a type of question I usually encounter in SWE internship online assessments for trading/payment-services companies: given a list of transactions of the form "Action-Customer ID-Transaction ID-Amount", we are asked to process them and return a list of actions to take or the amount of money remaining.
Here is a specific example. The input is a list of strings of the form "Action-Order ID-Price-Amount-Buy/Sell" with only two actions, SUB (Submit) and CXL (Cancel).
For Buy, a higher price has higher priority, while for Sell, a lower price has higher priority. Within the same Buy or Sell category, if prices are equal, the order that came earlier has higher priority.
If the price of a Buy >= the price of a Sell, the two orders are matched and Amount = min(Amount of Sell, Amount of Buy); Buy or Sell is decided accordingly.
If an order has already been filled, it will not be cancelled.
If Order ID in CXL action does not exist, there will be no effect (no error).
We are asked to return a list of actions to take; the output should be a list of strings with the following format: "Order ID-Buy/Sell-Amount to Buy/Sell"
Input and output examples:
Input: ["SUB-hghg-10-400-B", "SUB-abab-15-500-B", "SUB-abcd-10-400-S"]
Output: ["abcd-S-400"]
Explanation: "abab" has higher Buy price than "hghg" even though it came later, so it will be processed first. Amount of abab = 500, Amount of abcd = 400 => Amount = 400
Input: ["SUB-hghg-10-400-B", "CXL-hghg"]
Output: []
Explanation: do nothing, because the order was cancelled before it was filled.
I have attempted the problem using hash maps, but it gets too complicated for me. I also encountered a similar problem before, except that there the Cancel removes the order regardless of whether it has been filled, and in that case I used a LinkedList to keep track of the orders.
I want to ask if there are any general approaches to solving these kinds of problems efficiently. I have browsed LeetCode for some time, practicing some Medium questions, but have not encountered this problem type. If there is a typical data structure for efficiently storing the information of each order, I would like to know that as well. I have also searched the Internet for keywords like algorithmic trading and algorithms to process payments/transactions, but I have not found anything useful yet. Any help is greatly appreciated!
Thank you so much for reading my lengthy post.
So your input is a series of submitted orders and cancels and your output is a sequence of the trades that happen when orders are matched, right?
I'd approach the problem as follows: create an order book data structure that contains all open (unmatched) orders, buys and sells separately, ordered by price.
Init: The order book is of course initialized empty. Initialize your list of resulting trades empty as well.
Loop: Then process the incoming requests (submit or cancel) one by one and apply them to the order book. Each request either adds the order to the book, or the order matches one or more resting orders and new trades are generated. Append each resulting trade to your trades list.
That's basically it. However, note that matching a new order against the book is not completely trivial: an order in the book may be matched only partially and remain in the book, or the new order may be matched only partially and its remaining amount has to be added to the book. I'd recommend writing unit tests for this single step so you are sure your orders are sorted and matched as intended.
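To make that concrete, here is a minimal sketch under a few assumptions: the function and variable names are my own, cancelled and exhausted orders are removed from the heaps lazily, and each trade report lists the incoming order's id and side, as the examples above suggest.

import heapq

def process_orders(requests):
    """Price-time priority matching for SUB/CXL requests (sketch)."""
    buys, sells = [], []      # heaps of (price key, arrival seq, order id)
    remaining = {}            # order id -> (unfilled amount, price, side)
    ever_filled = set()       # ids that have traded at least once
    trades, seq = [], 0

    def best(book):
        # Drop cancelled/exhausted ids lazily from the top of the heap.
        while book and book[0][2] not in remaining:
            heapq.heappop(book)
        return book[0][2] if book else None

    for req in requests:
        parts = req.split('-')
        if parts[0] == 'CXL':
            oid = parts[1]
            if oid in remaining and oid not in ever_filled:
                del remaining[oid]          # unknown or filled ids: no effect
            continue

        _, oid, price, amount, side = parts
        price, amount, seq = int(price), int(amount), seq + 1
        own, other = (buys, sells) if side == 'B' else (sells, buys)

        # Match while the best opposite order crosses the incoming price.
        while amount > 0:
            top = best(other)
            if top is None:
                break
            t_amt, t_price, t_side = remaining[top]
            if not (price >= t_price if side == 'B' else price <= t_price):
                break
            qty = min(amount, t_amt)
            trades.append(f"{oid}-{side}-{qty}")
            ever_filled.update({oid, top})
            amount -= qty
            if t_amt == qty:
                del remaining[top]
            else:
                remaining[top] = (t_amt - qty, t_price, t_side)

        if amount > 0:                       # rest the unfilled remainder
            heapq.heappush(own, (-price if side == 'B' else price, seq, oid))
            remaining[oid] = (amount, price, side)

    return trades

print(process_orders(["SUB-hghg-10-400-B", "SUB-abab-15-500-B", "SUB-abcd-10-400-S"]))
# ['abcd-S-400']
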

Find combination of bank notes for particular sum

How do we pay out money efficiently? For example, if you have one $100 bank note, one $50 note, and two $30 notes, how do you determine that you need the $50 note and the two $30 notes to reach a total of $110?
In other words, we have a fixed stock of bank notes: $30 (two notes), $50 (one note), $100 (one note). The problem is to determine which bank notes we should take to get a particular sum, for example $110. In this case we should take the two $30 notes and the one $50 note.
We can't use a greedy algorithm here, because if we take the $100 note first, we can't reach a total of $110.
Which data structure should we use for storing the bank notes: simply the quantity of each note type, or maybe an array holding each individual note, e.g. [100, 50, 30, 30]?
And what is the algorithm to find which bank notes we need to reach a particular sum?
You can use a dynamic programming approach. You keep an array where the index is a reachable sum and the value is the number of the bill by which that sum was reached (see the sketch after the links below).
This approach is used in the Emercoin transaction optimizer.
See source code at:
https://github.com/Emercoin/emercoin/blob/master/src/wallet.cpp#L1112
Memory is O(Sum); Time is O(Sum * NumBills);
See short article about it:
http://cointelegraph.com/news/emercoin-implements-solution-to-reduce-blocksize-inflation
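Here is a minimal sketch of that DP in Python, assuming each physical note may be used at most once; the function name and the dict-based table are my own choices (the linked wallet.cpp differs in its details).

def pick_notes(notes, target):
    """Subset-sum DP: reachable[s] stores the index of the note used last
    to reach sum s, so the chosen notes can be reconstructed by walking back.
    Memory O(target), time O(target * len(notes))."""
    reachable = {0: None}                       # sum -> note index that reached it
    for i, note in enumerate(notes):
        # Iterate over a snapshot so each physical note is used at most once.
        for s in list(reachable):
            t = s + note
            if t <= target and t not in reachable:
                reachable[t] = i
    if target not in reachable:
        return None                             # the sum cannot be formed
    picked, s = [], target
    while s:
        i = reachable[s]
        picked.append(notes[i])
        s -= notes[i]
    return picked

print(pick_notes([100, 50, 30, 30], 110))       # [30, 30, 50]
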
See my solution to this question: Coin Change Problem.
Also, for this kind of problem the hard part is usually to determine whether the coin/money system is canonical, i.e. whether a greedy algorithm can be used.

How to manage multiple positive implicit feedbacks?

When there are no ratings, a common scenario is to use implicit feedback (items bought, pageviews, clicks, ...) to generate recommendations. I'm using a model-based approach and I'm wondering how to deal with multiple identical feedback events.
As an example, let's imagine that consumers buy items more than once. Should I treat the number of feedback events (pageviews, items bought, ...) as a rating, or compute a custom value?
To model implicit feedback, we usually have a mapping procedure that maps implicit user feedback into explicit ratings. I guess that in most domains, repeated user actions on the same item indicate that the user's preference for the item is increasing.
This is certainly true if the domain is music or video recommendation. In a shopping site, such a behavior might indicate the item is consumed periodically, e.g., diapers or printer ink.
One way I am aware of to model this repeated implicit feedback is to create a numeric rating mapping function. As the number of implicit feedback events k increases, the mapped rating should increase: at k = 1 you have a minimal positive rating, for example 0.6, and as k increases it approaches 1. Of course, you don't have to map to [0, 1]; you can use integer ratings 0, 1, 2, 3, 4, 5.
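For instance, a toy mapping with those properties (the exponential form and the decay constant are just illustrative assumptions):

import math

def k_to_rating(k, base=0.6, tau=3.0):
    """Map k repeated feedback events to a rating in [base, 1):
    base at k = 1, approaching 1 as k grows."""
    return 1.0 - (1.0 - base) * math.exp(-(k - 1) / tau)

print([round(k_to_rating(k), 2) for k in (1, 2, 5, 20)])   # [0.6, 0.71, 0.89, 1.0]
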
To give you a concrete example of such a mapping, here is what was done in a music recommendation domain. In short, they used per-user statistics of the items to define the mapping function:
We assume that the more times the user has listened to an artist, the more the user likes that particular artist. Note that users' listening habits usually present a power-law distribution, meaning that a few artists have lots of plays in the user's profile, while the rest of the artists have significantly fewer play counts. Therefore, we compute the complementary cumulative distribution of artist plays in the users' profile. Artists located in the top 80-100% of the distribution are assigned a score of 5, while artists in the 60-80% range are assigned a score of 4.
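A rough sketch of that kind of per-user percentile mapping (the rank-based quantiles and the equal-width buckets are simplifying assumptions; the paper uses the complementary cumulative distribution of play counts):

import numpy as np

def playcounts_to_ratings(playcounts):
    """Map one user's per-artist play counts to 1..5 ratings by quantile:
    the most-played fifth of artists gets 5, the next fifth gets 4, and so on."""
    counts = np.asarray(playcounts, dtype=float)
    ranks = counts.argsort().argsort()              # 0 = least played artist
    quantile = (ranks + 1) / len(counts)            # in (0, 1]
    return np.ceil(quantile * 5).astype(int)

print(playcounts_to_ratings([500, 120, 40, 7, 3]))  # [5 4 3 2 1]
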
Another way I have seen in the literature is to create an additional variable besides a binary rating variable, called a confidence level. See here for details.
Probably not that helpful for OP any longer, but it might be for others in the same boat.
Evaluating Various Implicit Factors in E-commerce
Modelling User Preferences from Implicit Preference Indicators via Compensational Aggregations
If anyone knows more papers/methods, please share as I'm currently looking for state of the art approaches to this problem. Thanks in advance.
You typically use a sum of clicks, or some weighted sum of events, as a "score" for each user-item pair in implicit feedback systems. It's not a rating, and that's more than a semantic distinction: you won't get good results if you feed these values into a process that expects rating-like values and tries to minimize a squared-error loss.
You treat 3 clicks as adding 3 times the value of 1 click to the user-item interaction strength. Other events, like a purchase, might be weighted much more highly than a click. But in the end it also adds to a sum.
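A minimal sketch of that aggregation (the event types and weights below are illustrative assumptions, not values from the answer):

from collections import defaultdict

# Assumed event weights: a purchase contributes far more interaction
# strength than a single click.
EVENT_WEIGHTS = {"click": 1.0, "add_to_cart": 3.0, "purchase": 10.0}

def interaction_strengths(events):
    """Aggregate (user, item, event_type) tuples into a strength per user-item pair."""
    strength = defaultdict(float)
    for user, item, event_type in events:
        strength[(user, item)] += EVENT_WEIGHTS.get(event_type, 0.0)
    return dict(strength)

events = [("u1", "i1", "click"), ("u1", "i1", "click"),
          ("u1", "i1", "purchase"), ("u2", "i1", "click")]
print(interaction_strengths(events))
# {('u1', 'i1'): 12.0, ('u2', 'i1'): 1.0}
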

Design of Bayesian networks: Understanding the difference between "States" and "Nodes"

I'm designing a small Bayesian Network using the program "Hugin Lite".
The problem is that I have difficulty understanding the difference between "Nodes" (the visual circles) and "States" (which are the "fields" of a node).
I will give an example that is clear to me, and another that I can't understand.
The example I understand:
There are two women (W1 and W2) and one man (M).
M has a child with W1; the child's name is C1.
Then M has a child with W2; the child's name is C2.
The resulting network is:
The possible STATES of every node (W1, W2, M, C1, C2) are:
AA: the person has two genes "A"
Aa/aA: the person has one gene "A" and one gene "a"
aa: the person has two genes "a"
Now the example that I can't understand:
The data given:
Total (authorized or not) payments made while a person is in a foreign country (travelling): 5% (so 95% of transactions are made in the home country)
NOT AUTHORIZED payments while TRAVELLING: 1%
NOT AUTHORIZED payments while in HOME COUNTRY: 0.2%
NOT AUTHORIZED payments while in HOME COUNTRY and to a FOREIGN COMPANY: 10%
AUTHORIZED payments while in HOME COUNTRY and to a FOREIGN COMPANY: 1%
TOTAL (authorized or not authorized) payments while TRAVELLING and to a FOREIGN country: 90%
What I've drawn is the following.
But I'm not sure whether it's correct. What do you think? Then I'm supposed to fill in a "probability table" for each node, but what should I write?
Probability table:
Any hint about the network's correctness and how to fill in the table is really appreciated.
Nodes are random variables (RVs), that is, "things" that can be in different states with some level of uncertainty, so you assign probabilities to those states. For example, if you have an RV Person, it could have states such as [Man, Woman] with their corresponding probabilities; if you want to relate it to another RV Credit Worthiness [Good, Bad], then you can "marry" Person and Credit Worthiness to get a combination of both RVs and of their states.
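As a plain-Python illustration of the distinction (the data structures and the genotype probabilities below are purely hypothetical, not Hugin syntax): each node is one random variable, its states are the values it can take, and its table holds one distribution over those states per combination of parent states.

nodes = {
    "W1": {"states": ["AA", "Aa/aA", "aa"], "parents": []},
    "M":  {"states": ["AA", "Aa/aA", "aa"], "parents": []},
    "C1": {"states": ["AA", "Aa/aA", "aa"], "parents": ["W1", "M"]},
}

# A root node carries a prior over its own states (numbers assumed)...
prior_W1 = {"AA": 0.25, "Aa/aA": 0.50, "aa": 0.25}

# ...while a child node carries one distribution per combination of parent
# states, e.g. P(C1 | W1, M) from Mendelian inheritance.
cpt_C1 = {
    ("Aa/aA", "Aa/aA"): {"AA": 0.25, "Aa/aA": 0.50, "aa": 0.25},
    ("AA", "aa"):       {"AA": 0.00, "Aa/aA": 1.00, "aa": 0.00},
    # ... one entry for every other (W1 state, M state) pair
}
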
This is homework, so I don't want to just tell you the answer. Instead, I'll make an observation and ask a few questions. The observation is that you want your arrows going from cause to effect.
So. Is the payment authorization status a/the cause of the location? Or is the location a/the cause of the payment authorization?
Also, do you really need four variables (travelling, home, foreign, and local)? Or might some smaller number of variables suffice?

Algorithm for Rating Objects Based on Amount of Votes and 5 Star Rating

I'm creating a site where people can rate an object of their choice by assigning a star rating (say, out of 5 stars). Objects are arranged in a series of tags and categories, e.g. electronics > graphics cards > pci express > ... or maintenance > contractor > plumber.
If another user searches for a specific category or tag, the hits must return the highest-rated objects in that category. However, the system would be flawed if one person's single 5-star vote for an object outranked another object that 1000 users rated 4.5 stars on average. Logic dictates that more credibility should be given to the object rated by 1000 users than to the object evaluated by a single user, even though it has a "lower" score.
Conversely, it may be more reasonable to trust an object with 500 ratings averaging 4.8 than an object with 1000 ratings averaging 4.5, for example.
What algorithm can achieve this weighting?
A great answer to this question is here:
http://www.evanmiller.org/how-not-to-sort-by-average-rating.html
You can use the Bayesian average when sorting by recommendation.
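A minimal sketch of the Bayesian average (the prior mean and prior weight below are illustrative assumptions; the prior mean is often set to the site-wide average rating):

def bayesian_average(ratings, prior_mean=3.0, prior_weight=50):
    """Shrink an item's mean rating toward prior_mean, with the prior
    acting like prior_weight pseudo-votes."""
    return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + len(ratings))

# A single 5-star vote barely moves the score; 1000 votes of 4.5 dominate it.
print(round(bayesian_average([5]), 2))            # 3.04
print(round(bayesian_average([4.5] * 1000), 2))   # 4.43
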
I'd be tempted to have a cutoff (say, fifty votes though this is obviously traffic dependent) before which you consider the item as unranked. That would significantly reduce the motivation for spam/idiot rankings (especially if each vote is tied to a user account), and also gets you a simple, quick to implement, and reasonably reliable system.
sigmoid_function(value) = 1 / (1 + e^(-value))
rating = sigmoid_function(number_of_voters) + sigmoid_function(average_rating)
