Can anyone help me understand:
What happens to the fees of each transaction? Do the fees become part of the reward?
When does the reward get distributed? Is there any specific time period? Can you please point me to a couple of examples in the explorer where the reward is distributed?
I found these docs for rewards: https://nomicon.io/Economics/README.html#rewards-calculation
coinbaseReward[t]: the maximum inflation per epoch[t], as a function of REWARD_PCT_PER_YEAR / EPOCHS_A_YEAR
Where can I get REWARD_PCT_PER_YEAR and EPOCHS_A_YEAR? Do they remain fixed for a certain range of blocks?
epochReward[t] = coinbaseReward[t] + epochFee[t]
epochFee[t] - does this mean the fees of all transactions that happened in an epoch?
Could you also point me to some staking transactions in the explorer, if there are any?
Sure,
I had exactly the same questions and came up with a doc.
In a few words:
Fees are burnt; they do not become part of the validators' reward, though part of the fees from function calls and cross-contract function calls becomes a royalty reward for the contract accounts.
Reward tokens are minted every epoch (roughly every 12 hours) and go to the staking pools.
Please check the doc for the examples; I covered all the situations where a balance changes, with all the numbers and an explanation of what is going on.
UPD: by the way, it's true that there is no information about royalties in Nomicon; we created an issue for that.
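To make the formulas quoted above concrete, here is a minimal sketch of the per-epoch calculation. The parameter values are placeholders I made up, not the real protocol configuration; the actual REWARD_PCT_PER_YEAR and EPOCHS_A_YEAR come from the protocol config, which is where you would look them up:

```python
# Minimal sketch of the Nomicon reward formulas; all numbers are placeholders.
REWARD_PCT_PER_YEAR = 0.05  # assumed 5% maximum annual inflation (placeholder)
EPOCHS_A_YEAR = 730         # ~2 epochs a day if an epoch is ~12 hours (placeholder)

def coinbase_reward(total_supply: float) -> float:
    """coinbaseReward[t]: the maximum inflation per epoch."""
    return total_supply * REWARD_PCT_PER_YEAR / EPOCHS_A_YEAR

def epoch_reward(total_supply: float, epoch_fee: float) -> float:
    """epochReward[t] = coinbaseReward[t] + epochFee[t]."""
    return coinbase_reward(total_supply) + epoch_fee
```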
Scenario:
I need to give users the opportunity to book different times for a service.
The caveat is that I don't have the bookings in advance; I need to fill them as they come in.
Bookings can be represented as key-value pairs:
[startTime, duration]
So, for example, [9, 3] would mean the event starts at 9 o'clock and has a duration of 3 hours.
Rules:
users come in one by one; there is never a batch of user requests
no bookings can overlap
the service is available 24/7, so there is no need to worry about "working time"
users choose the duration on their own
obviously, once a user chooses & confirms his booking, we cannot shuffle it anymore
we don't want gaps shorter than some amount of time. This is based on the probability that future users will fill the gap: if the distribution of durations over users' bookings is such that the probability of a future user filling a gap shorter than x hours is less than p, then we want a rule that no gap can be shorter than x. (For the purpose of this question, assume x is hardcoded; here I just explain the reasoning.)
the goal is to maximize the service-busy duration
My thinking so far...
I keep the list of bookings made so far
I also keep track of gaps (as they are potential slots for new users' bookings)
When a new user comes with his booking [startTime, duration], I first check for the ideal case, where gapLength = duration. If there are no such gaps, I find all slots (gaps) that satisfy the condition gapLength - duration > minimumGapDuration and order them in descending order by that gapLength - duration value.
I assign the user to the gap with the maximum value of gapLength - duration, since that gives me the highest probability that the gap remaining after this booking will also get filled in the future.
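For what it's worth, here is a minimal sketch of that selection rule; gaps are (start, length) pairs in hours, and the bookkeeping that maintains the gap list is left out:

```python
# Sketch of the gap-selection rule described above.
# Each gap is a (start, length) pair; times are in hours for simplicity.

def choose_gap(gaps, duration, minimum_gap_duration):
    """Return the gap to book into, or None if no gap satisfies the rules."""
    # Ideal case: a gap the booking fills exactly, leaving no remainder.
    for gap in gaps:
        if gap[1] == duration:
            return gap
    # Otherwise consider only gaps whose leftover exceeds the minimum gap,
    # preferring the largest leftover (most likely to be filled later).
    viable = [g for g in gaps if g[1] - duration > minimum_gap_duration]
    return max(viable, key=lambda g: g[1] - duration, default=None)

# Example: a 2-hour booking, gaps at 9:00 (3 h) and 14:00 (6 h), 1 h minimum gap.
print(choose_gap([(9, 3), (14, 6)], duration=2, minimum_gap_duration=1))  # (14, 6)
```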
Questions:
Are there some problems with my approach that I am missing?
Are there algorithms solving this particular problem?
Is there some usual approach (a good starting point) that I could start with and optimize later? (I am actually trying to get enough info to start without making some critical mistake; optimizations can/should come later.)
PS.
From my research so far, it sounds like this might be a case for constraint programming. I would like to avoid it if possible, as I have no clue about it (maybe it's simple, I just don't know), but if it makes a real difference, I will go for its benefits and implement it.
I went through Stack Overflow for similar problems but didn't find one with unknown future events. If there is such a question and this is a direct duplicate, please refer me to it.
Gurus,
I am in the process of writing some code to optimize employee transport for a corporation. I need you experts' advice on how this can be achieved. Here is my scenario.
There are 100 pick-up points all over the city from which employees need to be brought to the company in multiple vehicles. Each vehicle can hold, say, 4 or 6 employees. My objective is to write code that will group people from nearby areas and bring them to the company. The master data will have the addresses and their latitude/longitude. I want to build an algorithm that optimizes vehicle occupancy as well as distance and time. Could you please give some direction on how this can be achieved? I understand I may need to use the Google Maps or Directions API for this, but I am looking for hints/advice on the logic.
Some more inputs: these are the company's vehicles, with drivers. Travel time should not be more than 1.5 hours.
Thanks in advance.
Your problem description is a more complicated version of the travelling salesman problem. You can look it up and find different examples and how they are implemented.
One point that needs to be clarified: will the vehicles used be employees' vehicles that are car-shared, or the company's vehicles with a driver?
You also need to define some time constraints. For example: 50 employees should have under 30 min of travel, 40 employees under 1 h, and 10 employees under 1.5 h.
You also need to define the travel time for each road depending on the time of day, because at different times there will or will not be traffic jams.
You also need to define groups within the employees: usually people in a company (an admin clerk vs. the CEO) don't commute at the same time; there can be a range of an hour or more.
Finally, don't forget to account for the roughly 10% of employees who will be 2 to 5 minutes late to their meeting point.
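As a concrete starting point, here is a rough sketch of the simplest viable approach: greedily cluster nearby pickups into vehicle-sized groups, then order each group's stops with a nearest-neighbour pass. Straight-line distance stands in for real travel times (which you would get from a maps/directions API); this is a naive baseline, not an optimized VRP solver, and the coordinates below are made up:

```python
import math

def dist(a, b):
    """Straight-line distance between two (lat, lon) points (placeholder metric)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def group_pickups(points, capacity):
    """Greedily group each remaining pickup with its nearest neighbours."""
    remaining = list(points)
    groups = []
    while remaining:
        seed = remaining.pop(0)
        remaining.sort(key=lambda p: dist(seed, p))
        groups.append([seed] + remaining[:capacity - 1])
        remaining = remaining[capacity - 1:]
    return groups

def order_stops(group, office):
    """Nearest-neighbour ordering of pickups, ending at the office."""
    remaining = list(group)
    path = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda p: dist(path[-1], p))
        remaining.remove(nxt)
        path.append(nxt)
    return path + [office]

pickups = [(12.97, 77.59), (12.98, 77.60), (13.00, 77.62), (12.95, 77.55)]
office = (12.93, 77.61)
for group in group_pickups(pickups, capacity=4):
    print(order_stops(group, office))
```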
I have a scenario where deliveries should be managed properly with optimized routes.
Let me explain the scenario in detail:
We are dealing with a company that sells furniture. There may be 4,000 to 5,000 deliveries per day, and it is pretty difficult for them to assign each delivery to a vehicle; they are also unable to control the fuel cost. The overtime allowance is also coming out very high, as they are not scheduling the deliveries properly. So they require an application that will handle these situations.
The application should handle the following scenarios:
All the deliveries must be completed.
The system should find out how many vehicles are required for delivering the items, given the following input parameters:
Input 1: the delivery location (latitude and longitude).
Input 2: the percentage of vehicle space used by the item.
Input 3: the FIXING TIME (for items delivered dismantled, which will be reassembled at the delivery location).
From the above parameters, the system should calculate how many vehicles are required and which deliveries should be assigned to which vehicles.
The vehicle working time should be 8 hours (TRAVEL TIME + FIXING TIME).
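To make those constraints concrete, here is a minimal sketch of a greedy first-fit packing under the space and 8-hour limits. The travel-time estimate is a crude straight-line placeholder (a real system would use a routing API), and the chaining of stops within a route is ignored:

```python
import math

def travel_minutes(a, b, speed_kmh=30):
    """Very rough straight-line travel estimate in minutes (placeholder)."""
    km = 111 * math.hypot(a[0] - b[0], a[1] - b[1])  # ~111 km per degree
    return 60 * km / speed_kmh

def assign_deliveries(deliveries, depot, day_minutes=480):
    """deliveries: dicts with 'latlon', 'space_pct', 'fixing_min' (assumed fields)."""
    vehicles = []  # each: {'space': % used, 'time': minutes used, 'jobs': [...]}
    # First-fit decreasing by space: place the bulkiest items first.
    for d in sorted(deliveries, key=lambda d: d['space_pct'], reverse=True):
        t = travel_minutes(depot, d['latlon']) + d['fixing_min']
        for v in vehicles:
            if v['space'] + d['space_pct'] <= 100 and v['time'] + t <= day_minutes:
                v['space'] += d['space_pct']
                v['time'] += t
                v['jobs'].append(d)
                break
        else:  # no existing vehicle fits: open a new one
            vehicles.append({'space': d['space_pct'], 'time': t, 'jobs': [d]})
    return vehicles  # len(vehicles) = number of vehicles required
```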
I have tried this with the help of the Hungarian algorithm but failed to get the proper output.
Please suggest which algorithm would be suitable for my scenario.
Shenu Lal
I read this problem in a book (an interview question) and wanted to discuss it in detail over here. Kindly throw some light on it.
The problem is as follows:
Privacy & Anonymization
The Massachusetts Group Insurance Commission had a bright idea back in the mid-1990s: it decided to release "anonymized" data on state employees that showed every single hospital visit they had.
The goal was to help researchers. The state spent time removing identifiers such as name, address, and social security number. The Governor of Massachusetts assured the public that this was sufficient to protect patient privacy.
Then a graduate student saw significant pitfalls in this approach. She requested a copy of the data, and by collating the data across multiple columns, she was able to identify the Governor's health records.
This demonstrated that extreme care needs to be taken in anonymizing data. One way of ensuring privacy is to aggregate data such that any record can be mapped to at least k individuals, for some large value of k.
I wanted to actually experience this problem with some kind of example dataset, and see what it actually takes to do this anonymization. I hope the question is clear.
I have no experienced person who can help me deal with this kind of problem. Kindly don't vote to close this question, as I would be helpless if that happens.
Thanks, and if any more explanation of the question is required, kindly ask.
I just copy-pasted part of your text and stumbled upon this.
This helps in understanding your problem:
At the time GIC released the data, William Weld, then Governor of Massachusetts, assured the public that GIC had protected patient privacy by deleting identifiers. In response, then-graduate student Sweeney started hunting for the Governor’s hospital records in the GIC data. She knew that Governor Weld resided in Cambridge, Massachusetts, a city of 54,000 residents and seven ZIP codes. For twenty dollars, she purchased the complete voter rolls from the city of Cambridge, a database containing, among other things, the name, address, ZIP code, birth date, and sex of every voter. By combining this data with the GIC records, Sweeney found Governor Weld with ease. Only six people in Cambridge shared his birth date, only three of them men, and of them, only he lived in his ZIP code. In a theatrical flourish, Dr. Sweeney sent the Governor’s health records (which included diagnoses and prescriptions) to his office.
Boom! But it was only an early mile marker in Sweeney's career; in 2000, she showed that 87 percent of all Americans could be uniquely identified using only three bits of information: ZIP code, birthdate, and sex.
Well, as you stated it, you need a sample database, and you need to ensure that any record can be mapped to at least k individuals, for some large value of k.
In other words, you need to clear the database of discriminative information. For example, if you keep only the sex (M/F) in the database, then there is no way to find out who is who, because there are only two possible entries: M and F.
But if you keep the birthdate, then your total number of possible entries becomes more or less 2*365*80 ≈ 58,000 (I chose 80 years). Even if your database contains 500,000 people, there is a chance that one of them (say, a male born on 03/03/1985) is the ONLY one with such an entry, and thus you can recognize him.
This is only a simplistic approach that relies on combinatorics. If you want something more complex, look into correlated information and PCA.
Edit: let's give an example. Suppose I'm working with medical data. If I keep only:
The sex: 2 possibilities (M, F)
The blood group: 4 possibilities (O, A, B, AB)
The rhesus: 2 possibilities (+, -)
The state they're living in: 50 possibilities (if you're in the USA)
The month of birth: 12 possibilities (it affects the death rate of babies)
Their age category: 10 possibilities (0-9 years old, 10-19 years old, ..., 90-infinity)
That leads to a total of 2*4*2*50*12*10 = 96,000 categories. Thus, if your database contains 200,000,000 entries (a rough approximation of the number of inhabitants of the USA in your database), there is NO WAY you can identify someone.
This also assumes that you do not give out any further information: no ZIP code, etc. With only the 6 attributes given, you can compute some nice statistics (do people born in December live longer?), but no identification is possible, because 96,000 is very much smaller than 200,000,000.
However, if you only have the database of the city you live in, which has, for example, 200,000 inhabitants, then you cannot guarantee anonymization, because 200,000 is "not much bigger" than 96,000. ("Not much bigger" is a truly complex scientific term that requires knowledge of probabilities :P)
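If you want to experiment with this yourself, here is a minimal sketch that measures the k-anonymity of a record set, i.e. the size of the smallest group of records sharing the same quasi-identifier combination. The records and attribute names are made up for illustration:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Size of the smallest group sharing the same quasi-identifier values."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

records = [
    {"sex": "M", "blood": "O", "state": "MA", "age_cat": 3, "diagnosis": "flu"},
    {"sex": "M", "blood": "O", "state": "MA", "age_cat": 3, "diagnosis": "asthma"},
    {"sex": "F", "blood": "A", "state": "MA", "age_cat": 3, "diagnosis": "flu"},
]

# k = 1 here: the single female record is uniquely identifiable
# from (sex, blood, state, age_cat) alone, so this set is NOT anonymized.
print(k_anonymity(records, ["sex", "blood", "state", "age_cat"]))
```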
"I wanted to actually experience this problem, with some kind of example set, and then what it actually takes to do this anonymization."
You can also construct your own dataset by finding one on your own, "anonymizing" it, and trying to reconstruct it.
Here is a very detailed discussion of the de-identification/anonymization problem, and potential tools & techniques for solving it:
https://www.infoway-inforoute.ca/index.php/component/docman/doc_download/624-tools-for-de-identification-of-personal-health-information
The document above is written for the rules of the Canadian public health system, but it is conceptually applicable to other jurisdictions.
For the U.S., you would specifically need to comply with the HIPAA de-identification requirements: http://www.hhs.gov/ocr/privacy/hipaa/understanding/coveredentities/De-identification/guidance.html
"Conceptually applicable" does not mean "compliant". To be compliant in the EU, for example, you would need to dig into the specific EU requirements, as well as the country requirements and potentially state/local requirements.
I'm going to be starting a banner-rotation script soon and I'm getting a bit perplexed over how exactly to develop it. Suppose a client asks for
"10,000 impressions in the next 10 days for $10,000 dollars."
Another client asks for
"1,000 impressions for $100 dollars."
And a third asks for
"1,000 clicks or 10,000 impressions for $5,000."
How exactly do I determine which banner to show upon a page-request? How do I weigh one against another? Clearly the first request is rather important, as I'm expected to serve a set number of impressions within a time-window.
The second client is not nearly as important, as they don't care about a time window; they just want some face time.
And the last client wants to place an n-or-m constraint on the impressions/clicks, making matters slightly more difficult.
I'm already pretty confident that I'll need to derive some weight from these scenarios to determine who gets the most attention. My question is: what type of algorithm could handle this, and secondly, how can I serve banners by weight without always serving the most important banner on every request?
The difficulty comes from the time constraint more than anything else. I would divide the priority of anyone who did not specify a time constraint by 365 (a year), and then use time as part of the weight factor. So:
Client 1 priority: 10000/10 = 1000
Client 2 priority: 1000/365 ~ 3
Client 3 priority: 10000/365 ~ 27
That should get you a fairly decent indicator of priority. Now, you can't mix and match impressions and clicks, can you? They either go the impression route or the click route. Seeing as you cannot control clicks, but you can control impressions (at least, more so than clicks), I would weight it according to impressions.
Use a random-number generator to pick which ad to show, and weight it with a priority for each ad. Set the weighting factor higher for clients who want more impressions or have a deadline. You can increase the weighting factor as the deadline approaches.
Once a client hits their requested impressions, drop the weighting to 0 to prevent their ad from showing.
The default weighting could be 1 or so, with clients allowed to pay extra to increase their priority (without telling them the mechanics -- bill it as "premium" placement, etc.).
Edit: weighting details
You can make this as simple or complex as you like, but a basic version would include the following terms:
weight is 0 if the ad has reached its purchased impressions/clicks
base weighting (probably 1.0)
multiply the weight by impressions_remaining / TOTAL impressions remaining for all clients
add a small constant if the remaining impressions/clicks count is small -- this ensures clients get the last few impressions needed to finish their account
for deadline clients: add a term for (remaining impressions / purchased impressions) / (time left / total time)
Deadline clients should be capped at 90% of all page displays or so, to ensure they don't outcompete the others. The last term gives the "urgency" for deadline clients -- it goes to infinity as the deadline hits, so you should put a floor on the remaining-time piece to prevent problems.
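Here is a minimal sketch of that weighting; the field names (remaining, purchased, deadline, total_time), the small-count threshold, and the constants are illustrative assumptions, and the 90% cap for deadline clients is omitted for brevity:

```python
import random
import time

# Sketch of the weighting scheme above; thresholds and field names are made up.
def ad_weight(ad, total_remaining, now):
    if ad["remaining"] <= 0:
        return 0.0                        # purchased impressions/clicks reached
    w = 1.0                               # base weighting
    w *= ad["remaining"] / total_remaining
    if ad["remaining"] < 10:              # small constant to finish the account
        w += 0.5
    if ad.get("deadline") is not None:    # urgency term for deadline clients
        time_left = max(ad["deadline"] - now, 1.0)  # floor prevents blow-up
        w += (ad["remaining"] / ad["purchased"]) / (time_left / ad["total_time"])
    return w

def pick_ad(ads, now=None):
    now = now if now is not None else time.time()
    total_remaining = sum(a["remaining"] for a in ads) or 1
    weights = [ad_weight(a, total_remaining, now) for a in ads]
    return random.choices(ads, weights=weights, k=1)[0]
```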
Microsoft Commerce Server contains a NOD algorithm
(see http://msdn.microsoft.com/en-us/library/ms960081%28v=cs.70%29.aspx
and http://msdn.microsoft.com/en-us/library/ee825423%28v=cs.10%29.aspx )
I've used derived versions of this formula in 3 different ad servers, and it turned out to work nicely for my conditions.
The basic formula regarding your situation uses a variable called NOD, short for "Need of Delivery". At any given time, the "basic" NOD formula of a banner is:
NOD = (Remaining Events / Total Events Requested) * (Total Runtime / Remaining Runtime)
Note that "Events" is a general term, which may represent impressions, clicks, conversions, etc. depending on your system.
The equation states that all banners start their lives with the initial value of 1.0, because (e / e) * (t / t) = 1.0.
A higher-than-1 NOD value means you are behind schedule, while a NOD between 0 and 1 generally means that you have displayed the banner "too fast". Values between 0.9 and 1.2 are generally in the acceptable range (this is not a technical range, but business experience).
As long as the serving ratios match duration ratios, values stay around 1.0.
For a specific ad slot, the algorithm checks the NODs of all banners targetable on the slot. Suppose you have 3 banners available on a slot, with NOD values 0.6, 1.35, and 1.05, which add up to 3.0. Then the relative probabilities of each banner being displayed become 20%, 45%, and 35% respectively (e.g., 0.6 / (0.6 + 1.35 + 1.05) = 20%).
The algorithm uses a weighted probability distribution, which means that even the banner with the lowest NOD value has a chance to be displayed. While the basic formula uses this approach, business decisions generally forced me to implement algorithms favoring urgent NOD values more than the original formula does. So I took the base NODs and multiplied them by themselves. In the same example, the probabilities become 11%, 55.5%, and 33.5% respectively.
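A small sketch of that selection, with the squaring trick to favor urgent banners; the banner stats are made up to reproduce the 0.6 / 1.35 / 1.05 example above:

```python
import random

# Sketch of NOD-based banner selection as described above.
def nod(remaining_events, total_events, total_runtime, remaining_runtime):
    return (remaining_events / total_events) * (total_runtime / remaining_runtime)

def pick_banner(banners, urgency_power=2):
    # urgency_power=1 is the basic weighted distribution;
    # 2 squares the NODs to favor banners behind schedule.
    weights = [nod(*b["stats"]) ** urgency_power for b in banners]
    return random.choices(banners, weights=weights, k=1)[0]

banners = [
    {"name": "A", "stats": (600, 1000, 10, 10)},    # NOD 0.6: ahead of schedule
    {"name": "B", "stats": (900, 1000, 10, 6.67)},  # NOD ~1.35: behind schedule
    {"name": "C", "stats": (700, 1000, 10, 6.67)},  # NOD ~1.05
]
print(pick_banner(banners)["name"])
```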
For your situation, you might consider changing the formula a little bit to serve your needs. First, to be able to compare the income you will earn by displaying a banner, you should convert all display types (impression, click, action, etc.) to a common eCPM value. Then you might use this eCPM as a multiplier in the original equation.
Calculating eCPM (effective CPM) might be tricky for not-yet-published campaigns; in that case you should use historical data.
Let me explain this part a little bit more: when trying to compare the probable income you will earn by "displaying" a single banner, you don't need to convert impression-based budgets. For click-based budgets, you should use the historical CTR value to guess "how many impressions my system needs to serve to get X clicks". A more advanced algorithm might utilize "how many impressions my system needs to serve to get a click for a campaign in category X, on inventory Y".
Then your final equation becomes:
NOD = eCPM * (Remaining Events / Total Events Requested) * (Total Runtime / Remaining Runtime)
You can always consider using powers of eCPM to weight the results. Like my way of changing the original formula to favor more urgent campaigns, you might favor higher-paying campaigns.
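Continuing the sketch above, the eCPM multiplier (with an optional power, analogous to the urgency trick) would slot in like this; ecpm_power is my illustrative name, not part of the original formula:

```python
def revenue_weighted_nod(ecpm, remaining_events, total_events,
                         total_runtime, remaining_runtime, ecpm_power=1.0):
    # ecpm_power > 1 favors higher-paying campaigns, mirroring the
    # squaring trick used for urgency in the base NOD above.
    base = (remaining_events / total_events) * (total_runtime / remaining_runtime)
    return (ecpm ** ecpm_power) * base
```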
I really like AlbertoPL's time-based approach, but he doesn't factor in the clicks. It's easy to demonstrate pathological cases where clicks are relevant:
Client A offers $1000 for 1 click or 10,000 impressions
Client B offers $1000 for 5000 clicks or 10,000 impressions.
Any reasonable person would give the 1-click guy higher priority. The calculation is actually pretty trivial: assume your click-through rate is one click per 100 impressions.
Client A wants 10,000 impressions or 1 click, so we require a bare minimum of 100 impressions to get paid. At a cost of $1000 per 100 impressions, you can figure that your client is willing to pay $10/impression.
Client B wants 10,000 impressions or 5000 clicks. 5000 clicks require 500,000 impressions; we'll clearly hit the 10,000-impression mark before then, so we assume the client is really offering to pay $1000 for 10,000 impressions, or $0.10/impression.
We maximize revenue by maximizing our $/impression, so client A takes priority. Let's use the figures provided in the OP:
Client 1:
10,000 impressions in the next 10 days for $10,000
= minimum of 10,000 impressions * $1/impression / 10 days
= $1000/day
Client 2:
1,000 impressions for $100
= minimum of 1,000 impressions * $0.10/impression / 365 days
= $0.27/day
Client 3:
1,000 clicks or 10,000 impressions for $5000
= min(100,000 impressions to get 1,000 clicks, 10,000 impressions) = 10,000 impressions for $5000
= minimum of 10,000 impressions * $0.50/impression / 365 days
= $13.7/day.
Clients take priority based on how much they pay per day.
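A short sketch consolidating that arithmetic; the one-click-per-100-impressions assumption is the one made above, and clients without a deadline default to 365 days:

```python
# Sketch: convert each deal to dollars per day, per the reasoning above.
IMPRESSIONS_PER_CLICK = 100  # assumed click-through rate (1 click per 100)

def min_impressions(impressions=None, clicks=None):
    """Impressions needed to satisfy whichever target is reached first."""
    options = []
    if impressions is not None:
        options.append(impressions)
    if clicks is not None:
        options.append(clicks * IMPRESSIONS_PER_CLICK)
    return min(options)

def priority(price, impressions=None, clicks=None, days=365):
    need = min_impressions(impressions, clicks)
    per_impression = price / need            # dollars per impression
    return need * per_impression / days      # dollars per day, as figured above

print(priority(10_000, impressions=10_000, days=10))      # Client 1: 1000.0
print(priority(100, impressions=1_000))                   # Client 2: ~0.27
print(priority(5_000, impressions=10_000, clicks=1_000))  # Client 3: ~13.7
```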