From a Power BI table containing a simple series of yearly results (profit or loss), I'm trying to calculate yearly taxable income. The rules are:
Losses from one or several previous years can be deducted from the current year's profit.
If the accumulated previous losses exceed the current profit, the excess can be carried forward and applied against future profits.
However, an excess of profit over losses cannot be carried forward to offset future losses.
Therefore, taxable income is always = 0 in a loss year and always >= 0 in a profit year.
The outcome I'm after might be something like this:
[Screenshot: Taxable income calculation]
The issue here is that "Previous losses compensation" depends on "Previous losses balance" and vice versa, generating a circular dependency. I've tried both measures and calculated columns, to no avail.
Any suggestion will be very much appreciated. Thanks in advance.
For what it's worth, I think I came up with some sort of solution here. The data live in [Tabla5], and I defined
Year's result = SUM(Tabla5[RCAT])
In the first place, I considered that every time there's a positive result immediately after a loss, there must be a compensation:
Last year's loss compensation =
VAR _Comp =
    SUMX(
        Tabla5,
        VAR _CurrentResult = [Year's result]
        VAR _LastResult =
            MAXX(FILTER(ALL(Tabla5), Tabla5[Year] = EARLIER(Tabla5[Year]) - 1), [Year's result])
        RETURN
            IF(
                AND(_LastResult < 0, _CurrentResult > 0),
                MIN(_CurrentResult, ABS(_LastResult)),
                0
            )
    )
RETURN
    _Comp
Secondly, we need to find out the amount of tax credit available after this first compensation, by means of:
Cumm First compensation = CALCULATE([Last year's loss compensation], FILTER(ALL(Tabla5),Tabla5[Year]<=MAX(Tabla5[Year])))
and
Prior losses = SUMX(FILTER(ALL(Tabla5),Tabla5[Year]<MAX(Tabla5[Year])),IF([Year's result]<0,ABS([Year's result]),0))
and
Tax credit available = [Prior losses]-[Cumm First compensation]
The third step would be comparing this tax credit still available to the amount of profit available for compensation:
Profit available for compensation = IF(
AND([Year's result]>0, [Tax credit available]>0),
[Year's result]-[Last year's loss compensation],0
)
and
Cumm Second Compensation = MIN(SUMX(FILTER(ALL(Tabla5),Tabla5[Year]<=MAX(Tabla5[Year])),IF(AND([Year's result]>0, [Tax credit available]>0),[Profit available for compensation])),[Tax credit available])
The difference between consecutive years of this last measure gives the value of the current year's second compensation:
Prior years losses compensation = [Cumm Second Compensation]- MAXX(FILTER(ALL(Tabla5), Tabla5[Year]=MAX(Tabla5[Year])-1),[Cumm Second Compensation])
Finally, we just need to sum both compensations and subtract that value from the current year's profit to find the taxable income:
Total compensation = [Last year's loss compensation]+[Prior years losses compensation]
and
Taxable income = IF([Year's result]>0, [Year's result]-[Total compensation],0)
The outcome would be something like this:
[Screenshot: Outcome]
I've been trying to build a one-measure-only solution, but I ran into some row/filter context issues that made it too complicated for me. Maybe someone can sort that out.
I can get bid and ask data from my market data provider, but I want to convert this into OHLC values.
What is the correct calculation using bid/ask? I saw in a post that, for a specific period:
Open = (first bid + first ask) / 2.
High = Highest bid
Low = Lowest ask
Close = (last bid + last ask) / 2
Is it true?
You are getting confused with terminology. In forex:
Ask is the price that you, the trader, can currently buy at.
Bid is the price that you, the trader, can currently sell at.
OHLC are historical prices for a predetermined period of time (common periods are 1 min, 5 min, 15 min, 30 min, 1 hour, 4 hour, daily and weekly) and are usually used to plot candlestick charts (and tend to be based on the bid price only).
Open - This is the bid price at the commencement of the time period.
High - This is the highest bid price that was quoted during the time period.
Low - This is the lowest bid price that was quoted during the time period.
Close - This is the last bid price at the end of the time period.
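If it helps, here is a minimal sketch of turning a bid-only quote stream into bars along those lines; the Quote shape and the interval bucketing are my own assumptions, not something your data provider dictates:

// Sketch: build bid-only OHLC bars from a stream of quotes sorted by time.
interface Quote { time: number; bid: number; }
interface Bar { start: number; open: number; high: number; low: number; close: number; }

function toOHLC(quotes: Quote[], intervalMs: number): Bar[] {
  const bars = new Map<number, Bar>();
  for (const q of quotes) {
    const start = Math.floor(q.time / intervalMs) * intervalMs;
    const bar = bars.get(start);
    if (!bar) {
      // First quote of the period sets the open (and, so far, everything else).
      bars.set(start, { start, open: q.bid, high: q.bid, low: q.bid, close: q.bid });
    } else {
      bar.high = Math.max(bar.high, q.bid);
      bar.low = Math.min(bar.low, q.bid);
      bar.close = q.bid; // last quote seen in the period
    }
  }
  return [...bars.values()].sort((a, b) => a.start - b.start);
}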
Conversion between the two is not always straightforward, or even possible. Here is what many beginners (myself included) stumble over:
OHLC data represents trades that actually happened. Bid and ask represent requests for trades that might never happen.
Simplified example:
Let's say investor A wants to sell 100 shares of a specific company for $20 each, so he places ask(100, 20) on the market. Investor B wants to buy 100 shares of the same company, but only wants to pay $18 each, so he places bid(100, 18).
If neither is willing to change their price, no trade will happen and no OHLC data will be generated (assuming no other trades occur in this timeframe).
Of course, one can assume that if trades happen in a specific time frame, H will be the highest price someone is willing to pay (highest bid) and L will be the lowest price someone is willing to sell for (lowest ask), as those orders have the highest chance of being met. But I think the O and C values really depend on which bids/asks actually turned into a trade.
I have a basic issue that looks like voodoo magic to me (I'm a noob at Google Sheets):
What I need (a future value on a specific date)
Instead of the usual future value after "n" years or months, I need to know the future value on a SPECIFIC DATE (e.g. on the 20th of March 2030, or in 2560 days), compounded either yearly or monthly, with or without contributions.
What I have (the usual data for calculating FV):
Yearly (or monthly if it's easier) compound rate.
A present principal which compounds yearly or monthly
A regular monthly (or weekly) contribution to the principal.
SAMPLE FORMULAS I WORK WITH:
FV = SV*(((CAGR*100)/100)+1)^n.
FV - Future Value
SV - Starting Value
CAGR - Compound Annual Growth Rate
n - years
This tells me how much capital I will have after n years, given an annual growth rate. But how do I make the formula tell me what that capital will be on a specific date, and how do I add the monthly/weekly contribution?
Any idea on how to achieve this?
Thanks a lot
I created a calculator in Google Sheets that has all the necessary formulas for FV, PV, PVAF, and "Compound FV on a SPECIFIC DATE".
Although I believe this question is better suited to the Personal Finance & Money Stack Exchange site.
See this link for the calculator.
Description
To incorporate the contributions into the formula, we use the following values:
p = initial value
n = compounding periods per year
r = nominal interest rate, compounded n times per year
i = periodic interest rate = r/n
y = number of years
t = number of compounding periods = n*y
d = periodic deposit
The formula for the future value of an annuity due is
d*(((1 + i)^t - 1)/i)*(1 + i)
(In an annuity due, a deposit is made at the beginning of a period and the interest is received at the end of the period. This is in contrast to an ordinary annuity, where a payment is made at the end of a period.)
The formula is derived from the summation of the future values of every deposit: the deposit made at the start of period k grows for t - k + 1 periods, so the total is d*(1 + i) + d*(1 + i)^2 + ... + d*(1 + i)^t, a geometric series that sums to the expression above.
The initial value, with interest accumulated for all periods, can simply be added.
pfv = p*(1 + i)^t
total = pfv + fv
So the overall formula is
total = p*(1 + i)^t + d*(((1 + i)^t - 1)/i)*(1 + i)
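As a rough illustration outside Sheets, here is a sketch that applies the same formula to a specific target date, assuming monthly compounding and a deposit at the start of each month; the function name and the whole-month date handling are my own simplifications, not the linked calculator:

// Sketch: compound future value on a specific date (annuity due, monthly compounding).
function futureValueOnDate(
  principal: number,      // p: initial value
  annualRate: number,     // r: nominal yearly rate, e.g. 0.05 for 5%
  monthlyDeposit: number, // d: deposit made at the start of each month
  from: Date,
  to: Date
): number {
  const i = annualRate / 12; // periodic (monthly) rate
  // Number of whole months between the two dates (a simplification).
  const t = (to.getFullYear() - from.getFullYear()) * 12 + (to.getMonth() - from.getMonth());
  const growth = Math.pow(1 + i, t);
  const pfv = principal * growth;                           // p*(1 + i)^t
  const fv = monthlyDeposit * ((growth - 1) / i) * (1 + i); // annuity-due term
  return pfv + fv;
}

// Example: 10,000 at 5%/year plus 200/month until 20 March 2030.
console.log(futureValueOnDate(10000, 0.05, 200, new Date(), new Date(2030, 2, 20)));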
I'm currently working on Node.js back-end application.
In a nutshell, it should compute the deposit needed to return to the previous average after several days of not making deposits.
Let's say, our average deposit during the first three days was 100.
Then we were idle for 4 days, obviously our average would drop significantly.
How do we figure out what a single deposit must be to return to the previous average of 100?
Help will be greatly appreciated.
Keep track of the latest day on which a deposit was made. Let's say the average deposit was X as of day a.
After the idle period, on day b, the total amount that needs to be deposited to restore the previous average is (b - a) * X.
If the average can be a fractional number, you also need to decide whether to round the result up or down, based on your requirements.
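A minimal sketch of that calculation (the function and parameter names are mine, not from any existing code):

// Deposit needed on day `today` to restore an average of `previousAverage` per day,
// when the last deposit was made on day `lastDepositDay`.
function depositToRestoreAverage(previousAverage: number, lastDepositDay: number, today: number): number {
  // We need today * previousAverage in total; we already hold lastDepositDay * previousAverage.
  return (today - lastDepositDay) * previousAverage;
}

// Example from the question: average of 100 over days 1-3, idle for 4 days, deposit on day 7.
console.log(depositToRestoreAverage(100, 3, 7)); // 400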
In case anyone is interested, here is the algorithm I built:
let x = (previousAveragePerDay * (datesConcluded + worthOfDiff)) - r[0].sum;
where previousAveragePerDay is the average over the days when deposits were made,
datesConcluded is the number of days on which deposits were made,
worthOfDiff is the number of days without deposits,
and r[0].sum is the sum of all deposits made so far.
Background:
I am looking for a way to calculate the score of a piece of audio based on listener feedback. Each time a user listens to the track, they must vote whether they like it, a simple yes or no. Each track then has a score based on the number of yes and no votes.
Additionally, I would like to decay the value of each vote uniformly over the course of 31 days, so that after this time its value is 0 and it no longer contributes to the overall score.
I have found a lot of discussions of the Reddit and Hacker News ranking algorithms, but these seem to decay the total score rather than the individual votes. Each vote will have a different amount of decay, based on when it was originally cast.
Can anyone help or recommend some material to look at?
Thanks
You could model it as "yes" = 1.0 and "no" = 0.0.
Then, the value of a vote on the nth day after it was cast is (31 - n)/31. Additionally, if n > 31, clamp the value to 0.
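A small sketch of that linear decay, assuming each vote is stored with a timestamp (the Vote shape is illustrative, not your data model):

// Sketch: score a track from yes/no votes, each vote decaying linearly to 0 over 31 days.
interface Vote { likedIt: boolean; castAt: Date; }

function trackScore(votes: Vote[], now: Date = new Date()): number {
  const msPerDay = 24 * 60 * 60 * 1000;
  let score = 0;
  for (const v of votes) {
    const ageInDays = (now.getTime() - v.castAt.getTime()) / msPerDay;
    const weight = Math.max(0, (31 - ageInDays) / 31); // 1 when fresh, 0 at 31 days and beyond
    score += (v.likedIt ? 1.0 : 0.0) * weight;
  }
  return score;
}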
Hope this answers your question.
What rate of decay do you want on the degradation? A common one is reciprocal (1/d) decay, because it is easy to implement. Score +1 for a like and -1 for a dislike; then, when adding up the likes/dislikes, divide each vote by the number of days since it was cast. On day 1 the vote will have an absolute value of 1, on day 2 it will be worth 1/2, on day 3 it will be worth 1/3, and so on; on day 31 it will be worth 1/31 (about 0.03).
The problem with reciprocal degradation is that it drops very quickly. You can use many other schemes, such as multiplying by log(11 - d), where d = 1 on the first day, 2 on the second day, and so on; that only allows 11 days of degradation, while log(31 - d) would allow 31 days. You need to make sure you never take log(0) or the log of a negative number.
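A small sketch of those two weightings, so the difference in shape is easy to see (d is the number of days since the vote, starting at 1):

// Sketch: two decay weightings for a vote that is d days old (d >= 1).
const reciprocalWeight = (d: number): number => 1 / d;                      // 1, 1/2, 1/3, ..., 1/31
const logWeight = (d: number): number => (d < 31 ? Math.log10(31 - d) : 0); // gentler drop, 0 from day 31

for (const d of [1, 2, 10, 30]) {
  console.log(d, reciprocalWeight(d).toFixed(2), logWeight(d).toFixed(2));
}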
Another problem with this entire model is how to handle things that only have old votes. What if something has nothing but likes, but all the likes are old? It will register as not liked much because all the likes have degraded.
I am currently working on writing an algorithm for my new site I plan to launch soon. The index page will display the "hottest" posts at the moment.
Variables to consider are:
Number of votes
How controversial the post is (a number between 0 and 1)
Time since post
I have come up with two possible algorithms, the first and most simple is:
controversial * (numVotesThisHour / (numVotesTotal - numVotesThisHour))
where the denominator is replaced by numVotesThisHour if numVotesTotal - numVotesThisHour == 0
Highest number is hottest
My other option is to use an algorithm similar to Reddit's (except that the score decreases as time goes by):
[controversial * log(x)] - (TimePassed / interval)
x = numVotesTotal if numVotesTotal >= 10, otherwise x = 10
Highest number is hottest
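For concreteness, here is how I would sketch the two scores I just described (function and parameter names are mine):

// First algorithm: instantaneous share of votes in the last hour.
function score1(controversial: number, votesThisHour: number, votesTotal: number): number {
  const older = votesTotal - votesThisHour;
  const denom = older === 0 ? votesThisHour : older; // the special case stated above
  return controversial * (votesThisHour / denom);
}

// Second algorithm: Reddit-like score that decreases as time passes.
function score2(controversial: number, votesTotal: number, hoursSincePost: number, interval: number): number {
  const x = Math.max(votesTotal, 10); // x = numVotesTotal, floored at 10
  return controversial * Math.log10(x) - hoursSincePost / interval;
}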
The first algorithm would allow older posts to become "hot" again in the future while the second one wouldn't.
So my question is, which one of these two algorithms do you think is more effective? Which one do you think will display the truly "hot" topics at the moment? Can you think of any advantages or disadvantages to using one over the other? I just want to make sure I don't overlook anything so that I can ensure the content is as relevant as possible. Any feedback would be great! Thanks!
Am I missing something? In the first formula you have numVotesTotal in the denominator, so a post with a high all-time vote count will never look very hot, even if it is not very old.
For example, take two posts P1 and P2, both equally controversial. Say P1 has numVotesTotal = 20 and P2 has numVotesTotal = 1000. In the last hour, P1 gets numVotesThisHour = 10 and P2 gets numVotesThisHour = 200.
According to the algorithm, P1 is hotter than P2. That doesn't make sense to me.
I think the first algorithm relies too heavily on the instantaneous trend. Think of NASCAR: the current leader could be going 0 mph because he's at a pit stop. The second algorithm uses a notion of average trend. I think both have their uses.
Consider two posts with the same total votes and controversial rating, where one receives 20 votes in the first hour and zero in the second, while the other receives 10 in each hour. The first post will be buried by the first algorithm, but the second algorithm will rank them equally.
YMMV, but I think the 'hotness' is entirely dependent on the time frame, and not at all on the total votes unless your time frame is 'all time'. Also, it seems to me that the proportion of all votes in the relevant time frame, rather than the absolute number of them, is the important figure.
You might have several categories of hot:
Hottest this hour
Hottest this week
Hottest since your last visit
Hottest all time
So, 'Hottest in the last [whatever]' could be calculated like this:
votes_for_topic_in_timeframe / all_votes_in_timeframe
if you especially want a number between 0 and 1 (useful for comparing across categories); or, if you only want the ones in a specific timeframe, just take the votes_for_topic_in_timeframe values and sort them in descending order.
If you don't want the user explicitly choosing the time frame, you may want to calculate all (say) four versions (or perhaps just the top 3), assign a multiplier to each category to give each category a relative importance, and calculate total values for each topic to take the top n. This has the advantage of potentially hiding from the user that no-one at all has voted in the last hour ;)
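A rough sketch of that per-timeframe proportion and the weighted blend (the shapes and multiplier values are illustrative only):

// Hotness of a topic within one timeframe, as a share of all votes in that timeframe.
function hotness(votesForTopicInTimeframe: number, allVotesInTimeframe: number): number {
  return allVotesInTimeframe === 0 ? 0 : votesForTopicInTimeframe / allVotesInTimeframe;
}

// Blend the categories with relative-importance multipliers (values here are made up).
const weights = { hour: 4, sinceLastVisit: 3, week: 2, allTime: 1 };
function blendedHotness(h: { hour: number; sinceLastVisit: number; week: number; allTime: number }): number {
  return h.hour * weights.hour + h.sinceLastVisit * weights.sinceLastVisit
       + h.week * weights.week + h.allTime * weights.allTime;
}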