I was wondering if Cognos Framework Manager has the built-in function "Last" like in Dynamic Cubes?
Or does someone know how to model the following case:
We have two dimensions: a time dimension with year, half-year, quarter and month, and another dimension that categorises people depending on how long they have been attending a project (1-30 days, 31-60 days, 60-180 days, 180-365 days, 1-2 years, 2+ years). However, the choice of the time dimension level (year, half-year etc.) influences the categorization of the other dimension.
An example:
A person attends a project starting 15.11.2018 and ending 30.06.2020. The Cognos user selects the year level of the time dimension, so 2018, 2019 & 2020 will be displayed.
For 2018 the person will be in the category 31-60 days, since 46 days have passed by 31.12.2018. For 2019 the person will be listed in the category 1-2 years, as 46 + 365 days will have passed by 31.12.2019. For 2020 the person will also be in that category, as 46 + 365 + 180 days have gone by.
The categories will change if the user selects another time dimension level, e.g. half-years (a small sketch of the bucketing arithmetic follows this list):
2nd HY 2018: 31-60 (46 days passed)
1st HY 2019: 180-365 days (46 + 180 --> End of HY2019)
2nd HY 2019: 1-2 years (46 + 180 + 180)
1st HY 2020: 1-2 years (46 + 180 + 180 + 180)
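To make the bucketing arithmetic concrete, here is a minimal Python sketch (not Cognos modelling, just the date arithmetic; the bucket boundaries and dates mirror the question):

from datetime import date

# Category boundaries in days, mirroring the question.
BUCKETS = [(30, "1-30 days"), (60, "31-60 days"), (180, "60-180 days"),
           (365, "180-365 days"), (730, "1-2 years")]

def category(start: date, period_end: date) -> str:
    """Bucket a person by the days elapsed between the project start and
    the end of the selected time-dimension member (year, half-year, ...)."""
    elapsed = (period_end - start).days
    for limit, label in BUCKETS:
        if elapsed <= limit:
            return label
    return "2+ years"

start = date(2018, 11, 15)
for period_end in (date(2018, 12, 31), date(2019, 6, 30),
                   date(2019, 12, 31), date(2020, 6, 30)):
    print(period_end, category(start, period_end))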
Does someone know how to model dynamic dimension categories based on the selection of another dimension (here, the time dimension)?
The fact table contains monthly data, and for the person mentioned above there will be 20 separate records (one for each month between November 2018 and June 2020).
For any period, a person may or may not be working on a project.
Without knowing exactly what your data and metadata look like, it is difficult to prescribe an exact solution, but the approach would probably be similar to a degenerate dimension scenario.
You would want to model the project dimension as a fact as well as a dimension. You would have relationships between it and time and whatever other dimensions you need.
Depending on the data and the metadata you might need to do some gymnastics to get there.
If the data were in a form similar to this, it would not be too difficult. This example should give you an idea of some ways to approach the problem.
Date_Key   Person_Key   Project_Key   commitment_status (the measure)
20200101   1            1             1
20200101   1            2             0
20200101   1            3             0
20200102   1            1             1
20200102   1            2             0
20200102   1            3             0
20200103   1            1             0
20200103   1            2             1
20200103   1            3             0
In the above, person 1 was working on project 1 for 2 days and was then put onto project 2 for a day. By aggregating the commitment status (done by setting the aggregate rule property), you can determine the number of days a person has been working on a project, no matter what time period you have set in your query.
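Outside Cognos, the aggregation amounts to something like the following minimal pandas sketch (the column names mirror the example above; everything else is hypothetical):

import pandas as pd

# Hypothetical fact rows matching the example above.
fact = pd.DataFrame({
    "Date_Key":          ["20200101", "20200101", "20200101",
                          "20200102", "20200102", "20200102",
                          "20200103", "20200103", "20200103"],
    "Person_Key":        [1, 1, 1, 1, 1, 1, 1, 1, 1],
    "Project_Key":       [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "commitment_status": [1, 0, 0, 1, 0, 0, 0, 1, 0],
})
fact["Date"] = pd.to_datetime(fact["Date_Key"], format="%Y%m%d")

# Summing the measure over any chosen time grain gives "days on project"
# at that grain -- here by month, but year or quarter works the same way.
days_on_project = (
    fact.groupby([fact["Date"].dt.to_period("M"), "Person_Key", "Project_Key"])
        ["commitment_status"].sum()
)
print(days_on_project)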
I have a table in Oracle that records events for a user. This user may have many events, and from these events I am calculating a reputation with a formula. My question is: what is the best approach to calculating and returning this data? Using a view and SQL, doing it in code by grabbing all the events and calculating it (the problem with this is when you have a list of users and need to calculate the reputation for all of them), or something else? I'd like to hear your thoughts.
Comments * (.1) +
Blog Posts * (.3) +
Blog Posts Ratings * (.1) +
Followers * (.1) +
Following * (.1) +
Badges * (.2) +
Connections * (.1)
= 100%
One Example
Comments:
This parameter is based on the average comments per post.
• Max: 20
• Formula: AVE(#) / max * 100 = 100%
• Example: 5 /10 * 100 = 50%
Max is the maximum number needed to earn the full percentage for that parameter. Hope that makes some sense.
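As a sanity check on the weighting, here is a minimal Python sketch of the calculation; the max values and the sample counts are made up for illustration only:

# Weights from the formula above: each parameter is normalized against its
# own max (capped at 100%) and then weighted.
WEIGHTS = {"comments": 0.1, "blog_posts": 0.3, "post_ratings": 0.1,
           "followers": 0.1, "following": 0.1, "badges": 0.2,
           "connections": 0.1}

# Hypothetical "max" per parameter (the value that earns the full share).
MAXES = {"comments": 20, "blog_posts": 10, "post_ratings": 500,
         "followers": 5000, "following": 5000, "badges": 200,
         "connections": 2}

def reputation(values: dict) -> float:
    """Return a reputation score between 0 and 100."""
    score = 0.0
    for key, weight in WEIGHTS.items():
        pct = min(values.get(key, 0) / MAXES[key], 1.0)  # normalize, cap at 100%
        score += weight * pct * 100
    return score

print(reputation({"comments": 5, "blog_posts": 10, "connections": 2}))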
We are also calculating visitation, so unique visits divided by length of membership is another parameter. The table contains an event name, some metadata, and is tied to the user. Reputation just uses those events to formulate a score with 100% as the highest.
85% reputation - Joe AuthorUser has been a member for 3 years. He has:
• written 18 blog posts
o 2 in the past month
• commented an average of 115 times per month
• 3,000 followers
• following 2,000 people
• received an average like rating of 325 per post
• he's earned, over the past 3 years:
o 100 level 1 badges
o 50 level 2 badges
• he's connected his:
o FB account
o Twitter account
As a general approach I would use PL/SQL: one package with several get_rep functions.
function calc_rep (i_comments in number, i_posts in number, i_ratings in number,
                   i_followers in number, i_following in number, i_badges in number,
                   i_connections in number) return number deterministic is
  -- combines the weighted, normalized parameters into a single score
  ...
end calc_rep;

function get_rep_for_user (i_user_id in number) return number is
  v_comments ....
begin
  select .....
  return calc_rep (v_comments...);
end get_rep_for_user;
If you've got to recalculate rep for a lot of users a lot of the time, I'd look into parallel pipelined functions (which should be a separate question). CALC_REP is declared deterministic because the same set of input numbers will always give the same result.
If the number of comments etc. is stored in a single record, then it will be simple to call. If the details need to be summarised, then use materialized views for the summaries. If they need to be gathered from multiple places, then a view can be used to encapsulate the joins.
Whether you can calculate on the fly fast enough to meet requirements depends on data volumes, database design, the complexity of the final calculation, and so on; it is unreasonable to expect a cut-and-dried approach here.
It may turn out that storing summaries for some of the calculated values helps. For example, look at the things that cause DML: if you had a user_reputation table, then a trigger on your blog_post table could increment/decrement a counter on user_reputation on insert or delete of a post. Same for comments, likes, follows, etc.
If you keep all of your summaries up to date in that manner, then the incremental costs to DML will be minor and the calculations will become simple.
Not saying that this is THE solution. Just saying that it might be worth exploring.
I'm lost here. Here's the problem and I think it's NP-hard. A center is staffed with a finite number of workers with the following conditions:
There are 3 shifts per day with 2 people in each shift
Each employee works for 5 days straight and then 2 days off with only one shift per day
So the problem is: how many workers do we need to keep the center staffed every day, and what does a feasible schedule look like?
Update:
Thanks for all the great answers. The closest I've come to (with a randomized brute-force algorithm) is the following:
X 3 0
1 0 3
2 3 1
2 1 3
0 1 2
0 2 1
3 0 2
I've simplified the problem into batches of 2 people (0-3 represent 4 batches) in the hopes of getting a feasible solution. X refers to a shift which has not been assigned (which was not the initial goal but it looks like there may not be an alternative).
The constraints cannot be respected exactly as expressed in the question.
That's because the numbers don't add up (or rather "divide up").
Consequently, the problem should be reworded to require
exactly 3 shifts per day
exactly 2 workers per shift
workers work a maximum of 5 consecutive days
workers rest a minimum of 2 consecutive days
With the introduction of the minimum and maximum qualifiers, the minimum number of workers required is 9 (again assuming no part-time worker).
Note that although 9 appears to be an absolute minimum, given the need to cover 42 shifts per week (3 * 2 * 7) with workers who can cover a maximum of 5 shifts per week (5 work days + 2 rest days = a week), there is no assurance that 9 would be sufficient given the consecutive work and/or rest day requirements.
This is how I figure...
8 workers isn't enough, and the following 9-worker line-up is an example of such a schedule.
To make things easy, I assigned all workers except #1 and #9 an optimal schedule of exactly 5 days on and 2 days off; #1 and #9 work less. Of course many other arrangements would work (maybe this is what the OP sensed when he hinted at an NP-complete problem). Also, each week's schedule is exactly the same for everyone, but that too could be changed (perhaps introducing some fairness by giving every worker a lighter week once in a while, although this can make it harder to respect the maximum of 5 consecutive work days).
The sample schedule shows two consecutive weeks to help see the consecutive work or rest days, but as said, all weeks are the same for everyone.
            Week 1    Week 2    Max consec. Ws   Min consec. Rs
Worker #1   RRWWWRW   RRWWWRW   3                2
Worker #2   WWWWWRR   WWWWWRR   5                2
Worker #3   WWWRRWW   WWWRRWW   5                2
Worker #4   WWWRRWW   WWWRRWW   5                2
Worker #5   WRRWWWW   WRRWWWW   5                2
Worker #6   WRRWWWW   WRRWWWW   5                2
Worker #7   RWWWWWR   RWWWWWR   5                2
Worker #8   RWWWWWR   RWWWWWR   5                2
Worker #9   WWRRRRW   WWRRRRW   3                3
Nb of Ws    6666666   6666666
The tally at the bottom shows exactly 6 workers per day (respecting the need to cover 3 shifts with 2 workers each); the max and min columns on the right show that the maximum consecutive work and minimum consecutive rest requirements are respected.
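For what it's worth, here is a small Python sketch (mine, not part of the original answer) that checks the daily coverage and the 5-consecutive-work-days cap of such a repeating weekly roster:

# Repeating weekly roster from the table above; 'W' = work, 'R' = rest.
ROSTER = ["RRWWWRW", "WWWWWRR", "WWWRRWW", "WWWRRWW", "WRRWWWW",
          "WRRWWWW", "RWWWWWR", "RWWWWWR", "WWRRRRW"]

def max_work_run(week: str) -> int:
    """Longest run of consecutive 'W's in the endlessly repeated week."""
    runs = [len(r) for r in (week + week).replace("R", " ").split()]
    return max(runs) if runs else 0

# Daily coverage: 6 workers needed every day (3 shifts x 2 people).
coverage_ok = all(sum(w[d] == "W" for w in ROSTER) == 6 for d in range(7))
# No worker may exceed 5 consecutive work days.
work_ok = all(max_work_run(w) <= 5 for w in ROSTER)
print("coverage ok:", coverage_ok, "| max 5 consecutive work days ok:", work_ok)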
3 shifts per day * 2 people per shift * (7 days per week / 5 working days per person) = 8.4 people (9 if part time is not an option).
3 shifts x 7 days = 21
this does not divide evenly by either 5 or 2 - so your constraints will not allow a complete filling of the slots.
OK - even though you have an answer, let me take a shot.
Let's take the general problem: 7 days x 3 shifts = 21 different shifts to fill
There are 7 possible employee schedules expressed as days on (1) & days off (0)
MTWTFSS
0011111
1001111
1100111
1110011
1111001
1111100
0111110
We want to minimize the number of scheduled employees while still matching the required staffing.
The decision variables are the (integer) numbers of employees assigned to each schedule type. My optimization model is:
Min (number of employees)
Subject to: sum of (# of emp sched * employee schedule) = staff required for each shift
and
number of employees scheduled is integer
You can change the = sign in the first constraint to a >=. Then you'll get a feasible solution with extra staff. You can solve this in Excel with the basic SOLVER addin.
Let's say I need four employees for each day on a shift but I'm willing to tolerate extra staff.
A solution using the schedules above is:
Number of staff by schedule type: 0,2,0,2,0,2,0
Schedule types 0011111,1001111,1100111,1110011,1111001,1111100,0111110
(In other words 2 with schedule 1001111, 2 with schedule 1110011, and 2 more with schedule 1111100.)
This results in one day (Monday) with two extra staff and 4 employees on all the other days.
Of course, this isn't a unique solution. There are at least 6 other solutions with two extra staff members. Constraint programming would be a better and much faster approach since there will often be many feasible schedules.
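For an instance this small, the same model can also be brute-forced; here is a minimal Python sketch (mine, not from the answer) equivalent in spirit to the Excel SOLVER setup, using the ">=" form of the constraint:

from itertools import product

# The seven rotating schedule types (1 = working that day), Monday..Sunday.
SCHEDULES = ["0011111", "1001111", "1100111", "1110011",
             "1111001", "1111100", "0111110"]
REQUIRED = 4  # staff required each day, as in the worked example

best = None
# Tiny search space, so brute force is fine: 0..REQUIRED people per type.
for counts in product(range(REQUIRED + 1), repeat=len(SCHEDULES)):
    coverage = [sum(c * int(s[d]) for c, s in zip(counts, SCHEDULES))
                for d in range(7)]
    if all(cov >= REQUIRED for cov in coverage):
        total = sum(counts)
        if best is None or total < best[0]:
            best = (total, counts, coverage)

print("staff needed:", best[0])
print("per schedule type:", best[1])
print("daily coverage:", best[2])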
Consider a sales department that sets a sales goal for each day. The total goal isn't important, but the overage or underage is. For example, if Monday of week 1 has a goal of 50 and we sell 60, that day gets a score of +10. On Tuesday, our goal is 48 and we sell 46 for a score of -2. At the end of the week, we score the week like this:
[0,0]=10,[0,1]=-2,[0,2]=1,[0,3]=7,[0,4]=6
In this example, Monday (0,0) as well as Thursday and Friday (0,3 and 0,4) are "hot".
If we look at the results from week 2, we see:
[1,0]=-4,[1,1]=2,[1,2]=-1,[1,3]=4,[1,4]=5
For week 2, the end of the week is hot, and Tuesday is warm.
Next, if we compare weeks one and two, we see that the end of the week tends to be better than the first part of the week. So, now let's add weeks 3 and 4:
[0,0]=10,[0,1]=-2,[0,2]=1,[0,3]=7,[0,4]=6
[1,0]=-4,[1,1]=2,[1,2]=-1,[1,3]=4,[1,4]=5
[2,0]=-8,[2,1]=-2,[2,2]=-1,[2,3]=2,[2,4]=3
[3,0]=2,[3,1]=3,[3,2]=4,[3,3]=7,[3,4]=9
From this, we see that the "end of the week is better" theory holds true. But we also see that the end of the month is better than the start. Of course, we would next want to compare this month with the next month, or compare a group of months for quarterly or annual results.
I'm not a math or stats guy, but I'm pretty sure there are algorithms designed for this type of problem. Since I don't have a math background (and don't remember any algebra from my earlier days), where would I look for help? Does this type of "hotspot" logic have a name? Are there formulas or algorithms that can slice and dice and compare multidimensional arrays?
Any help, pointers or advice is appreciated!
This data isn't really multidimensional, it's just a simple time series, and there are many ways to analyse it. I'd suggest you start with the Fourier transform: it detects "rhythms" in a series, so this data would show a spike at 7 days and also around thirty, and if you extended the data set to a few years it would show a one-year spike for seasons and holidays. That should keep you busy for a while, until you're ready to use real multidimensional data, say by adding in weather information, stock market data, results of recent sports events and so on.
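A minimal Python/numpy sketch of that idea, using the 20 scores from the question (note that since only the five scored weekdays are present, any weekly rhythm would appear as a 5-sample period rather than 7):

import numpy as np

# The 4 weeks x 5 days of scores from the question, flattened.
scores = np.array([10, -2, 1, 7, 6, -4, 2, -1, 4, 5,
                   -8, -2, -1, 2, 3, 2, 3, 4, 7, 9], dtype=float)

detrended = scores - scores.mean()
spectrum = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(len(detrended), d=1.0)   # cycles per scored day

# Report the strongest periods (1/frequency, in days), skipping the DC bin.
order = np.argsort(spectrum[1:])[::-1] + 1
for k in order[:3]:
    print(f"period ~{1 / freqs[k]:.1f} days, strength {spectrum[k]:.1f}")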
The following might be relevant to you: Stochastic oscillators in technical analysis, which are used to determine whether a stock has been overbought or oversold.
I'm oversimplifying here, but essentially you have two moving calculations:
14-day stochastic: 100 * (today's closing price - low of last 14 days) / (high of last 14 days - low of last 14 days)
3-day stochastic: same calculation, but relative to 3 days.
The 14-day and 3-day stochastics will tend to follow the same curve. The values fall somewhere between 0 and 100; readings above 80 are considered overbought (bearish), and readings below 20 indicate oversold (bullish). More specifically, when your 3-day stochastic "crosses" the 14-day stochastic in one of those regions, you have a predictor of the momentum of prices.
Although some people consider technical analysis to be voodoo, empirical evidence indicates that it has some predictive power. For what it's worth, a stochastic is a very easy and efficient way to visualize the momentum of prices over time.
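Here is a minimal Python sketch of the simplified oscillator described above (it uses the question's daily scores in place of closing prices, and the window's own min/max in place of separate high/low series; both are simplifying assumptions):

import numpy as np

def stochastic(closes, period):
    """%K value: where today's close sits within the last `period` days' range."""
    closes = np.asarray(closes, dtype=float)
    out = np.full(len(closes), np.nan)
    for i in range(period - 1, len(closes)):
        window = closes[i - period + 1:i + 1]
        lo, hi = window.min(), window.max()
        out[i] = 100.0 * (closes[i] - lo) / (hi - lo) if hi > lo else 50.0
    return out

series = [10, -2, 1, 7, 6, -4, 2, -1, 4, 5, -8, -2, -1, 2, 3, 2, 3, 4, 7, 9]
slow, fast = stochastic(series, 14), stochastic(series, 3)
for day, (s, f) in enumerate(zip(slow, fast)):
    if not np.isnan(s):
        print(f"day {day}: 14-day {s:5.1f}  3-day {f:5.1f}")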
It seems to me that an OLAP approach (like pivot tables in MS Excel) fits the problem perfectly.
What you want to do is quite simple - you just have to calculate the autocorrelation of your data and look at the correlogram. From the correlogram you can see 'hidden' periods of your data and then you can use this information to analyze the periods.
Here is the result - your numbers and their normalized autocorrelation.
Value   Autocorrelation
 10      1.000
 -2      0.097
  1     -0.121
  7      0.084
  6      0.098
 -4      0.154
  2     -0.082
 -1     -0.550
  4     -0.341
  5     -0.027
 -8     -0.165
 -2     -0.212
 -1     -0.555
  2     -0.426
  3     -0.279
  2      0.195
  3      0.000
  4     -0.795
  7     -1.000
  9
I used Excel to get the values: put the sequence in column A, add the formula =CORREL($A$1:$A$20;$A1:$A20) to cell B1, and copy it down through B19. If you then add a line chart, you can nicely see the structure of the data.
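The same idea in Python (a minimal numpy sketch; it computes the lag-k correlation directly rather than via shifted ranges, so the numbers won't match the table above exactly):

import numpy as np

scores = np.array([10, -2, 1, 7, 6, -4, 2, -1, 4, 5,
                   -8, -2, -1, 2, 3, 2, 3, 4, 7, 9], dtype=float)

def autocorr(x, lag):
    """Pearson correlation between the series and itself shifted by `lag`."""
    return np.corrcoef(x[:-lag] if lag else x, x[lag:])[0, 1]

for lag in range(10):
    print(f"lag {lag:2d}: {autocorr(scores, lag):+.3f}")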
You can already make reasonable guesses about the periods of the patterns - you're looking at things like weekly and monthly. To look for weekly patterns, for example, just average all the Mondays together, and so on. The same goes for days of the month and for months of the year.
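For instance, a minimal Python sketch of the "average all the Mondays" idea, using the question's 4 weeks of 5 scored days:

import numpy as np

# Scores from the question: 4 weeks x 5 scored days (Mon..Fri).
weeks = np.array([[10, -2,  1, 7, 6],
                  [-4,  2, -1, 4, 5],
                  [-8, -2, -1, 2, 3],
                  [ 2,  3,  4, 7, 9]], dtype=float)

day_names = ["Mon", "Tue", "Wed", "Thu", "Fri"]
for name, avg in zip(day_names, weeks.mean(axis=0)):
    print(f"{name}: average {avg:+.2f}")

# The same trick works for day-of-month or month-of-year averages.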
Sure, you could use a complex algorithm to find out that there's a weekly pattern, but you already know to expect that. If you think there really may be patterns buried there that you'd never suspect (there's a strange community of people who use a 5-day week and frequent your business), by all means, use a strong tool -- but if you know what kinds of things to look for, there's really no need.
Daniel has the right idea when he suggested correlation, but I don't think autocorrelation is what you want. Instead I would suggest correlating each week with each other week. Peaks in your correlation - that is, values close to 1 - suggest that the values of the weeks resemble each other (i.e. are periodic) for that particular shift.
For example, when you cross-correlate
0 0 1 2 0 0
with
0 0 0 1 1 0
the result would be
2 0 0 0 1 3
the highest value is 3, which corresponds to circularly shifting the second array right by 5 (equivalently, left by 1):
0 0 0 1 1 0 --> 0 0 1 1 0 0
and then multiplying component-wise:
0 0 1 2 0 0
0 0 1 1 0 0
----------------------
0 + 0 + 1 + 2 + 0 + 0 = 3
Note that you can also create your own "fake" week and cross-correlate it with all your real weeks; the idea is that you are looking for "shapes" in your weekly values that correspond to the shape of your fake week, by looking for peaks in the correlation result.
So if you are interested in finding weeks that are hot near the end of the week, you could use the "fake" week
-1 -1 -1 -1 1 1
and if you get a high response in the first value of the correlation, it means that the real week you correlated with has roughly this shape.
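A minimal sketch of the circular cross-correlation described above (plain numpy; the "fake week" template is the one from this answer, and the sample real week is hypothetical):

import numpy as np

def circular_xcorr(a, b):
    """Dot product of a with b circularly shifted right by k, for each k."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.array([np.dot(a, np.roll(b, k)) for k in range(len(b))])

week_a = [0, 0, 1, 2, 0, 0]
week_b = [0, 0, 0, 1, 1, 0]
print(circular_xcorr(week_a, week_b))        # the peak marks the best alignment

# "Fake week" template: penalize early-week scores, reward late-week ones.
template = [-1, -1, -1, -1, 1, 1]
real_week = [1, -2, 0, 3, 7, 6]              # hypothetical week of scores
print(circular_xcorr(real_week, template))   # first value is the unshifted match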
This is probably beyond the scope of what you're looking for, but one technical approach that would give you the ability to do forecasting, look at things like statistical significance, etc., would be ARIMA or similar Box-Jenkins models.
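If you want to try that route, here is a minimal sketch with statsmodels (assuming the library is installed; the (1, 0, 1) order is just a placeholder, a real model would be chosen by examining the ACF/PACF or by information criteria):

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# The 20 scores from the question as a short time series.
scores = np.array([10, -2, 1, 7, 6, -4, 2, -1, 4, 5,
                   -8, -2, -1, 2, 3, 2, 3, 4, 7, 9], dtype=float)

model = ARIMA(scores, order=(1, 0, 1)).fit()
print(model.summary())
print(model.forecast(steps=5))  # forecast the next five scored days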