How to decide what a probability percentage refers to in this question - probability

I have the below question:
In the first part of the question, it says the probability that the selected person will be a male is 0.44, which means the number of males is 25 * 0.44 = 11. That's fine.
In the second part, the probability that the selected person will be a male who was born before 1960 is 0.28. Does that mean 0.28 out of the total number, which is 25, or out of the number of males?
In other words, should the number of males born before 1960 equal 25 * 0.28 or 11 * 0.28?

I find it easiest to think of these sorts of problems as contingency tables.
You use a matrix layout to express the distributions in terms of two or more factors or characteristics, each having two or more categories. The table can be constructed either with probabilities (proportions) or with counts, and switching back and forth is easy based on the total count in the table. Entries in the table are the intersections of the categories, corresponding to "and" in a verbal description. The numbers to the right or at the bottom of the table are called marginals, because they're found in the margins of the table, and are always the sum of the table row or column entries in which they occur. The total probability (or count) in the table is found by summing across all the rows and columns. The marginal distribution of gender is found by summing along each row, and the marginal distribution of birthdays by summing down each column.
Based on this, you can inferentially determine other values as indicated by the entries in parentheses below. With one more entry, either for gender or in the marginal row for birthdays, you'd be able to fill in the whole table inferentially. (This is related to the concept of degrees of freedom - how many pieces of info can you fill in independently before the others are determined by the known constraint that the totals are fixed or that probability adds to 1.)
Probabilities (rows = Gender, columns = Birthday; inferred entries in parentheses)

| Gender | < 1960 | >= 1960 | Total  |
|--------|--------|---------|--------|
| F      |   ?    |    ?    | (0.56) |
| M      |  0.28  |  (0.16) |  0.44  |
| Total  |   ?    |    ?    |  1.00  |

Counts (rows = Gender, columns = Birthday; inferred entries in parentheses)

| Gender | < 1960 | >= 1960 | Total |
|--------|--------|---------|-------|
| F      |   ?    |    ?    | (14)  |
| M      |   7    |   (4)   |  11   |
| Total  |   ?    |    ?    |  25   |
Conditional probability corresponds to limiting yourself to the subset of rows or columns specified in the condition. If you had been asked what is the probability of a birthday < 1960 given the gender is male, i.e., P{birthday < 1960 | M} in relatively standard notation, you'd be restricting your focus to just the M row, so the answer would be 7/11 = 0.28/0.44. Computationally, you take the probabilities or counts in the qualifying table entries and express them as a proportion of the probabilities or counts of the specified (given) marginal entries. This is often written in prob & stats texts as P(A|B) = P(AB)/P(B), where AB is a set shorthand for A and B (intersection).

0.44 = 11/25 of the people are male.
0.28 = 7/25 of the people are male and born before 1960.
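
To make the table arithmetic concrete, here is a minimal Python sketch (my own illustration, not part of the original answers) that fills in the remaining cells from the two given probabilities and the total count, and computes the conditional probability P(birthday < 1960 | male):

total = 25
p_male = 0.44            # given marginal probability
p_male_pre1960 = 0.28    # given joint probability

# Fill in the remaining cells inferentially
p_male_post1960 = p_male - p_male_pre1960        # 0.16
p_female = 1 - p_male                            # 0.56

# Convert probabilities to counts using the table total
n_male = round(p_male * total)                   # 11
n_male_pre1960 = round(p_male_pre1960 * total)   # 7
n_male_post1960 = n_male - n_male_pre1960        # 4
n_female = total - n_male                        # 14

# Conditional probability: restrict attention to the male row
p_pre1960_given_male = p_male_pre1960 / p_male   # 7/11, about 0.64

print(n_male, n_male_pre1960, n_male_post1960, n_female)   # 11 7 4 14
print(round(p_pre1960_given_male, 2))                      # 0.64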

Related

Communicate estimates from GEE or linear mixed models to a general audience

I want to transform the estimates from a GEE model into estimates that are easy to interpret. I am analyzing, in Stata v17 (for Mac), the data from a therapeutic intervention in a pilot study with 26 individuals, randomized to either the treatment or a placebo. The outcome is a set of inflammatory proteins whose expression has been normalized on an arbitrary log2 scale. There are longitudinal measurements at weeks 0, 1, 2 and 3.
To evaluate the impact of treatment on each protein, I have used GEE models. For each protein, the code looks like this:
xtgee log2_biomarker treatment##c.week, family(gaussian) link(identity) corr(ar 1)
And the model output
GEE population-averaged model Number of obs = 104
Group and time vars: id week Number of groups = 26
Family: Gaussian Obs per group:
Link: Identity min = 4
Correlation: AR(1) avg = 4.0
max = 4
Wald chi2(3) = 11.38
Scale parameter = 1.018093 Prob > chi2 = 0.0098
----------------------------------------------------------------------------------
log2_biomarker | Coefficient Std. err. z P>|z| [95% conf. interval]
-----------------+----------------------------------------------------------------
treatment |
Placebo | .1010699 .379534 0.27 0.790 -.6428031 .8449428
week | -.1841974 .125257 -1.47 0.141 -.4296967 .0613018
|
treatment#c.week |
Placebo | .3919614 .1771402 2.21 0.027 .044773 .7391497
|
_cons | .063295 .268371 0.24 0.814 -.4627026 .5892926
----------------------------------------------------------------------------------
The interaction term "treatment#c.week" indicates that this protein increases over time in the placebo arm relative to the treatment arm. In order to put it in context with the estimates from models for the other proteins, I would like to translate this 0.39 coefficient into something like this:
"Subjects in the placebo arm experience a X % (or X-fold) greater protein increase per week".
But, having a log2 transformed outcome, I am struggling to come up with the correct formula.
Thanks!
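
One way to approach the back-transformation (a sketch of the usual reasoning, not an answer from the thread): because the outcome is on a log2 scale, an additive difference in the weekly slope corresponds to a multiplicative fold change per week on the original scale, i.e. 2 raised to the coefficient.

# Back-transforming a log2-scale slope difference into a fold change per week
# (illustrative only; the coefficient is taken from the Stata output above)
beta_interaction = 0.3919614                          # placebo # week interaction, log2 scale

fold_change_per_week = 2 ** beta_interaction          # about 1.31-fold
percent_per_week = (fold_change_per_week - 1) * 100   # about 31%

print(f"{fold_change_per_week:.2f}-fold, i.e. {percent_per_week:.0f}% greater increase per week")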

Normalize 5-star rating to make rating more uniform

I have a system where people rate different items on a scale from 0-5. The issue is, not everyone rates the same items, and the scoring is not objective. The goal is to achieve a fair comparison between items so that an item's score is not affected too much if one of the scorers is very "lenient" or "harsh." In actuality, there may be 100 items, each one scored twice, but here is an example dataset where 4 people scored 12 items, each one being scored twice:
| Item | Score 1 (scorer) | Score 2 (scorer) |
|------|------------------|------------------|
| 1    | 5 (A)            | 4 (C)            |
| 2    | 5 (A)            | 3 (C)            |
| 3    | 4 (A)            | 2 (C)            |
| 4    | 5 (A)            | 5 (D)            |
| 5    | 3 (A)            | 0 (D)            |
| 6    | 5 (A)            | 3 (D)            |
| 7    | 3 (B)            | 1 (D)            |
| 8    | 4 (B)            | 1 (D)            |
| 9    | 4 (B)            | 2 (D)            |
| 10   | 4 (B)            | 3 (C)            |
| 11   | 4 (B)            | 3 (C)            |
| 12   | 5 (B)            | 4 (C)            |
In this table, the letter next to each score indicates who gave it: person A gave score 1 to items 1-6, person B gave score 1 to items 7-12, person C gave score 2 to items 1-3 and 10-12, and person D gave score 2 to items 4-9.
Informally, if we assume person C was the closest to each item's objective score, we might reason as follows:
Person A generally gave higher scores than C on items 1-3 so he is "lenient."
D gave low scores to all of them except for item 4 which then must have been truly good. He gave scores generally lower than A, so his scores should be adjusted slightly upwards perhaps.
B gave higher scores than D, and a bit higher than C, so a bit "lenient".
Thus, we might produce adjusted scores for each item. For example, even though item 2 has a higher average score than item 9, they are probably on par considering A is generally lenient and D is generally harsh. The question is: how do we do this programmatically? I thought we might make several transformation functions which transform a raw score into an adjusted score, say A, B, C, and D. For example, we might have A(5)=3.7 because when A rates an item as 5, it is really in the 3-4 range. Then, we want to minimize
|A(x_0a)-C(x_0c)|^2 + |D(x_1d)-A(x_1a)|^2 + |B(x_2b)-D(x_2d)|^2 + |C(x_3c)-B(x_3b)|^2
where x_ip is a vector consisting of person p's ratings for items 3i+1, 3i+2, and 3i+3. We might make A, B, C, and D linear transformations, for example. How then do you optimize it? And is this the best way to eliminate the harshness or leniency of scorers without throwing away their ratings?
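
One possible way to set this up programmatically (my own sketch, not from the post; it assumes the scorer assignments shown in the table above): give each scorer a linear adjustment score -> a*score + b, minimize the squared disagreement between the two adjusted scores of every item, and anchor one scorer (here C, assumed closest to the objective scores) to the identity map so that the trivial solution of mapping everything to a constant is ruled out.

import numpy as np
from scipy.optimize import minimize

# (item, scorer 1, score 1, scorer 2, score 2) from the example table
scores = [
    (1, "A", 5, "C", 4), (2, "A", 5, "C", 3), (3, "A", 4, "C", 2),
    (4, "A", 5, "D", 5), (5, "A", 3, "D", 0), (6, "A", 5, "D", 3),
    (7, "B", 3, "D", 1), (8, "B", 4, "D", 1), (9, "B", 4, "D", 2),
    (10, "B", 4, "C", 3), (11, "B", 4, "C", 3), (12, "B", 5, "C", 4),
]
scorers = ["A", "B", "C", "D"]

def unpack(params):
    # Scorer p's adjustment is: adjusted = a_p * raw + b_p
    return {p: (params[2 * i], params[2 * i + 1]) for i, p in enumerate(scorers)}

def objective(params):
    maps = unpack(params)
    disagreement = sum(
        ((maps[p1][0] * s1 + maps[p1][1]) - (maps[p2][0] * s2 + maps[p2][1])) ** 2
        for _, p1, s1, p2, s2 in scores
    )
    # Penalty anchoring C to the identity map (assumption: C is the reference scorer)
    a_c, b_c = maps["C"]
    return disagreement + 100.0 * ((a_c - 1.0) ** 2 + b_c ** 2)

result = minimize(objective, np.array([1.0, 0.0] * len(scorers)), method="Nelder-Mead")
maps = unpack(result.x)

# Adjusted score for each item = average of its two adjusted raw scores
for item, p1, s1, p2, s2 in scores:
    adjusted = ((maps[p1][0] * s1 + maps[p1][1]) + (maps[p2][0] * s2 + maps[p2][1])) / 2
    print(item, round(adjusted, 2))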

How do I calculate the Confusion Matrix?

This is the WEKA output that I was able to generate. Unfortunately, I do not know how to calculate the confusion matrix. Could someone help me calculate it?
=== Classifier model (full training set) ===
J48 pruned tree
-----------------
plas <= 127: negative (485.0/94.0)
plas > 127
| mass <= 29.9
| | plas <= 145: negative (41.0/6.0)
| | plas > 145
| | | age <= 25: negative (4.0)
| | | age > 25
| | | | age <= 61: positive (27.0/9.0)
| | | | age > 61: negative (4.0)
| mass > 29.9
| | plas <= 157
| | | age <= 30: negative (50.0/23.0)
| | | age > 30: positive (65.0/18.0)
| | plas > 157: positive (92.0/12.0)
Number of Leaves : 8
Size of the tree : 15
a. Use the WEKA output to construct a confusion matrix. (Hint: look at each leaf node to determine how many instances fall into each of the four quadrants; and aggregate results of all leaf nodes to obtain the final counts)
TP=?
FP=?
FN=?
TN=?
b. In medical diagnosis, three metrics are commonly used: sensitivity, specificity and diagnosis accuracy. Sensitivity is defined as TP/(TP+FN) ; Specificity is defined as TN/(FP+TN); Diagnosis Accuracy is defined as the average of Sensitivity and Specificity. Calculate the Diagnosis Accuracy based on the confusion matrix above.
If someone could help me with this, I would greatly appreciate it. Thank you!
In the "Classify" Panel, click on "More Options", Click on "Output Confusion matrix", click OK.
I have added a screenshot of the respective GUI screens and dialog boxes. In the sccreenshot "More options..." button (1) is greyed out because I have already clicked it.
Here, to fill in the required table, you have to understand the tree and the figures at each of its leaves.
The root node of the tree is 'plas'. It has two children. All input cases where 'plas' is less than or equal to 127 fall to the first child, whereas all cases where 'plas' is greater than 127 fall to the second. 'Negative' at the leaf of the first child indicates that the cases which fall there are all classified as negative. The figure 485 in parentheses denotes the number of input cases with 'plas' less than or equal to 127, and 94 denotes that, out of these 485 cases, 94 are misclassified (they are classified as negative but are actually positive). The same reasoning applies to the rest of the tree. So,
TP=145
FP=39
TN=461
FN=123
Hope this helps. Comment if anything seems doubtful.
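The aggregation behind those numbers can be written out explicitly. Here is a minimal Python sketch (my own illustration) that sums the (total/misclassified) pair at each leaf of the tree above and also computes the part (b) metrics:

# Each leaf: (predicted class, total instances at the leaf, misclassified instances)
leaves = [
    ("negative", 485, 94),   # plas <= 127
    ("negative", 41, 6),     # plas > 127, mass <= 29.9, plas <= 145
    ("negative", 4, 0),      # ..., plas > 145, age <= 25
    ("positive", 27, 9),     # ..., 25 < age <= 61
    ("negative", 4, 0),      # ..., age > 61
    ("negative", 50, 23),    # plas > 127, mass > 29.9, plas <= 157, age <= 30
    ("positive", 65, 18),    # ..., age > 30
    ("positive", 92, 12),    # plas > 157
]

TP = FP = TN = FN = 0
for predicted, total, wrong in leaves:
    correct = total - wrong
    if predicted == "positive":
        TP += correct   # predicted positive, actually positive
        FP += wrong     # predicted positive, actually negative
    else:
        TN += correct   # predicted negative, actually negative
        FN += wrong     # predicted negative, actually positive

sensitivity = TP / (TP + FN)                    # 145 / 268
specificity = TN / (FP + TN)                    # 461 / 500
diagnostic_accuracy = (sensitivity + specificity) / 2

print(TP, FP, TN, FN)                           # 145 39 461 123
print(round(sensitivity, 3), round(specificity, 3), round(diagnostic_accuracy, 3))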

Determine max slope of slowly descending signal

I have an analog power signal from a motor. The signal ramps up quickly but powers off slowly over the course of several seconds. The signal looks almost like a series of plateaus on the descent. The problem is that the signal doesn't settle back to zero; it settles back to an intermediate level that is unknown and varies from motor to motor. See chart below.
I'm trying to find a way to determine when the motor is off and at that intermediate level.
My thought is to find and store the max point, then calculate the slopes thereafter until a slope is steeper than some large negative value like -160 (~ -60 degrees), and declare that the motor must be powering off. The sample points below have all duplicates removed (there are about 5000 samples typically).
My problem is determining the X values. In the formula (y2 - y1) / (x2 - x1), the x values could be far enough apart in time that the slope never appears steeper than -30 degrees. Picking an absolute number like 10 would fix this, but is there a more mathematically correct method?
The data shows me calculating the slope with the method described above and the max of 921, i.e. (y2 - y1) / ((10+1) - 10). In this scheme, at data point 9, I would say the motor is "Off". I'm looking for a more precise means of determining an X value rather than arbitrarily picking 10, for instance.
+---+-----+----------+
| X | Y | Slope |
+---+-----+----------+
| 1 | 65 | 856.000 |
| 2 | 58 | 863.000 |
| 3 | 57 | 864.000 |
| 4 | 638 | 283.000 |
| 5 | 921 | 0.000 |
| 6 | 839 | -82.000 |
| 7 | 838 | -83.000 |
| 8 | 811 | -110.000 |
| 9 | 724 | -197.000 |
+---+-----+----------+
EDIT: A much simpler answer:
Since your motor is either ON or OFF, and ON wattages are strictly higher than OFF wattages, you should be able to discriminate between ON and OFF wattages by maintaining an average wattage, reporting ON if the current measurement is higher than the average and OFF if it is lower.
Count = 0
Average = 500
Whenever a measurement comes in,
Count = Count + 1
Average = Average + (Measurement - Average) / Count
Return Measurement > Average ? ON : OFF
This maintains an average of all the values the wattage has ever taken. If we want to eventually "forget" the earliest values (before the motor was ever turned on), we could either keep a buffer of recent values and use that for a moving average, or approximate a moving average with an IIR filter like
Average = (1-X) * Average + X * Measurement
for some X between 0 and 1 (closer to 0 to change more slowly).
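A runnable Python version of the two schemes above (my own sketch; the class name, the 500 W starting average, and the example wattages are assumptions):

class OnOffDetector:
    def __init__(self, initial_average=500.0, alpha=None):
        self.average = initial_average
        self.count = 0
        self.alpha = alpha   # None = running average of everything; 0..1 = IIR / moving-average approximation

    def update(self, measurement):
        self.count += 1
        if self.alpha is None:
            # Running average of every value seen so far
            self.average += (measurement - self.average) / self.count
        else:
            # IIR approximation of a moving average; smaller alpha changes more slowly
            self.average = (1 - self.alpha) * self.average + self.alpha * measurement
        return "ON" if measurement > self.average else "OFF"

detector = OnOffDetector(alpha=0.1)
for watts in [65, 58, 57, 638, 921, 839, 838, 811, 724]:   # Y values from the question's table
    print(watts, detector.update(watts))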
Original answer:
You could treat this as an online clustering problem, where you expect three clusters (before the motor turns on, when the motor is on, and when the motor is turned off), or perhaps four (before the motor turns on, peak power, when the motor is running normally, and when the motor turns off). In effect, you're trying to learn what it looks like when a motor is on (or off).
If you don't have any other information about whether the motor is on or off (which could be used to train a model), here's a simple approach:
Define an "Estimate" to contain:
float Value
int Count
Define an "Estimator" to contain:
float TotalError = 0.0
Estimate COLD_OFF = {Value = 0, Count = 1}
Estimate ON = {Value = 1000, Count = 1}
Estimate WARM_OFF = {Value = 500, Count = 1}
a function Update_Estimate(float Measurement)
Find the Estimate E such that E.Value is closest to Measurement
Update TotalError = TotalError + (E.Value - Measurement)*(E.Value - Measurement)
Update E.Value = (E.Value * E.Count + P) / (E.Count + 1)
Update E.Count = E.Count + 1
return E
This takes initial guesses for what the wattages of these stages should be and updates them with the measurements. However, this has some problems. What if our initial guesses are off?
You could initialize some number of Estimators with different possible (e.g. random) guesses for COLD_OFF, ON, and WARM_OFF; after receiving a measurement, let each Estimator update itself and aggregate their values somehow. This aggregation should reward the better estimates. Since you're storing TotalError for each estimate, you could just pick the output of the Estimator that has the lowest TotalError so far, or you could let the Estimators vote (giving each Estimator's vote a weight proportional to 1/(TotalError + 1) or something like that).
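A possible Python rendering of this scheme, including the lowest-total-error selection just described (my own sketch; the random initialization ranges are assumptions):

import random

class Estimator:
    def __init__(self, cold_off, on, warm_off):
        # Each estimate is [value, count]
        self.estimates = {"COLD_OFF": [cold_off, 1], "ON": [on, 1], "WARM_OFF": [warm_off, 1]}
        self.total_error = 0.0

    def update(self, measurement):
        # Find the estimate whose value is closest to the measurement and pull it toward the measurement
        label = min(self.estimates, key=lambda k: abs(self.estimates[k][0] - measurement))
        value, count = self.estimates[label]
        self.total_error += (value - measurement) ** 2
        self.estimates[label] = [(value * count + measurement) / (count + 1), count + 1]
        return label

# Several estimators with different random initial guesses; the one with the
# lowest running error so far decides the label (weighted voting would also work).
estimators = [Estimator(random.uniform(0, 100), random.uniform(700, 1100), random.uniform(300, 700))
              for _ in range(20)]

for watts in [65, 58, 57, 638, 921, 839, 838, 811, 724]:
    labels = [e.update(watts) for e in estimators]
    best = min(range(len(estimators)), key=lambda i: estimators[i].total_error)
    print(watts, labels[best])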

How to randomize across categories while holding the mean constant?

I am looking for some conceptual input, detached from any specific platform/software, on the following problem:
Let R be an Nx2 matrix with the first column denoting the object ID and the second column the category (e.g. from 1 to 10).
ID | Category
1 | 1
2 | 1
3 | 1
4 | 2
5 | 2
6 | 3
7 | 3
8 | 3
9 | 3
. | .
. | .
Further, assume we have a matrix C which assigns a number to each category, e.g.:
Category | Number
1 | 0.5
2 | 0.2
3 | 0.9
. | .
. | .
So for each object in matrix R a number can be mapped according to matrix C (e.g. for ID=1 with category=1, the number according to matrix C is 0.5).
The goal now is to create an algorithm which randomizes the objects across a pre-specified category range while holding constant the overall average of the numbers mapped to the corresponding categories.
E.g. assume that the category range is defined as 2, meaning that each object from category 1 can either stay in category 1, randomly be shifted to category 2, or even be shifted up to category 3. Similarly, an object from category 3 with a selected category range of 1 can either be moved down to category 2, stay at category 3, or move up to category 4. If an object is shifted to another category, it gets assigned a new number according to matrix C, which impacts the overall average across the column of numbers.
However, all shifts have to be executed on a purely random basis, with the additional constraint that the average across the number column after the randomization is equal to the one from the beginning.
Any input would be greatly appreciated.
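
One conceptual sketch of such an algorithm (my own, not from the question; the data below is a made-up stand-in for R and C): repeatedly propose random within-range moves for pairs of objects and accept a pair only if it leaves the sum of the mapped numbers, and hence the average, unchanged.

import random

R = {1: 1, 2: 1, 3: 1, 4: 2, 5: 2, 6: 3, 7: 3, 8: 3, 9: 3}   # ID -> category (example)
C = {1: 0.5, 2: 0.2, 3: 0.9, 4: 0.4}                          # category -> number (example)
category_range = 1
min_cat, max_cat = min(C), max(C)

def randomize_preserving_mean(R, C, category_range, iterations=10000, tol=1e-9):
    assignment = dict(R)
    target_sum = sum(C[c] for c in assignment.values())
    for _ in range(iterations):
        # Propose new categories, within the allowed range, for two randomly chosen objects
        i, j = random.sample(list(assignment), 2)
        proposal = {}
        for obj in (i, j):
            lo = max(min_cat, R[obj] - category_range)   # range is relative to the original category
            hi = min(max_cat, R[obj] + category_range)
            proposal[obj] = random.randint(lo, hi)
        new_sum = (sum(C[c] for c in assignment.values())
                   - C[assignment[i]] - C[assignment[j]]
                   + C[proposal[i]] + C[proposal[j]])
        # Accept the paired move only if the overall sum (hence the mean) is preserved
        if abs(new_sum - target_sum) < tol:
            assignment.update(proposal)
    return assignment

print(randomize_preserving_mean(R, C, category_range))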
