Highlighting minimum row value in Pander - rstudio

I am trying to display a dataframe in an RMarkdown document using the Pander package.
I would like to highlight the minimum value in each row of values. Here's what I have tried:
df <- replicate(4, rnorm(5))
df <- as.data.frame(df)
df$min <- apply(df, 1, min)
emphasize.strong.cells(which(df == df$min, arr.ind = T))
pander(df[1:4])
When I do this I get the error:
Error in check.highlight.parameters(emphasize.strong.cells, nrow(t), ncol(t)) :
Too high number passed for column indexes that should be kept below 6
I can print out the whole table (with the min column) without any trouble or I can print out a partial table without emphasis, but neither of these is ideal. I want the highlighting, but I do not wish to include the 'min' column.
I imagine the fact that I am leaving some highlighted cells out of the pander command is causing the error.
Is there a way around this? Or a better way to do this?
Thanks.
Subquestion: What if I wanted to highlight the minimum in the first few rows and the maximum in the next few? Is that possible in a single table?

Instead of the which lookup, which risks matching a row minimum in the wrong row, you can construct those array indices directly from a simple sequence (1:N) and a call to which.min on each row, e.g. with apply:
> df <- replicate(4, rnorm(5))
> df <- as.data.frame(df)
> emphasize.strong.cells(cbind(1:nrow(df), apply(df, 1, which.min)))
> pander(df)
----------------------------------------------
V1 V2 V3 V4
----------- ----------- ----------- ----------
0.6802 0.1409 **-0.7992** 0.1997
0.6797 **-0.2212** 1.016 0.6874
2.031 -0.009855 0.3881 **-1.275**
1.376 0.2619 **-2.337** -0.1066
**-0.4541** 1.135 -0.1566 0.2912
----------------------------------------------
About your next question: you could of course do that in a single table, e.g. by rbinding two index matrices built as described above with which.min and which.max; see the sketch below.
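For example, a minimal sketch (the split into "min" rows and "max" rows below is arbitrary, purely for illustration):
library(pander)
df <- as.data.frame(replicate(4, rnorm(6)))
# hypothetical split: minima in the first three rows, maxima in the rest
min_rows <- 1:3
max_rows <- 4:nrow(df)
idx <- rbind(
  cbind(min_rows, apply(df[min_rows, ], 1, which.min)),
  cbind(max_rows, apply(df[max_rows, ], 1, which.max))
)
emphasize.strong.cells(idx)
pander(df)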

Related

In Visual FoxPro, how does one incorporate a SUM REST command into a SCAN loop?

I am trying to complete a mortality table, using loops in Visual FoxPro. I have run into one difficulty where the math operation involves doing a sum of all data in a column for the remaining rows - this needs to be incorporated into a loop. The strategy I thought would work, nesting a SUM REST function into the SCAN REST function, was not successful, and I haven't found a good alternative approach.
In FoxPro, I can successfully use the SCAN function as follows, say:
Go 1
Replace survivors WITH 1000000
SCATTER NAME oprev
SKIP
SCAN rest
replace survivors WITH (1 - oprev.prob) * oprev.survivors
SCATTER NAME oprev
ENDSCAN
(to take the mortality rates in a table and use it to compute number of survivors at each age)
Or, say:
Replace Yearslived WITH 0
SCATTER NAME oprev1
SKIP
SCAN rest
replace Yearslived WITH (oprev1.survivors + survivors) * 0.5
SCATTER NAME oprev1
ENDSCAN
In order to complete a mortality table I want to use the Yearslived and survivors data (which were produced using the SCANs above) to get life expectancy data as follows. Say we have the simplified table:
SURVIVORS YEARSLIVED LIFEEXP
100 0 ?
80 90 ?
60 70 ?
40 50 ?
20 30 ?
0 10 ?
Then each LIFEEXP record should be the sum of the remaining YEARSLIVED records divided by the corresponding SURVIVORS record, i.e.:
LIFEEXP (1) = (90+70+50+30+10)/100
LIFEEXP (2) = (70+50+30+10)/80
...and so on.
I attempted to do this with a similar SCAN approach - see below:
Go 1
SCATTER NAME Oprev2
SCAN rest
replace lifeexp WITH ((SUM yearslived Rest) - oprev2.yearslived) / oprev2.survivors
SCATTER NAME oprev2
ENDSCAN
But here I get the error message "Function name is missing)." Help tells me this is probably because the function contains too many arguments.
So I then also tried to break things down and first use SCAN just to get all of my SUM REST data, as follows:
SCAN rest
SUM yearslived REST
ENDSCAN
... in the hope that I could get this data, define it as a variable, and create a simpler SCAN function above. However, I seem to be doing something wrong here as well, as instead of getting all necessary sums (first the sum of rows 2 to end, then 3 to end, etc.), I only get one sum, of all the yearslived data. In other words, using the sample data, I am given just 250, instead of the list 250, 160, 90, 40, 10.
What am I doing wrong? And more generally, how can I create a loop in Foxpro that includes a function where you Sum up all remaining data in a specific column over and over again (first 2nd through last record, then 3rd through last record, and so on)?
Any help will be much appreciated!
TM
Well, you are really hiding the important details: your table's structure, sample data, and desired output. Without those, an answer is mostly guesswork.
You seem to be trying to do something like this:
Create Cursor Mortality (Survivors i, YearsLived i, LifeExp b)
Local ix, Survivors
For ix=100 To 0 Step -20
    Insert Into Mortality (Survivors, YearsLived) Values (m.ix, 0)
Endfor
Locate
Survivors = Mortality.Survivors
Skip
Scan Rest
    Replace YearsLived With (m.Survivors + Mortality.Survivors) * 0.5
    Survivors = Mortality.Survivors
Endscan

*** Here is the part that deals with your sum problem
Local nRecNo, nSum
Scan
    * Save current record number
    nRecNo = Recno()
    Skip
    * Sum REST after skipping to the next row
    Sum YearsLived Rest To nSum
    * Position back to the row where we started
    Go m.nRecNo
    * Do the replacement
    Replace LifeExp With Iif(Survivors=0, 0, m.nSum/Survivors)
    * ENDSCAN implicitly moves to the next record
Endscan
* We are done. Go to the first record and browse
Locate
Browse
While there are N ways to do this in VFP, this is one xBase approach, and it is relatively simple to understand IMHO.
Where did you go wrong?
Well, you tried to use SUM as if it were a function, but it is a command. There is a SUM() aggregate function in SQL, but here you are using the xBase SUM command.
EDIT: And BTW in this code:
SCAN rest
SUM yearslived REST
ENDSCAN
What you are doing is starting a SCAN with a scope of REST; inside the loop you are using another scoped command:
SUM yearslived REST
This effectively does the summing over the REST of the records and leaves the record pointer at the bottom of the table. ENDSCAN then advances it to EOF(), so the loop body only runs for the first record.
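Just to make the target arithmetic concrete - this is R, not FoxPro, and only a cross-check of the "sum the remaining rows" logic using the sample table from the question:
survivors  <- c(100, 80, 60, 40, 20, 0)
yearslived <- c(0, 90, 70, 50, 30, 10)
# sum of YEARSLIVED over the rows below the current one
remaining <- rev(cumsum(rev(yearslived))) - yearslived   # 250 160 90 40 10 0
lifeexp <- ifelse(survivors == 0, 0, remaining / survivors)
lifeexp
# 2.5 2.0 1.5 1.0 0.5 0.0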

Most common "denominators" in a two column list in Google Sheets

How can I find the most commonly found 'Code' (Col B) associated with each unique 'Name' (Col A), and find the closest value if the 'Code' in Col B is unique?
The image below shows the shared Google Sheet with the starting data in columns A & B and the desired output in columns C and D. Each unique Name has associated codes. Column D displays the most commonly occurring Code for each unique name. For example, Buick La Sabre 1 has 3 associated codes in B3, B4, B5, but D3 shows only 98761 because it appears more frequently in B2:B than the other 2 codes do. I will explain what I mean by the closest value below.
The Codes that have a count = 1 are unique so the output in column D tries to find the closest match.
However, when the count of the code in B2:B > 1, then the output in column D = to the most frequent code associated with the Name.
Approach when there are 2 or more of the same values in column B
Query
I thought I might use a QUERY with an ORDER BY count(B) DESC LIMIT 2 clause, in a fashion similar to this working formula:
QUERY($A$1:$D$25,"SELECT A, B ORDER BY B DESC Limit 2",1)
but I could not get it to work when I substituted in the Count function.
SORT & INDEX OR VLOOKUP
If the query function can't be fixed to work, then I thought another approach might be to combine a Vlookup/Index after sorting column B in a descending order.
UNIQUE(sort($B$3:$B,if(len($B$3:$B),countif($B$3:$B,$B$3:$B),),0,1,1))
Since a VLOOKUP or INDEX using multiple criteria just pulls the first value it finds, after sorting by frequency in descending order the first matching value it finds would be the most frequent one.
Approach when there are < 2 of the same values in column B
This is a little more complicated since the values can be numbers and letters.
A solution like that seen in the image below could be used if everything were a number. In our case the codes will usually be 3 - 5 character alphanumeric codes starting with 0 - 1 letters and followed by numbers. I'm not sure what the best way to match a code like A1234 would be. I imagine a solution might be to SPLIT off the letters and try to match those first. For example, A1234 would be split into A | 1234, then matching the closest letter and then the closest number. But I really am not sure what the best solution to this might be that works within the constraints of Google Sheets.
In the event that a number is equidistant between two numbers, the lower number should be chosen. For example, if 8 is the number and the closest match would be 6 or 10, then 6 should be selected.
In the event that a letter is being used it should work in a similar fashion. For example, thinking of {A, B, C} as {1, 2, 3}, B should preferentially match to A since it comes before C.
In summary, I am looking for a way to find the most frequently associated code in col B that is associated with unique names in col A in this sheet; and, in the event where there are no repeated codes in B2:B, a formula that will find the closest match for a number or alphanumeric code.
You can use this formula:
=QUERY({range of numerators & denominators}, "select Col2, count(Col2) group by Col2 label Col2 'Denominator', count(Col2) 'Count'")
That outputs something like this:
Denominator   Count
Den 1         Count 1
Den 2         Count 2
use:
=ARRAY_CONSTRAIN(SORTN(QUERY({A3:B},
"select Col1,Col2,count(Col2)
where Col1 is not null
group by Col1,Col2
order by count(Col2) desc,Col2 asc
label count(Col2)''"), 9^9, 2, 1, 1), 9^9, 2)
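Roughly how this works: the QUERY groups by Name and Code and counts each pair, ordering by the count in descending order; SORTN (with 9^9 as the row limit, ties mode 2, and column 1 as the sort column) then keeps only the first row for each duplicate value in column 1, which, thanks to the prior ordering, is the most frequent code per name; ARRAY_CONSTRAIN finally trims the result back to the first two columns, dropping the count.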

COUNTIF over a moving window

I have a column wherein datapoints have been assigned a "1" or a "2". I would like to use a function similar to COUNTIF in Excel, but over a moving window, e.g. =COUNTIF(G2:G31, 2), to determine how many "2"s exist in that given window.
You might be able to use tibbletime.
1) Since you are interested in state being 1 or 2, we can recode it into a logical (boolean). Assuming your data.frame is named df,
df$state <- df$state == 2
2) Logicals are cool, because we can simply sum them, and get the number of TRUE values:
# total number of rows with state == 2:
sum(df$state)
3) Make a rollify function, cf. the link:
library(tibbletime)
rolling_sum <- rollify(sum, window = 30)
df$countif = rolling_sum(df$state)
This approach does not, however, handle the leading 29 rows, where the window is incomplete. For those you can in your case use:
df$countif[1:29] <- cumsum(df$state[1:29])
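Putting it together, a minimal end-to-end sketch with made-up data (the column name state is assumed):
library(tibbletime)
set.seed(1)
df <- data.frame(state = sample(1:2, 100, replace = TRUE))
df$state <- df$state == 2                  # TRUE where the original value was 2
rolling_sum <- rollify(sum, window = 30)
df$countif <- rolling_sum(df$state)        # NA for the first 29 rows
df$countif[1:29] <- cumsum(df$state[1:29])
tail(df$countif)                           # number of 2s in each trailing 30-row window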

Stack multiple columns into one

I want to do a simple task but somehow I'm unable to do it. Assume that I have one column like:
a
z
e
r
t
How can I create a new column with each value repeated twice, giving the following result:
a
a
z
z
e
e
r
r
t
t
I've already tried to double my column and do something like:
=TRANSPOSE(SPLIT(JOIN(";",A:A,B:B),";"))
but it creates:
a
z
e
r
t
a
z
e
r
t
I was inspired by this answer.
Try this:
=SORT({A1:A5;A1:A5})
Here we use:
sort
{} to combine data
Taking your comment into account, you may use this formula:
=QUERY(SORT(ArrayFormula({row(A1:A5),A1:A5;row(A1:A5),A1:A5})),"select Col2")
The idea is to use additional column of data with number of row, then sort by row, then query to get only values.
And the join→split method will do the same:
=TRANSPOSE(SPLIT(JOIN(",",ARRAYFORMULA(CONCAT(A1:A5&",",A1:A5))),","))
Here we use the range only two times, so this is easier to use. Also see the Concat + ArrayFormula sample.
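To trace it with the sample data: CONCAT(A1:A5&",",A1:A5) inside the ARRAYFORMULA yields the pairs "a,a", "z,z", "e,e", "r,r", "t,t"; JOIN glues them into the single string "a,a,z,z,e,e,r,r,t,t"; SPLIT breaks that back into individual cells; and TRANSPOSE turns the resulting row into a column.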
A few hundred rows is nothing :)
I created an index from 1 to n, then pasted it twice and sorted by the index. But it's obviously fancier to do it with a formula :)
Assuming your list is in column A and (for now) the repeat count is in C1 (it can be replaced by a number in the formula), then something simple like this will do (starting in B1):
=INDEX(A:A,INT((ROW()-1)/$C$1)+1)
Simply copy down as far as you need (it will just give 0 after the last item). No sorting. No arrays. No Sheets/Excel problems. No heavy calculations.
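For example, with 2 in C1: rows 1 and 2 of column B evaluate INT((1-1)/2)+1 = 1 and INT((2-1)/2)+1 = 1, so both return A1; rows 3 and 4 give index 2 and return A2; and so on.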

How to filter one list of items from another list of items?

I have a huge list of items in Column A (1,000 items) and a smaller list of items in Column B (510 items).
I want to put a formula in Column C to show only the Column A items not in Column B.
How to achieve this through a formula, preferably a FILTER formula?
Select the list in column A
Right-Click and select Name a Range...
Enter "ColumnToSearch"
Click cell C1
Enter this formula: =MATCH(B1,ColumnToSearch,0)
Drag the formula down for all items in B
If the formula fails to find a match, it will be marked "#N/A", otherwise it will be a number.
If you'd like it to be TRUE for match and FALSE for no match, use this formula instead:
=ISNUMBER(MATCH(B1,ColumnToSearch,0))
If you'd like to return the unfound value and an empty string for found values:
=IF(ISNUMBER(MATCH(B1,ColumnToSearch,0)),"",B1)
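For example, if B1 holds a value that appears in the 7th cell of ColumnToSearch, MATCH returns 7, ISNUMBER returns TRUE, and the IF version returns an empty string; if the value is absent, MATCH returns #N/A, ISNUMBER returns FALSE, and the IF version returns B1 itself.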
An alternative method is simply:
=FILTER(A1:A,if(COUNTIF(B1:B,A1:A),0,1))
It's much more efficient.
It uses COUNTIF to count, for each value in A, how many times it appears in B; the IF then flips that into 1 for values that are missing and 0 for values that are present, and FILTER keeps only the rows marked 1, i.e. the values in A that are not in B (see the worked example after the sample columns below).
Columns look like this
A B
1 2
2 5
3
4
5
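With those sample columns, COUNTIF(B1:B, A1:A) evaluates to {0; 1; 0; 0; 1}, the IF flips that to {1; 0; 1; 1; 0}, and FILTER therefore returns 1, 3 and 4 - the values in A that are not in B.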
Formulae for values in A that ARE in column B:
=FILTER(A1:A, MATCH(A1:A, B1:B, 0))
=FILTER(A1:A, COUNTIF(B1:B, A1:A))
Formulae for values in A that ARE NOT in column B:
=FILTER(A1:A, ISNA(MATCH(A1:A, B1:B, 0)))
=FILTER(A1:A, NOT(COUNTIF(B1:B, A1:A)))
in your case:
=FILTER(A1:A; ISNA(MATCH(A:A; B:B; )))
if you face a mismatch of ranges see: https://stackoverflow.com/a/54795616/5632629
