Mutate results with multiple variables using a for loop

I have this dataframe:
structure(list(plate = c("A", "A", "A", "A", "A", "B", "B", "B",
"B", "B", "C", "C", "C", "C", "C"), marker = c("IL-1", "IL-2",
"IL-3", "IL-4", "IL-5", "IL-1", "IL-2", "IL-3", "IL-4", "IL-5",
"IL-1", "IL-2", "IL-3", "IL-4", "IL-5"), sample = c(1, 2, 3,
4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15), result = c(1.94000836127381,
0.426353706529969, 2.07418521661429, 1.58200029160696, 0.812685661255674,
0.546681932009987, 0.199532997122114, 0.100208840148698, 0.720956738045624,
0.444814277410285, 2.25080298569014, 1.61429066532657, 1.1066027850052,
0.927880542016121, 4.1487948134003), LOD = c(0.810456546400942,
0.614177278086376, 0.98739611371029, 0.315142822914328, 0.221497734151459,
0.0191136249820546, 0.364139946842526, 0.983763479804491, 0.982034953153209,
0.851687364910033, 0.893324689832074, 0.978609354294382, 0.62613140416969,
0.0310439168600307, 0.729966088361143)), class = c("tbl_df",
"tbl", "data.frame"), row.names = c(NA, -15L))
As you can see, I have different LOD values for each marker in each plate. So I calculate the mean LOD for each marker using
lod <- dummy_2 |>
  group_by(marker) |>
  summarise(lod = mean(LOD))
which results in the following mean LOD per marker across all plates:
structure(list(marker = c("IL-1", "IL-2", "IL-3", "IL-4", "IL-5"
), lod = c(0.57429828707169, 0.652308859741095, 0.865763665894824,
0.442740564309189, 0.601050395807545)), class = c("tbl_df", "tbl",
"data.frame"), row.names = c(NA, -5L))
So far so good. Now I want to check whether the results for my markers are above or below the mean LOD. If a result is above the mean LOD it must not be changed; if it is below the mean LOD, it must be replaced with LOD/2.
I tried to use a for loop and mutate in combination with ifelse, but that did not work. I also looked at the across function, but that did not work either. My latest try was:
marker <- unique(dummy_2$marker)
for (i in marker) {
  dummy_2 <- mutate(result = ifelse(i %in% dummy_2$result < dummy_2$LOD, (i %in% lod$LOD)/2), dummy_2$result)
}
Is a for loop the right way to go, or is there a better solution?
Any help would be appreciated.

Update: I already found a solution by creating a new dataframe with the mean values and linking it to my dataset using left_join. I'd still like to know whether this is possible with a for loop, but for now the problem is solved. A sketch of that approach is below.
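For reference, a minimal sketch of that join-based approach, assuming the dummy_2 dataframe from above (the name dummy_2_adjusted is just for illustration):

library(dplyr)

# Per-marker mean LOD across all plates.
lod <- dummy_2 |>
  group_by(marker) |>
  summarise(lod = mean(LOD))

# Join the mean LOD back onto the data, then replace any result
# below the mean LOD with lod/2, keeping the rest unchanged.
dummy_2_adjusted <- dummy_2 |>
  left_join(lod, by = "marker") |>
  mutate(result = if_else(result < lod, lod / 2, result)) |>
  select(-lod)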

Related

Getting an error when trying to predict on a single image with a CNN in PyTorch

Error message
Traceback (most recent call last):
  File "pred.py", line 134, in
    output = model(data)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [16, 3, 3, 3], but got 3-dimensional input of size [1, 32, 32] instead.
Prediction code
normalize = transforms.Normalize(mean=[0.4914, 0.4824, 0.4467],
                                 std=[0.2471, 0.2435, 0.2616])
train_set = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    normalize,
])

model = models.condensenet(args)
model = nn.DataParallel(model)
PATH = "results/savedir/save_models/checkpoint_001.pth.tar"
model.load_state_dict(torch.load(PATH)['state_dict'])
device = torch.device("cpu")
model.eval()

image = Image.open("horse.jpg")
input = train_set(image)
train_loader = torch.utils.data.DataLoader(
    input,
    batch_size=1, shuffle=True, num_workers=1)

for i, data in enumerate(train_loader):
    #input_var = torch.autograd.Variable(data, volatile=True)
    #input_var = input_var.view(1, 3, 32, 32)
    output = model(data)
    topk = (1, 5)
    maxk = max(topk)
    _, pred = output.topk(maxk, 1, True, True)
I get this error when trying to predict on a single image.
Image shape/size error message
Link to saved model
Training code repository
Please uncomment the line input_var = input_var.view(1, 3, 32, 32) so that your input has 4 dimensions. I am assuming your number of input channels is 3; if the image is grayscale (one channel), use input_var = input_var.view(1, 1, 32, 32) instead.
Instead of using the for loop and train_loader, I solved this by passing the input directly into the model, like this:
input = train_set(image)    # transformed image tensor of shape [3, 32, 32]
input = input.unsqueeze(0)  # add a batch dimension -> [1, 3, 32, 32]
model.eval()
output = model(input)
More details can be found here: link

Overriding a hash in Ruby

I am very new to Ruby and I am trying to find out whether there is an equivalent way of doing this in Ruby.
In YAML, we use syntax like the following to define a default blob and then override it with specific values:
default:
  default:
    A: {read: 20, write: 10}
    B: {read: 30, write: 30}
    C: {read: 130, write: 10}
override1:
  placeholderA:
    A: {read: 10, write: 10}
override2:
  placeHolderB:
    A: {read: 10, write: 10}
    B: {read: 5, write: 5}
    C: {read: 5, write: 5}
    D: {read: 5, write: 5}
I wanted to know whether we can create a hash in Ruby that will pick the override values when they exist and otherwise fall back to the default values.
I am not sure whether Ruby's merge is the right approach to this problem (since I am still new to Ruby, I am exploring options).
Is this possible?
merge could be used:
options = {a: 22}
my_defaults = {a: 1, b: 123}
my_defaults.merge(options)
# => {a: 22, b: 123}
If you are using Rails, it also provides reverse_merge, which works the other way round and may convey the intent more clearly in some use cases:
options = {a: 2, b: 321}
my_defaults = {a: 1, c: 3}
options.reverse_merge(my_defaults)
# => {a: 2, b: 321, c: 3}
http://apidock.com/rails/Hash/reverse_merge
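Applied to the nested structure in the question, a minimal sketch of the fallback pattern (the config hash and the effective_settings helper are hypothetical names, and the placeholder level is flattened for brevity):

config = {
  default: {
    A: { read: 20, write: 10 },
    B: { read: 30, write: 30 },
    C: { read: 130, write: 10 }
  },
  override1: {
    A: { read: 10, write: 10 }
  }
}

# Start from the defaults; wherever the override defines a store, it wins.
def effective_settings(config, override_name)
  config[:default].merge(config.fetch(override_name, {}))
end

effective_settings(config, :override1)
# => {A: {read: 10, write: 10}, B: {read: 30, write: 30}, C: {read: 130, write: 10}}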

What is the syntax for constructing a list-value in ATS?

For instance, how can I construct a list consisting of all the digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9.
You can construct it with val xs = ($list {int} (0, 1, 2, 3, 4, 5, 6, 7, 8, 9)).
Make sure you specify the correct memory allocation functions by passing -DATS_MEMALLOC_LIBC to the compiler when using this code.
If you compile to JavaScript via atscc2js, then you need to construct the list in the following manner:
val ds =
  0::1::2::3::4::5::6::7::8::9::nil{int}()
// end of [val]
It also works for targeting C.
There are also combinators for this sort of thing. For instance,
val ds = (10).list_map(TYPE{int})(lam(i) => i)
val ds = list_tabulate_cloref<int>(10, lam i => i)
You can find the special literals in the Emacs mode:
https://github.com/githwxi/ATS-Postiats/blob/653af81715cf6bfbd1d2cd5ece1e88c8c3912b4a/utils/emacs/ats2-mode.el#L287
(defvar ats-special-keywords
'("$arrpsz" "$arrptrsize" "$delay" "$ldelay" "$effmask" "$effmask_ntm" "$effmask_exn" "$effmask_ref"
"$effmask_wrt" "$effmask_all" "$extern" "$extkind" "$extype" "$extype_struct" "$extval" "$lst"
"$lst_t" "$lst_vt" "$list" "$list_t" "$list_vt" "$rec" "$rec_t" "$rec_vt"
"$record" "$record_t" "$record_vt" "$tup" "$tup_t" "$tup_vt" "$tuple" "$tuple_t"
"$tuple_vt" "$raise" "$showtype" "$myfilename" "$mylocation" "$myfunction" "#assert" "#define"
"#elif" "#elifdef" "#elifndef" "#else" "#endif" "#error" "#if" "#ifdef"
"#ifndef" "#print" "#then" "#undef" "#include" "#staload" "#dynload" "#require"))
Once you find a keyword, you can easily learn how to use it from the doc/EXAMPLE/ directory:
$ git clone https://github.com/githwxi/ATS-Postiats.git
$ cd ATS-Postiats/doc/EXAMPLE
$ grep -r "\$list" . | head
./MISC/word-chain.dats: $list{word}("", "")
./MISC/word-chain.dats: $list{word}("", "")
./MISC/mysendmailist.dats:$list{string}
./MISC/monad_list.dats: $list{a}("this", "that", "a")
./MISC/monad_list.dats: $list{a}("frog", "elephant", "thing")
./MISC/monad_list.dats: $list{a}("walked", "treaded", "grows")
./MISC/monad_list.dats: $list{a}("slowly", "quickly")
./ATSLF/CoYonedaLemma.dats:val myintlist0 = g0ofg1($list{int0}(I(1), I(0), I(1), I(0), I(0)))
./ATSLF/YonedaLemma.dats: $list{bool}(True, False, True, False, False)
./ATS-QA-LIST/qa-list-2014-12-07.dats:$list{double}(0.111111, 0.222222, 0.333333)
For a list0-value, you can do
val xs = g0ofg1($list{T}(x1, ..., xn))
where T is the type for the elements in xs. For instance,
val some_int_list = g0ofg1($list{int}(0, 9, 8, 7, 3, 4))

IgniteRDD's savePairs method for reading parquet files

I have made a small implementation that reads parquet files and stores the items in a cache. So I wrote:
val df = sqlContext.read.
  parquet(hdfsFolder).
  select("a", "b", "c", "d", "e", "f")
val columnsSeq = Seq("a", "b", "c", "d", "e", "f")
val values = df.map(row => (row.getAs[String]("a"), row.getValuesMap(columnsSeq))).
  groupByKey(1024).
  map(row => (row._1, row._2.toList.asJava))
// put them into cache
val igniteContext = new IgniteContext(sc, cacheConfigPath)
val sharedRdd = igniteContext.fromCache(cacheName)
sharedRdd.savePairs(values)
But the last line, sharedRdd.savePairs(values), gives this compile error:
found   : org.apache.spark.rdd.RDD[(String, java.util.List[Map[String,Nothing]])]
required: org.apache.spark.rdd.RDD[(Nothing, Nothing)]
Note: (String, java.util.List[Map[String,Nothing]]) >: (Nothing, Nothing), but class RDD is invariant in type T.
You may wish to define T as -T instead. (SLS 4.5)
    sharedRdd.savePairs(values)
I could not find any way to overcome this error.
Any ideas?
You should create the IgniteRDD with proper typing; without explicit type parameters, fromCache infers Nothing for the key and value types, which is why the compiler expects RDD[(Nothing, Nothing)]:
val sharedRdd = igniteContext.fromCache[String, java.util.List[Map[String,Nothing]]](cacheName)
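As a side note (my assumption, not part of the original answer): the Map[String,Nothing] element type in the error most likely comes from calling row.getValuesMap(columnsSeq) without an explicit type parameter, so spelling that out as well may be closer to the intent. A sketch:

// Give getValuesMap an explicit element type instead of letting it infer Nothing.
val values = df.map(row => (row.getAs[String]("a"), row.getValuesMap[Any](columnsSeq))).
  groupByKey(1024).
  map(row => (row._1, row._2.toList.asJava))

// Type the cache to match the element type of values, then save.
val sharedRdd = igniteContext.fromCache[String, java.util.List[Map[String, Any]]](cacheName)
sharedRdd.savePairs(values)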

Getting max and min from two different sets in JSON

I haven't found a solution with data set up quite like mine...
var marketshare = [
  {"store": "store1", "share": "5.3%", "q1count": 2, "q2count": 4, "q3count": 0},
  {"store": "store2", "share": "1.9%", "q1count": 5, "q2count": 10, "q3count": 0},
  {"store": "store3", "share": "2.5%", "q1count": 3, "q2count": 6, "q3count": 0}
];
Code so far, returning undefined...
var minDataPoint = d3.min( d3.values(marketshare.q1count) ); //Expecting 2 from store 1
var maxDataPoint = d3.max( d3.values(marketshare.q2count) ); //Expecting 10 from store 2
I'm a little overwhelmed by d3.keys, d3.values, d3.maps, converting to array, etc. Any explanations or nudges would be appreciated.
I think you're looking for something like this instead:
d3.min(marketshare, function(d){ return d.q1count; }) // => 2.
You can pass an accessor function as the second argument to d3.min/d3.max.
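Using the marketshare array from the question, both values would then look like this (a sketch of the same accessor pattern):

var minDataPoint = d3.min(marketshare, function(d) { return d.q1count; }); // 2, from store1
var maxDataPoint = d3.max(marketshare, function(d) { return d.q2count; }); // 10, from store2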
