Once and for all: `programIdIndex` is 0-based, right? - solana

Given an array of accounts of length 5 in a transaction like so:
"accountKeys": [
"3hyA2VoeXrMwfxU5AvD9FS2auvHL9wNHLacKkkSVEzPf", # PidIndex 0
"6bMbqts34vu4GKbFUUs51h8BskjFK26WvsZiimc2B7bf", # PidIndex 1
"SysvarS1otHashes111111111111111111111111111", # PidIndex 2
"SysvarC1ock11111111111111111111111111111111", # PidIndex 3
"Vote111111111111111111111111111111111111111" # PidIndex 4
],
The following instruction will be processed by the Vote program (not the SysvarClock program):
"accounts": [1,2,3,0],
"data":"rTmw9...nXy",
"programIdIndex": 4
}
Correct?
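For what it's worth, yes: in a compiled message, `programIdIndex` and every entry in `accounts` are 0-based indexes into `accountKeys`. A minimal Python sketch of that resolution, using the keys from the question:

```python
# Resolve a compiled instruction against the message's account keys.
# Both "programIdIndex" and the entries of "accounts" are 0-based
# indexes into accountKeys.
account_keys = [
    "3hyA2VoeXrMwfxU5AvD9FS2auvHL9wNHLacKkkSVEzPf",  # 0
    "6bMbqts34vu4GKbFUUs51h8BskjFK26WvsZiimc2B7bf",  # 1
    "SysvarS1otHashes111111111111111111111111111",   # 2
    "SysvarC1ock11111111111111111111111111111111",   # 3
    "Vote111111111111111111111111111111111111111",   # 4
]

instruction = {"accounts": [1, 2, 3, 0], "programIdIndex": 4}

program_id = account_keys[instruction["programIdIndex"]]
instruction_accounts = [account_keys[i] for i in instruction["accounts"]]

print(program_id)  # Vote111111111111111111111111111111111111111
```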

Parse and filter MySQL Slow Query Logs Using Grafana

We have MySQL slow query logs going into Grafana 9.3.6.
Given a MySQL slow log entry like the following, I'm trying to filter the output down to the queries that are slower than, say, one second.
# User#Host: kermit[muppets] # [99.99.99.99] Id: 54908918
# Schema: frogs Last_errno: 0 Killed: 0
# Query_time: 0.000218 Lock_time: 0.000081 Rows_sent: 1 Rows_examined: 1 Rows_affected: 0 Bytes_sent: 665
# Tmp_tables: 0 Tmp_disk_tables: 0 Tmp_table_sizes: 0
# InnoDB_trx_id: 0
# QC_Hit: No Full_scan: No Full_join: No Tmp_table: No Tmp_table_on_disk: No
# Filesort: No Filesort_on_disk: No Merge_passes: 0
# InnoDB_IO_r_ops: 0 InnoDB_IO_r_bytes: 0 InnoDB_IO_r_wait: 0.000000
# InnoDB_rec_lock_wait: 0.000000 InnoDB_queue_wait: 0.000000
# InnoDB_pages_distinct: 9
# Log_slow_rate_type: query Log_slow_rate_limit: 1000
SET timestamp=1676569875;
select id FROM characters WHERE name='monster';
I've made it this far,
{service="db::muppets"} |~ `Query_time: (\d*\.\d*)`
which correctly highlights the field in the log messages, but now I'd like to use that (\d*\.\d*) capture group to reduce the logs to the queries that are more than one second.
It seems like I need something like this, but this returns no results.
{service="db::muppets"} |~ `Query_time: (?P<query_time>\d*\.\d*)` | query_time > 1
I assume there needs to be some type of text-to-number conversion of the query_time label, but I can't figure that part out.
Grafana is totally new to me.
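If I'm reading the Loki docs right, the issue is that `|~` is only a line filter: its capture groups never become labels, so `query_time` doesn't exist for the label filter to compare. A parser stage (e.g. swapping `|~` for `| regexp`) is what creates the label, and LogQL's label filter then compares numerically when the value parses as a number. Outside of Grafana, the same parse-and-threshold logic can be sketched in Python (the sample lines here are made up):

```python
import re

# Extract Query_time from each slow-log header line and keep only
# the entries over a 1-second threshold -- the same logic the LogQL
# query is trying to express.
pattern = re.compile(r"Query_time: (?P<query_time>\d*\.\d*)")

lines = [
    "# Query_time: 0.000218 Lock_time: 0.000081 Rows_sent: 1",
    "# Query_time: 2.104332 Lock_time: 0.000099 Rows_sent: 1",
]

slow = []
for line in lines:
    m = pattern.search(line)
    if m and float(m.group("query_time")) > 1.0:
        slow.append(line)

print(slow)  # only the 2.104332-second entry survives
```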

Laravel consoletv/charts v 6.0 - Get count per day and per Item name

How do I query to get the total count per day based on a name or an id?
socials:

id  name
1   Facebook
2   Twitter
3   Reddit

social_statistics:

id  page_id  social_id  visited_at
1   1        1          2021-03-27
2   1        1          2021-03-27
3   1        2          2021-03-27
4   1        2          2021-03-27
5   1        3          2021-03-27
6   1        3          2021-03-27
7   1        1          2021-03-28
8   1        1          2021-03-28
9   1        2          2021-03-28
10  1        2          2021-03-28
11  1        3          2021-03-28
12  1        3          2021-03-28
With the following query I get the count of clicks on all social anchors per day, but I also want the chart to show which social anchor was clicked on each day.
$social_stats = Social::join('social_statistics', 'social_statistics.social_id', 'socials.id')
    ->select(array(
        'social_statistics.visited_at as visited_at',
        DB::raw('count(*) as count'),
    ))
    ->orderBy('visited_at')
    ->groupBy('visited_at')
    ->pluck('count', 'visited_at')
    ->all();
I need to render a chart that shows, per day, the count of clicks on each social.
$social_bar_chart = new SocialBarChart;
$visited_at = collect(array_keys($this->social_bar));
$social_bar_labels = $visited_at->map(function ($date) {
    return Carbon::parse($date)->format('d/m');
})->toArray();
$social_bar_chart->labels($social_bar_labels)
    ->dataset('Social Count', 'bar', array_values($this->social_bar))
    ->options([
        'tooltip' => ['show' => true],
        'backgroundColor' => '#54a0ff',
    ]);
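The chart presumably needs the query to group by both the day and the social, i.e. adding `social_statistics.social_id` (or the joined name) to the select and groupBy. The grouping itself can be sketched in plain Python over the sample rows from the question (the dicts below are hand-copied stand-ins for the two tables):

```python
from collections import Counter

# Count clicks per (visited_at, social name), mirroring a
# GROUP BY visited_at, social_id over the sample data.
socials = {1: "Facebook", 2: "Twitter", 3: "Reddit"}

stats = [  # (page_id, social_id, visited_at)
    (1, 1, "2021-03-27"), (1, 1, "2021-03-27"),
    (1, 2, "2021-03-27"), (1, 2, "2021-03-27"),
    (1, 3, "2021-03-27"), (1, 3, "2021-03-27"),
    (1, 1, "2021-03-28"), (1, 1, "2021-03-28"),
    (1, 2, "2021-03-28"), (1, 2, "2021-03-28"),
    (1, 3, "2021-03-28"), (1, 3, "2021-03-28"),
]

counts = Counter((visited_at, socials[social_id])
                 for _page_id, social_id, visited_at in stats)

print(counts[("2021-03-27", "Facebook")])  # 2
```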

Structure of the random effects in glmmLasso

I want to perform model selection among ~150 fixed-effect and 7 random-effect variables, on a set of 360 observations. I decided to use the Lasso procedure for mixed models via glmmLasso. I did a lot of research to find examples of comparable models, without success. Here is a sample of my data:
> str(RHI_12)
'data.frame': 350 obs. of 164 variables:
$ RHI_counts_12 : int 0 14 1 3 2 2 2 0 0 1 ...
$ Site : Factor w/ 6 levels "14_metzerlen",..: 1 1 1 1 1 1 1 1 1 1 ...
$ Location : Factor w/ 30 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
$ Dist_roost : num 0.985 0.88 0.908 0.888 0.89 ...
$ Natural_light : num -0.194 -0.194 -0.194 -0.194 -0.194 ...
$ Mean_wind : num 0.836 0.836 0.836 0.836 0.836 ...
$ Mean_temp : num -0.427 -0.427 -0.427 -0.427 -0.427 ...
$ Day : num -0.993 -0.993 -0.993 -0.993 -0.993 ...
$ Artificial_light: num -0.2016 -0.2016 0.0772 -0.2016 -0.2016 ...
$ WBdi : num 1.14 1.14 1.14 1.14 1.14 ...
$ WCdi : num 1.49 1.49 1.49 1.49 1.47 ...
... (many more fixed-effect variables)
The response variable is counts (RHI_counts_12).
My question is about the structure of the random-effect variables in the model.
I have 2 categorical random-effect variables ("Site" and "Location"; "Location" is nested in "Site") and 5 numerical random-effect variables. I have structured my model like this (using only a sample of the fixed-effect variables):
lasso1 <- glmmLasso(RHI_counts_12 ~ Artificial_light + WBdi + WCdi + BUdi + FOdi + TIdi,
                    list(Site = ~1, Location = ~1 + Dist_roost + Natural_light + Mean_wind + Mean_temp + Day),
                    lambda = 500, family = poisson(link = log), data = RHI_12)
I am not at all convinced this is the right way to structure the random effects given these 2 categorical nested random effects. I want a model with Location nested in Site, and I do not think that this is what I get. Here is my output for the random effects (in this output, "Loc" stands for Location and "siteName" for Site):
Random Effects:
StdDev:
[[1]]
siteName
siteName 1.180514
[[2]]
Loc Loc:Dist_roost Loc:Natural_light Loc:Mean_wind
Loc 1.15105859 -0.66317669 -0.35354821 -0.10805268
Loc:Dist_roost -0.66317669 1.42601945 0.46004662 -0.42795987
Loc:Natural_light -0.35354821 0.46004662 0.49532786 -0.15485395
Loc:Mean_wind -0.10805268 -0.42795987 -0.15485395 0.76175417
Loc:Mean_temp 0.02677276 0.03961902 -0.01431360 -0.03649499
Loc:Day 0.03756960 -0.02081360 0.02520654 -0.12082652
Loc:Mean_temp Loc:Day
Loc 0.02677276 0.03756960
Loc:Dist_roost 0.03961902 -0.02081360
Loc:Natural_light -0.01431360 0.02520654
Loc:Mean_wind -0.03649499 -0.12082652
Loc:Mean_temp 0.36923939 -0.08311209
Loc:Day -0.08311209 0.56876662
Do you think this is right? I was not able to build the model with "Location" nested in "Site" (and all the other random factors also nested in "Site"). I have tried many different approaches without success.
Thank you very much for reading, and for any advice on the structure of random effects in glmmLasso! :-)
Thomas

High & Low Numbers From A String (Ruby)

Good evening,
I'm trying to solve a problem on Codewars:
In this little assignment you are given a string of space separated numbers, and have to return the highest and lowest number.
Example:
high_and_low("1 2 3 4 5") # return "5 1"
high_and_low("1 2 -3 4 5") # return "5 -3"
high_and_low("1 9 3 4 -5") # return "9 -5"
Notes:
All numbers are valid Int32, no need to validate them.
There will always be at least one number in the input string.
Output string must be two numbers separated by a single space, and highest number is first.
I came up with the following solution; however, I cannot figure out why the method is only returning "542" and not "-214 542". I also tried using #at, #shift and #pop, with the same result.
Is there something I am missing? I hope someone can point me in the right direction. I would like to understand why this is happening.
def high_and_low(numbers)
  numberArray = numbers.split(/\s/).map(&:to_i).sort
  numberArray[-1]
  numberArray[0]
end
high_and_low("4 5 29 54 4 0 -214 542 -64 1 -3 6 -6")
EDIT
I also tried this and received a failed test ("Nil"):
def high_and_low(numbers)
  numberArray = numbers.split(/\s/).map(&:to_i).sort
  puts "#{numberArray[-1]}" + " " + "#{numberArray[0]}"
end
When the return statement is omitted, a Ruby method returns only the result of the last expression in its body. To return both values as an Array, write:
def high_and_low(numbers)
  numberArray = numbers.split(/\s/).map(&:to_i).sort
  return numberArray[0], numberArray[-1]
end

puts high_and_low("4 5 29 54 4 0 -214 542 -64 1 -3 6 -6")
# => [-214, 542]
Using sort would be inefficient for big arrays. Instead, use Enumerable#minmax:
numbers.split.map(&:to_i).minmax
# => [-214, 542]
Or use Enumerable#minmax_by if you'd like the results to remain strings:
numbers.split.minmax_by(&:to_i)
# => ["-214", "542"]

How can I filter through my groups/clusters to keep only the ones with different column2 values?

I have a file which looks something like this:
1 Ape 5138150 5140933
1 Ape 4289 7147
1 Ape 2680951 2683603
1 Ape 1484200 1486662
1 Baboon 3706008 3708636
1 Baboon 11745108 11747790
1 Baboon 3823683 3826474

2 Dog 216795245 216796748
2 Dog 14408 15922

3 Elephant 18 691
3 Ape 1 824

4 Frog 823145 826431
4 Sloth 35088 37788
4 Snake 1071033 1074121

5 Tiger 997421 1003284
5 Tiger 125725 131553

6 Tiger 2951524 2953649
6 Lion 178820 180879
Each group (or cluster) is indicated by the number in the first column (e.g. all lines starting with 1 are in group 1), and different groups are separated by a blank line, as shown above. I'm interested in column 2. I want to keep all groups that have at least two different animals in column 2, but delete all groups that have only one animal (i.e. species-specific groups). So for this file, I want to get rid of groups 2 and 5, but keep the others:
1 Ape 5138150 5140933
1 Ape 4289 7147
1 Ape 2680951 2683603
1 Ape 1484200 1486662
1 Baboon 3706008 3708636
1 Baboon 11745108 11747790
1 Baboon 3823683 3826474

3 Elephant 18 691
3 Ape 1 824

4 Frog 823145 826431
4 Sloth 35088 37788
4 Snake 1071033 1074121

6 Tiger 2951524 2953649
6 Lion 178820 180879
Is there a quick/easy way to do this? My actual file has over 10,000 different groups, so doing it manually is not a (sensible) option. I have a feeling I should be able to do this with awk, but no luck so far.
With GNU awk for length(array):
$ cat tst.awk
BEGIN { RS=""; ORS="\n\n"; FS="\n" }
{
    delete keys
    for (i=1; i<=NF; i++) {
        split($i,f," ")
        keys[f[2]]
    }
}
length(keys) > 1
$ awk -f tst.awk file
1 Ape 5138150 5140933
1 Ape 4289 7147
1 Ape 2680951 2683603
1 Ape 1484200 1486662
1 Baboon 3706008 3708636
1 Baboon 11745108 11747790
1 Baboon 3823683 3826474

3 Elephant 18 691
3 Ape 1 824

4 Frog 823145 826431
4 Sloth 35088 37788
4 Snake 1071033 1074121

6 Tiger 2951524 2953649
6 Lion 178820 180879
You can solve it with Python:
group = []
animals = set()
with open('data') as f:
    for l in f:
        line = l.strip()
        if line == '':
            if len(animals) > 1:
                for g in group:
                    print(g)
                print('')
            group = []
            animals = set()
            continue
        group.append(line)
        animals.add(line.split()[1])
if len(animals) > 1:
    for g in group:
        print(g)
data is the name of your input file.
Explanation:
Iterate over every line of the file.
If the line is not blank, we append it to group so we can print it later, and we add its second column to the animals set of distinct species.
If it is a blank line, we check whether the group had more than one animal; in that case we print all of the group's lines. Either way we reset group and animals, since a new group is starting.
The lines after the loop are needed to print the last group (if it has more than one animal) when the file does not end with a blank line.
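The same blank-line grouping can also be expressed more compactly with itertools.groupby. This is a sketch over a small inline sample, not a drop-in replacement for either answer above:

```python
from itertools import groupby

# Inline sample: three groups separated by blank lines; the Dog-only
# group should be dropped because it has a single distinct animal.
text = """1 Ape 5138150 5140933
1 Baboon 3706008 3708636

2 Dog 216795245 216796748
2 Dog 14408 15922

6 Tiger 2951524 2953649
6 Lion 178820 180879"""

# groupby splits the lines into runs of blank / non-blank lines;
# we keep only the non-blank runs as blocks.
blocks = [list(g) for is_blank, g in
          groupby(text.splitlines(), key=lambda l: l.strip() == "")
          if not is_blank]

# Keep blocks whose second column has more than one distinct animal.
kept = [b for b in blocks if len({line.split()[1] for line in b}) > 1]

print(len(kept))  # 2
```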
