I'm curious what the difference between preflightCommitment and commitment is.
Also, what are the different types of commitments listed below?
export type Commitment =
  | 'processed'
  | 'confirmed'
  | 'finalized'
  | 'recent'
  | 'single'
  | 'singleGossip'
  | 'root'
  | 'max';
preflightCommitment is the commitment used for the preflight transaction, AKA the transaction simulation, whereas commitment is used for the actual transaction.
As for the different commitments, they're all listed at https://docs.solana.com/developing/clients/jsonrpc-api#configuring-state-commitment
Some of those terms are old, but here's roughly how they would translate:
processed = recent
confirmed = singleGossip = single
finalized = root = max
I am new to Airflow. I want to do an operation like below using Airflow operators.
Briefly, I want to read some data from a database table and, depending on the values of a column in that table, run different tasks.
This is the table which I used to get data.
+-----------+--------+
| task_name | status |
+-----------+--------+
| a         | 1      |
| b         | 2      |
| c         | 4      |
| d         | 3      |
| e         | 4      |
+-----------+--------+
From the above table I want to select the rows where status=4 and, according to their task_name, run the relevant JAR file (for running JAR files I am planning to use the BashOperator). I want to execute this task using Airflow. Note that I am using PostgreSQL.
This is the code which I have implemented so far.
from airflow.models import DAG
from airflow.operators.postgres_operator import PostgresOperator
from datetime import datetime, timedelta
from airflow import settings

# set the default attributes
default_args = {
    'owner': 'Airflow',
    'start_date': datetime(2020, 10, 4)
}

status_four_dag = DAG(
    dag_id='status_check',
    default_args=default_args,
    schedule_interval=timedelta(seconds=5)
)

test = PostgresOperator(
    task_id='check_status',
    sql='''select * from table1 where status=4;''',
    postgres_conn_id='test',
    database='status',
    dag=status_four_dag,
)
I am stuck at the point where I need to check the task_name and call the relevant BashOperators.
Your support is appreciated. Thank you.
XComs are used for communicating messages between tasks. Send the JAR filename and the other arguments needed to form the command to XCom, and consume them in the subsequent task.
For example,
check_status >> handle_status
check_status - checks the status from the DB and writes the JAR filename and arguments to XCom
handle_status - pulls the JAR filename and arguments from XCom, forms the command and executes it
Sample code:
from random import randint

from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.operators.bash_operator import BashOperator


def check_status(**kwargs):
    # Decide which JAR to run and push its filename to XCom for the next task
    if randint(1, 100) % 2 == 0:
        kwargs["ti"].xcom_push("jar_filename", "even.jar")
    else:
        kwargs["ti"].xcom_push("jar_filename", "odd.jar")


# default_args as defined in the question (owner, start_date)
with DAG(dag_id='new_example', default_args=default_args) as dag:
    t0 = PythonOperator(
        task_id="check_status",
        provide_context=True,
        python_callable=check_status
    )

    t1 = BashOperator(
        task_id="handle_status",
        bash_command="""
        jar_filename={{ ti.xcom_pull(task_ids='check_status', key='jar_filename') }}
        echo "java -jar ${jar_filename}"
        """
    )

    t0 >> t1
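In your case, check_status could query Postgres directly instead of using the random stub above. Here is a rough sketch with PostgresHook, assuming the postgres_conn_id 'test' and database 'status' from your PostgresOperator, and assuming (purely as an illustration) that each task_name maps to a JAR called <task_name>.jar:

from airflow.hooks.postgres_hook import PostgresHook

def check_status(**kwargs):
    # Fetch the task names whose status is 4 from the question's table.
    hook = PostgresHook(postgres_conn_id='test', schema='status')
    rows = hook.get_records("select task_name from table1 where status = 4")
    # Assumed convention: task_name 'c' maps to 'c.jar', etc.
    jar_filenames = ",".join("{}.jar".format(row[0]) for row in rows)
    kwargs["ti"].xcom_push("jar_filenames", jar_filenames)

The downstream BashOperator can then xcom_pull that comma-separated list and loop over it inside bash_command.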
Given a trained contextual bandit model, how can I retrieve a prediction vector on test samples?
For example, let's say I have a train set named "train.dat" containing lines formatted as below
1:-1:0.3 | a b c # <action:cost:probability | features>
2:2:0.3 | a d d
3:-1:0.3 | a b e
....
And I run the command below.
vw -d train.dat --cb 30 -f cb.model --save_resume
This produces a file, 'cb.model'. Now, let's say I have a test dataset as below
| a d d
| a b e
I'd like to see probabilities as below
0.2 0.7 0.1
The interpretation of these probabilities would be that action 1 should be picked 20% of the time, action 2 - 70%, and action 3 - 10% of the time.
Is there a way to get something like this?
When you use "--cb K", the prediction is the optimal arm/action based on an argmax policy, which is a static policy.
When using "--cb_explore K", the prediction output contains the probability for each arm/action. Depending on the policy you pick, the probabilities are calculated differently.
If you send those lines to a daemon running your model, you'd get just that. You send a context, and the reply is a probability distribution across the number of allowed actions, presumably comprising the "recommendation" provided by the model.
Say you have 3 actions, like in your example. Start a contextual bandits daemon:
vowpalwabbit/vw -d train.dat --cb_explore 3 -t --daemon --quiet --port 26542
Then send a context to it:
| a d d
You'll get just what you want as the reply.
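For example, a minimal Python client for that daemon (assuming it is running locally on port 26542, as started above) could look like this:

import socket

# Send one test context per line to the running vw daemon and read the reply.
with socket.create_connection(("localhost", 26542)) as conn:
    conn.sendall(b"| a d d\n")
    reply = conn.makefile().readline().strip()
    # The reply is the probability distribution over the 3 actions
    # (the exact text format depends on your VW version).
    print(reply)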
In the Workspace class, initialize the object and then call the method predict(prediction_type: int). Below are the corresponding parameter values:
class PredictionType(IntEnum):
    SCALAR = pylibvw.vw.pSCALAR
    SCALARS = pylibvw.vw.pSCALARS
    ACTION_SCORES = pylibvw.vw.pACTION_SCORES
    ACTION_PROBS = pylibvw.vw.pACTION_PROBS
    MULTICLASS = pylibvw.vw.pMULTICLASS
    MULTILABELS = pylibvw.vw.pMULTILABELS
    PROB = pylibvw.vw.pPROB
    MULTICLASSPROBS = pylibvw.vw.pMULTICLASSPROBS
    DECISION_SCORES = pylibvw.vw.pDECISION_SCORES
    ACTION_PDF_VALUE = pylibvw.vw.pACTION_PDF_VALUE
    PDF = pylibvw.vw.pPDF
    ACTIVE_MULTICLASS = pylibvw.vw.pACTIVE_MULTICLASS
    NOPRED = pylibvw.vw.pNOPRED
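For example, a rough sketch with the vowpalwabbit Python package, assuming the model was trained with --cb_explore 3 (e.g. vw -d train.dat --cb_explore 3 -f cb.model) so that the prediction type is ACTION_PROBS; return types can vary slightly between package versions:

import vowpalwabbit

# Load the saved --cb_explore model in test-only mode.
vw = vowpalwabbit.Workspace("--cb_explore 3 -i cb.model -t --quiet")

# For a cb_explore model, predict returns the per-action probabilities (a PMF).
probs = vw.predict("| a d d")
print(probs)  # e.g. [0.93, 0.03, 0.03]

vw.finish()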
I am trying to use NLTK package to capture the following chunk in a sentence:
verb + smth + noun
or it may be
verb + smth + noun + and + noun
I have truthfully spent the entire day messing with regexes, but still nothing proper is produced.
I was looking at this tutorial, which wasn't much help.
When you have an idea of what those somethings that might come in between are, there is a relatively easy method using NLTK's CFG. This is most certainly not the most efficient way. For a comprehensive analysis, consult chapter 8 of the NLTK book.
We have two patterns as you mentioned:
<verb> ... <noun>
<verb> ... <noun> "and" <noun>
We should assemble a list of VPs and NPs and also the range of possible words that could happen in between. As a silly little example:
grammar = nltk.CFG.fromstring("""
% start S
S -> VP SOMETHING NP
VP -> V
SOMETHING -> WORDS SOMETHING
SOMETHING ->
NP -> N 'and' N
NP -> N
V -> 'told' | 'scolded' | 'loved' | 'respected' | 'nominated' | 'rescued' | 'included'
N -> 'this' | 'us' | 'them' | 'you' | 'I' | 'me' | 'him'|'her'
WORDS -> 'among' | 'others' | 'not' | 'all' | 'of'| 'uhm' | '...' | 'let'| 'finish' | 'certainly' | 'maybe' | 'even' | 'me'
""")
Now suppose this is the list of the sentences we want to use our filter against:
sentences = ['scolded me and you', 'included certainly uhm maybe even her and I', 'loved me and maybe many others','nominated others not even him', 'told certainly among others uhm let me finish ... us and them', 'rescued all of us','rescued me and somebody else']
As you can see, the third and the last phrases don't pass the filter. We can check whether the rest match the pattern:
def sentence_filter(sent, grammar):
    rd_parser = nltk.RecursiveDescentParser(grammar)
    try:
        for p in rd_parser.parse(sent):
            print("SUCCESS!")
    except:
        print("Doesn't match the filter...")

for s in sentences:
    s = s.split()
    sentence_filter(s, grammar)
When we run this, we get this result:
>>>
SUCCESS!
SUCCESS!
Doesn't match the filter...
SUCCESS!
SUCCESS!
SUCCESS!
Doesn't match the filter...
>>>
I use an AJAX mechanism to create or modify records in this table:
table:
id | item_type | item_id | creator_id | attitude
1  | exemplar  | 3       | 33         | 1
2  | exemplar  | 4       | 33         | 0
3  | exemplar  | 3       | 35         | 1
In plain English: there are many exemplars for one user to choose from. A given user can set only one exemplar to the value 1. In this particular case Exemplar #3 is active (attitude = 1). I want to set its "attitude" to 0 in the same controller method where I have the code below.
The code below creates a new record for an exemplar which has never been chosen before, or changes the value of the 'attitude' column.
$user_id = Auth::user()->id;
$countatt = $exemplar->attitudes()->where('creator_id', $user_id)->first();
if (!$countatt)
{
    $countatt = new Userattitude;
    $countatt->creator_id = $user_id;
    $countatt->item_type = 'exemplar';
    $countatt->item_id = $exemplar_id;
}
$countatt->attitude = $value; // $value = 1
$countatt->save();
Problem to solve:
1. How, using best practices, do I set all other records of the same user (creator_id) and exemplar_id to 0?
My best guess is to put the four lines below before the code quoted above:
$oldactive = Exemplar::where('creator_id', $user_id)->where('exemplar_id', $exemplar_id)->first();
$zeroing_attitude = $oldactive->attitudes()->first();
$zeroing_attitude->attitude = 0;
$zeroing_attitude->save();
The above solution works only when there is a single exemplar with 'attitude' set to 1. But in the future I want to allow users to have multiple exemplars active. I am not familiar enough with Eloquent to rewrite the logic for multiple active Exemplars.
Sometimes there will be no active Exemplars set, which means that this collection would be empty
$oldactive = Exemplar::where('creator_id', $user_id)->where('exemplar_id', $exemplar_id)->first();
How should I skip executing the rest of the code in such a case? By adding an if as below?
if($oldactive) {}
Thank you.
$oldactive = Exemplar::where('creator_id', $user_id)->where('exemplar_id', $exemplar_id)->first();

// Iterate over the related attitude records (the loaded relation, not the query builder)
foreach ($oldactive->attitudes as $zeroing_attitude) {
    $zeroing_attitude->attitude = 0;
    $zeroing_attitude->save();
}
While using .NET 3.5 SP1 in an ASP.NET MVC application, the ObjectContext can have a lifetime of one HTTP request OR of a single method.
using (MyEntities context = new MyEntities())
{
    // DO query etc
}
How much is the increased performance cost of creating an ObjectContext in every method vs. once per request?
Thanks.
The cost of creating the context is very low. However, using a new context means that you don't have any cached queries from previous contexts. You can work around this to some degree with view generation or CompiledQuery. See also Performance Considerations for Entity Framework Applications
On the other hand, keeping a context around for a long time means you are tracking increasing amounts of state information, which has a performance cost of its own.
In my opinion, however, the most significant cost of a context is code complication. Using multiple contexts tends to lead to confusing code. So I try to use one context per group of related operations, e.g. handling a single HTTP request.
I am using EF6 and a schema of 163 entities that are database-first generated from Oracle.
I am measuring initialization time and the time to get 100 records from an indexed table.
C# Test
var times = new List<Tuple<DateTime, DateTime, DateTime>>();
var carTypes = new List<CAR_TYPE>();
var j = 1;
while (j <= 10000)
{
    for (int i = 0; i < j; i++)
    {
        var startTime = DateTime.Now;
        using (var db = new EcomEntities())
        {
            var contextInitializationTime = DateTime.Now;
            carTypes = db.CAR_TYPE.Take(100).ToList();
            var executionTime = DateTime.Now;
            times.Add(new Tuple<DateTime, DateTime, DateTime>(startTime, contextInitializationTime, executionTime));
        }
    }
    var averageInitTime = times.Average(o => o.Item2.Subtract(o.Item1).TotalMilliseconds);
    var averageRunTime = times.Average(o => o.Item3.Subtract(o.Item1).TotalMilliseconds);
    Debug.WriteLine("averageInitTime - " + j + " " + averageInitTime);
    Debug.WriteLine("averageRunTime - " + j + " " + averageRunTime);
    j = j * 10;
}
Results:
+------------------+-------+----------+-----------------+-------+----------+
|                  | Runs  | ms       |                 | Runs  | ms       |
+------------------+-------+----------+-----------------+-------+----------+
| averageInitTime  | 1     | 134.0134 | averageRunTime  | 1     | 1719.172 |
+------------------+-------+----------+-----------------+-------+----------+
| averageInitTime  | 10    | 12.27395 | averageRunTime  | 10    | 160.3797 |
+------------------+-------+----------+-----------------+-------+----------+
| averageInitTime  | 100   | 1.540695 | averageRunTime  | 100   | 19.94794 |
+------------------+-------+----------+-----------------+-------+----------+
| averageInitTime  | 1000  | 0.281756 | averageRunTime  | 1000  | 6.121224 |
+------------------+-------+----------+-----------------+-------+----------+
| averageInitTime  | 10000 | 0.167058 | averageRunTime  | 10000 | 4.751353 |
+------------------+-------+----------+-----------------+-------+----------+
Is the underlying model small or large, simple or complex? The cost of initializing and using a new ObjectContext grows with the size and complexity of the model. If you have a handful of entities, it is usually negligible. If you have hundreds of entities then it can be significant.
See:
http://oakleafblog.blogspot.com/2008/08/entity-framework-instantiation-times.html
and
http://blogs.msdn.com/adonet/archive/2008/06/20/how-to-use-a-t4-template-for-view-generation.aspx