Every example I've looked at so far seems to use a shared vocabulary between source and target languages, and I'm wondering whether that is a hard-coded constraint of the Huggingface models, a misunderstanding on my part, or whether I've just not looked in the right place yet?
To take a random example, when I look at the files here, https://huggingface.co/Helsinki-NLP/opus-mt-en-zls/tree/main, I see separate "spm" (SentencePiece model) files for the source and target languages, and they are of different sizes (792 KB vs. 850 KB). But there is only a single "vocab.json" file, and the config.json file mentions only a single "vocab_size": 57680.
I've also been experimenting, e.g. tokenizer(inputs, text_target=inputs, return_tensors="pt"). If the source and target used different vocabularies, I would expect the returned input_ids and labels to use different numbers. But for every model I've tried so far, the numbers are identical (NO, my mistake - see update below).
Can a Huggingface tokenizer even support two vocabularies? If not, then a model would need two tokenizers, which seems to clash with the way AutoTokenizer works.
UPDATE
Here is a test script showing that the above model actually uses two SentencePiece vocabularies through AutoTokenizer.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_name = 'Helsinki-NLP/opus-mt-en-zls'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
inputs = ['Filter all items from same host']
targets = ['Filtriraj sve stavke s istog hosta']
x=tokenizer(inputs, text_target=targets, return_tensors="pt")
print(x)
print(tokenizer.decode(x['input_ids'][0]))
print(tokenizer.decode(x['labels'][0]))
print("\nGiving inputs on both sides")
x=tokenizer(inputs, text_target=inputs, return_tensors="pt")
print(x) ## Expecting to see different numbers if they use different vocabs
print(tokenizer.decode(x['input_ids'][0]))
print(tokenizer.decode(x['labels'][0]))
print("\nGiving targets on both sides")
x=tokenizer(targets, text_target=targets, return_tensors="pt") ## Expecting to see different numbers if they use different vocabs
print(x)
print(tokenizer.decode(x['input_ids'][0]))
print(tokenizer.decode(x['labels'][0]))
print(model)
The output is:
{'input_ids': tensor([[10373, 90, 8255, 98, 605, 6276, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1]]), 'labels': tensor([[11638, 1392, 7636, 386, 35861, 95, 2130, 218, 6276, 27,
0]])}
▁Filter all▁items from same host</s>
Filtriraj sve stavke s istog hosta</s>
Giving inputs on both sides
{'input_ids': tensor([[10373, 90, 8255, 98, 605, 6276, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1]]), 'labels': tensor([[11638, 911, 90, 3188, 7, 98, 605, 6276, 0]])}
▁Filter all▁items from same host</s>
Filter all items from same host</s>
Giving targets on both sides
{'input_ids': tensor([[11638, 1392, 7636, 95, 120, 914, 465, 478, 95, 29,
25, 897, 6276, 27, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]), 'labels': tensor([[11638, 1392, 7636, 386, 35861, 95, 2130, 218, 6276, 27,
0]])}
Filtriraj sve stavke s istog hosta</s>
Filtriraj sve stavke s istog hosta</s>
When I choose identical strings in English or Croatian, it gives slightly different numbers, showing that different tokenizers are involved. You can then see that the different ids sometimes map back to an identical string, sometimes not.
But when I print out the model, we see it actually uses a shared vocabulary, which makes the two spm models seem a bit pointless.
(encoder): MarianEncoder(
(embed_tokens): Embedding(57680, 512, padding_idx=57679)
...
(decoder): MarianDecoder(
(embed_tokens): Embedding(57680, 512, padding_idx=57679)
...
(lm_head): Linear(in_features=512, out_features=57680, bias=False)
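One way to confirm that the encoder and decoder embeddings are actually tied, and not just the same shape, is to check whether the two weight tensors share storage (a quick sanity check, reusing the model loaded in the script above):
enc_emb = model.get_encoder().embed_tokens.weight
dec_emb = model.get_decoder().embed_tokens.weight
# True means both point at one shared embedding matrix
print(enc_emb.data_ptr() == dec_emb.data_ptr())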
I haven't got as far as finding out whether a non-shared vocabulary is possible, but I have yet to see evidence of one.
For Marian-based models, HuggingFace now supports separate vocabularies for source and target, but some models may not use them, especially older ones.
(As you know, OPUS-MT models are based on MarianMT, and the MarianMT framework supports separate vocabularies.)
Before https://github.com/huggingface/transformers/pull/15831, HuggingFace used a shared vocabulary file for Marian.
This PR updates the Marian model to:
- allow not sharing embeddings between the encoder and decoder,
- allow tying only the decoder embeddings with lm_head,
- separate the two vocabularies in the tokenizer for the source and target languages.
...
share_encoder_decoder_embeddings: indicates whether the embeddings should be shared or not
So models trained with earlier versions of the framework, or converted with that parameter set to true, have only one shared vocabulary file for source and target.
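If you want to check which case a given checkpoint falls into, you can inspect its config. A minimal sketch (assuming a transformers version that includes the PR above; the getattr fallbacks are there because older configs simply lack these fields):
from transformers import AutoConfig

config = AutoConfig.from_pretrained('Helsinki-NLP/opus-mt-en-zls')
# Both fields were introduced by the PR; older checkpoints won't define them.
print(getattr(config, 'share_encoder_decoder_embeddings', True))
print(config.vocab_size, getattr(config, 'decoder_vocab_size', None))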
The primary variable is AgeGroup, which has 2 levels. I am trying to get the sample size to output, but for some reason the app either gives an error or won't output anything. Can anyone help? There are some comments in the code to help with confusion.
Code:
library(shiny)
library(shinyWidgets)
library(survival)
library(shinyjs)
library(survminer)
# Define UI for the application
ui <- fluidPage(
# Application title
titlePanel("ProMBA Haslam Ad Sample Size"),
#Put in all key inputs as numeric inputs that the user will type in, and choose starting (default) values for these inputs
tabPanel("Inputs",
div( id ="form",
column(4,
numericInput("power", label=h6("Power"), value = .9),
numericInput("alpha", label=h6("Alpha"), value = .05),
numericInput("precision", label=h6("Precision"), value =0.05),
numericInput("Delta", label=h6("Delta"), value=.3),
column(4,
numericInput("sample", label=h6("Starting Sample Size"), value = 40),
numericInput("reps", label=h6("Number of Replications"), value=1000)),
),
column(4,
#title of output
h4("Calculated Sample Size"),
verbatimTextOutput("n", placeholder=TRUE)),
#create action buttons for users to run the form and to reset the form
textOutput("Sample Size (n)"),
column(4,
actionButton("action","Calculate"))
)))
server = function(input,output,session){
buttonGo = eventReactive(input$action, {withProgress(message = "Running", {
#relist the key inputs and save them to be able to be used in the rest of the code
n<-input$sample/2
alpha<-input$alpha
power <- input$power
beta<-1-input$power
precision<-input$precision
delta <- input$Delta +1
rep <- input$reps
nincrease<-10
#manually load in the data from the baseline data .xlsx file
Reporting <- c("12/13/21","12/14/21","12/15/21","12/16/21","12/17/21","12/18/21","12/19/21","12/20/21","12/21/21","12/22/21","12/23/21","12/24/21","12/25/21","12/26/21","12/27/21","12/28/21","12/29/21","12/30/21","12/31/21","1/1/22")
AdSet <- "Status Quo"
Results <- c(70,52, 33, 84, 37, 41, 22, 53, 78, 66, 100, 110, 52, 43, 63, 84, 16, 64, 21, 69)
ResultIndicator <- "actions:link_click"
Budget <- 100
CostPerClick<- c(1.43, 1.92, 3.03, 1.19, 2.70, 2.44, 4.55, 1.89, 1.28, 1.52, 1.00, 0.91, 1.92, 2.33, 1.59, 1.19, 6.25, 1.56, 4.76, 1.45)
Impressions <- c(7020, 8430, 5850, 7920, 6890, 7150, 6150, 7370, 8440, 6590, 6750,8720, 6410,7720, 6940, 8010, 7520, 7190, 6540, 6020)
df <- data.frame(Reporting, AdSet, Results, ResultIndicator,Budget,CostPerClick,Impressions)
#define the standard deviation of the results as well as the mean for group 1 of the 2 level variable and the mean for group 2
mean1 = mean(df$Results)
sd1 = sd(df$Results)
mean2 = delta*mean1
click=rep(0,n)
#Create 2 level variable
AgeGroup <- rep(c("Age21-35","Age36-50"),each=n)
#create new data frame with 2 level variable and click repetitions
DataFrame2 <- data.frame(AgeGroup,click)
#create new data frame binding all of the input variables together
DataFrame3 <- data.frame(cbind(n,alpha,power,precision,delta,rep))
#create for loop to find the pvalue of the ttest run with click~AgeGroup
trials=function(){
for(i in 1:nrow(DataFrame2)){
if(any(DataFrame2$AgeGroup[i]=="Age21-35")){DataFrame2$click[i] =rnorm(1,mean1,sd1)}else{DataFrame2$click[i] =rnorm(1,mean2,sd1)}
}
pvalttest=t.test(click~AgeGroup, data=DataFrame2)
return(pvalttest$p.value)
}
p_values=replicate(input$reps,trials())
#find which p-values are significant
significance=p_values[p_values<alpha]
#estimate the power as the proportion of significant p-values
power <- length(significance)/length(p_values)
print(c(power,n))
#run a while loop to find the n within the goal power limits
goalpower<-1-beta
lowergoal<-goalpower-input$precision
uppergoal<-goalpower+input$precision
while (power<lowergoal||power>uppergoal){
if (power<lowergoal){
n=n+nincrease
AgeGroup=c()
click=c()
AgeGroup=rep(c("Age21-35","Age36-50"), each=n)
click=rep(NA,2*n)
DataFrame2=data.frame(AgeGroup,click)
p_values=replicate(input$reps, trials())
significance=p_values[p_values<alpha]
power=length(significance)/length(p_values)
print(c(n, power))
}else{
nincrease=nincrease%/%(10/9) #%/% fixes issue of rounding
n=n-nincrease
AgeGroup=c()
click=c()
AgeGroup=rep(c("Age21-35","Age36-50"), each=n)
click=rep(NA,2*n)
DataFrame2=data.frame(AgeGroup,click)
p_values=replicate(input$reps, trials())
significance=p_values[p_values<alpha]
power=length(significance)/length(p_values)
print(c(n, power))
}
}
#n is defined as the sample size of one of the levels of the 2 level variable, so multiply by 2 to get the full sample size
n*2
})
#render the calculated sample size into the verbatimTextOutput("n") slot; without this binding the app never displays anything
output$n <- renderText({ buttonGo() })
}
shinyApp(ui, server)
I don't need the app to be pretty; I just want it to run whenever someone clicks the Calculate button.
I am trying to train my models and validate them using sklearn's cross-validation. What I want to do is use the same folds across all of my models (which will be run from different Python scripts).
How can I do this? Should I save the folds to a file? Or should I save the KFold object? Or should I just use the same seed?
kfold = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
Well, the easiest way I found to save the folds was simply to get them from StratifiedKFold's split method by looping over it, then storing them in a JSON file:
import json
import numpy as np
from sklearn.model_selection import StratifiedKFold

kfold = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
folds = {}
count = 1
for train, test in kfold.split(np.zeros(len(y)), y.argmax(1)):
    folds['fold_{}'.format(count)] = {}
    folds['fold_{}'.format(count)]['train'] = train.tolist()
    folds['fold_{}'.format(count)]['test'] = test.tolist()
    count += 1
print(len(folds) == n_splits)  # check we have the same number of splits
#dump folds to json
with open('folds.json', 'w') as fp:
    json.dump(folds, fp)
Note 1: argmax is used here because my y values are one-hot encoded, so we need to recover the predicted/ground-truth class for stratification.
Now to load it from any other script:
import json

#load the saved folds into a dict
with open('folds.json') as f:
    kfolds = json.load(f)
From here we can easily just loop over the elements in the dict:
for key, val in kfolds.items():
    print(key)
    train = val['train']
    test = val['test']
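From there the indices can be used directly to slice the data (a sketch, assuming X and y are numpy arrays in the same row order that was used when the folds were created):
X_train, X_test = X[train], X[test]
y_train, y_test = y[train], y[test]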
Our json file looks like so:
{"fold_1": {"train": [193, 2405, 2895, 565, 1215, 274, 2839, 1735, 2536, 1196, 40, 2541, 980,...SNIP...830, 1032], "test": [1, 5, 6, 7, 10, 15, 20, 26, 37, 45, 52, 54, 55, 59, 60, 64, 65, 68, 74, 76, 78, 90, 100, 106, 107, 113, 122, 124, 132, 135, 141, 146,...SNIP...]}
I'm building a factory that will save some information as JSON for later analysis in an app that cuts boards.
These boards will have plating around them, so the JSON will contain the ids for each one of its borders.
For example, we can have Platings 1 and 2.
And I need to generate boards that may look like this:
width: 500
height: 500
platings: {"up": 1, left: 2, right: 1}
So the same id can appear more than once on a single board, but I have no way to do that, since every time I create a plating I will get another ID.
This is an example of what I have tried to do:
factory :medium_board do
  width 500
  height 500
  platings {{
    "up": create(:plating).id,            #id: 1, OK
    "left": create(:another_plating).id,  #id: 2, OK
    "right": create(:plating).id          #id: 3, NOT OK, should have been 1.
  }}
end
Is there something fundamental that I am missing?
What is the best way to do this?
Best
I would use an after hook here. Something like this:
factory :medium_board do
  after :build do |board|
    plating1 = create(:plating)
    plating2 = create(:another_plating)
    board.platings = {
      up: plating1.id,
      left: plating2.id,
      right: plating1.id,
    }
  end
end
I have an import script that imports well over 2000 products including their images. I run this script via CLI because I feel that this is the best way to go speed-wise, even though I have the same import script available and executable in the Magento admin as an extension. The script runs pretty well. Almost perfect! However, sometimes addToImageGallery somehow malfunctions and results in some products having No Image as the default product image, with the only other image not selected as default at all. How do I mass-update all products to set the first image in the product's media gallery as the default 'base', 'image' and 'thumbnail' image(s)?
I found a couple of tricks for doing this (and more) at this link:
http://www.magentocommerce.com/boards/viewthread/59440/ (Thanks transio!)
Although, for Magento 1.6.2.0 (which I use), the first SQL trick there (Trick 1 - Auto-set default base, thumb, small image to first image) needs a bit of modification.
On the second-to-last line there is an AND ev.attribute_id IN (70, 71, 72) part. This points to attribute IDs which are probably no longer relevant in Magento 1.6.2.0. To fix this, using any MySQL query tool (phpMyAdmin or MySQL Query Browser), I took a look at the catalog_product_entity_varchar table. There should be entries like:
value_id, entity_type_id, attribute_id, store_id, entity_id, value
..
146649, 4, 116, 0, 1, '2'
146650, 4, 76, 0, 1, ''
146651, 4, 78, 0, 1, ''
146652, 4, 79, 0, 1, '/B/0/B05-01.jpg'
146653, 4, 80, 0, 1, '/B/0/B05-01.jpg'
146654, 4, 81, 0, 1, '/B/0/B05-01.jpg'
146655, 4, 96, 0, 1, ''
146656, 4, 100, 0, 1, ''
146657, 4, 102, 0, 1, 'container2'
..
My money was on the group of three image paths as possible replacements. So the resulting SQL now should be:
UPDATE catalog_product_entity_media_gallery AS mg,
catalog_product_entity_media_gallery_value AS mgv,
catalog_product_entity_varchar AS ev
SET ev.value = mg.value
WHERE mg.value_id = mgv.value_id
AND mg.entity_id = ev.entity_id
AND ev.attribute_id IN (79, 80, 81) # <-- attribute IDs updated here
AND mgv.position = 1;
So I committed to it, ran it, and... presto! All fixed! You might also want to wrap this in a transaction, but that is outside this question's scope.
Well, this is the fix that worked for me so far! If there are any more out there, please share!
There was:
146652, 4, 79, 0, 1, '/B/0/B05-01.jpg'
146653, 4, 80, 0, 1, '/B/0/B05-01.jpg'
146654, 4, 81, 0, 1, '/B/0/B05-01.jpg'
So it should be:
AND ev.attribute_id IN (79, 80, 81) # <-- attribute IDs updated here
instead of:
AND ev.attribute_id IN (78, 80, 81) # <-- attribute IDs updated here