Related
Every example I've looked at so far seems to use a shared vocabulary between source and target languages, and I'm wondering if that is a hard-coded constraint of the Huggingface models, or my misunderstanding, or I've just not looked in the right place yet?
To take a random example, when I look at the files here, https://huggingface.co/Helsinki-NLP/opus-mt-en-zls/tree/main, I see separate "spm" (SentencePiece model) files for the source and target languages, and they are of different sizes (792 KB vs. 850 KB). But there is only a single "vocab.json" file, and the config.json file only mentions a single "vocab_size": 57680.
I've also been experimenting, e.g. with tokenizer(inputs, text_target=inputs, return_tensors="pt"). If source and target used different vocabularies, I would expect the returned input_ids and labels to use different numbers. But for every model I've tried so far, the numbers are identical. (No, my mistake: see the update below.)
Can a Huggingface tokenizer even support two vocabularies? If not, then a model would need two tokenizers, which seems to clash with the way AutoTokenizer works.
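One direct probe, before touching the tokenizer at all, would be to load the two spm files with the sentencepiece library and compare their piece counts. A sketch, assuming source.spm and target.spm from the repo above have been downloaded locally:
import sentencepiece as spm

# Assumes source.spm and target.spm from the model repo are in the current directory.
src = spm.SentencePieceProcessor(model_file="source.spm")
tgt = spm.SentencePieceProcessor(model_file="target.spm")

# Different piece counts would already hint at two distinct vocabularies.
print(src.get_piece_size(), tgt.get_piece_size())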
UPDATE
Here is a test script showing that the above model does in fact use two spm vocabs through AutoTokenizer.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_name = 'Helsinki-NLP/opus-mt-en-zls'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
inputs = ['Filter all items from same host']
targets = ['Filtriraj sve stavke s istog hosta']
x = tokenizer(inputs, text_target=targets, return_tensors="pt")
print(x)
print(tokenizer.decode(x['input_ids'][0]))
print(tokenizer.decode(x['labels'][0]))
print("\nGiving inputs on both sides")
x = tokenizer(inputs, text_target=inputs, return_tensors="pt")
print(x) ## Expecting to see different numbers if they use different vocabs
print(tokenizer.decode(x['input_ids'][0]))
print(tokenizer.decode(x['labels'][0]))
print("\nGiving targets on both sides")
x = tokenizer(targets, text_target=targets, return_tensors="pt") ## Expecting to see different numbers if they use different vocabs
print(x)
print(tokenizer.decode(x['input_ids'][0]))
print(tokenizer.decode(x['labels'][0]))
print(model)
The output is:
{'input_ids': tensor([[10373, 90, 8255, 98, 605, 6276, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1]]), 'labels': tensor([[11638, 1392, 7636, 386, 35861, 95, 2130, 218, 6276, 27,
0]])}
▁Filter all▁items from same host</s>
Filtriraj sve stavke s istog hosta</s>
Giving inputs on both sides
{'input_ids': tensor([[10373, 90, 8255, 98, 605, 6276, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1]]), 'labels': tensor([[11638, 911, 90, 3188, 7, 98, 605, 6276, 0]])}
▁Filter all▁items from same host</s>
Filter all items from same host</s>
Giving targets on both sides
{'input_ids': tensor([[11638, 1392, 7636, 95, 120, 914, 465, 478, 95, 29,
25, 897, 6276, 27, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]), 'labels': tensor([[11638, 1392, 7636, 386, 35861, 95, 2130, 218, 6276, 27,
0]])}
Filtriraj sve stavke s istog hosta</s>
Filtriraj sve stavke s istog hosta</s>
When I give identical strings on both sides, in English or Croatian, the two sides produce slightly different ids, showing that different tokenizers are involved. You can also see that the differing ids sometimes map back to an identical string, and sometimes not.
But when I print out the model, we see that it actually uses a single shared vocabulary, which makes the two spm models seem a bit pointless:
(encoder): MarianEncoder(
  (embed_tokens): Embedding(57680, 512, padding_idx=57679)
  ...
(decoder): MarianDecoder(
  (embed_tokens): Embedding(57680, 512, padding_idx=57679)
  ...
(lm_head): Linear(in_features=512, out_features=57680, bias=False)
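A quick way to confirm the tie with the model already loaded in the script above (a sketch; comparing data_ptr() just asks whether the tensors share the same storage):
enc = model.get_encoder().embed_tokens.weight
dec = model.get_decoder().embed_tokens.weight
print(enc.data_ptr() == dec.data_ptr())  # True if encoder/decoder embeddings are shared
print(model.lm_head.weight.data_ptr() == dec.data_ptr())  # True if lm_head is tied to them too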
I haven't got as far as finding out whether a non-shared vocabulary is possible, but I have yet to see evidence of one.
For Marian-based models, HuggingFace now supports separate vocabularies for source and target, but some models may not use them, especially older ones.
(As you know, OPUS-MT models are based on MarianMT. The MarianMT framework supports it.)
Before https://github.com/huggingface/transformers/pull/15831, HuggingFace used a shared vocabulary file for Marian.
This PR updates the Marian model:
To allow not sharing embeddings between encoder and decoder.
Allow tying only decoder embeddings with lm_head.
Separate two vocabs in tokenizer for src and tgt language
...
share_encoder_decoder_embeddings: to indicate if emb should be shared or not
So models trained with earlier versions of the framework, or with that parameter set to true, have only one shared vocabulary file for source and target.
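That suggests a quick way to check any given checkpoint. A sketch (share_encoder_decoder_embeddings is the config flag the PR introduces; older checkpoints may simply not carry it):
from transformers import AutoConfig

config = AutoConfig.from_pretrained('Helsinki-NLP/opus-mt-en-zls')
# Missing flag or True -> one shared vocabulary; False -> separate src/tgt vocabs.
print(getattr(config, 'share_encoder_decoder_embeddings', True))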
I'm attempting to store some data in the session storage and I'm getting the same cookie error as this guy: the cookie is over the system byte limit of 4096.
This seems pretty straightforward: don't attempt to store more than the system limit in the session. Right, but I'm not attempting to do that. Clearly the cookie is over 4096 bytes, and my additions have caused it to overflow, but that doesn't explain where the data is.
The data I'm attempting to store is only 1500 bytes. In fact, the entire session being saved (the errored session) is 1500 bytes. That's nowhere near the overflow limit. So that means one thing for certain: the data stored in :plug_session inside of conn is not the only data being stored inside the session cookie.
This is the session that's throwing the CookieOverflowError:
:plug_session => %{
"_csrf_token" => "XmE4kgdxk4D0NwwlfTL77Ic62t123123sdfh1s",
"page_trail" => [{"/", "Catalog"}, {'/', "Catalog"}],
"shopping_cart_changeset" => #Ecto.Changeset<
action: nil,
changes: %{
order: #Ecto.Changeset<
action: :insert,
changes: %{
address: #Ecto.Changeset<
action: :insert,
changes: %{
address_one: "800 Arola Drive, apt 321, apt 321",
address_two: "apt 321",
city: "Wooster",
company: "Thomas",
country: "US",
name: "user one",
phone: "3305551111",
state: "WV",
zip_code: "44691"
},
errors: [],
data: #FulfillmentCart.Addresses.Address<>,
valid?: true
>,
priority: false,
shipping_method: #Ecto.Changeset<
action: :insert,
changes: %{id: 2, is_priority?: false, name: "3 Day Select"},
errors: [],
data: #FulfillmentCart.ShippingMethods.ShippingMethod<>,
valid?: true
>
},
errors: [],
data: #FulfillmentCart.Orders.Order<>,
valid?: true
>
},
errors: [],
data: #FulfillmentCart.ShoppingCarts.ShoppingCart<>,
valid?: true
>,
"user_id" => 8
},
I actually followed this guide on decoding a Phoenix session cookie, and I can inspect the session from just before the error.
Which gives me:
iex(8)> [_, payload, _] = String.split(cookie, ".", parts: 3)
["SFMyNTY",
"g3QAAAADbQAAAAtfY3NyZl90b2tlbm0AAAAYWU92dkRfVDh5UXlRTUh4TGlpRTQxOFREbQAAAApwYWdlX3RyYWlsbAAAAAJoAm0AAAABL20AAAAHQ2F0YWxvZ2gCawABL20AAAAHQ2F0YWxvZ2ptAAAAB3VzZXJfaWRhCA",
"Ytg5oklzyWMvtu1vyXVvQ2xBzdtMnS9zVth7LIRALsU"]
iex(9)> {:ok, encoded_term } = Base.url_decode64(payload, padding: false)
{:ok,
<<131, 116, 0, 0, 0, 3, 109, 0, 0, 0, 11, 95, 99, 115, 114, 102, 95, 116, 111,
107, 101, 110, 109, 0, 0, 0, 24, 89, 79, 118, 118, 68, 95, 84, 56, 121, 81,
121, 81, 77, 72, 120, 76, 105, 105, 69, 52, 49, ...>>}
iex(10)> :erlang.binary_to_term(encoded_term)
%{
"_csrf_token" => "YOvvD_T8yQyQMHxLiiE418TD",
"page_trail" => [{"/", "Catalog"}, {'/', "Catalog"}],
"user_id" => 8
}
iex(11)>
This is 127 bytes, so the addition of the 1500 bytes isn't the problem. It's some other allocation of storage that isn't represented inside the session. What is that?
My assumption about the byte size of the text in :plug_session was correct, but the cookie is overflowing not because the decoded text in :plug_session is too big, but because its encoded version is. I figured this out by creating multiple cookies and looking at the byte_size of the data.
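The check itself is simple. A sketch (assuming the session map is available under conn.private[:plug_session], which is where Plug keeps it):
session = conn.private[:plug_session]
# The cookie stores the term-encoded session, then base64-encodes and signs it,
# so the on-the-wire size is much larger than the printable text.
encoded = :erlang.term_to_binary(session)
IO.inspect(byte_size(encoded), label: "term_to_binary bytes")
IO.inspect(byte_size(Base.url_encode64(encoded)), label: "base64 bytes")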
Save a new cookie
conn = put_resp_cookie(conn, "address",
changeset.changes.order.changes.address.changes, sign: true)
Get a saved cookie
def get_resp_cookie(conn, attribute) do
  case conn.req_cookies[attribute] do
    nil ->
      {:invalid, %{}}

    cookie ->
      # Signed cookies have the form "<protection>.<payload>.<signature>".
      [_, payload, _] = String.split(cookie, ".", parts: 3)
      {:ok, encoded_term} = Base.url_decode64(payload, padding: false)
      # The decoded term is a {value, max, max_age} tuple; we only want the value.
      {val, _max, _max_age} = :erlang.binary_to_term(encoded_term)
      {:valid, val}
  end
end
get_resp_cookie/2 pattern matching
address_map =
  case Connection.get_resp_cookie(conn, "address") do
    {:invalid, val} ->
      IO.puts("Unable to find cookie.")
      val

    {:valid, val} ->
      val
  end
I made a few changes to the way I save the data since I posted this. Namely, I am now storing a map of changes, not the actual changeset... which means the session most likely would have worked for me all along.
I think the answer to this issue was that the encoded %Ecto.Changeset{} was too big for the cookie to hold.
If you use this solution, be wary: you have to manage the newly created cookies yourself.
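To see why the map of changes fits where the changeset did not, compare their encoded sizes. A sketch, with changeset bound to the %Ecto.Changeset{} from the session above:
changeset_bytes = changeset |> :erlang.term_to_binary() |> byte_size()

map_bytes =
  changeset.changes.order.changes.address.changes
  |> :erlang.term_to_binary()
  |> byte_size()

# The struct drags in nested changesets, data structs and metadata,
# so changeset_bytes dwarfs map_bytes even though the visible text is small.
IO.inspect({changeset_bytes, map_bytes})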
I am executing the Ansible task below based on a fact value gathered by Ansible.
- hosts: mirth1
  vars:
    env: dev
  gather_facts: true
  tasks:
    #- name: Build corresponding JSON files
    - name: Build a Stream
      shell: uptime
      when: tags['ha'] == 'active'
Fact values gathered via Ansible:
"hostvars[inventory_hostname]": "{'httpd_port': 80, 'ntpserver': '192.168.1.2', 'ansible_user': 'ec2-user', 'inventory_file': '/Users/bada/Documents/we-ansible/inventory_awsplugin/test', 'inventory_dir': '/Users/bada/Documents/we-ansible/inventory_awsplugin/test', 'ami_launch_index': 0, 'image_id': 'ami-0915bcb5fa654992', 'instance_id': 'i-06bd5115d656789a9', 'instance_type': 't3.small', 'key_name': 'mykey', 'launch_time': datetime.datetime(2021, 3, 5, 5, 44, 35, tzinfo=tzutc()), 'monitoring': {'state': 'disabled'}, 'placement': {'availability_zone': 'us-east-1a', 'group_name': '', 'region': 'us-east-1'}, 'private_dns_name': 'ip-172-12-16-224.ec2.internal', 'private_ip_address': '172.12.16.224', 'state': {'code': 16, 'name': 'running'}, 'subnet_id': 'subnet-04fcfc6', 'vpc_id': 'vpc-0cf0a45', 'architecture': 'x86_64', 'block_device_mappings': [{'device_name': '/dev/xvda', 'ebs': {'attach_time': datetime.datetime(2021, 3, 5, 5, 44, 36, tzinfo=tzutc()), 'delete_on_termination': True, 'status': 'attached', 'volume_id': 'vol-057912c770df38754'}}], 'client_token': 'a0ce63e5', 'ebs_optimized': False, 'ena_support': True, 'hypervisor': 'xen', 'network_interfaces': [{'attachment': {'attach_time': datetime.datetime(2021, 3, 5, 5, 44, 35, tzinfo=tzutc()), 'attachment_id': 'eni-attach-03fc486b4c06970ce', 'delete_on_termination': True, 'device_index': 0, 'status': 'attached', 'network_card_index': 0}, 'description': '', 'groups': [{'group_name': 'sg_priv_vce_test', 'group_id': 'sg-0b89c5'}], 'ipv6_addresses': [], 'mac_address': '12:0d:44:15:55:a9', 'network_interface_id': 'eni-0772a53', 'owner_id': '58435', 'private_dns_name': 'ip-172-16-12-224.ec2.internal', 'private_ip_address': '172.16.12.224', 'private_ip_addresses': [{'primary': True, 'private_dns_name': 'ip-172-16-12-224.ec2.internal', 'private_ip_address': '172.16.12.224'}], 'source_dest_check': True, 'status': 'in-use', 'subnet_id': 'subnet-04fcfc42cda7cdaa6', 'vpc_id': 'vpc-0cf0a4dded14c2f05', 'interface_type': 'interface'}], 'root_device_name': '/dev/xvda', 'root_device_type': 'ebs', 'security_groups': [{'group_name': 'sg_priv_test', 'group_id': 'sg-0b8b36c5'}], 'source_dest_check': True, 'tags': {'ha': 'active', 'Platform': 'linux', 'Role': 'Mirth', 'Environment': 'test', 'Name': 'we-test-mirth1', 'PrincipalId': 'AIDAIXY2CRBZU5K', 'Owner': 'we-ansible', 'Product': 'we'}, 'virtualization_type': 'hvm', 'cpu_options': {'core_count': 1, 'threads_per_core': 2}, 'capacity_reservation_specification': {'capacity_reservation_preference': 'open'}, 'hibernation_options': {'configured': False}, 'metadata_options': {'state': 'applied', 'http_tokens': 'optional', 'http_put_response_hop_limit': 1, }, 'enclave_options': {'enabled': False}, 'inventory_hostname': '172.16.12.224', 'group_names': ['aws_ec2', 'linux', 'mirthlin_servers'], 'ansible_facts': {}, 'playbook_dir': '/Users/bada/Docs/we-ansible', 'ansible_playbook_python': '/usr/local/opt/python#3.9/bin/python3.9', 'ansible_config_file': '/Users/bada/Documents/vce-ansible/ansible.cfg', 'groups': {'all': ['172.16.13.21','mirth_servers': ['172.16.12.224']}, 'omit': '__omit_place_holder__6af09f538bad60133ef3eb0949ec09272254aad1', 'ansible_version': {'string': '2.10.3', 'full': '2.10.3', 'major': 2, 'minor': 10, 'revision': 3}, 'ansible_check_mode': False, 'ansible_diff_mode': False, 'ansible_forks': 5, 'ansible_inventory_sources': ['/Users/bada/Documents/vce-ansible/inventory_awsplugin/test'], 'ansible_verbosity': 0}"
As you can see in the facts above, Ansible stores the instance tags as a fact. I am trying to execute a task when the tag ha has the value active, but it keeps failing. Could you let me know how I can execute a task based on a stored fact value?
As pointed out in the documentation:
Ansible facts are data related to your remote systems, including operating systems, IP addresses, attached filesystems, and more. You can access this data in the ansible_facts variable.
Source: https://docs.ansible.com/ansible/latest/user_guide/playbooks_vars_facts.html#ansible-facts
So your task should be:
- shell: uptime
  when: ansible_facts.tags['ha'] == 'active'
  ## or ansible_facts.tags.ha
  ## or ansible_facts['tags']['ha']
  ## if you want to keep them coherent
And since your debug output was gathered this way, you could also have used:
when: hostvars[inventory_hostname].tags.ha == 'active'
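Either way, a throwaway debug task is a quick sanity check that the tag lives where you think it does before wiring up the condition (a sketch):
- name: Show the ha tag for this host
  debug:
    var: hostvars[inventory_hostname].tags.ha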
I am trying to train my models and validate them using sklearn's cross-validation. What I want to do is use the same folds across all of my models (which will be run from different Python scripts).
How can I do this? Should I save the folds to a file, save the kfold object, or just use the same seed?
kfold = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
Well, the easiest way I found to save the folds was simply to collect them from the StratifiedKFold split method by looping over it, then store them in a JSON file:
import json
import numpy as np
from sklearn.model_selection import StratifiedKFold

kfold = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)

folds = {}
count = 1
for train, test in kfold.split(np.zeros(len(y)), y.argmax(1)):
    folds['fold_{}'.format(count)] = {}
    folds['fold_{}'.format(count)]['train'] = train.tolist()
    folds['fold_{}'.format(count)]['test'] = test.tolist()
    count += 1

print(len(folds) == n_splits)  # sanity check: one entry per split

# dump folds to json
with open('folds.json', 'w') as fp:
    json.dump(folds, fp)
Note 1: argmax is used here because my y values are one-hot encoded, so we need to recover the actual predicted/ground-truth class index.
Now to load it from any other script:
import json

# load the dict to be used
with open('folds.json') as f:
    kfolds = json.load(f)
From here we can easily just loop over the elements in the dict:
for key, val in kfolds.items():
    print(key)
    train = val['train']
    test = val['test']
Our json file looks like so:
{"fold_1": {"train": [193, 2405, 2895, 565, 1215, 274, 2839, 1735, 2536, 1196, 40, 2541, 980,...SNIP...830, 1032], "test": [1, 5, 6, 7, 10, 15, 20, 26, 37, 45, 52, 54, 55, 59, 60, 64, 65, 68, 74, 76, 78, 90, 100, 106, 107, 113, 122, 124, 132, 135, 141, 146,...SNIP...]}
I have an import script that imports well over 2000 products, including their images. I run this script via the CLI because I feel that this is the best way to go speed-wise, even though I have the same import script available and executable in the Magento admin as an extension. The script runs pretty well. Almost perfect! However, sometimes addToImageGallery malfunctions and results in some products having No Image as the default product image, with the only other image not selected as a default at all. How do I mass-update all products to set the first image in the product's media gallery as the default 'base', 'image' and 'thumbnail' image(s)?
I found a couple of tricks for doing this (and more) at this link:
http://www.magentocommerce.com/boards/viewthread/59440/ (Thanks transio!)
Although, for Magento 1.6.2.0 (which I use), the first SQL trick there (Trick 1 - Auto-set default base, thumb, small image to first image) needs a bit of modification.
On the second-to-last line there is an AND ev.attribute_id IN (70, 71, 72) part. This points to attribute IDs which are probably no longer correct in Magento 1.6.2.0. To fix this, using any MySQL query tool (phpMyAdmin or MySQL Query Browser), I took a look at the catalog_product_entity_varchar table. There should be entries like:
value_id, entity_type_id, attribute_id, store_id, entity_id, value
..
146649, 4, 116, 0, 1, '2'
146650, 4, 76, 0, 1, ''
146651, 4, 78, 0, 1, ''
146652, 4, 79, 0, 1, '/B/0/B05-01.jpg'
146653, 4, 80, 0, 1, '/B/0/B05-01.jpg'
146654, 4, 81, 0, 1, '/B/0/B05-01.jpg'
146655, 4, 96, 0, 1, ''
146656, 4, 100, 0, 1, ''
146657, 4, 102, 0, 1, 'container2'
..
My money was on the group of three image paths as possible replacements. So the resulting SQL now should be:
UPDATE catalog_product_entity_media_gallery AS mg,
catalog_product_entity_media_gallery_value AS mgv,
catalog_product_entity_varchar AS ev
SET ev.value = mg.value
WHERE mg.value_id = mgv.value_id
AND mg.entity_id = ev.entity_id
AND ev.attribute_id IN (79, 80, 81) # <-- attribute IDs updated here
AND mgv.position = 1;
So I committed to it, ran it, and... presto! All fixed! You might also want to encapsulate this in a transaction (see the sketch below), but that is outside the scope of this question.
Well, this is the fix that worked for me so far! If there are any more out there, please share!
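On the transaction point, a minimal version would look like this (a sketch; spot-check a few products before committing):
START TRANSACTION;

UPDATE catalog_product_entity_media_gallery AS mg,
  catalog_product_entity_media_gallery_value AS mgv,
  catalog_product_entity_varchar AS ev
SET ev.value = mg.value
WHERE mg.value_id = mgv.value_id
AND mg.entity_id = ev.entity_id
AND ev.attribute_id IN (79, 80, 81)
AND mgv.position = 1;

-- verify a few products here, then:
COMMIT; -- or ROLLBACK; if anything looks wrong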
There was:
146652, 4, 79, 0, 1, '/B/0/B05-01.jpg'
146653, 4, 80, 0, 1, '/B/0/B05-01.jpg'
146654, 4, 81, 0, 1, '/B/0/B05-01.jpg'
So it should be:
AND ev.attribute_id IN (79, 80, 81) # <-- attribute IDs updated here
instead of:
AND ev.attribute_id IN (78, 80, 81) # <-- attribute IDs updated here
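To avoid guessing or mistyping the IDs altogether, they can also be looked up by attribute code (a sketch; on a stock Magento 1.x install, entity_type_id 4 is the catalog product entity):
SELECT attribute_id, attribute_code
FROM eav_attribute
WHERE entity_type_id = 4
AND attribute_code IN ('image', 'small_image', 'thumbnail');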
For anyone looking for something similar, here is the full corrected query:
UPDATE catalog_product_entity_media_gallery AS mg,
catalog_product_entity_media_gallery_value AS mgv,
catalog_product_entity_varchar AS ev
SET ev.value = mg.value
WHERE mg.value_id = mgv.value_id
AND mg.entity_id = ev.entity_id
AND ev.attribute_id IN (79, 80, 81) # <-- attribute IDs updated here
AND mgv.position = 1;