How can I add multiple datasets in the Chart.js package in Laravel - laravel

How can I add multiple datasets to the ConsoleTVs/Charts chart.js package in Laravel?
My single-dataset code works fine:
$data['transactionChart'] = new TransactionChart();
$data['transactionChart']->dataset('Sample', 'line', [100, 65, 84, 45, 90])
    ->options(['borderColor' => '#97d881']);

Simply use ->dataset() multiple times.
https://github.com/ConsoleTVs/Charts/issues/331
Example:
$data['transactionChart'] = new TransactionChart();
$data['transactionChart']->dataset('Sample', 'line', [100, 65, 84, 45, 90])
    ->options(['borderColor' => '#97d881']);
$data['transactionChart']->dataset('Another Sample', 'line', [100, 65, 84, 45, 90])
    ->options(['borderColor' => '#ff0000']);

Related

How does Phoenix conglomerate cookie data?

I'm attempting to store some data in the session storage and I'm getting the same cookie error as this guy: the cookie is over the system byte limit of 4096.
This seems pretty straightforward: don't attempt to store more than the system limit in the session. Right, but I'm not attempting to do that. Clearly, the cookie is over 4096 bytes and my additions have caused it to overflow, but that doesn't explain where the data is.
The data I'm attempting to store is only 1500 bytes. In fact, the entire session that is being saved is 1500 bytes (the errored session). That's nowhere near the overflow limit. So that means one thing for certain: the data stored in :plug_session inside of conn is not the only data being stored inside of the session cookie.
This is the session that's throwing the CookieOverflowError:
:plug_session => %{
  "_csrf_token" => "XmE4kgdxk4D0NwwlfTL77Ic62t123123sdfh1s",
  "page_trail" => [{"/", "Catalog"}, {'/', "Catalog"}],
  "shopping_cart_changeset" => #Ecto.Changeset<
    action: nil,
    changes: %{
      order: #Ecto.Changeset<
        action: :insert,
        changes: %{
          address: #Ecto.Changeset<
            action: :insert,
            changes: %{
              address_one: "800 Arola Drive, apt 321, apt 321",
              address_two: "apt 321",
              city: "Wooster",
              company: "Thomas",
              country: "US",
              name: "user one",
              phone: "3305551111",
              state: "WV",
              zip_code: "44691"
            },
            errors: [],
            data: #FulfillmentCart.Addresses.Address<>,
            valid?: true
          >,
          priority: false,
          shipping_method: #Ecto.Changeset<
            action: :insert,
            changes: %{id: 2, is_priority?: false, name: "3 Day Select"},
            errors: [],
            data: #FulfillmentCart.ShippingMethods.ShippingMethod<>,
            valid?: true
          >
        },
        errors: [],
        data: #FulfillmentCart.Orders.Order<>,
        valid?: true
      >
    },
    errors: [],
    data: #FulfillmentCart.ShoppingCarts.ShoppingCart<>,
    valid?: true
  >,
  "user_id" => 8
},
I actually followed this guide on decoding a Phoenix session cookie, and I can read the session right up until the error.
Which gives me:
iex(8)> [_, payload, _] = String.split(cookie, ".", parts: 3)
["SFMyNTY",
"g3QAAAADbQAAAAtfY3NyZl90b2tlbm0AAAAYWU92dkRfVDh5UXlRTUh4TGlpRTQxOFREbQAAAApwYWdlX3RyYWlsbAAAAAJoAm0AAAABL20AAAAHQ2F0YWxvZ2gCawABL20AAAAHQ2F0YWxvZ2ptAAAAB3VzZXJfaWRhCA",
"Ytg5oklzyWMvtu1vyXVvQ2xBzdtMnS9zVth7LIRALsU"]
iex(9)> {:ok, encoded_term } = Base.url_decode64(payload, padding: false)
{:ok,
<<131, 116, 0, 0, 0, 3, 109, 0, 0, 0, 11, 95, 99, 115, 114, 102, 95, 116, 111,
107, 101, 110, 109, 0, 0, 0, 24, 89, 79, 118, 118, 68, 95, 84, 56, 121, 81,
121, 81, 77, 72, 120, 76, 105, 105, 69, 52, 49, ...>>}
iex(10)> :erlang.binary_to_term(encoded_term)
%{
  "_csrf_token" => "YOvvD_T8yQyQMHxLiiE418TD",
  "page_trail" => [{"/", "Catalog"}, {'/', "Catalog"}],
  "user_id" => 8
}
iex(11)>
This is 127 bytes, so adding the 1500 bytes isn't the problem by itself. Something else is taking up storage that isn't represented inside the session. What is it?
My assumption about the byte size of the text itself in :plug_session was correct, but the cookie is overflowing not because the decoded text in :plug_session is too big, but because the encoded version of :plug_session is. I figured this out by creating multiple cookies and comparing the byte_size of their data.
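You can see the blow-up directly in iex. A rough sketch, assuming changeset holds the %Ecto.Changeset{} from above (the exact numbers will vary):
# The bare map of changes encodes small...
changes = changeset.changes.order.changes.address.changes
byte_size(:erlang.term_to_binary(changes))

# ...but the full changeset drags in data:, errors:, and the nested
# changesets, and Base64 then inflates whatever is stored by roughly a third.
byte_size(:erlang.term_to_binary(changeset))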
Save a new cookie
conn = put_resp_cookie(conn, "address",
  changeset.changes.order.changes.address.changes, sign: true)
Get a saved cookie
def get_resp_cookie(conn, attribute) do
  cookie = conn.req_cookies[attribute]

  case cookie != nil do
    false ->
      {:invalid, %{}}

    true ->
      # A signed cookie has the shape "header.payload.signature".
      [_, payload, _] = String.split(cookie, ".", parts: 3)
      {:ok, encoded_term} = Base.url_decode64(payload, padding: false)
      # NOTE: this reads the payload without verifying the signature.
      {val, _max, _max_age} = :erlang.binary_to_term(encoded_term)
      {:valid, val}
  end
end
get_resp_cookie/2 pattern matching
address_map =
  case Connection.get_resp_cookie(conn, "address") do
    {:invalid, val} -> IO.puts("Unable to find cookie."); val
    {:valid, val} -> val
  end
I made a few changes to the way I save the data since I posted this. Namely, I am now storing a map of changes, not the actual changeset, which means the session most likely would have worked for me all along.
I think the answer to this issue is that the encoded %Ecto.Changeset{} was too big for the cookie to hold.
If you use this solution, be wary: you have to manage the newly created cookies yourself.
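If you would rather keep using the regular session instead of extra cookies, here is a minimal sketch of the store-only-the-changes idea (put_session/get_session come from Plug.Conn; Address.changeset/2 is assumed to be the usual schema changeset function):
# Store only the plain map of changes; it encodes far smaller than the changeset.
changes = changeset.changes.order.changes.address.changes
conn = put_session(conn, :address_changes, changes)

# Later, rebuild a changeset from the stored map.
changes = get_session(conn, :address_changes)
changeset =
  FulfillmentCart.Addresses.Address.changeset(
    %FulfillmentCart.Addresses.Address{},
    changes
  )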

gnuradio error: found character '%' that cannot start any token

I get this error when I run GNU Radio Companion. As a result, the block defined in multi_rtl_source.block.yml doesn't load and doesn't show up in the menu:
ERROR:gnuradio.grc.core.platform:Error while loading /usr/local/share/gnuradio/grc/blocks/multi_rtl_source.block.yml
Traceback (most recent call last):
  File "/usr/lib/python3.8/site-packages/gnuradio/grc/core/platform.py", line 169, in build_library
    data = cache.get_or_load(file_path)
  File "/usr/lib/python3.8/site-packages/gnuradio/grc/core/cache.py", line 66, in get_or_load
    data = yaml.safe_load(fp)
  File "/usr/lib/python3.8/site-packages/yaml/__init__.py", line 162, in safe_load
    return load(stream, SafeLoader)
  File "/usr/lib/python3.8/site-packages/yaml/__init__.py", line 114, in load
    return loader.get_single_data()
  File "/usr/lib/python3.8/site-packages/yaml/constructor.py", line 49, in get_single_data
    node = self.get_single_node()
  File "/usr/lib/python3.8/site-packages/yaml/composer.py", line 36, in get_single_node
    document = self.compose_document()
  File "/usr/lib/python3.8/site-packages/yaml/composer.py", line 55, in compose_document
    node = self.compose_node(None, None)
  File "/usr/lib/python3.8/site-packages/yaml/composer.py", line 84, in compose_node
    node = self.compose_mapping_node(anchor)
  File "/usr/lib/python3.8/site-packages/yaml/composer.py", line 133, in compose_mapping_node
    item_value = self.compose_node(node, item_key)
  File "/usr/lib/python3.8/site-packages/yaml/composer.py", line 82, in compose_node
    node = self.compose_sequence_node(anchor)
  File "/usr/lib/python3.8/site-packages/yaml/composer.py", line 111, in compose_sequence_node
    node.value.append(self.compose_node(node, index))
  File "/usr/lib/python3.8/site-packages/yaml/composer.py", line 84, in compose_node
    node = self.compose_mapping_node(anchor)
  File "/usr/lib/python3.8/site-packages/yaml/composer.py", line 127, in compose_mapping_node
    while not self.check_event(MappingEndEvent):
  File "/usr/lib/python3.8/site-packages/yaml/parser.py", line 98, in check_event
    self.current_event = self.state()
  File "/usr/lib/python3.8/site-packages/yaml/parser.py", line 428, in parse_block_mapping_key
    if self.check_token(KeyToken):
  File "/usr/lib/python3.8/site-packages/yaml/scanner.py", line 116, in check_token
    self.fetch_more_tokens()
  File "/usr/lib/python3.8/site-packages/yaml/scanner.py", line 258, in fetch_more_tokens
    raise ScannerError("while scanning for the next token", None,
yaml.scanner.ScannerError: while scanning for the next token
found character '%' that cannot start any token
  in "/usr/local/share/gnuradio/grc/blocks/multi_rtl_source.block.yml", line 33, column 5
ERROR:gnuradio.grc.core.platform:while scanning for the next token
found character '%' that cannot start any token
in "/usr/local/share/gnuradio/grc/blocks/multi_rtl_source.block.yml", line 33, column 5
I also get this:
>>> Check: /usr/local/share/gnuradio/grc/blocks/multi_rtl_source.block.yml
>>> FlowGraph Error: while scanning for the next token
found character '%' that cannot start any token
in "/usr/local/share/gnuradio/grc/blocks/multi_rtl_source.block.yml", line 33, column 5
There is a template directive at line 33, column 5:
- id: sync_gain0
  label: Ch0: Sync RF Gain (dB)
  category: Synchronization
  dtype: real
  default: 10
  hide: \\
    %if nchan() > n:          <== line 33
    part
    %else:
    all
    %endif
The full code of multi_rtl_source.block.yml can be found here.
There is an article in the GNU Radio wiki which says that you can place YAML directives in GRC blocks. So where does this error come from, and how do I fix it?
hide: \\
In YAML the correct way to write a multi-line string is with the > or | specifiers (see https://yaml-multiline.info/), not \\. For example:
hide: |
Alternatively, you can write the hide condition on a single line like this:
hide: ${'part' if nchan > 0 else 'all'}
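To convince yourself the single-line form is safe, you can feed it straight to PyYAML, the same parser GRC uses. A small check (nchan > 0 is just a stand-in condition):
import yaml

# To the YAML parser the whole ${...} expression is just a plain string;
# GRC evaluates it as a template expression later.
snippet = """\
- id: sync_gain0
  label: 'Ch0: Sync RF Gain (dB)'
  dtype: real
  default: 10
  hide: ${'part' if nchan > 0 else 'all'}
"""

print(yaml.safe_load(snippet))
# [{'id': 'sync_gain0', 'label': 'Ch0: Sync RF Gain (dB)', 'dtype': 'real',
#   'default': 10, 'hide': "${'part' if nchan > 0 else 'all'}"}]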
And here is how to fix it in gen_multi_rtl_block.py
@@ -104,57 +100,32 @@ template_p = """\
   category: Synchronization
   dtype: real
   default: 10
-  hide: \\
-${"%"} if nchan() > n:
-part
-${"%"} else:
-all
-${"%"} endif
+  hide: ${'$'}{'part' if nchan > ${n} else 'all'}

K-fold cross validation - save folds for different models

I am trying to train my models and validate them using sklearn's cross-validation. What I want to do is use the same folds across all of my models (which will be run from different Python scripts).
How can I do this? Should I save the folds to a file, save the kfold object, or just use the same seed?
kfold = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
Well, the easiest way I found to save the folds was simply to collect them from the StratifiedKFold split method by looping over it, then store them in a JSON file:
kfold = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)

folds = {}
count = 1
for train, test in kfold.split(np.zeros(len(y)), y.argmax(1)):
    folds['fold_{}'.format(count)] = {}
    folds['fold_{}'.format(count)]['train'] = train.tolist()
    folds['fold_{}'.format(count)]['test'] = test.tolist()
    count += 1

print(len(folds) == n_splits)  # assert we have the same number of splits

# dump folds to json
import json
with open('folds.json', 'w') as fp:
    json.dump(folds, fp)
Note 1: argmax is used here because my y values are one-hot encoded, so we need to recover the predicted/ground-truth class label for stratification.
Now to load it from any other script:
# load to dict to be used
with open('folds.json') as f:
    kfolds = json.load(f)
From here we can easily just loop over the elements in the dict:
for key, val in kfolds.items():
    print(key)
    train = val['train']
    test = val['test']
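Putting it together in another script, a short sketch (make_classification and LogisticRegression are stand-ins for your real data and models):
import json

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Stand-in data; in the real scripts X and y come from your own dataset.
X, y = make_classification(n_samples=3000, random_state=0)

with open('folds.json') as f:
    kfolds = json.load(f)

for key, val in kfolds.items():
    train_idx = np.asarray(val['train'])
    test_idx = np.asarray(val['test'])

    model = LogisticRegression(max_iter=1000)
    model.fit(X[train_idx], y[train_idx])
    print(key, model.score(X[test_idx], y[test_idx]))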
Our json file looks like so:
{"fold_1": {"train": [193, 2405, 2895, 565, 1215, 274, 2839, 1735, 2536, 1196, 40, 2541, 980,...SNIP...830, 1032], "test": [1, 5, 6, 7, 10, 15, 20, 26, 37, 45, 52, 54, 55, 59, 60, 64, 65, 68, 74, 76, 78, 90, 100, 106, 107, 113, 122, 124, 132, 135, 141, 146,...SNIP...]}

use for loop to call multiple functions in lua

I want to call multiple methods in Lua that are very similar except their parameters change by one character. The way I'm doing it now works but is extremely inefficient.
function scene:createScene(event)
    screenGroup = self.view

    level1 = display.newRoundedRect( 50, 110, 50, 50, 5 )
    level1:setFillColor( 100, 0, 200 )

    level2 = display.newRoundedRect( 105, 110, 50, 50, 5 )
    level2:setFillColor( 100, 200, 0 )

    --and so on so forth

    screenGroup:insert( level1 )
    screenGroup:insert( level2 )
    screenGroup:insert( level3 )
    screenGroup:insert( level4 )
end
I plan on extending the screenGroup:insert calls to hundreds of levels, maybe up to level300. As you can see, the way I'm doing it now is inefficient. I tried doing
for i = 1, 4, 1 do
    screenGroup:insert(level..i)
end
but I get the error "table expected."
The best way in this case is probably to use a table:
local levels = {}

levels[1] = display.newRoundedRect( 50, 110, 50, 50, 5 )
levels[1]:setFillColor( 100, 0, 200 )

levels[2] = display.newRoundedRect( 105, 110, 50, 50, 5 )
levels[2]:setFillColor( 100, 200, 0 )

--and so on so forth

for _, level in ipairs(levels) do
    screenGroup:insert(level)
end
For other alternatives, check the SO answer from @EtanReisner's comment.
If your 'level' tables are global, which it appears they are, you can use getfenv to index them.
for i = 1, number_of_levels do
    screenGroup:insert(getfenv()["level" .. i])
end
getfenv returns the environment, with all global variables, in the form of a dictionary. Therefore, you can index it like a normal table: getfenv()["key"].
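Note that getfenv only exists in Lua 5.1 (which Corona uses); it was removed in Lua 5.2. The global table _G works in both, so a more portable version of the same loop is:
for i = 1, number_of_levels do
    screenGroup:insert(_G["level" .. i])
end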

magento - Update product default image to first image in image gallery

I have an import script that imports well over 2000+ products including their images. I run this script via CLI because I feel that this is the best way to go speed-wise, even though I have the same import script available and executable in the Magento admin as an extension. The script runs pretty well. Almost perfect! However, sometimes addToImageGallery somehow malfunctions and results in some products having No Image as the default product image, with their only other image not selected as a default at all. How do I mass-update all products to set the first image in the product's media gallery as the default 'base', 'image' and 'thumbnail' image(s)?
I found a couple of tricks on doing this (and more) on this link:
http://www.magentocommerce.com/boards/viewthread/59440/ (Thanks transio!)
Although, for Magento 1.6.2.0 (which I use), the first SQL trick there (Trick 1 - Auto-set default base, thumb, small image to first image.) needs a bit of modification.
On the second-to-last line there is an AND ev.attribute_id IN (70, 71, 72) part. These attribute IDs will probably no longer be relevant in Magento 1.6.2.0. To fix this, using any MySQL query tool (phpMyAdmin or MySQL Query Browser), I took a look at the catalog_product_entity_varchar table. There should be entries like:
value_id, entity_type_id, attribute_id, store_id, entity_id, value
..
146649, 4, 116, 0, 1, '2'
146650, 4, 76, 0, 1, ''
146651, 4, 78, 0, 1, ''
146652, 4, 79, 0, 1, '/B/0/B05-01.jpg'
146653, 4, 80, 0, 1, '/B/0/B05-01.jpg'
146654, 4, 81, 0, 1, '/B/0/B05-01.jpg'
146655, 4, 96, 0, 1, ''
146656, 4, 100, 0, 1, ''
146657, 4, 102, 0, 1, 'container2'
..
My money was on the group of three image paths as possible replacements. So the resulting SQL now should be:
UPDATE catalog_product_entity_media_gallery AS mg,
       catalog_product_entity_media_gallery_value AS mgv,
       catalog_product_entity_varchar AS ev
SET ev.value = mg.value
WHERE mg.value_id = mgv.value_id
  AND mg.entity_id = ev.entity_id
  AND ev.attribute_id IN (79, 80, 81) # <-- attribute IDs updated here
  AND mgv.position = 1;
So I committed to it, ran it and... presto! All fixed! You might also want to wrap this in a transaction, but that is outside this question's scope.
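If you want to check first, a read-only version of the same join shows exactly which rows the UPDATE will touch (same attribute-ID assumption as above):
SELECT ev.entity_id,
       ev.attribute_id,
       ev.value AS current_value,
       mg.value AS new_value
FROM catalog_product_entity_media_gallery AS mg,
     catalog_product_entity_media_gallery_value AS mgv,
     catalog_product_entity_varchar AS ev
WHERE mg.value_id = mgv.value_id
  AND mg.entity_id = ev.entity_id
  AND ev.attribute_id IN (79, 80, 81)
  AND mgv.position = 1;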
Well, this is the fix that worked for me so far! If there are any more out there, please share!
There was:
146652, 4, 79, 0, 1, '/B/0/B05-01.jpg'
146653, 4, 80, 0, 1, '/B/0/B05-01.jpg'
146654, 4, 81, 0, 1, '/B/0/B05-01.jpg'
So it should be:
AND ev.attribute_id IN (79, 80, 81) # <-- attribute IDs updated here
instead of:
AND ev.attribute_id IN (78, 80, 81) # <-- attribute IDs updated here
I was looking for something similar.
UPDATE catalog_product_entity_media_gallery AS mg,
       catalog_product_entity_media_gallery_value AS mgv,
       catalog_product_entity_varchar AS ev
SET ev.value = mg.value
WHERE mg.value_id = mgv.value_id
  AND mg.entity_id = ev.entity_id
  AND ev.attribute_id IN (79, 80, 81) # <-- attribute IDs updated here
  AND mgv.position = 1;
