I get this error when I run GNU Radio Companion, and as a result the block defined by multi_rtl_source.block.yml doesn't work and doesn't show up in the menu:
ERROR:gnuradio.grc.core.platform:Error while loading /usr/local/share/gnuradio/grc/blocks/multi_rtl_source.block.yml
Traceback (most recent call last):
File "/usr/lib/python3.8/site-packages/gnuradio/grc/core/platform.py", line 169, in build_library
data = cache.get_or_load(file_path)
File "/usr/lib/python3.8/site-packages/gnuradio/grc/core/cache.py", line 66, in get_or_load
data = yaml.safe_load(fp)
File "/usr/lib/python3.8/site-packages/yaml/__init__.py", line 162, in safe_load
return load(stream, SafeLoader)
File "/usr/lib/python3.8/site-packages/yaml/__init__.py", line 114, in load
return loader.get_single_data()
File "/usr/lib/python3.8/site-packages/yaml/constructor.py", line 49, in get_single_data
node = self.get_single_node()
File "/usr/lib/python3.8/site-packages/yaml/composer.py", line 36, in get_single_node
document = self.compose_document()
File "/usr/lib/python3.8/site-packages/yaml/composer.py", line 55, in compose_document
node = self.compose_node(None, None)
File "/usr/lib/python3.8/site-packages/yaml/composer.py", line 84, in compose_node
node = self.compose_mapping_node(anchor)
File "/usr/lib/python3.8/site-packages/yaml/composer.py", line 133, in compose_mapping_node
item_value = self.compose_node(node, item_key)
File "/usr/lib/python3.8/site-packages/yaml/composer.py", line 82, in compose_node
node = self.compose_sequence_node(anchor)
File "/usr/lib/python3.8/site-packages/yaml/composer.py", line 111, in compose_sequence_node
node.value.append(self.compose_node(node, index))
File "/usr/lib/python3.8/site-packages/yaml/composer.py", line 84, in compose_node
node = self.compose_mapping_node(anchor)
File "/usr/lib/python3.8/site-packages/yaml/composer.py", line 127, in compose_mapping_node
while not self.check_event(MappingEndEvent):
File "/usr/lib/python3.8/site-packages/yaml/parser.py", line 98, in check_event
self.current_event = self.state()
File "/usr/lib/python3.8/site-packages/yaml/parser.py", line 428, in parse_block_mapping_key
if self.check_token(KeyToken):
File "/usr/lib/python3.8/site-packages/yaml/scanner.py", line 116, in check_token
self.fetch_more_tokens()
File "/usr/lib/python3.8/site-packages/yaml/scanner.py", line 258, in fetch_more_tokens
raise ScannerError("while scanning for the next token", None,
yaml.scanner.ScannerError: while scanning for the next token
found character '%' that cannot start any token
in "/usr/local/share/gnuradio/grc/blocks/multi_rtl_source.block.yml", line 33, column 5
ERROR:gnuradio.grc.core.platform:while scanning for the next token
found character '%' that cannot start any token
in "/usr/local/share/gnuradio/grc/blocks/multi_rtl_source.block.yml", line 33, column 5
I also get this:
>>> Check: /usr/local/share/gnuradio/grc/blocks/multi_rtl_source.block.yml
>>> FlowGraph Error: while scanning for the next token
found character '%' that cannot start any token
in "/usr/local/share/gnuradio/grc/blocks/multi_rtl_source.block.yml", line 33, column 5
There is a YAML directive at line 33, column 5:
-   id: sync_gain0
    label: Ch0: Sync RF Gain (dB)
    category: Synchronization
    dtype: real
    default: 10
    hide: \\
    %if nchan() > n:      <== line 33
    part
    %else:
    all
    %endif
The full code of multi_rtl_source.block.yml can be found here.
There is an article in the GNU Radio wiki which says that you can place YAML directives in GRC blocks. So where does this error come from, and how do I fix it?
The offending line is:
hide: \\
In YAML, the correct way to write a multi-line string is with the > or | specifiers (see https://yaml-multiline.info/), not \\. For example:
hide: |
Alternatively, you can write the hide condition on a single line, like this:
hide: ${'part' if nchan > 0 else 'all'}
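To see the difference in action, here is a minimal sketch, assuming PyYAML (which GRC calls via yaml.safe_load); the params wrapper just mimics the nesting of the block file:

import yaml

# A bare '%' starting an indented line inside a mapping is rejected by
# the YAML scanner with exactly the error GRC reports.
broken = """\
params:
    hide: \\
    %if nchan() > n:
    part
    %endif
"""

# Wrapped in a literal block scalar, the '%' lines are ordinary content.
fixed = """\
params:
    hide: |
        %if nchan() > n:
        part
        %endif
"""

try:
    yaml.safe_load(broken)
except yaml.YAMLError as e:
    print(e)  # ... found character '%' that cannot start any token ...

print(yaml.safe_load(fixed))  # {'params': {'hide': "%if nchan() > n:\npart\n%endif\n"}}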
And here is how to fix it in gen_multi_rtl_block.py:
@@ -104,57 +100,32 @@ template_p = """\
category: Synchronization
dtype: real
default: 10
- hide: &
- ${"%"} if nchan() > n:
-part
- ${"%"} else:
-all
- ${"%"} endif
+ hide: ${'$'}{'part' if nchan > ${n} else 'all'}
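As a sanity check, here is a sketch of how that template line renders, assuming gen_multi_rtl_block.py feeds template_p through Mako: ${'$'} emits a literal dollar sign, and ${n} is filled in per channel by the generator.

from mako.template import Template

line = "hide: ${'$'}{'part' if nchan > ${n} else 'all'}"
print(Template(line).render(n=0))
# hide: ${'part' if nchan > 0 else 'all'}

The rendered line is exactly the single-line form shown above, so the generated YAML parses cleanly.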
My code:
model_1 = SM.logit(formula = f_1, data=Company_train).fit()
I am getting this error even though my syntax for the statsmodels logit function is correct when fitting the data. The error is as follows:
Traceback (most recent call last):
File "C:\Users\Asus\anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3444, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "C:\Users\Asus\AppData\Local\Temp/ipykernel_8360/3618389853.py", line 1, in <module>
model_1 = SM.logit(formula = f_1, data=Company_train).fit()
File "C:\Users\Asus\anaconda3\lib\site-packages\statsmodels\base\model.py", line 169, in from_formula
tmp = handle_formula_data(data, None, formula, depth=eval_env,
File "C:\Users\Asus\anaconda3\lib\site-packages\statsmodels\formula\formulatools.py", line 63, in handle_formula_data
result = dmatrices(formula, Y, depth, return_type='dataframe',
File "C:\Users\Asus\anaconda3\lib\site-packages\patsy\highlevel.py", line 309, in dmatrices
(lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
File "C:\Users\Asus\anaconda3\lib\site-packages\patsy\highlevel.py", line 164, in _do_highlevel_design
design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
File "C:\Users\Asus\anaconda3\lib\site-packages\patsy\highlevel.py", line 66, in _try_incr_builders
return design_matrix_builders([formula_like.lhs_termlist,
File "C:\Users\Asus\anaconda3\lib\site-packages\patsy\build.py", line 689, in design_matrix_builders
factor_states = _factors_memorize(all_factors, data_iter_maker, eval_env)
File "C:\Users\Asus\anaconda3\lib\site-packages\patsy\build.py", line 354, in _factors_memorize
which_pass = factor.memorize_passes_needed(state, eval_env)
File "C:\Users\Asus\anaconda3\lib\site-packages\patsy\eval.py", line 474, in memorize_passes_needed
subset_names = [name for name in ast_names(self.code)
File "C:\Users\Asus\anaconda3\lib\site-packages\patsy\eval.py", line 474, in <listcomp>
subset_names = [name for name in ast_names(self.code)
File "C:\Users\Asus\anaconda3\lib\site-packages\patsy\eval.py", line 105, in ast_names
for node in ast.walk(ast.parse(code)):
File "C:\Users\Asus\anaconda3\lib\ast.py", line 50, in parse
return compile(source, filename, mode, flags,
File "<unknown>", line 1
Contingent liabilities
^
SyntaxError: invalid syntax
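The last traceback frame points at the likely cause: patsy parses each formula term with ast.parse, and a column name containing a space, here Contingent liabilities, is not a valid Python expression. A sketch of the usual workaround is to quote such names with patsy's Q(); the toy DataFrame below is invented for illustration:

import pandas as pd
import statsmodels.formula.api as SM  # assuming SM aliases statsmodels.formula.api

# Invented stand-in for Company_train; note the space in the column name.
Company_train = pd.DataFrame({
    'Default': [0, 1, 0, 1, 1, 0, 1, 0],
    'Contingent liabilities': [1.0, 2.0, 3.0, 1.5, 2.5, 2.2, 0.5, 1.8],
})

# Q() lets patsy reference a column whose name is not a valid identifier.
f_1 = "Default ~ Q('Contingent liabilities')"
model_1 = SM.logit(formula=f_1, data=Company_train).fit()
print(model_1.params)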
I am trying to train my models and validate them using sklearn's cross validation. What I want to do is use the same folds across all of my models (which will be running from different python scripts).
How can I do this? Should I save the folds to a file, save the KFold object itself, or just use the same seed?
kfold = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
Well, the easiest way I found to save the folds was simply to collect them from StratifiedKFold's split method by looping over it, then store them in a JSON file:
import json
import numpy as np
from sklearn.model_selection import StratifiedKFold

kfold = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)

folds = {}
count = 1
for train, test in kfold.split(np.zeros(len(y)), y.argmax(1)):
    folds['fold_{}'.format(count)] = {}
    folds['fold_{}'.format(count)]['train'] = train.tolist()
    folds['fold_{}'.format(count)]['test'] = test.tolist()
    count += 1

print(len(folds) == n_splits)  # assert we have the same number of splits

# dump folds to json
with open('folds.json', 'w') as fp:
    json.dump(folds, fp)
Note 1: argmax is used here because my y values are one-hot encoded, so we need to recover the predicted/ground-truth class for stratification.
Now to load it from any other script:
import json

# load the saved folds into a dict
with open('folds.json') as f:
    kfolds = json.load(f)
From here we can easily just loop over the elements in the dict:
for key, val in kfolds.items():
    print(key)
    train = val['train']
    test = val['test']
Our json file looks like so:
{"fold_1": {"train": [193, 2405, 2895, 565, 1215, 274, 2839, 1735, 2536, 1196, 40, 2541, 980,...SNIP...830, 1032], "test": [1, 5, 6, 7, 10, 15, 20, 26, 37, 45, 52, 54, 55, 59, 60, 64, 65, 68, 74, 76, 78, 90, 100, 106, 107, 113, 122, 124, 132, 135, 141, 146,...SNIP...]}
How can I add multiple datasets with the ConsoleTVs/Charts (Chart.js) package in Laravel?
My single-dataset code works fine:
$data['transactionChart'] = new TransactionChart();
$data['transactionChart']->dataset('Sample', 'line',[100, 65, 84, 45, 90])
->options(['borderColor' => '#97d881']);
Simply use ->dataset() multiple times.
https://github.com/ConsoleTVs/Charts/issues/331
Example:
$data['transactionChart'] = new TransactionChart();
$data['transactionChart']->dataset('Sample', 'line',[100, 65, 84, 45, 90])
->options(['borderColor' => '#97d881']);
$data['transactionChart']->dataset('Another Sample', 'line',[100, 65, 84, 45, 90])
->options(['borderColor' => '#ff0000']);
I have two strings:
a = 'hà nội'
b = 'hà nội'
When I compare them with a == b, it returns false.
I checked the byte codes:
a.bytes = [104, 97, 204, 128, 32, 110, 195, 180, 204, 163, 105]
b.bytes = [104, 195, 160, 32, 110, 225, 187, 153, 105]
What is the cause? How can I fix it so that a == b returns true?
This is an issue with Unicode equivalence.
In order to compare these strings you need to normalize them, so that they both use the same byte sequences for these types of characters.
a.unicode_normalize == b.unicode_normalize
unicode_normalize(form=:nfc)
Returns a normalized form of str, using Unicode normalizations NFC,
NFD, NFKC, or NFKD. The normalization form used is determined by form,
which is any of the four values :nfc, :nfd, :nfkc, or :nfkd. The
default is :nfc.
If the string is not in a Unicode Encoding, then an Exception is
raised. In this context, 'Unicode Encoding' means any of UTF-8,
UTF-16BE/LE, and UTF-32BE/LE, as well as GB18030, UCS_2BE, and
UCS_4BE. Anything other than UTF-8 is implemented by converting to
UTF-8, which makes it slower than UTF-8.
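For illustration, here is the same equivalence check run on the asker's exact byte sequences; the sketch is in Python (unicodedata.normalize), but the behavior mirrors Ruby's unicode_normalize:

import unicodedata

a = bytes([104, 97, 204, 128, 32, 110, 195, 180, 204, 163, 105]).decode('utf-8')
b = bytes([104, 195, 160, 32, 110, 225, 187, 153, 105]).decode('utf-8')

print(a == b)  # False: 'a' + combining grave vs precomposed 'à', etc.
print(unicodedata.normalize('NFC', a) == unicodedata.normalize('NFC', b))  # True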
I am new to mpi4py. I wrote this code to process a large numpy array with multiple processes. Since I cannot provide the input file, I will describe the data instead: its shape is [3000000, 15] and it contains string data.
from mpi4py import MPI
import numpy as np
import datetime as dt
import math as math

comm = MPI.COMM_WORLD
numprocs = comm.size
rank = comm.Get_rank()

fname = "6.binetflow"
data = np.loadtxt(open(fname, "rb"), dtype=object, delimiter=",", skiprows=1)
X = data[:, [0, 1, 3, 14, 6, 6, 6, 6, 6, 6, 6, 6]]

num_rows = math.ceil(len(X) / float(numprocs))
X = X.flatten()

sendCounts = list()
displacements = list()
for p in range(numprocs):
    if p == (numprocs - 1):  # for last processor
        sendCounts.append(int(len(X) - (p * num_rows * 12)))
        displacements.append(int(p * num_rows * 12))
        break
    sendCounts.append(int(num_rows * 12))
    displacements.append(int(p * sendCounts[p]))

sendbuf = np.array(X[displacements[rank]: (displacements[rank] + sendCounts[rank])])

## Each processor will do some task on sendbuf

if rank == 0:
    recvbuf = np.empty(sum(sendCounts), dtype=object)
else:
    recvbuf = None

print("sendbuf: ", sendbuf)
comm.Gatherv(sendbuf=sendbuf, recvbuf=(recvbuf, sendCounts), root=0)

if rank == 0:
    print("Gathered array: {}".format(recvbuf))
But I am facing the error below:
Traceback (most recent call last):
File "hello.py", line 36, in <module>
comm.Gatherv(sendbuf=sendbuf, recvbuf=(recvbuf, sendCounts), root=0)
File "MPI/Comm.pyx", line 602, in mpi4py.MPI.Comm.Gatherv (d:\build\mpi4py\mpi4py-2.0.0\src\mpi4py.MPI.c:97993)
File "MPI/msgbuffer.pxi", line 525, in mpi4py.MPI._p_msg_cco.for_gather (d:\build\mpi4py\mpi4py-2.0.0\src\mpi4py.MPI.c:34678)
File "MPI/msgbuffer.pxi", line 446, in mpi4py.MPI._p_msg_cco.for_cco_send (d:\build\mpi4py\mpi4py-2.0.0\src\mpi4py.MPI.c:33938)
File "MPI/msgbuffer.pxi", line 148, in mpi4py.MPI.message_simple (d:\build\mpi4py\mpi4py-2.0.0\src\mpi4py.MPI.c:30349)
File "MPI/msgbuffer.pxi", line 93, in mpi4py.MPI.message_basic (d:\build\mpi4py\mpi4py-2.0.0\src\mpi4py.MPI.c:29448)
KeyError: 'O'
Traceback (most recent call last):
File "hello.py", line 36, in <module>
comm.Gatherv(sendbuf=sendbuf, recvbuf=(recvbuf, sendCounts), root=0)
File "MPI/Comm.pyx", line 602, in mpi4py.MPI.Comm.Gatherv (d:\build\mpi4py\mpi4py-2.0.0\src\mpi4py.MPI.c:97993)
File "MPI/msgbuffer.pxi", line 516, in mpi4py.MPI._p_msg_cco.for_gather (d:\build\mpi4py\mpi4py-2.0.0\src\mpi4py.MPI.c:34587)
File "MPI/msgbuffer.pxi", line 466, in mpi4py.MPI._p_msg_cco.for_cco_recv (d:\build\mpi4py\mpi4py-2.0.0\src\mpi4py.MPI.c:34097)
File "MPI/msgbuffer.pxi", line 261, in mpi4py.MPI.message_vector (d:\build\mpi4py\mpi4py-2.0.0\src\mpi4py.MPI.c:31977)
File "MPI/msgbuffer.pxi", line 93, in mpi4py.MPI.message_basic (d:\build\mpi4py\mpi4py-2.0.0\src\mpi4py.MPI.c:29448)
KeyError: 'O'
Any help would be much appreciated; I have been stuck on this problem for a long time.
Thanks
The problem is dtype=object.
Mpi4py provides two kinds of communication functions, those whose names begin with an upper-case letter, e.g. Scatter, and those whose names begin with a lower-case letter, e.g. scatter. From the Mpi4py documentation:
In MPI for Python, the Bcast(), Scatter(), Gather(), Allgather() and Alltoall() methods of Comm instances provide support for collective communications of memory buffers. The variants bcast(), scatter(), gather(), allgather() and alltoall() can communicate generic Python objects.
What is not clear from this is that even though numpy arrays supposedly expose memory buffers, the buffers apparently need to be of one of a small set of primitive data types, and certainly don't work with generic objects. Compare the following two pieces of code:
from mpi4py import MPI
import numpy

Comm = MPI.COMM_WORLD
Size = Comm.Get_size()
Rank = Comm.Get_rank()

if Rank == 0:
    Data = numpy.empty(Size, dtype=object)
else:
    Data = None

Data = Comm.scatter(Data, 0)  # I work fine!

print("Data on rank %d: " % Rank, Data)
and
from mpi4py import MPI
import numpy

Comm = MPI.COMM_WORLD
Size = Comm.Get_size()
Rank = Comm.Get_rank()

if Rank == 0:
    Data = numpy.empty(Size, dtype=object)
else:
    Data = None

Datb = numpy.empty(1, dtype=object)
Comm.Scatter(Data, Datb, 0)  # I throw KeyError!

print("Datb on rank %d: " % Rank, Datb)
Unfortunately, Mpi4py provides no scatterv. From the same place in the docs:
The vector variants (which can communicate different amounts of data to each process) Scatterv(), Gatherv(), Allgatherv() and Alltoallv() are also supported, they can only communicate objects exposing memory buffers.
These are not exceptions to the upper- vs lower-case rule for dtypes, either:
from mpi4py import MPI
import numpy

Comm = MPI.COMM_WORLD
Size = Comm.Get_size()
Rank = Comm.Get_rank()

if Rank == 0:
    Data = numpy.empty(2*Size+1, dtype=numpy.dtype('float64'))
else:
    Data = None

if Rank == 0:
    Datb = numpy.empty(3, dtype=numpy.dtype('float64'))
else:
    Datb = numpy.empty(2, dtype=numpy.dtype('float64'))

Comm.Scatterv(Data, Datb, 0)  # I work fine!

print("Datb on rank %d: " % Rank, Datb)
versus
from mpi4py import MPI
import numpy

Comm = MPI.COMM_WORLD
Size = Comm.Get_size()
Rank = Comm.Get_rank()

if Rank == 0:
    Data = numpy.empty(2*Size+1, dtype=object)
else:
    Data = None

if Rank == 0:
    Datb = numpy.empty(3, dtype=object)
else:
    Datb = numpy.empty(2, dtype=object)

Comm.Scatterv(Data, Datb, 0)  # I throw KeyError!

print("Datb on rank %d: " % Rank, Datb)
Unfortunately, you'll need to write your code so that it can use scatter, which necessitates the same sendcount for each process; or use more primitive point-to-point communication functions; or use some parallel facility other than Mpi4py.
Using Mpi4py 2.0.0, the current stable version at the time of this writing.
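As a rough sketch of the lower-case scatter/gather route (the string array below stands in for the asker's object data; each rank receives one picklable chunk):

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
numprocs = comm.Get_size()

if rank == 0:
    X = np.array(['a', 'b', 'c', 'd', 'e', 'f', 'g'], dtype=object)
    chunks = np.array_split(X, numprocs)  # one chunk per rank
else:
    chunks = None

sendbuf = comm.scatter(chunks, root=0)  # lower-case scatter pickles generic objects
# ... each rank does its work on sendbuf here ...
gathered = comm.gather(sendbuf, root=0)

if rank == 0:
    print(np.concatenate(gathered))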