brownie ganache chain has incorrect timestamp - yaml

I'm playing around with an Ethereum webapp. To test my frontend client, I want to perform some transactions 'in the past', so that when my client reads the current timestamp, the visuals update accordingly. For that to work, I want to start my local chain with the initial block at 1980-01-01 00:00:00+00:00.
I've edited brownie-config.yaml to add a --time option to ganache (time: 1980-01-01T00:00:00+00:00), and it seems to be propagated to the ganache-cli command, but chain.time() still returns the current time:
$ brownie console
Brownie v1.14.6 - Python development framework for Ethereum
MyProject is the active project.
Launching 'ganache-cli --accounts 10 --hardfork istanbul --gasLimit 12000000 --mnemonic hill --port 8545 --time 1980-01-01 00:00:00+00:00'...
Brownie environment is ready.
>>> from datetime import datetime
>>> datetime.fromtimestamp(chain.time())
datetime.datetime(2021, 5, 15, 17, 28, 37) # Should be (1980, 1, 1, 0, 0, 0)
>>>
Am I missing something?
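One plausible explanation, judging from the launch line above (an assumption, not a confirmed diagnosis): YAML parses an unquoted ISO-8601 timestamp as a datetime value, and stringifying that datetime yields 1980-01-01 00:00:00+00:00, with a space in place of the T. The space then splits the value into two shell arguments, so ganache-cli never sees a valid --time. Quoting the value in brownie-config.yaml should keep it a literal string; the surrounding structure here is a sketch, adapt it to your existing config:
networks:
  development:
    cmd_settings:
      # Quoted so YAML keeps the literal string and the "T" survives
      # into the ganache-cli command line.
      time: "1980-01-01T00:00:00+00:00"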

Related

Scipy.io.mmread Throws Value Error in Streamlit Heroku App

I have been trying to read a scipy matrix file in my streamlit app.
The app runs on my local machine without any errors in the app or the console.
When deployed on Heroku, it raises ValueError: not enough values to unpack (expected 5, got 2)
on the line co_occurrence_matrix = scipy.io.mmread("./database/matrix.mtx").
I've cross-checked the following points and am not sure where to look for the problem.
Matrix is created with
smatrix = scipy.sparse.csr_matrix(matrix)
scipy.isspmatrix(smatrix) #-> returns True
scipy.io.mmwrite("./database/matrix.mtx", smatrix)
All library versions, including Python itself, are identical between the two apps (both checked on their consoles with pip list), and the requirements file is created with pip freeze.
Files are synchronized via git; git status returns 'up to date'. Heroku uses the file from git, and the local app runs on the same synchronized file.
If it makes any difference, the .mtx file is uploaded via Git LFS.
Heroku deploys the app successfully, yet inside the Streamlit app it raises the error.
Full Error:
File "/app/.heroku/python/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
exec(code, module.__dict__)File "/app/main.py", line 4, in <module>
from gorsellestirme_util import set_bg, Plotter, matrix, word_list, main_dfFile "/app/gorsellestirme_util.py", line 30, in <module>
matrix, word_list, main_df = matrix_reader()File "/app/.heroku/python/lib/python3.9/site-packages/streamlit/runtime/legacy_caching/caching.py", line 625, in wrapped_func
return get_or_create_cached_value()File "/app/.heroku/python/lib/python3.9/site-packages/streamlit/runtime/legacy_caching/caching.py", line 609, in get_or_create_cached_value
return_value = non_optional_func(*args, **kwargs)File "/app/gorsellestirme_util.py", line 12, in matrix_reader
co_occurrence_matrix = mmread("./database/matrix.mtx")File "/app/.heroku/python/lib/python3.9/site-packages/scipy/io/_mmio.py", line 77, in mmread
return MMFile().read(source)File "/app/.heroku/python/lib/python3.9/site-packages/scipy/io/_mmio.py", line 438, in read
self._parse_header(stream)File "/app/.heroku/python/lib/python3.9/site-packages/scipy/io/_mmio.py", line 502, in _parse_header
self.__class__.info(stream)File "/app/.heroku/python/lib/python3.9/site-packages/scipy/io/_mmio.py", line 234, in info
mmid, matrix, format, field, symmetry = \
As for the source of the problem: it is clearly stated in the Heroku docs that Heroku does not support Git LFS files. I'd missed that point.
As a workaround, there are multiple Git LFS buildpacks in Heroku Elements. FYI, those buildpacks are still subject to Heroku's 500 MB file size cap, and security has to be considered, as they require third-party access to your git repository.
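If you want to confirm that Git LFS is the culprit, a quick check (path taken from the question) is to read the first line of the file as it exists on the dyno. A Git LFS pointer is a small text file whose first line has two tokens, which lines up neatly with the "expected 5, got 2" in the ValueError, since a real MatrixMarket header line has five:
# Sketch: detect whether matrix.mtx is a Git LFS pointer instead of the
# real matrix. A pointer file begins with:
#   version https://git-lfs.github.com/spec/v1
with open("./database/matrix.mtx", "rb") as f:
    print(f.readline())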

YOLOV5 running on mac

I have configured the environment to use PyTorch's new Metal Performance Shaders (MPS) backend for GPU training acceleration, but running YOLOv5 on my MacBook M2 Air always produces an error.
RES_DIR = set_res_dir()
if TRAIN:
    !python /Users/krishpatel/yolov5/train.py --data /Users/krishpatel/yolov5/roboflow/data.yaml --weights yolov5s.pt \
        --img 640 --epochs {EPOCHS} --batch-size 32 --device mps --name {RES_DIR}
This is the error (transcribed from a screenshot):
UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.)
t = t[j] # filter
0%| | 0/20 [00:16<?, ?it/s]
Traceback (most recent call last):
  File "/Users/krishpatel/yolov5/train.py", line 630, in <module>
    main(opt)
  File "/Users/krishpatel/yolov5/train.py", line 524, in main
    train(opt.hyp, opt, device, callbacks)
  File "/Users/krishpatel/yolov5/train.py", line 307, in train
    loss, loss_items = compute_loss(pred, targets.to(device))  # loss scaled by batch_size
  File "/Users/krishpatel/yolov5/utils/loss.py", line 125, in __call__
    tcls, tbox, indices, anchors = self.build_targets(p, targets)  # targets
  File "/Users/krishpatel/yolov5/utils/loss.py", line 213, in build_targets
    j, k = ((gxy % 1 < g) & (gxy > 1)).T
NotImplementedError: The operator 'aten::remainder.Tensor_out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
What should I do? Running on the CPU alone takes a very, very long time. Any suggestions would be appreciated.
I tried to search everywhere but couldn't find anything for the MacBook. If nothing helps I suppose I could run it on Google Colab, but then what would be the point of buying an expensive MacBook and not using its GPU?
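Per the error message itself, the quickest way forward is to enable the CPU fallback for the few operators MPS doesn't implement yet, so the bulk of training still runs on the GPU. Setting the variable in the notebook before the !python call should work, since the child process inherits the environment (a sketch based on the snippet above):
import os
# Enable CPU fallback for operators not yet implemented on MPS; slower
# for those ops, but avoids the NotImplementedError.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

RES_DIR = set_res_dir()
if TRAIN:
    !python /Users/krishpatel/yolov5/train.py --data /Users/krishpatel/yolov5/roboflow/data.yaml --weights yolov5s.pt \
        --img 640 --epochs {EPOCHS} --batch-size 32 --device mps --name {RES_DIR}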

Problems using ganache-cli command

It's saying that it doesn't recognize ganache-cli as a command, despite installing it and everything else as directed.
Using:
brownie v1.17.2
node v17.2.0 (npm v8.1.4)
nvm 0.39.0
Python 3.9.7
Ganache CLI v6.12.2 (ganache-core: 2.13.2)
As part of the Solidity course here, specifically lesson 5. Github repo here.
x#y brownie_simple_storage % brownie run scripts/deploy.py
Brownie v1.17.2 - Python development framework for Ethereum
BrownieSimpleStorageProject is the active project.
Launching 'ganache-cli --port 8545 --gasLimit 12000000 --accounts 10 --hardfork istanbul --mnemonic brownie'...
File "brownie/_cli/__main__.py", line 64, in main
importlib.import_module(f"brownie._cli.{cmd}").main()
File "brownie/_cli/run.py", line 44, in main
network.connect(CONFIG.argv["network"])
File "brownie/network/main.py", line 50, in connect
rpc.launch(active["cmd"], **active["cmd_settings"])
File "brownie/network/rpc/__init__.py", line 93, in launch
raise RPCProcessError(cmd, uri)
RPCProcessError: Unable to launch local RPC client.
Command: ganache-cli
URI: http://127.0.0.1:8545
Looks like this can be resolved by switching to Node v16 with nvm:
nvm install 16
nvm use 16
node --version
v16.13.1
x#y brownie_simple_storage % brownie run scripts/deploy.py
Brownie v1.17.2 - Python development framework for Ethereum
BrownieSimpleStorageProject is the active project.
Launching 'ganache-cli --port 8545 --gasLimit 12000000 --accounts 10 --hardfork istanbul --mnemonic brownie'...
Running 'scripts/deploy.py::main'...
Hello!
Terminating local RPC client...
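One thing worth checking if ganache-cli still isn't found after switching: nvm keeps a separate set of global npm packages per Node version, so a ganache-cli installed under v17 is not on the PATH after nvm use 16 and may need to be reinstalled:
nvm use 16
npm install --global ganache-cli
ganache-cli --version   # should now resolve and print the version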
Most likely the issue is that ganache is already running in another active project. For Brownie to recognize ganache, make sure this project is the only environment running a ganache node; close it in the other project running the node, which is most likely the web3 simple storage project, not the newly created brownie one.

LinAlgError: not positive definite, even with jitter. When using a conda environment instead of pip

I am trying to fit some random data to a GP with the RBF kernel, using the GPy package. When I change the active dimensions, I get the LinAlgError: not positive definite, even with jitter error. This error is generated only with a conda environment. When I use pip, I have never run into this error. Has anyone come across this?
import numpy as np
import GPy
import random

def func(x):
    return np.sum(np.power(x, 5) - np.power(x, 3))

# 20 random data with 10 dimensions
random.seed(2)
random_sample = [[random.uniform(0, 3.4) for i in range(10)] for j in range(20)]

# get the first random sample as an observed data
y = np.array([func(random_sample[0])])
X = np.array([random_sample[0]])
y.shape = (1, 1)
X.shape = (1, 10)

# different sets of dimensions
set_dim = [[np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])],
           [np.array([0, 1]), np.array([2, 3]), np.array([4, 5]), np.array([6, 7]), np.array([8, 9])],
           [np.array([0, 1, 2, 3, 4]), np.array([5, 6, 7, 8, 9])],
           [np.array([0, 1, 2, 3]), np.array([4, 5, 6]), np.array([7, 8, 9])]]

for i in range(len(set_dim)):
    # new kernel based on active dims
    k = GPy.kern.Add([GPy.kern.RBF(input_dim=len(set_dim[i][x]), active_dims=set_dim[i][x]) for x in range(len(set_dim[i]))])
    # increase data set with the next random sample
    y = np.concatenate((y, np.array([[func(random_sample[i + 1])]])))
    X = np.concatenate((X, np.array([random_sample[i + 1]])))
    model = GPy.models.GPRegression(X, y, k)
    model.optimize()
(Screenshots in the original post: the output of conda list for gpy, scipy, and numpy, and the install paths of those packages.)
Possible Channel-Mixing Issue
Sometimes package builds from different channels (e.g., anaconda versus conda-forge) are incompatible. The times I've encountered this, it happened when compiled symbols were referenced across packages and the build stacks on the different channels used different symbol names, leading to missing symbols when mixing.
I can report that using the exact same package versions as OP, but prioritizing the Conda Forge channel builds, gives me reliable behavior. While not conclusive, this would be consistent with the issue somehow coming from the mixing of the Conda Forge build of GPy with otherwise Anaconda builds of dependencies (e.g., numpy, scipy). Specifically suggestive is the fact that I have the exact same GPy build and that module is where the error originates. At the same time, there is nothing in the error that immediately suggests this is a channel mixing issue.
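Before rebuilding anything, it may help to confirm which channel each of the relevant packages actually came from. conda list prints a channel column (blank for the defaults channel), so mixed sources for gpy and its compiled dependencies would support this theory:
conda list | grep -iE '^(gpy|numpy|scipy)'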
Workaround
In practice, I avoid channel mixing issues by always using YAML definitions to create my environments. This is a helpful practice because it encourages one to explicitly state the channel priority as part of the definition and it makes Conda aware of your preference from the outset. The following environment definition works for me:
gpy_cf.yaml
name: gpy_cf
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.6
  - gpy=1.9.6
  - numpy=1.16.2
  - scipy=1.2.1
and using
conda env create -f gpy_cf.yaml
conda activate gpy_cf
Unless you really do need these exact versions, I would remove whatever version constraints are unnecessary (at the very least, drop the patch versions).
Broken Version
For the record, this is the version that I can replicate the error with:
gpy_mixed.yaml
name: gpy_mixed
channels:
  - defaults
  - conda-forge
dependencies:
  - python=3.6
  - conda-forge::gpy=1.9.6
  - numpy=1.16.2
  - scipy=1.2.1
In this case, we force gpy to come from Conda Forge and let everything else source from the Anaconda (defaults) channel, similar to the configuration found in OP.

Why python+sqlite3 is extremely slow?

I tried to process the same request to the same database using "Python 2.7.4 + sqlite3" and "Firefox SQLite Manager 0.8.0".
On the tiny database (8000 records) both Python and Firefox work fast and give the same result.
On the bigger database (2600000 records):
SQLite Manager processed the database in 28 seconds (24 records)
the Python program has already been running for 20 minutes without any result
What can be wrong with the following program, such that Python's sqlite3 cannot process the query in a reasonable time, while other tools handle the same request quickly?
import sqlite3
_sql1 = """SELECT DISTINCT J2.rule_description,
J2.feature_type,
J2.action_item_id,
J2.rule_items
FROM journal J1,
journal J2
WHERE J1.base = J2.base
AND J1.action_item_id=J2.action_item_id
AND J1.type="Action disabled"
AND J2.type="Action applied"
AND J1.rule_description="Some test rule"
AND J1.action_item_id IN (1, 2, 3, 14, 15, 16, 17, 18, 19, 30, 31, 32)
"""
if __name__ == '__main__':
    sqlite_output = r'D:\results.sqlite'
    with sqlite3.connect(sqlite_output) as connection:
        for row in connection.execute(_sql1):
            print row
UPDATE: Command Line Shell For SQLite also returns the same 24 records
UPDATE2: sqlite3.sqlite_version is '3.6.21'
It seems that the problem is related to the old version of SQLite shipped with Python 2.7; everything works fine in Python 3.3.
Thanks a lot to @CL for the great comment!
In python 2.7
>>> import sqlite3
>>> sqlite3.sqlite_version
'3.6.21'
In python 3.3
>>> import sqlite3
>>> sqlite3.sqlite_version
'3.7.12'
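If you are stuck on Python 2.7 and its bundled SQLite, one workaround worth trying (an assumption based on the query above, not verified against the actual schema) is to give the old query planner an index covering the self-join and filter columns; newer SQLite versions are simply better at coping without one:
-- Index name is illustrative; columns taken from the WHERE clause.
CREATE INDEX IF NOT EXISTS idx_journal_base_action
    ON journal (base, action_item_id, type);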
