I tried to run the same query against the same database using "Python 2.7.4 + sqlite3" and "Firefox SQLite Manager 0.8.0".
On a tiny database (8,000 records) both Python and Firefox work fast and give the same result.
On the bigger database (2,600,000 records):
SQLite Manager processed the query in 28 seconds (24 records)
The Python program has already been running for 20 minutes without producing any result
What could be wrong with the following program, such that Python's sqlite3 cannot process the query in a reasonable time, while the same query is handled quickly elsewhere?
import sqlite3

_sql1 = """SELECT DISTINCT J2.rule_description,
                           J2.feature_type,
                           J2.action_item_id,
                           J2.rule_items
           FROM journal J1,
                journal J2
           WHERE J1.base = J2.base
             AND J1.action_item_id = J2.action_item_id
             AND J1.type = "Action disabled"
             AND J2.type = "Action applied"
             AND J1.rule_description = "Some test rule"
             AND J1.action_item_id IN (1, 2, 3, 14, 15, 16, 17, 18, 19, 30, 31, 32)
        """

if __name__ == '__main__':
    sqlite_output = r'D:\results.sqlite'
    with sqlite3.connect(sqlite_output) as connection:
        for row in connection.execute(_sql1):
            print row
UPDATE: The Command Line Shell for SQLite also returns the same 24 records.
UPDATE 2: sqlite3.sqlite_version is '3.6.21'.
It seems that the problem is related to the old version of SQLite shipped with Python 2.7. Everything works fine in Python 3.3.
Thanks a lot to @CL for the great comment!
In Python 2.7:
>>> import sqlite3
>>> sqlite3.sqlite_version
'3.6.21'
In Python 3.3:
>>> import sqlite3
>>> sqlite3.sqlite_version
'3.7.12'
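If upgrading is not an option, a hedged workaround: pre-3.7 query planners often do much better when the join columns are indexed. A minimal sketch (the index name and column choice are my own assumptions, not from the original thread; whether it helps depends on the actual schema):

import sqlite3

connection = sqlite3.connect(r'D:\results.sqlite')
# An index covering the join/filter columns can let even SQLite 3.6.x
# avoid a full nested-loop scan over both sides of the self-join.
connection.execute("CREATE INDEX IF NOT EXISTS idx_journal_join "
                   "ON journal (base, action_item_id, type)")
connection.commit()
connection.close()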
I am trying to install TensorFlow on my macOS M1 machine.
Given the chip's compatibility constraints, I know that not all pip builds of TensorFlow work on it, or are even compatible. But I found this repository
https://github.com/apple/tensorflow_macos
which is supposed to work on Apple M1.
I downgraded my Python to version 3.8 and ran the installation; everything went fine without any issue.
Just for testing purposes, I found this script online.
#!/usr/bin/env python
# coding: utf-8
# ## Sentiment Analysis on US Airline Reviews
# In[1]:
import pandas as pd
import matplotlib.pyplot as plt
from tensorflow.python.compiler.mlcompute import mlcompute
mlcompute.set_mlc_device(device_name='cpu')
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout, SpatialDropout1D
from tensorflow.keras.layers import Embedding
df = pd.read_csv("./Tweets.csv")
# In[2]:
df.head()
# In[23]:
df.columns
# In[4]:
tweet_df = df[['text','airline_sentiment']]
print(tweet_df.shape)
tweet_df.head(5)
# In[22]:
tweet_df = tweet_df[tweet_df['airline_sentiment'] != 'neutral']
print(tweet_df.shape)
tweet_df.head(5)
# In[21]:
tweet_df["airline_sentiment"].value_counts()
# In[6]:
sentiment_label = tweet_df.airline_sentiment.factorize()
sentiment_label
# In[7]:
tweet = tweet_df.text.values
tokenizer = Tokenizer(num_words=5000)
tokenizer.fit_on_texts(tweet)
vocab_size = len(tokenizer.word_index) + 1
encoded_docs = tokenizer.texts_to_sequences(tweet)
padded_sequence = pad_sequences(encoded_docs, maxlen=200)
# In[8]:
print(tokenizer.word_index)
# In[9]:
print(tweet[0])
print(encoded_docs[0])
# In[10]:
print(padded_sequence[0])
# In[11]:
embedding_vector_length = 32
model = Sequential()
model.add(Embedding(vocab_size, embedding_vector_length, input_length=200) )
model.add(SpatialDropout1D(0.25))
model.add(LSTM(50, dropout=0.5, recurrent_dropout=0.5))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',optimizer='adam', metrics=['accuracy'])
print(model.summary())
# In[12]:
history = model.fit(padded_sequence,sentiment_label[0],validation_split=0.2, epochs=5, batch_size=32)
# In[16]:
plt.plot(history.history['accuracy'], label='acc')
plt.plot(history.history['val_accuracy'], label='val_acc')
plt.legend()
plt.savefig("Accuracy plot.jpg")  # save before show(); saving afterwards can write a blank figure
plt.show()
# In[25]:
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.legend()
plt.savefig("Loss plot.jpg")  # save before show(), as above
plt.show()
# In[18]:
def predict_sentiment(text):
    tw = tokenizer.texts_to_sequences([text])
    tw = pad_sequences(tw, maxlen=200)
    prediction = int(model.predict(tw).round().item())
    print("Predicted label: ", sentiment_label[1][prediction])
# In[19]:
test_sentence1 = "I enjoyed my journey on this flight."
predict_sentiment(test_sentence1)
test_sentence2 = "This is the worst flight experience of my life!"
predict_sentiment(test_sentence2)
But when I run it, I get this error:
Traceback (most recent call last):
File "/Users/user/Desktop/MachineLearning/env/lib/python3.8/site-packages/tensorflow/python/pywrap_tensorflow.py", line 64, in <module>
from tensorflow.python._pywrap_tensorflow_internal import *
ImportError: dlopen(/Users/user/Desktop/MachineLearning/env/lib/python3.8/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so, 6): no suitable image found. Did find:
/Users/user/Desktop/MachineLearning/env/lib/python3.8/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so: mach-o, but wrong architecture
/Users/user/Desktop/MachineLearning/env/lib/python3.8/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so: mach-o, but wrong architecture
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "Sentiment Analysis.py", line 13, in <module>
from tensorflow.python.compiler.mlcompute import mlcompute
File "/Users/user/Desktop/MachineLearning/env/lib/python3.8/site-packages/tensorflow/__init__.py", line 41, in <module>
from tensorflow.python.tools import module_util as _module_util
File "/Users/user/Desktop/MachineLearning/env/lib/python3.8/site-packages/tensorflow/python/__init__.py", line 39, in <module>
from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow
File "/Users/user/Desktop/MachineLearning/env/lib/python3.8/site-packages/tensorflow/python/pywrap_tensorflow.py", line 83, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "/Users/user/Desktop/MachineLearning/env/lib/python3.8/site-packages/tensorflow/python/pywrap_tensorflow.py", line 64, in <module>
from tensorflow.python._pywrap_tensorflow_internal import *
ImportError: dlopen(/Users/user/Desktop/MachineLearning/env/lib/python3.8/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so, 6): no suitable image found. Did find:
/Users/user/Desktop/MachineLearning/env/lib/python3.8/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so: mach-o, but wrong architecture
/Users/user/Desktop/MachineLearning/env/lib/python3.8/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so: mach-o, but wrong architecture
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
The error is about the architecture, but I have no idea how to fix it. Has anyone found a solution to this problem?
Thank you so much for any help you can provide.
Things should work better now.
As of Oct. 25, 2021, macOS 12 Monterey is generally available.
Upgrade your machine to Monterey or a newer OS if you haven't already.
If you have conda installed, I would probably uninstall it first. You can have multiple conda versions installed, but things can get tricky.
Then follow the instructions from Apple here. I cleaned them up a bit below:
Download and install Conda from Miniforge:
chmod +x ~/Downloads/Miniforge3-MacOSX-arm64.sh
sh ~/Downloads/Miniforge3-MacOSX-arm64.sh
source ~/miniforge3/bin/activate
In an active conda environment, install the TensorFlow dependencies, base TensorFlow, and TensorFlow metal:
conda install -c apple tensorflow-deps
pip install tensorflow-macos
pip install tensorflow-metal
You should be good to go with fast training speeds.
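To confirm the installation actually works, a quick sanity check (a minimal sketch; the exact device list depends on your machine):

import tensorflow as tf

print(tf.__version__)
# With tensorflow-metal active, a GPU device should appear in this list.
print(tf.config.list_physical_devices())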
I'm playing around with an Ethereum webapp. To test my frontend client, I want to perform some transactions 'in the past', so that when my client reads the current timestamp, the visuals update accordingly. For that to work, I want to start my local chain with the initial block at 1980-01-01 00:00:00+00:00.
I've edited brownie-config.yaml to add a --time option to ganache (time: 1980-01-01T00:00:00+00:00), and it seems to be propagated to the ganache-cli command, but the chain.time() method still gives me the current time:
$ brownie console
Brownie v1.14.6 - Python development framework for Ethereum
MyProject is the active project.
Launching 'ganache-cli --accounts 10 --hardfork istanbul --gasLimit 12000000 --mnemonic hill --port 8545 --time 1980-01-01 00:00:00+00:00'...
Brownie environment is ready.
>>> from datetime import datetime
>>> datetime.fromtimestamp(chain.time())
datetime.datetime(2021, 5, 15, 17, 28, 37) # Should be (1980, 1, 1, 0, 0, 0)
>>>
Am I missing something?
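One way to narrow this down is to read the genesis block's timestamp directly: if it is not 1980-01-01, the flag never reached ganache (note that the launched command line above shows the --time value unquoted, with a space in the middle, which could be splitting it into two arguments). A sketch, assuming the web3 object that Brownie exposes in its console:

from datetime import datetime

# Inside the Brownie console, web3 is already available.
genesis = web3.eth.get_block(0)
# If --time was applied, this should print 1980-01-01 00:00:00.
print(datetime.utcfromtimestamp(genesis["timestamp"]))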
I am trying to fit some random data to a GP with the RBF kernel using the GPy package. When I change the active dimensions, I get "LinAlgError: not positive definite, even with jitter". This error occurs only in a conda environment; when I install via pip, I never run into it. Has anyone come across this?
import numpy as np
import GPy
import random
def func(x):
    return np.sum(np.power(x, 5) - np.power(x, 3))
# 20 random data with 10 dimensions
random.seed(2)
random_sample = [[random.uniform(0,3.4) for i in range(10)] for j in range(20)]
# get the first random sample as an observed data
y = np.array([func(random_sample[0])])
X = np.array([random_sample[0]])
y.shape = (1, 1)
X.shape = (1, 10)
# different set of dimensions
set_dim = [[np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])],
           [np.array([0, 1]), np.array([2, 3]), np.array([4, 5]), np.array([6, 7]), np.array([8, 9])],
           [np.array([0, 1, 2, 3, 4]), np.array([5, 6, 7, 8, 9])],
           [np.array([0, 1, 2, 3]), np.array([4, 5, 6]), np.array([7, 8, 9])]]
for i in range(len(set_dim)):
    # new kernel based on the active dims
    k = GPy.kern.Add([GPy.kern.RBF(input_dim=len(set_dim[i][x]), active_dims=set_dim[i][x])
                      for x in range(len(set_dim[i]))])
    # grow the data set with the next random sample
    y = np.concatenate((y, np.array([[func(random_sample[i+1])]])))
    X = np.concatenate((X, np.array([random_sample[i+1]])))
    model = GPy.models.GPRegression(X, y, k)
    model.optimize()
(The question also included the conda list output for gpy, scipy and numpy, and the install paths of those packages.)
Possible Channel-Mixing Issue
Sometimes package builds from different channels (e.g., anaconda versus conda-forge) are incompatible. In the cases I've encountered, compiled symbols were referenced across packages, and the different build stacks used on the channels used different symbol names, leading to missing symbols when mixed.
I can report that using the exact same package versions as the OP, but prioritizing the Conda Forge channel builds, gives me reliable behavior. While not conclusive, this is consistent with the issue coming from mixing the Conda Forge build of GPy with otherwise Anaconda builds of its dependencies (e.g., numpy, scipy). Specifically suggestive is the fact that I have the exact same GPy build, and that module is where the error originates. At the same time, nothing in the error itself immediately points to a channel-mixing issue.
Workaround
In practice, I avoid channel mixing issues by always using YAML definitions to create my environments. This is a helpful practice because it encourages one to explicitly state the channel priority as part of the definition and it makes Conda aware of your preference from the outset. The following environment definition works for me:
gpy_cf.yaml
name: gpy_cf
channels:
- conda-forge
- defaults
dependencies:
- python=3.6
- gpy=1.9.6
- numpy=1.16.2
- scipy=1.2.1
and using
conda env create -f gpy_cf.yaml
conda activate gpy_cf
Unless you really do need these exact versions, I would remove whatever version constraints are unnecessary (at the very least, drop the patch-level pins).
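Once the environment is active, a quick check that the builds load together (a minimal sketch):

import GPy
import numpy
import scipy

# If channel mixing was the culprit, the imports and a subsequent fit
# should now succeed without the LinAlgError.
print(GPy.__version__, numpy.__version__, scipy.__version__)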
Broken Version
For the record, this is the version that I can replicate the error with:
gpy_mixed.yaml
name: gpy_mixed
channels:
- defaults
- conda-forge
dependencies:
- python=3.6
- conda-forge::gpy=1.9.6
- numpy=1.16.2
- scipy=1.2.1
In this case, we force gpy to come from Conda Forge and let everything else come from the Anaconda (defaults) channel, similar to the configuration found in the OP.
I have installed the tc-lib-barcode library (via Composer) on a Win10 localhost and on a remote operational server to generate QR codes. It works great on the Win10 localhost, but not on the remote server.
The problem is that on the remote server the line $imageData = $bobj->getPngData(); causes the script to freeze. I am guessing the problem has something to do with the installation (removed and re-installed several times) or with PHP extensions, and I am stuck for a test that will point to the actual problem. Any comments or tips would be appreciated.
Here is the essential code I am using:
require('../../vendor/autoload.php');
$barcode = new \Com\Tecnick\Barcode\Barcode();
$bobj = $barcode->getBarcodeObj(
    'QRCODE,H',
    'EditInvolvement.php?toID='.$InvolvementID,
    -16,
    -16,
    'black',
    array(-2, -2, -2, -2)
)->setBackgroundColor('#f0f0f0');
$imageData = $bobj->getPngData();
file_put_contents('qr-code/'.$InvolvementID.'.png', $imageData);
Pretty new to both Scapy and Python, so apologies for what may be a thickheaded question.
I know that it is new and may have issues on Windows, but I have successfully installed scapy3 on Windows Server 2012 R2 and Ubuntu Linux. Unfortunately, I actually need to use it on Windows 7, where I am getting the following error message:
Traceback (most recent call last):
File "C:\Python35\Scripts\\scapy", line 25, in <module>
interact()
File "C:\Python35\lib\site-packages\scapy\main.py", line 293, in interact
scapy_builtins = __import__("scapy.all",globals(),locals(),".").__dict__
File "C:\Python35\lib\site-packages\scapy\all.py", line 16, in <module>
from .arch import *
File "C:\Python35\lib\site-packages\scapy\arch\__init__.py", line 95, in <module>
from .windows import *
File "C:\Python35\lib\site-packages\scapy\arch\windows\__init__.py", line 200, in <module>
ifaces.load_from_powershell()
File "C:\Python35\lib\site-packages\scapy\arch\windows\__init__.py", line 151, in load_from_powers
hell
for i in get_windows_if_list():
File "C:\Python35\lib\site-packages\scapy\arch\windows\__init__.py", line 86, in get_windows_if_list
name, value = [ j.strip() for j in i.split(':') ]
ValueError: too many values to unpack (expected 2)
I have searched via Google and on Stack Overflow but have not found a solution.
Any guidance is appreciated.
Platform is Windows 7 with Python 3.5.
Late answer: you are using a fork of Scapy that does not officially support Windows 7.
Since very recently, the original secdev/scapy project supports Python 3, so there is no need to keep using the fork that does not support Windows 7 :-)
Feel free to have a look at
https://github.com/secdev/scapy
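For reference, switching over is just a reinstall plus re-running the import that crashed. A sketch (scapy-python3 is the assumed PyPI name of the fork):

# pip uninstall scapy-python3    (the fork; assumed package name)
# pip install scapy              (mainline secdev/scapy, now Python 3 capable)
from scapy.all import *

# This import previously died in get_windows_if_list(); with mainline
# scapy it should succeed and expose the default interface.
print(conf.iface)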