AttributeError on Keras with TensorFlow backend

While using TensorFlow 1.0.1 and Keras 2.0 and running this code:
from keras import backend as K

if K.image_data_format() == 'channels_first':
    input_shape = (1, img_width, img_height)
I'm getting the following error:
AttributeError: module 'keras.backend' has no attribute 'image_data_format'
How can I solve this?

It's because image_data_format is defined in keras.backend.common in Keras 2.0.
If you have an earlier version, you can instead check the value of dim_ordering in your Keras config file (the default is the TensorFlow ordering tf, which corresponds to channels last).
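For reference, dim_ordering lives in ~/.keras/keras.json; in a pre-2.0 install the file looks roughly like this (values shown are the usual defaults):
{
    "image_dim_ordering": "tf",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "tensorflow"
}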

There are two ways to solve this.
Solution 1 (if you are using tensorflow.keras):
from tensorflow.keras import backend as K  # instead of: from keras import backend as K
Solution 2 (if you are using Keras directly):
from keras import backend as K
Then replace K.image_data_format() with K.common.image_dim_ordering().

In the latest Keras version (keras==2.4.3), I resolved this issue using the import below:
from keras.backend import image_data_format
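Putting it together, a minimal sketch of the original snippet with Solution 1 applied (img_width and img_height are placeholder values, not from the original post):
from tensorflow.keras import backend as K

img_width, img_height = 150, 150  # placeholder dimensions

if K.image_data_format() == 'channels_first':
    input_shape = (1, img_width, img_height)
else:
    input_shape = (img_width, img_height, 1)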

Deprecated ARMA module

I'm following a Time Series Analysis module on DataCamp and tried using the exact same code in Jupyter notebook:
# Import the ARMA module from statsmodels
from statsmodels.tsa.arima_model import ARMA
import matplotlib.pyplot as plt

# Forecast the first AR(1) model
mod = ARMA(simulated_data_1, order=(1, 0))
res = mod.fit()
res.plot_predict(start=990, end=1010)
plt.show()
However, I keep getting the following error:
NotImplementedError:
statsmodels.tsa.arima_model.ARMA and statsmodels.tsa.arima_model.ARIMA have
been removed in favor of statsmodels.tsa.arima.model.ARIMA (note the .
between arima and model) and statsmodels.tsa.SARIMAX.
statsmodels.tsa.arima.model.ARIMA makes use of the statespace framework and
is both well tested and maintained. It also offers alternative specialized
parameter estimators.
After adjusting the code to use the correct module, I was able to get a chart, but not the confidence interval shown in the first image.
# Import the ARIMA module from statsmodels
from statsmodels.tsa.arima.model import ARIMA
import matplotlib.pyplot as plt

# Fit an AR(1) model to the first simulated data
model = ARIMA(simulated_data_1, order=(1, 0, 0))
result = model.fit()

plt.plot(simulated_data_1[990:])
plt.plot(result.predict(start=990, end=1010), color='red')
plt.show()
How can I add in the confidence interval?
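One way to get the interval is through the statespace results API: get_prediction returns both the point forecast and conf_int. A minimal sketch building on the code above (the x ranges assume simulated_data_1 is a 1000-point NumPy array, as in the DataCamp exercise; with a pandas Series, conf_int() returns a DataFrame, so index it with .iloc):
import matplotlib.pyplot as plt
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

model = ARIMA(simulated_data_1, order=(1, 0, 0))
result = model.fit()

# Forecast with confidence interval
pred = result.get_prediction(start=990, end=1010)
ci = pred.conf_int()  # lower/upper bounds, one row per forecast step

x = np.arange(990, 1011)
plt.plot(np.arange(990, len(simulated_data_1)), simulated_data_1[990:])
plt.plot(x, pred.predicted_mean, color='red')
plt.fill_between(x, ci[:, 0], ci[:, 1], color='red', alpha=0.2)
plt.show()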

HuggingFace 'TFEmbeddings' object has no attribute 'word_embeddings'

Trying to run this TextualHeatmap example, we encounter a 'TFEmbeddings' object has no attribute 'word_embeddings' error in the following code snippet from the HuggingFace transformers library. Any help is appreciated.
from transformers import TFDistilBertForMaskedLM, DistilBertTokenizer
dbert_model = TFDistilBertForMaskedLM.from_pretrained('distilbert-base-uncased')
dbert_tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
dbert_embmat = dbert_model.distilbert.embeddings.word_embeddings
Try using '.weight' instead of '.word_embeddings', as per the latest Hugging Face implementation. It works for me.
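Applied to the snippet above, that change is:
dbert_embmat = dbert_model.distilbert.embeddings.weight  # was: .word_embeddings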
Alternatively, downgrading the transformers version will work:
pip install transformers==3.1.0

'Ridge' is not a CV regularization model; try ManualAlphaSelection instead

I am trying to find the best alpha for a Ridge model without CV, using Yellowbrick's ManualAlphaSelection API. My code is pretty basic and was taken from Yellowbrick's documentation. However, it does not work:
from yellowbrick.regressor import ManualAlphaSelection
from sklearn.linear_model import Ridge
model = ManualAlphaSelection(Ridge(), scoring='neg_mean_squared_error')
model.fit(X_train, y_train)
model.show()
Python raises the message: 'Ridge' is not a CV regularization model; try ManualAlphaSelection instead.
But this message is wrong, because ManualAlphaSelection is already being used.
This actually appears to be a bug in our library 😅
Would you mind opening up a bug report on GitHub so we can be sure to fix it? Thank you for checking out Yellowbrick!
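Until that's fixed, a possible workaround is Yellowbrick's CV-based AlphaSelection visualizer paired with sklearn's RidgeCV (a sketch reusing X_train/y_train from above; the alpha grid is an arbitrary choice):
import numpy as np
from sklearn.linear_model import RidgeCV
from yellowbrick.regressor import AlphaSelection

alphas = np.logspace(-10, 1, 40)  # arbitrary search grid
model = AlphaSelection(RidgeCV(alphas=alphas))
model.fit(X_train, y_train)
model.show()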

Use Dash with websockets

What is the best way to use Dash with websockets to build a real-time dashboard? I would like to update a graph every time a message is received, but the only thing I've found is calling the callback every x seconds, like the example below.
import dash
import dash_core_components as dcc
import dash_html_components as html
import dash_daq as daq
from dash.dependencies import Input, Output
import plotly
import plotly.graph_objs as go
from websocket import create_connection
from tinydb import TinyDB, Query
import json
import ssl

# Setting up the websocket and the necessary web handles
# (address is defined elsewhere)
ws = create_connection(address, sslopt={"cert_reqs": ssl.CERT_NONE})

app = dash.Dash(__name__)
app.layout = html.Div(
    [
        dcc.Graph(id='live-graph', animate=True),
        dcc.Interval(
            id='graph-update',
            interval=1 * 1000,
            n_intervals=0),
    ]
)

@app.callback(Output('live-graph', 'figure'),
              [Input('graph-update', 'n_intervals')])
def update_graph_live(n):
    message = json.loads(ws.recv())  # blocks until a message arrives
    x = message.get('data1')
    y = message.get('data2')
    # ...
    fig = go.Figure(
        data=[go.Bar(x=x, y=y)],
        layout=go.Layout(
            title=go.layout.Title(text="Bar Chart")
        )
    )
    return fig

if __name__ == '__main__':
    app.run_server(debug=True)
Is there a way to trigger the callback every time a message is received (maybe storing the messages in a database first)?
This forum post describes a method to use websocket callbacks with Dash:
https://community.plot.ly/t/triggering-callback-from-within-python/23321/6
Update
Tried it; it works well. The environment is Windows 10 x64 + Python 3.7.
To test, download the .tar.gz file and run python usage.py. It will complain about some missing packages; install them. You might have to edit the address from 0.0.0.0 to 127.0.0.1 in usage.py. Browse to http://127.0.0.1:5000 to see the results. If I had more time, I'd put this example up on GitHub (ping me if you're having trouble getting it to work, or if the original gets lost).
I had two separate servers: one for Dash, the other as a socket server, running on different ports. On receiving a message, I wrote to a common JSON file to share the information with Dash's callback. That's how I did it.
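A minimal sketch of that shared-file approach (the file name latest.json and the payload keys data1/data2 are hypothetical; a separate websocket listener process is assumed to overwrite the file on every message, so the Interval callback never blocks on ws.recv()):
import json
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
import plotly.graph_objs as go

app = dash.Dash(__name__)
app.layout = html.Div([
    dcc.Graph(id='live-graph'),
    dcc.Interval(id='graph-update', interval=1000, n_intervals=0),
])

@app.callback(Output('live-graph', 'figure'),
              [Input('graph-update', 'n_intervals')])
def update_graph_live(n):
    # Read the most recent message written by the socket server
    with open('latest.json') as f:
        message = json.load(f)
    return go.Figure(data=[go.Bar(x=message.get('data1'),
                                  y=message.get('data2'))])

if __name__ == '__main__':
    app.run_server(debug=True)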

How to save and recover PyBrain training?

Is there a way to save and recover a trained Neural Network in PyBrain, so that I don't have to retrain it each time I run the script?
PyBrain's neural networks can be saved and loaded using either Python's built-in pickle/cPickle modules or PyBrain's XML NetworkWriter.
# Using pickle
from pybrain.tools.shortcuts import buildNetwork
import pickle

net = buildNetwork(2, 4, 1)

# Save (pickle requires binary file modes)
fileObject = open('filename', 'wb')
pickle.dump(net, fileObject)
fileObject.close()

# Load
fileObject = open('filename', 'rb')
net = pickle.load(fileObject)
Note that cPickle is implemented in C and is therefore much faster than pickle. Usage is mostly the same, so just import and use cPickle instead.
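On Python 2 the usual idiom for that is a guarded import, so the code falls back to pure-Python pickle where cPickle is unavailable:
# Prefer the C implementation when available (Python 2 only;
# Python 3's pickle uses the C implementation automatically)
try:
    import cPickle as pickle
except ImportError:
    import pickle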
# Using NetworkWriter
from pybrain.tools.shortcuts import buildNetwork
from pybrain.tools.customxml.networkwriter import NetworkWriter
from pybrain.tools.customxml.networkreader import NetworkReader
net = buildNetwork(2,4,1)
NetworkWriter.writeToFile(net, 'filename.xml')
net = NetworkReader.readFrom('filename.xml')
The NetworkWriter and NetworkReader work great. I noticed that after saving and loading via pickle, the network is no longer trainable via the training functions. Thus, I would recommend the NetworkWriter method.
NetworkWriter is the way to go. With pickle you can't retrain the network, as Jorg says.
You need something like this:
from pybrain.tools.shortcuts import buildNetwork
from pybrain.tools.customxml import NetworkWriter
from pybrain.tools.customxml import NetworkReader
net = buildNetwork(4,6,1)
NetworkWriter.writeToFile(net, 'filename.xml')
net = NetworkReader.readFrom('filename.xml')
