Focal loss has a bug when using the Theano backend

My question is: how do I rewrite the following two lines of code for Theano 0.8.2?
from theano import tensor as T  # (rewrite the code below using T functions)
pt_1 = tf.where(tf.equal(target, 1), output, tf.ones_like(output))
pt_0 = tf.where(tf.equal(target, 0), output, tf.zeros_like(output))
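A minimal sketch of one possible Theano translation, assuming target and output are Theano tensors; T.switch plays the role of tf.where here, selecting elementwise between two tensors based on a condition:
from theano import tensor as T
# T.switch(cond, a, b) picks from a where cond is true, else from b
pt_1 = T.switch(T.eq(target, 1), output, T.ones_like(output))
pt_0 = T.switch(T.eq(target, 0), output, T.zeros_like(output))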

Related

ljspeech Hugging Face examples not working

When trying to run the ljspeech example, I get the following error, even though the model is moved to the only GPU in the system. I am using CUDA 11.7, PyTorch 1.13.1, and fairseq 0.12.2.
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)
The code used:
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
import torch
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
    "facebook/fastspeech2-en-ljspeech",
    arg_overrides={"vocoder": "hifigan", "fp16": False}
)
model = models[0].to(torch.device('cuda'))
models[0] = model
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator(models, cfg)
text = "Hello, this is a test run."
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
ipd.Audio(wav, rate=rate)
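One hedged suggestion, not from the original thread: the sample built by get_model_input stays on the CPU, so moving it onto the same device as the model may resolve the mismatch. fairseq provides utils.move_to_cuda for this; a sketch:
from fairseq import utils
# Move the input sample's tensors to the GPU so they live on the same device as the model
sample = TTSHubInterface.get_model_input(task, text)
sample = utils.move_to_cuda(sample)  # assumes a CUDA device is available
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
ipd.Audio(wav, rate=rate)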

Execute is not defined in IBM quantum computing lab

I am using IBM's Quantum Lab and was following IBM's getting-started tutorial, but my code throws an error even though I followed the tutorial exactly. Here is my code:
#-----------Cell 1:
import numpy as np
# Importing standard Qiskit libraries
from qiskit import QuantumCircuit, transpile, Aer, IBMQ
from qiskit.tools.jupyter import *
from qiskit.visualization import *
from ibm_quantum_widgets import *
from qiskit.providers.aer import QasmSimulator
# Loading your IBM Quantum account(s)
provider = IBMQ.load_account()
#-----------Cell 2:
# Build
#------
# Create a Quantum Circuit acting on the q register
circuit = QuantumCircuit(2, 2)
# Add a H gate on qubit 0
circuit.h(0)
# Add a CX (CNOT) gate on control qubit 0 and target qubit 1
circuit.cx(0, 1)
# Map the quantum measurement to the classical bits
circuit.measure([0,1], [0,1])
# END
# Execute
#--------
# Use Aer's qasm_simulator
simulator = Aer.get_backend('qasm_simulator')
# Execute the circuit on the qasm simulator
job = execute(circuit, simulator, shots=1000)
# Grab results from the job
result = job.result()
# Return counts
counts = result.get_counts(circuit)
print("\nTotal count for 00 and 11 are:",counts)
# END
# Visualize
#----------
# Import draw_circuit, then use it to draw the circuit
from ibm_quantum_widgets import draw_circuit
draw_circuit(circuit)
# Analyze
#--------
# Plot a histogram
plot_histogram(counts)
# END
This code throws this error:
Traceback (most recent call last):
File "/tmp/ipykernel_59/1801586149.py", line 26, in <module>
job = execute(circuit, simulator, shots=1000)
NameError: name 'execute' is not defined
Use %tb to get the full traceback.
I am new to IBM Quantum and to quantum computing; how do I fix this error?
Here is the tutorial I was following if you need it: https://quantum-computing.ibm.com/lab/docs/iql/first-circuit
You did not import execute from qiskit.
Change
from qiskit import QuantumCircuit, transpile, Aer, IBMQ
to
from qiskit import QuantumCircuit, transpile, Aer, IBMQ, execute
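As a side note, not part of the original answer: newer Qiskit releases have removed qiskit.execute entirely. In that case a roughly equivalent pattern, sketched here without tying it to a specific version, is to transpile the circuit and submit it with the backend's run method:
from qiskit import transpile
# Transpile for the simulator backend, then submit directly with backend.run
compiled = transpile(circuit, simulator)
job = simulator.run(compiled, shots=1000)
counts = job.result().get_counts()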

Pydotplus, Graphviz error: Program terminated with status: 1. stderr follows: 'C:\Users\En' is not recognized as an internal or external command

from pydotplus import graph_from_dot_data
from sklearn.tree import export_graphviz
from IPython.display import Image
dot_data = export_graphviz(tree, filled=True, rounded=True,
                           class_names=['Setosa', 'Versicolor', 'Virginica'],
                           feature_names=['petal length', 'petal width'],
                           out_file=None)
graph = graph_from_dot_data(dot_data)
Image(graph.create_png())
Program terminated with status:
1. stderr follows: 'C:\Users\En' is not recognized as an internal or external command,
operable program or batch file.
It seems that it split my username in half. How do I overcome this?
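A quick diagnostic, added here and not part of the original thread: the truncation at 'C:\Users\En' suggests the Graphviz dot executable is being resolved through a path that contains a space (in the user name) and is not quoted when invoked. You can check which dot executable will be picked up with:
import shutil
# Prints the full path of the Graphviz 'dot' executable found on PATH;
# a space in this path is consistent with the truncated command in the error.
print(shutil.which("dot"))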
I have a very similar example that I'm trying out; it's based on an ML how-to book that works with a Taiwan credit card dataset to predict default risk. My setup is as follows:
from six import StringIO
from sklearn.tree import export_graphviz
from IPython.display import Image
import pydotplus
Then creating the decision tree plot is done in this way:
dot_data = StringIO()
export_graphviz(decision_tree=class_tree,
                out_file=dot_data,
                filled=True,
                rounded=True,
                feature_names=X_train.columns,
                class_names=['pay', 'default'],
                special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
I think it's all coming from the out_file=dot_data argument, but I cannot figure out where the file path is created and stored, since print(dot_data.getvalue()) did not show any pathname.
In my research I came across sklearn.tree.plot_tree(), which seems to do everything that the Graphviz route does. So I took the export_graphviz arguments above and, wherever plot_tree had a matching argument, carried it over.
I ended up with the following, which created the same image as the one in the book:
from sklearn import tree
import matplotlib.pyplot as plt  # needed for plt.figure and plt.show

plt.figure(figsize=(20, 10))
tree.plot_tree(class_tree,
               filled=True, rounded=True,
               feature_names=X_train.columns,
               class_names=['pay', 'default'],
               fontsize=12)
plt.show()
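If you also want to persist the rendered tree to disk, a standard matplotlib call can be added just before plt.show(); the filename here is only an example:
plt.savefig('tree.png', dpi=150, bbox_inches='tight')  # save the current figure before plt.show()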

Saving or downloading plotly iplot images on Google Colaboratory

I have been trying to download a plot created with Plotly on Google Colaboratory. So far this is what I have attempted:
I have tried changing
files.download('foo.svg')
to
files.download('foo')
and I still get no results. I navigated to the Files pane in Google Colab and nothing shows up there.
import numpy as np
import pandas as pd
from plotly.offline import iplot
import plotly.graph_objs as go
from google.colab import files
def enable_plotly_in_cell():
    import IPython
    from plotly.offline import init_notebook_mode
    display(IPython.core.display.HTML('''<script src="/static/components/requirejs/require.js"></script>'''))
    init_notebook_mode(connected=False)
# this actually shows the plot
enable_plotly_in_cell()
N = 500
x = np.linspace(0, 1, N)
y = np.random.randn(N)
df = pd.DataFrame({'x': x, 'y': y})
df.head()
data = [
    go.Scatter(
        x=df['x'],  # assign x as the dataframe column 'x'
        y=df['y']
    )
]
iplot(data, image='svg', filename='foo')
files.download('foo.svg')
This is the error I am getting:
OSErrorTraceback (most recent call last)
<ipython-input-18-31523eb02a59> in <module>()
29 iplot(data,image = 'svg', filename = 'foo')
30
---> 31 files.download('foo.svg')
32
/usr/local/lib/python2.7/dist-packages/google/colab/files.pyc in download(filename)
140 msg = 'Cannot find file: {}'.format(filename)
141 if _six.PY2:
--> 142 raise OSError(msg)
143 else:
144 raise FileNotFoundError(msg) # pylint: disable=undefined-variable
OSError: Cannot find file: foo.svg
To save vector or raster images (e.g. SVGs or PNGs) from Plotly figures you need to have Kaleido (preferred) or Orca (legacy) installed, which is actually possible using the following commands in Colab:
Kaleido:
!pip install kaleido
Orca:
!pip install "plotly>=4.0.0"
!wget https://github.com/plotly/orca/releases/download/v1.2.1/orca-1.2.1-x86_64.AppImage -O /usr/local/bin/orca
!chmod +x /usr/local/bin/orca
!apt-get install xvfb libgtk2.0-0 libgconf-2-4
Once either of the above is done you can use the following code to make, show and export a figure (using plotly version 4):
import plotly.graph_objects as go
fig = go.Figure( go.Scatter(x=[1,2,3], y=[1,3,2] ) )
fig.show()
fig.write_image("image.svg")
fig.write_image("image.png")
The files can then be downloaded with:
from google.colab import files
files.download('image.svg')
files.download('image.png')
Try this, it works for me:
import plotly.graph_objects as go
from google.colab import files

fig = go.Figure(...)  # build your figure here
fig.write_html("file.html")  # write the figure out as an HTML file
files.download("file.html")  # download the HTML file

GridSearch with XGBoost producing a deprecation warning and an apparently infinite loop

I am trying to do hyperparameter tuning using GridSearchCV on XGBoost, but I'm getting the following error.
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/label.py:151: DeprecationWarning: The truth value of an empty array is ambiguous. Returning False, but in future this will result in an error. Use `array.size > 0` to check that an array is not empty.
  if diff:
This keeps on running forever. Given below is the code.
classifier = xgb.XGBClassifier()
from sklearn.grid_search import GridSearchCV
n_estimators = [10, 50, 100, 150, 200, 250, 300]
max_depth = [2, 3, 4, 5, 6, 7, 8, 9, 10]
learning_rate = [0.1, 0.01, 0.09, 0.08, 0.07, 0.001]
colsample_bytree = [0.5, 0.6, 0.7, 0.8, 0.9]
min_child_weight = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
gamma = [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 1]
subsample = [0.5, 0.6, 0.7, 0.8, 0.9]
param_grid = dict(n_estimators=n_estimators, max_depth=max_depth, learning_rate=learning_rate,
                  colsample_bytree=colsample_bytree, min_child_weight=min_child_weight,
                  gamma=gamma, subsample=subsample)
grid = GridSearchCV(classifier, param_grid, cv=10, scoring='accuracy')
grid.fit(X, Y)
grid.grid_scores_
print(grid.best_score_)
print(grid.best_params_)
print(grid.best_estimator_)
# Predicting the Test set results
Y_pred = classifier.predict(X_test)
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(Y_test, Y_pred)
I am using Python 3.5 on Google Colaboratory; XGBoost and the grid search library are already preloaded.
Please suggest what is going wrong.
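One observation added here, not part of the original question: with cv=10, this grid implies an enormous number of model fits, which is why the search appears to run forever:
# Sizes of the parameter lists above: 7 * 9 * 6 * 5 * 10 * 8 * 5 combinations
combos = 7 * 9 * 6 * 5 * 10 * 8 * 5   # 756,000 parameter combinations
fits = combos * 10                    # 10-fold CV => 7,560,000 XGBoost fits
print(combos, fits)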
