Plot Real Time Serial Data in Python - user-interface

I couldn't plot the real-time serial data coming from the Arduino.
I am working on a project; the details are as follows:
I have a pressure sensor that detects the pressure on its probes. I transfer the pressure measured on the probes to the computer via serial port communication with the help of an Arduino Uno.
I can read the data coming from the Arduino in the Spyder editor (Python).
Using this data (2 pressure values coming from the Arduino), I have to plot the real-time serial data on the GUI.
I created the GUI, but I couldn't plot the data.
I would really appreciate any help or comments, thanks for every effort!
# -*- coding: utf-8 -*-
"""
Created on Thu Dec 29 03:57:27 2022
@author: Berk
"""
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
from matplotlib.figure import Figure
import tkinter as tk
import numpy as np
import serial
import datetime

# Rolling buffer of pressure samples read from the Arduino
data = np.array([])
cond = False  # True while the plot is running

# Timestamp string built from the current date/time (not used further below)
date = str(datetime.datetime.now()).replace(" ", ",").replace(":", ".")

# Open the serial port once; opening the same COM port a second time fails
serialPort = serial.Serial(port="COM5", baudrate=9600, bytesize=8,
                           timeout=2, stopbits=serial.STOPBITS_ONE)

###---plot data---###
def plot_data():
    global cond, data
    if cond and serialPort.in_waiting > 0:
        serialString = serialPort.readline()
        # Decode the line and strip the trailing "\r\n"
        raw = serialString.decode("utf8", errors="replace").strip()
        print(raw)
        try:
            value = float(raw)
        except ValueError:
            value = None  # ignore lines that are not a plain number
        if value is not None:
            if len(data) < 100:
                data = np.append(data, value)
            else:
                data[0:99] = data[1:100]  # shift the window one sample left
                data[99] = value
            lines.set_xdata(np.arange(0, len(data)))
            lines.set_ydata(data)
            canvas.draw()
    # Re-schedule so the plot keeps updating while mainloop() runs
    root.after(1, plot_data)

def plot_start():
    global cond
    cond = True
    serialPort.reset_input_buffer()

def plot_stop():
    global cond
    cond = False

#######GUI########
root = tk.Tk()
root.title('Pressure Value From Sensor')
root.configure(background='light blue')
root.geometry("700x500")

#####PLOTTING ON GUI##########
fig = Figure()
ax = fig.add_subplot(111)
ax.set_title('Pressure By Time')
ax.set_xlabel('By Time')
ax.set_ylabel('Pressure')
ax.set_xlim(0, 100)   # the buffer holds at most 100 samples
ax.set_ylim(0, 500)
lines, = ax.plot([], [])  # ax.plot returns a list; unpack the single Line2D
canvas = FigureCanvasTkAgg(fig, master=root)
canvas.get_tk_widget().place(x=10, y=10, width=500, height=400)
canvas.draw()

##buttons##
start = tk.Button(root, text='Start', font=('calibri', 12), command=plot_start)
start.place(x=100, y=450)
root.update()
stop = tk.Button(root, text='Stop', font=('calibri', 12), command=plot_stop)
stop.place(x=start.winfo_x() + start.winfo_reqwidth() + 20, y=450)

###starting the update loop###
serialPort.reset_input_buffer()
root.after(1, plot_data)
root.mainloop()
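Note that the question mentions two pressure values per reading, while the code above plots a single series. As a minimal sketch (assuming, though the question does not say so, that the Arduino prints the two values on one line separated by a comma, e.g. "12.3,45.6"), the line could be split like this before appending to two separate buffers:

def parse_two_pressures(line_bytes):
    # line_bytes is one raw line returned by serialPort.readline()
    raw = line_bytes.decode("utf8", errors="replace").strip()
    parts = raw.split(",")
    if len(parts) != 2:
        return None            # malformed or partial line, skip it
    try:
        return float(parts[0]), float(parts[1])
    except ValueError:
        return None

# inside plot_data() this could be used as:
#   values = parse_two_pressures(serialPort.readline())
#   if values is not None:
#       pressure1, pressure2 = values   # append each to its own buffer / line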

Related

ljspeech Hugging Face examples not working

When trying to run the ljspeech example, I get the following error, even when the model is moved to the only GPU in the system. I am using CUDA 11.7, PyTorch 1.13.1, and Fairseq 0.12.2.
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)
The code used:
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
import torch
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
    "facebook/fastspeech2-en-ljspeech",
    arg_overrides={"vocoder": "hifigan", "fp16": False}
)
model = models[0].to(torch.device('cuda'))
models[0] = model
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator(models, cfg)
text = "Hello, this is a test run."
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
ipd.Audio(wav, rate=rate)
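For what it's worth, the error says the model ended up on cuda:0 while the input sample tensors are still on the CPU. A minimal sketch of one possible fix (assuming fairseq's utils.move_to_cuda helper is available in this version; that is an assumption, not something the tutorial shows) is to move the sample onto the GPU before calling get_prediction:

from fairseq import utils

sample = TTSHubInterface.get_model_input(task, text)
sample = utils.move_to_cuda(sample)   # move every tensor in the sample dict to the GPU
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)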

ESP32S2 with Micropython buffering/cutting stderr (adafruit MAGTAG board)

I uploaded MicroPython onto an ESP32-S2 MAGTAG board, but I have an issue with error messages and some print() calls. It looks like some print messages are just cut off, and I also don't see the whole traceback line when there is an error. Example 1 (this shows how print just skips over some messages; in the code below it should show "Neopixel Initiated", but instead it shows only "Ne" and the current RTC time which I display later in the code):
print("STARTING MY CODE")
import machine
import socket
import ure
import utime
import network
import urequests
import ntptime
import json
import time
import neopixel
print("STARTING Neopixel")
np=neopixel.NeoPixel(machine.Pin(1),4)
np_enable=machine.Pin(21)
np_enable=machine.Pin(21, machine.Pin.OUT)
np_enable.value(0)
np[0]=(2,0,0)
np[1]=(0,2,0)
np[2]=(0,0,8)
np[3]=(2,2,2)
np.write()
print("Neopixel Initiated")
print("STARTING MY 4")
print("STARTING MY CODE")
print("Runing MY CODE 1")
print("Runing MY CODE 2")
rtc=machine.RTC()
tim0=machine.Timer(0)
print("Runing MY CODE 3")
tim0.deinit()
print("RTC time:", rtc.datetime())
Output is:
Ready to download this file,please wait!
> .......................................................................................
> download ok exec(open('./main.py').read(),globals())
> STARTING MY CODE
> STARTING Neopixel
> Ne0, 3, 32, 53167)
> next message is printed from further in code
When I get an error I can't see the traceback messages, as in this example where I intentionally placed an error in "npi.write()":
> Ready to download this file,please wait!
> .......................................................................................
> download ok exec(open('./main.py').read(),globals())
> STARTING MY CODE
> STARTING Neopixel
> Trac >>>
I don't know if this has to do with buffering in Python or the ESP32-S2, but it makes it impossible to debug. I have been doing the same project on an ESP32 and there was no issue.
What can be done to solve it?
P.S. I know it is supposed to be CircuitPython for this PCB.
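As a hedged debugging aid only (it assumes the missing output is caused by the native-USB CDC buffer being dropped, which the question does not confirm), one sketch is to pace the prints so the buffer has time to drain before the next message or a reset:

import time

def slow_print(*args):
    # Print, then pause briefly so the USB CDC output buffer can drain
    # before more output (or a reset) overwrites it.
    print(*args)
    time.sleep(0.05)

slow_print("STARTING Neopixel")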

Execute is not defined in IBM quantum computing lab

I am using IBM's quantum computing lab, and was following a tutorial made by IBM for getting started, and my code is throwing errors. I followed the tutorial exactly. Here is my code:
#-----------Cell 1:
import numpy as np
# Importing standard Qiskit libraries
from qiskit import QuantumCircuit, transpile, Aer, IBMQ
from qiskit.tools.jupyter import *
from qiskit.visualization import *
from ibm_quantum_widgets import *
from qiskit.providers.aer import QasmSimulator
# Loading your IBM Quantum account(s)
provider = IBMQ.load_account()
#-----------Cell 2:
# Build
#------
# Create a Quantum Circuit acting on the q register
circuit = QuantumCircuit(2, 2)
# Add a H gate on qubit 0
circuit.h(0)
# Add a CX (CNOT) gate on control qubit 0 and target qubit 1
circuit.cx(0, 1)
# Map the quantum measurement to the classical bits
circuit.measure([0,1], [0,1])
# END
# Execute
#--------
# Use Aer's qasm_simulator
simulator = Aer.get_backend('qasm_simulator')
# Execute the circuit on the qasm simulator
job = execute(circuit, simulator, shots=1000)
# Grab results from the job
result = job.result()
# Return counts
counts = result.get_counts(circuit)
print("\nTotal count for 00 and 11 are:",counts)
# END
# Visualize
#----------
# Import draw_circuit, then use it to draw the circuit
from ibm_quantum_widgets import draw_circuit
draw_circuit(circuit)
# Analyze
#--------
# Plot a histogram
plot_histogram(counts)
# END
This code throws this error:
Traceback (most recent call last):
File "/tmp/ipykernel_59/1801586149.py", line 26, in <module>
job = execute(circuit, simulator, shots=1000)
NameError: name 'execute' is not defined
Use %tb to get the full traceback.
I am new to IBM and quantum computing, how do I fix this error?
Here is the tutorial I was following if you need it: https://quantum-computing.ibm.com/lab/docs/iql/first-circuit
You did not import execute from qiskit.
Change
from qiskit import QuantumCircuit, transpile, Aer, IBMQ
to
from qiskit import QuantumCircuit, transpile, Aer, IBMQ, execute
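As a side note, if you are on a newer Qiskit release where execute has been removed entirely, a roughly equivalent sketch (assuming the Aer qasm_simulator backend is available exactly as in the tutorial) is to call the backend's run method on a transpiled circuit:

from qiskit import transpile

compiled = transpile(circuit, simulator)
job = simulator.run(compiled, shots=1000)
counts = job.result().get_counts()
print("\nTotal count for 00 and 11 are:", counts)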

Decorator function is not working as expected

I was doing some testing with imports, and I wanted to test how fast certain packages get imported using function decorators. Here is my code:
import time

def timeit(func):
    def wrapper():
        start = time.time()
        func()
        end = time.time()
        print(f'{func.__name__} executed in {end - start} second(s)')
    return wrapper

@timeit
def import_matplotlib():
    import matplotlib.pyplot

@timeit
def import_numpy():
    import numpy

import_matplotlib()
import_numpy()
Output
import_matplotlib executed in 0.4385249614715576 second(s)
import_numpy executed in 0.0 second(s)
This is not the expected output given that numpy isn't imported in an instant. What is happening here, and how can this be fixed? Thank you.
Edit
If I make this change to import_numpy():
@timeit
def import_numpy():
    import numpy
    time.sleep(2)
The output becomes this:
import_matplotlib executed in 0.4556155204772949 second(s)
import_numpy executed in 2.0041260719299316 second(s)
This tells me that there isn't anything wrong with my decorator function. Why is this behavior occurring?
Try using the timeit module? It was built for this purpose and makes that code simpler.
>>> import timeit
>>> timeit.timeit(stmt='import numpy')
0.13844075199995132
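A likely explanation for the original timings (stated here as background, not taken from the question itself): matplotlib.pyplot imports numpy as one of its dependencies, so by the time import_numpy() runs, numpy is already cached in sys.modules and the second import is essentially a dictionary lookup. A small sketch to confirm this:

import sys
import time

t0 = time.time()
import matplotlib.pyplot   # pulls in numpy as one of its dependencies
print(f'pyplot import took {time.time() - t0:.3f} s')
print('numpy already in sys.modules:', 'numpy' in sys.modules)

t0 = time.time()
import numpy               # hits the sys.modules cache, so it is near-instant
print(f'numpy import took {time.time() - t0:.6f} s')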

Pydotplus, Graphviz error: Program terminated with status: 1. stderr follows: 'C:\Users\En' is not recognized as an internal or external command

from pydotplus import graph_from_dot_data
from sklearn.tree import export_graphviz
from IPython.display import Image
dot_data = export_graphviz(tree,filled=True,rounded=True,class_names=['Setosa','Versicolor','Virginica'],feature_names=['petal length','petal width'],out_file=None)
graph = graph_from_dot_data(dot_data)
Image(graph.create_png())
Program terminated with status:
1. stderr follows: 'C:\Users\En' is not recognized as an internal or external command,
operable program or batch file.
It seems that it split my username in half. How do I overcome this?
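One workaround that is often suggested for this kind of error (hedged here, since the exact cause on this machine is not shown) is to make sure the Graphviz bin directory containing dot.exe is on the PATH of the current process, rather than relying on a user-profile path that contains a space:

import os

# Assumed install location of Graphviz; adjust to wherever dot.exe actually lives
os.environ["PATH"] += os.pathsep + r"C:\Program Files\Graphviz\bin"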
I have a very similar example that I'm trying out; it's based on an ML how-to book working with a Taiwan credit card dataset to predict default risk. My setup is as follows:
from six import StringIO
from sklearn.tree import export_graphviz
from IPython.display import Image
import pydotplus
Then creating the decision tree plot is done in this way:
dot_data = StringIO()
export_graphviz(decision_tree=class_tree,
                out_file=dot_data,
                filled=True,
                rounded=True,
                feature_names=X_train.columns,
                class_names=['pay', 'default'],
                special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
I think it's all coming from the out_file=dot_data argument but cannot figure out where the file path is created and stored as print(dot_data.getvalue()) did not show any pathname.
In my research I came across sklearn.tree.plot_tree(), which seems to do everything that graphviz does. So I took the export_graphviz arguments above and, wherever a matching argument existed in the plot_tree method, I added it.
I ended up with the following which created the same image as was found in the text:
from sklearn import tree
import matplotlib.pyplot as plt

plt.figure(figsize=(20, 10))
tree.plot_tree(class_tree,
               filled=True, rounded=True,
               feature_names=X_train.columns,
               class_names=['pay', 'default'],
               fontsize=12)
plt.show()