forecast = model.forecast(steps=82)[0]
AttributeError Traceback (most recent call last)
Input In [135], in <cell line: 1>()
----> 1 forecast = model.forecast(steps=82)[0]
AttributeError: 'ARIMA' object has no attribute 'forecast'
Import statsmodels:
import statsmodels.api as sm
Then build and fit the model with statsmodels:
model = sm.tsa.arima.ARIMA(train_data, order=(1, 1, 1))
fitted = model.fit()
print(fitted.summary())
Now forecast can be used, because it is an attribute of the fitted results object rather than of the unfitted ARIMA model.
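As a minimal sketch (assuming the same train_data series and the same 82-step horizon as above), the corrected call would look like this; in recent statsmodels versions forecast() returns the predicted values directly, so the trailing [0] from the original line is not needed:
import statsmodels.api as sm

model = sm.tsa.arima.ARIMA(train_data, order=(1, 1, 1))
fitted = model.fit()
# forecast() is a method of the fitted results object, not of the unfitted ARIMA model
forecast = fitted.forecast(steps=82)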
Related
I've created a simple NLP model in PyTorch, trained it, and it works as expected in Python.
Then I exported it to TorchScript with jit.trace; loading it back into Python works fine and the model behaves as expected.
But when I try to execute it in Rust with tch-rs (the Rust bindings for the C++ API of PyTorch), the following error occurs, and I have no idea how to debug it:
Error: Torch("The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
File \"code/__torch__/___torch_mangle_469.py\", line 17, in forward
dropout = self.dropout
bert = self.bert
_0 = (dropout).forward((bert).forward(input_id, mask, ), )
~~~~~~~~~~~~~ <--- HERE
_1 = (relu).forward((linear).forward(_0, ), )
return _1
File \"code/__torch__/transformers/models/bert/modeling_bert/___torch_mangle_465.py\", line 19, in forward
batch_size = ops.prim.NumToTensor(torch.size(input_id, 0))
_0 = int(batch_size)
seq_length = ops.prim.NumToTensor(torch.size(input_id, 1))
~~~~~~~~~~ <--- HERE
_1 = int(seq_length)
_2 = int(seq_length)
Traceback of TorchScript, original code (most recent call last):
/user/.conda/envs/tch/lib/python3.10/site-packages/transformers/models/bert/modeling_bert.py(954): forward
/user/.conda/envs/tch/lib/python3.10/site-packages/torch/nn/modules/module.py(1176): _slow_forward
/user/.conda/envs/tch/lib/python3.10/site-packages/torch/nn/modules/module.py(1192): _call_impl
/var/folders/zs/vmmy3w4n0ns1c0kj91skmfnm0000gn/T/ipykernel_10987/868892765.py(17): forward
/user/.conda/envs/tch/lib/python3.10/site-packages/torch/nn/modules/module.py(1176): _slow_forward
/user/.conda/envs/tch/lib/python3.10/site-packages/torch/nn/modules/module.py(1192): _call_impl
/user/.conda/envs/tch/lib/python3.10/site-packages/torch/jit/_trace.py(957): trace_module
/user/.conda/envs/tch/lib/python3.10/site-packages/torch/jit/_trace.py(753): trace
/var/folders/zs/vmmy3w4n0ns1c0kj91skmfnm0000gn/T/ipykernel_10987/749605851.py(1): <module>
/user/.conda/envs/tch/lib/python3.10/site-packages/IPython/core/interactiveshell.py(3430): run_code
/user/.conda/envs/tch/lib/python3.10/site-packages/IPython/core/interactiveshell.py(3341): run_ast_nodes
/user/.conda/envs/tch/lib/python3.10/site-packages/IPython/core/interactiveshell.py(3168): run_cell_async
/user/.conda/envs/tch/lib/python3.10/site-packages/IPython/core/async_helpers.py(129): _pseudo_sync_runner
/user/.conda/envs/tch/lib/python3.10/site-packages/IPython/core/interactiveshell.py(2970): _run_cell
/user/.conda/envs/tch/lib/python3.10/site-packages/IPython/core/interactiveshell.py(2941): run_cell
/user/.conda/envs/tch/lib/python3.10/site-packages/ipykernel/zmqshell.py(531): run_cell
/user/.conda/envs/tch/lib/python3.10/site-packages/ipykernel/ipkernel.py(380): do_execute
/user/.conda/envs/tch/lib/python3.10/site-packages/ipykernel/kernelbase.py(700): execute_request
/user/.conda/envs/tch/lib/python3.10/site-packages/ipykernel/kernelbase.py(383): dispatch_shell
/user/.conda/envs/tch/lib/python3.10/site-packages/ipykernel/kernelbase.py(496): process_one
/user/.conda/envs/tch/lib/python3.10/site-packages/ipykernel/kernelbase.py(510): dispatch_queue
/user/.conda/envs/tch/lib/python3.10/asyncio/events.py(80): _run
/user/.conda/envs/tch/lib/python3.10/asyncio/base_events.py(1868): _run_once
/user/.conda/envs/tch/lib/python3.10/asyncio/base_events.py(597): run_forever
/user/.conda/envs/tch/lib/python3.10/site-packages/tornado/platform/asyncio.py(212): start
/user/.conda/envs/tch/lib/python3.10/site-packages/ipykernel/kernelapp.py(701): start
/user/.conda/envs/tch/lib/python3.10/site-packages/traitlets/config/application.py(990): launch_instance
/user/.conda/envs/tch/lib/python3.10/site-packages/ipykernel_launcher.py(12): <module>
/user/.conda/envs/tch/lib/python3.10/runpy.py(75): _run_code
/user/.conda/envs/tch/lib/python3.10/runpy.py(191): _run_module_as_main
RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
")
And here is the simple model that I'm trying to execute:
from torch import nn
from transformers import BertModel

class BertClassifier(nn.Module):
    def __init__(self, dropout=0.5):
        super(BertClassifier, self).__init__()
        self.bert = BertModel.from_pretrained('bert-base-cased')
        self.dropout = nn.Dropout(dropout)
        self.linear = nn.Linear(768, 5)
        self.relu = nn.ReLU()

    def forward(self, input_id, mask):
        _, pooled_output = self.bert(input_ids=input_id, attention_mask=mask, return_dict=False)
        dropout_output = self.dropout(pooled_output)
        linear_output = self.linear(dropout_output)
        final_layer = self.relu(linear_output)
        return final_layer
I'm new to ML and I can't find any docs on how to debug TorchScript runtime errors, so I'd appreciate any help in solving this problem.
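For reference, here is a minimal sketch of how such a model might be traced and reloaded in Python (the tokenizer name, example text, and max_length are my assumptions, not from the original post). A traced module replays the operations recorded for the example inputs, so it expects later inputs with the same 2-D (batch, sequence) layout; passing a 1-D tensor without the batch dimension would produce a "Dimension out of range" error like the one above:
import torch
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')  # assumed tokenizer
model = BertClassifier()
model.eval()

enc = tokenizer("an example sentence", padding='max_length', max_length=32,
                truncation=True, return_tensors='pt')
input_id = enc['input_ids']       # shape (1, 32) -- note the leading batch dimension
mask = enc['attention_mask']      # shape (1, 32)

traced = torch.jit.trace(model, (input_id, mask))
traced.save("bert_classifier.pt")

# Reloading and calling with the same 2-D layout works in Python;
# feeding a 1-D tensor (no batch dimension) reproduces the size(input_id, 1) failure.
loaded = torch.jit.load("bert_classifier.pt")
out = loaded(input_id, mask)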
I have the following chunk of code from this link:
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
This gives me the following error:
TypeError Traceback (most recent call last)
<ipython-input-13-c56f34229c4a> in <module>()
5
6 model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
----> 7 tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
TypeError: 'NoneType' object is not callable
I'm using Google Colab, but funnily enough it works perfectly fine in VSCode.
Same here: it works fine in my local environment but not on Colab. I fixed it by pinning transformers==4.16.0 instead of the latest version. That version was chosen arbitrarily, so it may also work with a more recent one.
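In a Colab cell the pin would look something like the sketch below (adding sentencepiece is my assumption, not part of the original answer; restarting the runtime after installing is usually required for the new version to be picked up):
!pip install transformers==4.16.0 sentencepiece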
skipFlds = [desc.shapeFieldName, desc.OIDFieldName]
Error: AttributeError: DescribeData: Method shapeFieldName does not exist
That error is common when a shape/geometry field doesn't exist, as in a regular table. Double-check that you are describing a spatial dataset; if you are, the dataset may be corrupted and ArcPy isn't recognizing that a spatial data type exists.
>>> import arcpy
>>>
>>> fc = # path to feature class
>>> tbl = # path to table
>>>
>>> desc_fc = arcpy.Describe(fc)
>>> desc_fc.shapeFieldName
'Shape'
>>>
>>> desc_tbl = arcpy.Describe(tbl)
>>> desc_tbl.shapeFieldName
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: DescribeData: Method shapeFieldName does not exist
>>>
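A defensive check along these lines might help (a sketch, assuming hasattr behaves as usual on Describe objects; the function name and docstring are illustrative):
import arcpy

def get_shape_field(dataset_path):
    """Return the geometry field name if the dataset is spatial, otherwise None."""
    desc = arcpy.Describe(dataset_path)
    # Plain tables have no geometry, so accessing shapeFieldName raises the
    # "DescribeData: Method shapeFieldName does not exist" error seen above.
    if hasattr(desc, "shapeFieldName"):
        return desc.shapeFieldName
    return None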
Unable to split a frame using split_frame(). The dataframe displays fine with show(), but I cannot split it. Please help.
Below is a sample of the code I have used.
from __future__ import print_function

from h2o.estimators.random_forest import H2ORandomForestEstimator
from h2o.estimators.gbm import H2OGradientBoostingEstimator
from h2o.estimators.deeplearning import H2ODeepLearningEstimator
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
from h2o.estimators.stackedensemble import H2OStackedEnsembleEstimator

temp = spark.read.option("header", "true").option("inferSchema", "true").csv("hdfs://bda-ns/user/august_week2.csv")
train, test, valid = temp.split_frame(ratios=[.75, .15])
Expected: no error; data split into train, test, and valid data frames.
Actual:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/cloudera/parcels/SPARK2-2.3.0.cloudera2-1.cdh5.13.3.p0.316101/lib/spark2/python/pyspark/sql/dataframe.py", line 1182, in __getattr__
"'%s' object has no attribute '%s'" % (self.__class__.__name__, name))
AttributeError: 'DataFrame' object has no attribute 'split_frame'
>>> train,test,valid = temp.split_frame(ratios=[.75, .15])
Traceback (most recent call last):
File "/opt/cloudera/parcels/SPARK2-2.3.0.cloudera2-1.cdh5.13.3.p0.316101/lib/spark2/python/pyspark/context.py", line 234, in signal_handler
You could use randomSplit on your Spark DataFrame.
If you want to use the H2O-3 split_frame method, you would first have to convert your Spark frame to an H2O frame, in which case you could use hc.as_h2o_frame(spark_df), where hc is your H2OContext (note: you would also need to create the H2OContext for this to work).
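A sketch of both options (the seed, ratios, and the H2OContext setup are assumptions; the exact getOrCreate signature varies between pysparkling versions):
# Option 1: stay in Spark and split with randomSplit (weights are normalized).
train, valid, test = temp.randomSplit([0.75, 0.15, 0.10], seed=42)

# Option 2: convert to an H2OFrame first, then use H2O's split_frame.
from pysparkling import H2OContext
hc = H2OContext.getOrCreate(spark)
h2o_frame = hc.as_h2o_frame(temp)
train_h2o, test_h2o, valid_h2o = h2o_frame.split_frame(ratios=[0.75, 0.15])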
I have been trying to find a way to update the GUI thread from a Python thread outside of main. The PyQt5 docs on sourceforge have good instructions on how to do this. But I still can't get things to work.
Is there a good way to explain the following output from an interactive session? Shouldn't there be a way to call the emit method on these objects?
>>> from PyQt5.QtCore import QObject, pyqtSignal
>>> obj = QObject()
>>> sig = pyqtSignal()
>>> obj.emit(sig)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'QObject' object has no attribute 'emit'
and
>>> obj.sig.emit()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'QObject' object has no attribute 'sig'
and
>>> obj.sig = pyqtSignal()
>>> obj.sig.emit()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'PyQt5.QtCore.pyqtSignal' object has no attribute 'emit'
The following explanation and code are from the PyQt5 docs.
New signals should only be defined in sub-classes of QObject. They must be part of the class definition and cannot be dynamically added as class attributes after the class has been defined.
from PyQt5.QtCore import QObject, pyqtSignal

class Foo(QObject):
    # Define a new signal called 'trigger' that has no arguments.
    trigger = pyqtSignal()

    def connect_and_emit_trigger(self):
        # Connect the trigger signal to a slot.
        self.trigger.connect(self.handle_trigger)
        # Emit the signal.
        self.trigger.emit()

    def handle_trigger(self):
        # Show that the slot has been called.
        print("trigger signal received")