DeepAR forecast with GluonTS: 'TrainOutput' object has no attribute 'predict'

I'm creating a model to forecast real estate prices using DeepAR. I have monthly values from 2000 until today and I would like to forecast the values for the next 2 years, i.e. forecast unknown future target values with GluonTS DeepAR.
The model is created using:
estimator = DeepAREstimator(prediction_length=24, context_length=120, freq = "M", trainer=Trainer())
train_output = estimator.train_model(subset_train_list)
I want to predict the next 2 years with:
train_output.predict(subset_test_list), where subset_test_list corresponds to the units with target values until today, but I get this error:
AttributeError: 'TrainOutput' object has no attribute 'predict'
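A minimal sketch of the usual GluonTS pattern, assuming subset_train_list and subset_test_list are GluonTS datasets (module paths and field names can differ between GluonTS versions):
from gluonts.model.deepar import DeepAREstimator
from gluonts.mx.trainer import Trainer  # gluonts.trainer in older versions

estimator = DeepAREstimator(prediction_length=24, context_length=120,
                            freq="M", trainer=Trainer())

# train() returns a Predictor, which is the object that exposes predict()
predictor = estimator.train(subset_train_list)

# pass the series observed so far; this yields one Forecast object per series
forecasts = list(predictor.predict(subset_test_list))

# train_model() instead returns a TrainOutput named tuple; the trained
# predictor is one of its fields rather than a predict() method:
# predictor = estimator.train_model(subset_train_list).predictor
In short, predict() lives on the Predictor returned by train(), not on the TrainOutput returned by train_model().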

Related

DAX: count measure values within a given range

I am trying to create a measure that counts the instances where another measure falls between given values.
The first measure is a forecast accuracy, which is calculated over products and customers with a target value of 1. I would then like to make a monthly report that shows for how many products the forecast accuracy is less than 0.85, between 0.85 and 1.15, or over 1.15.
The measure I tried for the middle category, which does not give the desired result:
var tab = SUMMARIZE(data, data[ComponentNumber], "Accuracy", [Forecast accuracy])
return SUMX(tab, IF([Accuracy] > 0.85 && [Accuracy] < 1.15, 1, 0))
The data table also has a customer number, which is why I first tried evaluating the measure [Forecast accuracy] only over components, disregarding the customers.
One source of the problem may be that the measure [Forecast accuracy] is calculated as a division of two measures, [Ordered Quantity] and [Forecast Quantity], of which the former is in another table. Does this affect the evaluation of my attempted measure?

I can't catch the trend in my model and forecast it

I work for a call center and I need to forecast call volumes.
In order to do so I followed these steps:
- Filled missing values with linear interpolation
- Decomposed my data into trend + residual + seasonality
- Made my data stationary using statsmodels
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose
from atspy import AutomatedModel

# read dataset
df = pd.read_excel("df.xlsx", index_col=0)
# set daily frequency
df = df.asfreq('D')
# replace missing data with linear interpolation
df.interpolate(method='linear', inplace=True)
# decompose the time series with statsmodels' seasonal_decompose
result = seasonal_decompose(df.value, model='multiplicative')
# remove seasonality and trend to make the data stationary
df_non_seasonal = df.value / (result.seasonal * result.trend)
# make forecasts with Prophet via atspy
model_list = ["Prophet"]
model = AutomatedModel(df=df_non_seasonal, model_list=model_list)
My model works well for forecasting call volumes; my problem is that the trend is very hard to forecast. It seems unpredictable and I can't find a way to capture it.
Do you have any advice on processing the trend that would make it easier to predict?
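One common option, sketched below under the assumption that result is the decomposition computed above (this is not from the original post): model the extracted trend component separately with a damped Holt's linear method and recombine its forecast with the seasonal and residual forecasts.
from statsmodels.tsa.holtwinters import Holt

# the decomposed trend has NaNs at both ends of the series
trend = result.trend.dropna()

# a damped linear trend is a conservative way to extrapolate the trend
trend_model = Holt(trend, damped_trend=True).fit()   # 'damped=True' in older statsmodels
trend_forecast = trend_model.forecast(30)            # e.g. 30 periods ahead

# recombine: multiply the forecast of the stationary series by the trend
# forecast and by the seasonal factors for the corresponding dates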

Holt-Winters model fitted values

How do I get the model fit for the training period in the Holt-Winters method for a univariate time series? For the future forecast I can use the following syntax, but I am not sure what the syntax is for the training period.
result = model.fit()
start = len(df)
end = len(df) + 6
# Predictions for one year against the test set
fcast = result.predict(start=start, end=end)
I got the answer; it is:
result.fittedvalues
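A minimal end-to-end sketch with statsmodels (the file name, column name, and seasonal settings are assumptions for illustration, not from the original post):
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

df = pd.read_csv("series.csv", index_col=0, parse_dates=True)  # hypothetical data

model = ExponentialSmoothing(df["value"], trend="add",
                             seasonal="add", seasonal_periods=12)
result = model.fit()

# in-sample fit over the training period
fitted = result.fittedvalues

# out-of-sample forecast for the next 7 periods
fcast = result.forecast(7)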

'LpVariable' object does not support indexing

I ran into the error 'LpVariable' object does not support indexing. Is this due to how my data are set up for my PuLP optimization?
Basically, I am trying to get the optimized sales by multiplying the replenishment quantity with the selling price for each SKU (UPC). The replenishment quantity will be based on the quantity sold (available in the dataset) and will be the constraint for the optimization problem: I should not replenish much more than what I sold.
Could someone with PuLP experience please help me with my error? Is the way I set up my for-loop logical?
Here is the key portion of my Python code:
After sorting my data based on LocationNumber, the first and last few rows of my data are as follows:
(screenshot of the first and last few rows of the data omitted)
df.drop(['LowlawWeekYear'], axis=1, inplace=True)  # drop the WeekYear column

store_list = {108}  # LocationNumber; will only run one store for now
for store_number in sorted(store_list):
    specific_store = df[df['LocationNumber'] == store_number]
    # average AvgSellingPricewoTax and Units so there is only one row per UPC
    Qty_Price_df = specific_store.groupby('UPC', as_index=False)[['AvgSellingPricewoTax', 'Units']].mean()
    SKU_list = sorted(list(set(Qty_Price_df.UPC)))  # list of SKU numbers
    Variable_list = dict(zip(Qty_Price_df.UPC, Qty_Price_df.UPC))  # variables I am looking to optimize
    Price_list = dict(zip(Qty_Price_df.UPC, Qty_Price_df.AvgSellingPricewoTax))
    Qty_list = dict(zip(Qty_Price_df.UPC, Qty_Price_df.Units))  # quantity sold per UPC, used in the constraint

from pulp import *

optimization = LpProblem("Perfect_Store", LpMaximize)
Variable_list = LpVariable("SKU", lowBound=0)  # continuous by default

# define objective function
optimization += lpSum([Price_list[type]*Variable_list[type] for type in SKU_list]), "Total Sales by multiplying Price with Variable Qty"
# here is where I ran into the error: 'LpVariable' object does not support indexing

# set a constraint for each SKU
for c in SKU_list:
    optimization += (Qty_list[c] <= Qty_list[c]*1.05), "Replenishment quantity should not be more than 5% above the quantity sold"

print("Status:", LpStatus[optimization.status])

Ruby / Redmine: compile a hash from issue subjects and appropriate custom fields

I’m using Redmine and Computed Custom Field plugin.
The plugin provides a possibility to make custom fields computed and it accepts ruby code for calculations.
In Redmine I have a project (Project_id = 11) in which I calculate the cost of products in a separate custom field for each issue. It looks like this:
Each Issue has a custom field (cf_id = 31) for selecting the product: Pears, Pineapples, Tomatoes, Coconuts.
Each Issue has a custom field (cf_id = 32) for entering the quantity (pieces) of goods.
Each Issue has a custom field (cf_id = 33) for entering the weight (pounds) of goods.
Each Issue has a computed custom field (cf_id = 34) in which the formula calculates the cost of the product.
The formula in the computed custom field (cf_id = 34) includes two hashes with prices of products (depending on the product type):
products_by_weight = {
"Pears" => [110],
"Tomatoes" => [120]
}
products_by_pieces = {
"Pineapples" => [130,300],
"Coconuts" => [140,200]
}
Then my formula checks whether the product selected in cf_id = 31 belongs to the first or the second hash and performs the corresponding calculation:
Multiplies the price by the weight (cf_id = 33) when the product comes from the first list,
Or multiplies the price by the quantity (cf_id = 32) when the product comes from the second list. The second value in the value array of the "products_by_pieces" hash is the weight limit per piece. If the weight divided by this limit is larger than the quantity entered in cf_id = 32, then the formula in the second scenario uses that computed quantity instead of the one indicated in cf_id = 32.
Now I'm trying to move these variables outside the formula. I made a project (Project_id = 22) in which I want to store these variables as issues.
I imagine it like this:
The name of the issue is the name of the product
Each issue has two custom fields:
cf_id = 41 is price of product
cf_id = 42 is weight limit per piece
For each issue a category is assigned: "products_by_weight" or "products_by_pieces".
I want to compile the same hashes that are currently hard-coded in the formula of cf_id = 34 in the issues of project 11, but automatically from the issues of project 22, taking the category into account.
So far, all I have achieved is to find the price of a known product from the issues of project 22:
price = Project.find(22).issues.where(subject: "Pineapples").first.try(:custom_field_value,41)
But this does not really help, and it requires changing the code every time a new product is added.
I'm new to programming and Ruby, so I’m trying to experiment with Redmine classes, and tried to compile a hash with such code:
Issue.by_category(Project.find(22))
But as a result, so far I have received only this:
[{"status_id"=>"27", "closed"=>true, "category_id"=>"1", "total"=>"10"}]
Which is completely different from the result I expect.
Any help would be appreciated!
UPD.
Right now, my variables (product prices and weight limits) are in a hash that is directly part of the code of computed field 34. But I do not want these variables to be part of the code. I want to manage them as issues with the corresponding custom fields (41 and 42) in a separate project (22), so that a regular user can change or add these values in issues without having to touch the code of the computed custom field (34). So I want to compile that hash from the issues of project 22 instead of writing it out by hand. I assume the subjects of the issues of project 22 should become the keys and the arrays of custom field values [41, 42] the values. In doing so, I need two separate hashes, determined by the assigned category ("products_by_weight" and "products_by_pieces"), because they are calculated differently, and in project 22 I have other variables stored as custom field values in issues with a different category.
I solved this problem in the following way.
As planned, I now store the price list for my products as issues in a separate project (ID 22). To get the price hash for all products of the selected category (A) inside the computed custom field's formula of an issue of project 11, I do the following:
PRICELIST_PROJECT_ID = 22
CATEGORY_A_ID = 1
PRICE_VALUE_CFID = 41
WLIMIT_VALUE_CFID = 42
delimiter = ','

# issues of project 22 that belong to category A (the price list entries)
pricelist_issues_cat_a = Project.find(PRICELIST_PROJECT_ID).issues.select { |rate| rate.category_id == CATEGORY_A_ID }

cata_products_names = []
cata_products_pvalues = []
pricelist_issues_cat_a.each_with_index do |issue, i|
  cata_products_names[i] = issue.try(:subject)
  cata_products_pvalues[i] = issue.try(:custom_field_value, PRICE_VALUE_CFID).split(delimiter).map(&:to_f)
end

# build { product subject => array of values parsed from cf 41 }
cata_price_hash = Hash[cata_products_names.zip(cata_products_pvalues)]
And the same way for product category B.
Not sure if this is the most efficient way, but it works for me.
