Statsmodels SARIMAX: turn process output off

I'm using the Statsmodels statespace sarimax like so:
model = sm.tsa.statespace.SARIMAX(param1, param2, etc)
fit_model = model.fit()
While fit() is running, the optimizer prints its convergence output to the terminal.
Is there a way to turn this output off?

Set disp to False explicitly:
fit_model = model.fit(disp=False)
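For context, a minimal end-to-end sketch (the series and order parameters here are made up; only disp=False matters). By default fit() prints the optimizer's convergence messages, and disp=False suppresses them:

import numpy as np
import statsmodels.api as sm

endog = np.random.randn(120).cumsum()  # placeholder series
model = sm.tsa.statespace.SARIMAX(endog, order=(1, 1, 1),
                                  seasonal_order=(1, 1, 1, 12))
fit_model = model.fit(disp=False)  # no convergence output is printed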

Related

"working" terminal prompt running in parallel Python 3.10

I am trying to show an animated "working" prompt while running some Python code. I've been searching for a way to do so, but the solutions I've found are not quite what I want (if I recall correctly, tqdm and alive-progress require a for loop with a defined number of iterations) and I'd like to find a way to code it myself.
The closest I've gotten to making it is using asyncio as follows:
import asyncio

async def main():
    dummy_task = asyncio.create_task(dummy_search())
    bar_task = asyncio.create_task(progress())
    test = await dummy_task
    bar_task.cancel()
where dummy_task can be any async task and bar_task is:
FLUSH_LINE = "\033[K"

async def progress(mode=""):
    def integers():
        n = 0
        while True:
            yield n
            n += 1

    progress_indicator = ["-", "\\", "|", "/"]
    message = "working"
    message_len = len(message)
    message += "-" * message_len
    try:
        if not mode:
            for i in integers():
                await asyncio.sleep(0.05)
                message = message[1:] + message[0]
                print(f"{FLUSH_LINE}{progress_indicator[i % 4]} [{message[:message_len]}]", end="\r")
    finally:
        print(FLUSH_LINE)
The only problem with this approach is that asyncio does not actually run tasks in parallel, so if the dummy_task does not use await at any point, the bar_task will not run until the dummy task is complete, and it won't show the working prompt in the terminal.
How should I go about trying to run both tasks in parallel? Do I need to use multiprocessing? If so, would both tasks write to the same terminal by default?
Thanks in advance.
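One way to sidestep this without reaching for multiprocessing is to hand the blocking work to a worker thread with asyncio.to_thread (Python 3.9+); the event loop, and with it the progress() coroutine above, keeps running in the meantime. A minimal sketch, where blocking_work stands in for any non-async function:

import asyncio
import time

def blocking_work():
    # Stands in for any function that never awaits.
    time.sleep(3)
    return "done"

async def main():
    bar_task = asyncio.create_task(progress())
    # Runs in a thread, so the loop stays free to animate the spinner.
    result = await asyncio.to_thread(blocking_work)
    bar_task.cancel()
    return result

asyncio.run(main())

Since everything stays in one process, both the spinner and the worker share the same terminal.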

Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors

I tried to compile a TFLite model to an Edge TPU model and ran into the following error:
Edge TPU Compiler version 16.0.384591198
Started a compilation timeout timer of 180 seconds.
ERROR: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors.
Compilation failed: Model failed in Tflite interpreter. Please ensure model can be loaded/run in Tflite interpreter.
Compilation child process completed within timeout period.
Compilation failed!
I define my model like this:
preprocess_input = tf.keras.applications.efficientnet.preprocess_input

def Model(image_size=IMG_SIZE):
    input_shape = image_size + (3,)
    inputs = tf.keras.Input(shape=input_shape)
    x = preprocess_input(inputs)
    base_model = tf.keras.applications.efficientnet.EfficientNetB0(
        input_shape=input_shape, include_top=False, weights="imagenet")
    base_model.trainable = False
    x = base_model(x, training=False)
    x = tfl.GlobalAvgPool2D()(x)
    x = tfl.Dropout(rate=0.2)(x)
    outputs = tfl.Dense(90, activation='softmax')(x)
    model = tf.keras.Model(inputs, outputs)
    return model
I convert to a TFLite model like this:
converter = tf.lite.TFLiteConverter.from_keras_model(model)

# Defining the representative dataset from training images.
def representative_dataset_gen():
    for image, label in train_dataset.take(100):
        yield [image]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
# Using integer quantization.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Setting the input and output tensors to uint8.
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()

if not os.path.isdir('exported'):
    os.mkdir('exported')
with open('/workspace/eff/exported/groups_1.tflite', 'wb') as f:
    f.write(tflite_model)
Environment:
Edge TPU Compiler version 16.0.384591198
Python version 3.6.9
tensorflow 1.15.3
When searching for solutions on Google, I saw someone say you need to get rid of preprocess_input, but I'm not sure what that means. How can I check whether there is a dynamic-shape tensor in the model, and how can I fix it?
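One way to check (a sketch, not specific to the Edge TPU): load the converted .tflite file with the TFLite interpreter and scan each tensor's shape_signature, where a -1 marks a dimension that is not statically known. shape_signature may be absent on older TensorFlow versions, hence the fallback to shape:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="exported/groups_1.tflite")
for detail in interpreter.get_tensor_details():
    # A -1 in the signature means the dimension is dynamic.
    sig = detail.get("shape_signature", detail["shape"])
    if np.any(np.array(sig) == -1):
        print(detail["index"], detail["name"], sig)

If the only -1 turns out to be the batch dimension, a common fix is to rebuild the Keras model with a fixed batch size, e.g. tf.keras.Input(shape=input_shape, batch_size=1), before converting.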

output = grep.communicate()[0] returns a null when getting the battery usage details

I am using a Mac. I am writing code to get the battery level and battery status of the Mac using Python 2.7. Below is the code I have written; I tried printing the output variable, but it's empty.
#!/usr/bin/env python
import os
import subprocess
import psutil

def getBatteryLevel():
    BATTERY_CMD = ["/usr/sbin/ioreg", "-l"]
    GREP_CMD = ["/usr/bin/egrep", "Capacity|ExternalChargeCapable"]
    process = subprocess.Popen(BATTERY_CMD, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    grep = subprocess.Popen(GREP_CMD, stdin=process.stdout, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    grep.wait()
    output = grep.communicate()[0]
    batteryStatus = output.split("\n")
    maxCapacity = float(batteryStatus[1].split("=")[1].lstrip())
    curCapacity = float(batteryStatus[2].split("=")[1].lstrip())
    remaining = 100 * (curCapacity / maxCapacity)
    print "BATTERY LEVEL : "
    print remaining

getBatteryLevel()
I think grep does not give any output because I am using a Mac mini. I tested with a MacBook Pro and the code worked fine there.
It's best to avoid running subprocesses from Python when you can. The first command can hardly be avoided, but Python is more than able to "emulate" grep, and the fewer subprocesses you run, the more portable your script is.
My proposal uses check_output for the ioreg call (which makes it simpler), then splits the output with splitlines and keeps the "interesting" lines that match the regular expression.
import re
import subprocess

def getBatteryLevel():
    BATTERY_CMD = ["/usr/sbin/ioreg", "-l"]
    the_regex = re.compile("Capacity|ExternalChargeCapable")
    output = subprocess.check_output(BATTERY_CMD)
    batteryStatus = [l for l in output.splitlines() if the_regex.search(l)]
No need for grep, and you avoid its quirks and specificities across various systems.
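If you then want the same percentage the question computes, here is a hypothetical continuation. It assumes the usual ioreg battery keys MaxCapacity and CurrentCapacity; a Mac mini has no battery, so those lines simply never match there, which would also explain the empty output in the question:

# Parse the matched lines into a dict so the arithmetic no longer
# depends on the order in which ioreg prints them.
values = {}
for line in batteryStatus:
    m = re.search(r'"(\w+)"\s*=\s*(\S+)', line)
    if m:
        values[m.group(1)] = m.group(2)

if "MaxCapacity" in values and "CurrentCapacity" in values:
    remaining = 100 * float(values["CurrentCapacity"]) / float(values["MaxCapacity"])
    print "BATTERY LEVEL : "
    print remaining
else:
    print "No battery found"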

Using apply functions in SparkR

I am currently trying to implement some functions using SparkR version 1.5.1. I have seen older (version 1.3) examples where people used the apply function on DataFrames, but it looks like this is no longer directly available. Example:
x = c(1,2)
xDF_R = data.frame(x)
colnames(xDF_R) = c("number")
xDF_S = createDataFrame(sqlContext,xDF_R)
Now, I can use the function sapply on the data.frame object
xDF_R$result = sapply(xDF_R$number, ppois, q=10)
When I use a similar logic on the DataFrame
xDF_S$result = sapply(xDF_S$number, ppois, q=10)
I get the error message "Error in as.list.default(X) : no method for coercing this S4 class to a vector".
Can I somehow do this?
This is possible with user defined functions in Spark 2.0.
wrapper = function(df) {
  out = df
  out$result = sapply(df$number, ppois, q = 10)
  return(out)
}

xDF_S2 = dapplyCollect(xDF_S, wrapper)
identical(xDF_S2, xDF_R)
# [1] TRUE
Note you need a wrapper function like this because you can't pass the extra arguments in directly, but that may change in the future.
Native R functions do not support Spark DataFrames. We can use user-defined functions (UDFs) in SparkR to execute native R code; these run on the executors, so the required libraries must be available on every executor.
For example, suppose we have a custom function holt_forecast which takes in a data.table as an argument.
Sample R code
sales_R_df %>%
  group_by(product_id) %>%
  do(holt_forecast(data.table(.))) %>%
  data.table(.) -> dt_holt
For using UDFs, we need to specify the schema of the output data.frame returned by the execution of the native R method. This schema is used by Spark to generate back the Spark DataFrame.
Equivalent SparkR code
Define the schema:
dt_holt_schema <- structType(
  structField("product_id", "integer"),
  structField("audit_date", "date"),
  structField("holt_unit_forecast", "double"),
  structField("holt_unit_forecast_std", "double")
)
Execute the method (sales_Spark_df is assumed to be the Spark DataFrame counterpart of sales_R_df):
dt_holt <- gapply(
  sales_Spark_df,
  "product_id",
  function(key, x) {
    # These libraries must be installed on every executor.
    library(data.table)
    library(lubridate)
    library(dplyr)
    library(forecast)
    sales <- data.table(x)
    y <- data.frame(key, holt_forecast(sales))
    y
  },
  dt_holt_schema)
Reference: https://shbhmrzd.medium.com/stl-and-holt-from-r-to-sparkr-1815bacfe1cc

Redirecting stdout/stderr of spawn() to a string in Ruby

I would like to execute an external process in Ruby using spawn (for multiple concurrent child processes) and collect the stdout or stderr into a string, in a similar way to what can be done with Python's subprocess Popen.communicate().
I tried redirecting :out/:err to a new StringIO object, but that generates an ArgumentError, and temporarily redefining $stdxxx would mix up the outputs of the child processes.
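For reference, this is the Python behavior the question is aiming to reproduce, so the goal is a Ruby equivalent of:

import subprocess

# Python's communicate(): run a child process, collect stdout/stderr as strings.
p = subprocess.Popen(["ls"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()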
In case you don't like popen, here's my way:
r, w = IO.pipe
pid = Process.spawn(command, :out => w, :err => [:child, :out])
w.close
...
pid, status = Process.wait2
output = r.read
r.close
In any case, you can't redirect to a String object directly; at most you can redirect to an IO object and then read from it, as the code above does.
Why do you need spawn? Unless you are on Windows you can use popen*, e.g. popen4:
require "open4"
pid, p_i, p_o, p_e = Open4.popen4("ls")
p_i.close
o, e = [p_o, p_e].map { |p| begin p.read ensure p.close end }
s = Process::waitpid2(pid).last
From the Ruby docs it seems that you can't, but you can do this:
spawn("ls", :out => ["/tmp/ruby_stdout_temp", "w"])
stdoutStr = File.read("/tmp/ruby_stdout_temp")
You can also do the same with standard error. Or, if you'd rather avoid the temp file and don't mind popen:
io = IO.popen("ls")
stdout = io.read
The simplest and most straightforward way seems to be:
require 'open3'
out, err, ps = Open3.capture3("ls")
puts "Process failed with status #{ps.exitstatus}" unless ps.success?
Here we have the outputs as strings.
