In a Jupyter notebook I am getting this error:
AttributeError Traceback (most recent call last)
t0=time.clock() # start timer for finding simulation time
AttributeError: module 'time' has no attribute 'clock'
for the following code:
import numpy as np
import matplotlib.pyplot as plt
import time
t0=time.clock() # start timer for finding simulation time
# Problem parameters
k1=50 # cart 1 spring constant (N/m)
k2=50 # cart 2 spring constant (N/m)
b1=3 # cart 1 viscous damping coefficient (kg/s)
b2=3 # cart 2 viscous damping coefficient (kg/s)
m1=5 # cart 1 mass (kg)
m2=5 # cart 2 mass (kg)
x10=1 # cart 1 initial position (m)
x20=-1 # cart 2 initial position (m)
v10=0 # cart 1 initial velocity (m/s)
v20=0 # cart 2 initial velocity (m/s)
# Set time step stuff
simTime=10 # simulation time (s)
tStep=0.001 # simulation time step
iterations=int(simTime/tStep) # total number of iterations
t=np.arange(0,iterations)
# Pre-allocate variables for speed and add initial conditions
x1=np.zeros((iterations,1))
x1[0,:]=x10
x2=np.zeros((iterations,1))
x2[0,:]=x20
v1=np.zeros((iterations,1))
v1[0,:]=v10
v2=np.zeros((iterations,1))
v2[0,:]=v20
a1=np.zeros((iterations,1))
a1[0,:]=-(b1*v10-b2*(v20-v10)+k1*x10-k2*(x20-x10))/m1
a2=np.zeros((iterations,1))
a2[0,:]=-(b2*(v20-v10)+k2*(x20-x10))/m2
# Solve the ODE's with Euler's Method
for n in range(1,iterations):
    x1[n,:]=x1[n-1,:]+v1[n-1,:]*tStep # cart 1 position
    x2[n,:]=x2[n-1,:]+v2[n-1,:]*tStep # cart 2 position
    v1[n,:]=v1[n-1,:]+a1[n-1,:]*tStep # cart 1 velocity
    v2[n,:]=v2[n-1,:]+a2[n-1,:]*tStep # cart 2 velocity
    # Find cart accelerations
    a1[n,:]=-(b1*v1[n,:]-b2*(v2[n,:]-v1[n,:])+k1*x1[n,:]-k2*(x2[n,:]-x1[n,:]))/m1
    a2[n,:]=-(b2*(v2[n,:]-v1[n,:])+k2*(x2[n,:]-x1[n,:]))/m2
Are you using a Python version newer than 3.7? The time.clock function has been removed as of Python 3.8; the removal notice was already included in the time documentation for Python 3.7.
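On Python 3.8+ the recommended replacement is time.perf_counter() (or time.process_time() if you want CPU time rather than wall-clock time). A minimal sketch of just the timing part, with the simulation itself left as in your code:
import time
t0 = time.perf_counter()  # start timer (replaces the removed time.clock())
# ... Euler integration loop from the question goes here ...
elapsed = time.perf_counter() - t0
print("Simulation took %.3f s" % elapsed)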
I am using MicroPython on the ESP32 microcontroller, flashed with the latest firmware at the time of writing (v1.18).
I'm making an alarm (sort of) system where I get multiple time values ("13:15" for example) from my website, and then I have to ring an alarm bell at those times.
I've done the website and I can do the ringing part, but I don't know how to actually create time objects from the previously mentioned strings ("13:15") and then check whether any of the entered times match the current time; the date is irrelevant.
From reading the documentation, I get the sense that this can't be done: I've looked through the MicroPython modules on GitHub, and apparently you can't get datetime in MicroPython, although I know that in regular Python my problem could be solved with datetime.
import ntptime
import time
import network
# Set esp as a wifi station
station = network.WLAN(network.STA_IF)
# Activate wifi station
station.active(True)
# Connect to wifi ap
station.connect(ssid, passwd)
while station.isconnected() == False:
    print('.')
    time.sleep(1)
print(station.ifconfig())
try:
    print("Local time before synchronization: %s" % str(time.localtime()))
    ntptime.settime()
    print("Local time after synchronization: %s" % str(time.localtime()))
except:
    print("Error syncing time, exiting...")
This is the shortened code from my project, with only the time-related parts. Now comes the time comparison step that I don't know how to do.
I use ntptime to get the time from a server, in this case "time.google.com". Then I transform it into seconds (st) to be more accurate, and set my target hours in seconds as well (1 hour = 3600 s).
import utime
import ntptime
def server_time():
    try:
        # Ask the time.google.com server for the current time.
        ntptime.host = "time.google.com"
        ntptime.settime()
        t = utime.localtime()
        # print(t)
        # Transform the time tuple 't' to a seconds value (1 hour = 3600 s).
        st = t[3]*3600 + t[4]*60 + t[5]
        return st
    except:
        # print('no time')
        st = -1
        return st
period = utime.ticks_ms()  # reference tick count for the 5-second polling interval
while True:
    # Returns an increasing millisecond counter since the board reset.
    now = utime.ticks_ms()
    # Check the current time every 5000 ms (5 s) without sleeping or blocking any other process.
    if now >= period + 5000:
        period += 5000
        # Call your server_time function.
        st = server_time()
        if ((st > 0) and (st < 39600)) or (st > 82800):  # turn on at 17:00 Mexico time
            pass  # something will be on between 17:00 - 06:00
        elif (st < 82800) and (st > 39600):  # turn off at 06:00
            pass  # something will be off between 06:00 - 17:00
        else:
            pass
After running ntptime.settime() you can do the following to retrieve the time; keep in mind this is in UTC:
import machine
rtc = machine.RTC()
hour = rtc.datetime()[4] if rtc.datetime()[4] > 9 else "0%s" % rtc.datetime()[4]
minute = rtc.datetime()[5] if rtc.datetime()[5] > 9 else "0%s" % rtc.datetime()[5]
The if/else expression makes sure that numbers lower than or equal to 9 are padded with a zero.
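Building on that, here is a minimal sketch of the comparison step the question asks about. The alarm_times list and the ring placeholder are hypothetical stand-ins for the strings fetched from your website and your bell code, and it assumes ntptime.settime() has already synchronized the RTC:
import machine
rtc = machine.RTC()
alarm_times = ["13:15", "18:30"]  # hypothetical values fetched from the website
def current_hhmm():
    # rtc.datetime() -> (year, month, day, weekday, hour, minute, second, subsecond)
    t = rtc.datetime()
    return "%02d:%02d" % (t[4], t[5])  # zero-padded "HH:MM", in UTC
if current_hhmm() in alarm_times:
    pass  # ring the alarm bell here (adjust for your local UTC offset if needed)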
As part of a large QC benchmark I am creating a large number (approx. 100K) of scatter plots in a single PDF using the PdfPages backend. (See further down for the code.)
The issue I am having is that the plotting takes too much time, see output from a custom profiling/debugging effort:
Checkpoint1: Predictions done in 1.110076904296875 millis
Checkpoint2: df created and correlations calculated in 3.108978271484375 millis
Checkpoint3: plotting and accumulating done in 231.31990432739258 millis
Cycle completed in 0.23553895950317383 secs
----------------------
Checkpoint1: Predictions done in 3.718852996826172 millis
Checkpoint2: df created and correlations calculated in 2.353191375732422 millis
Checkpoint3: plotting and accumulating done in 155.93385696411133 millis
Cycle completed in 0.16200590133666992 secs
----------------------
Checkpoint1: Predictions done in 2.920866012573242 millis
Checkpoint2: df created and correlations calculated in 1.995086669921875 millis
Checkpoint3: plotting and accumulating done in 161.8819236755371 millis
Cycle completed in 0.16679787635803223 secs
The plotting time increases 2-3x if I annotate the points, which is necessary for the use case. As you can see below, I have tried both itertuples() and apply(); switching to apply() did not give a significant change in the times as far as I can see.
def annotate(row, ax):
    ax.annotate(row.name, (row.exp, row.model),
                xytext=(10, 20), textcoords='offset points',
                arrowprops=dict(arrowstyle="-", connectionstyle="arc,angleA=180,armA=10"),
                family='sans-serif', fontsize=8, color='darkslategrey')

def plot2File(df, file, seq, z, p, s):
    """ Plot predictions vs experimental """
    plttitle = f"Correlations for {seq}+{z} \n pearson={p} \n spearman={s}"
    ax = df.plot(x='exp', y='model', kind='scatter', title=plttitle, s=40)
    df.apply(annotate, ax=ax, axis=1)
    # for row in df.itertuples():
    #     ax.annotate(row.Index, (row.exp, row.model),
    #                 xytext=(10, 20), textcoords='offset points',
    #                 arrowprops=dict(arrowstyle="-", connectionstyle="arc,angleA=180,armA=10"),
    #                 family='sans-serif', fontsize=8, color='darkslategrey')
    plt.savefig(file, bbox_inches='tight', format='pdf')
    plt.close()
Given the nice explanation by Jeff on a question regarding iterrows(), I was wondering whether it would be possible to vectorize the annotation process, or whether I should ditch using a data frame altogether?
I'm preparing a small presentation in IPython where I want to show how easy it is to do parallel operations in Julia.
It's basically a Monte Carlo pi calculation described here.
The problem is that I can't make it work in parallel inside an IPython (Jupyter) notebook; it only uses one core.
I started Julia as: julia -p 4
If I define the functions inside the REPL and run it there, it works OK.
@everywhere function compute_pi(N::Int)
    """
    Compute pi with a Monte Carlo simulation of N darts thrown in [-1,1]^2
    Returns estimate of pi
    """
    n_landed_in_circle = 0
    for i = 1:N
        x = rand() * 2 - 1 # uniformly distributed number on x-axis
        y = rand() * 2 - 1 # uniformly distributed number on y-axis
        r2 = x*x + y*y # radius squared, in radial coordinates
        if r2 < 1.0
            n_landed_in_circle += 1
        end
    end
    return n_landed_in_circle / N * 4.0
end

function parallel_pi_computation(N::Int; ncores::Int=4)
    """
    Compute pi in parallel, over ncores cores, with a Monte Carlo simulation throwing N total darts
    """
    # compute sum of pi's estimated among all cores in parallel
    sum_of_pis = @parallel (+) for i=1:ncores
        compute_pi(int(N/ncores))
    end
    return sum_of_pis / ncores # average value
end
julia> @time parallel_pi_computation(int(1e9))
elapsed time: 2.702617652 seconds (93400 bytes allocated)
3.1416044160000003
But when I do:
using IJulia
notebook()
And try to do the same thing inside the Notebook it only uses 1 core:
In [5]: @time parallel_pi_computation(int(10e8))
elapsed time: 10.277870808 seconds (219188 bytes allocated)
Out[5]: 3.141679988
So, why isn't Jupyter using all the cores? What can I do to make it work?
Thanks.
Using addprocs(4) as the first command in your notebook should provide four workers for doing parallel operations from within your notebook.
One way to solve this is to create a kernel that always uses 4 cores. For that some manual work is required. I assume that you are on a unix machine.
In the folder ~/.ipython/kernels/julia-0.x, you will find the following kernel.json file:
{
    "display_name": "Julia 0.3.9",
    "argv": [
        "/usr/local/Cellar/julia/0.3.9_1/bin/julia",
        "-i",
        "-F",
        "/Users/ch/.julia/v0.3/IJulia/src/kernel.jl",
        "{connection_file}"
    ],
    "language": "julia"
}
If you copy the whole folder with cp -r julia-0.x julia-0.x-p4 and modify the newly copied kernel.json file:
{
    "display_name": "Julia 0.3.9 p4",
    "argv": [
        "/usr/local/Cellar/julia/0.3.9_1/bin/julia",
        "-p",
        "4",
        "-i",
        "-F",
        "/Users/ch/.julia/v0.3/IJulia/src/kernel.jl",
        "{connection_file}"
    ],
    "language": "julia"
}
The paths will probably be different for you. Note that I only gave the kernel a new name and added the command line argument `-p 4`.
You should see a new kernel named Julia 0.3.9 p4 which should always use 4 cores.
Also note that this kernel file will not get updated when you update IJulia, so you have to update it manually whenever you update julia or IJulia.
You can add new kernels using this command:
using IJulia
#for 4 cores
installkernel("Julia_4_threads", env=Dict("JULIA_NUM_THREADS"=>"4"))
#or for 8 cores
installkernel("Julia_8_threads", env=Dict("JULIA_NUM_THREADS"=>"8"))
After restarting VS Code, these options will appear in your kernel selection menu.
I have stock data at the tick level and would like to create a rolling list of all ticks for the previous 10 seconds. The code below works, but takes a very long time for large amounts of data. I'd like to vectorize this process or otherwise make it faster, but I'm not coming up with anything. Any suggestions or nudges in the right direction would be appreciated.
library(quantmod)
set.seed(150)
# Create five minutes of xts example data at .1 second intervals
mins <- 5
ticks <- mins * 60 * 10 + 1
times <- xts(runif(seq_len(ticks),1,100), order.by=seq(as.POSIXct("1973-03-17 09:00:00"),
as.POSIXct("1973-03-17 09:05:00"), length = ticks))
# Randomly remove some ticks to create unequal intervals
times <- times[runif(seq_along(times))>.3]
# Number of seconds to look back
lookback <- 10
dist.list <- list(rep(NA, nrow(times)))
system.time(
  for (i in 1:length(times)) {
    dist.list[[i]] <- times[paste(strptime(index(times[i])-(lookback-1), format = "%Y-%m-%d %H:%M:%S"), "/",
                                  strptime(index(times[i])-1, format = "%Y-%m-%d %H:%M:%S"), sep = "")]
  }
)
> user system elapsed
6.12 0.00 5.85
You should check out the window function; it will make your subselection of dates a lot easier. The following code uses lapply to do the work of the for loop.
# Your code
system.time(
  for (i in 1:length(times)) {
    dist.list[[i]] <- times[paste(strptime(index(times[i])-(lookback-1), format = "%Y-%m-%d %H:%M:%S"), "/",
                                  strptime(index(times[i])-1, format = "%Y-%m-%d %H:%M:%S"), sep = "")]
  }
)
# user system elapsed
# 10.09 0.00 10.11
# My code
system.time(dist.list<-lapply(index(times),
function(x) window(times,start=x-lookback-1,end=x))
)
# user system elapsed
# 3.02 0.00 3.03
So, it takes about a third of the time.
But, if you really want to speed things up, and you are willing to forgo millisecond accuracy (which I think your original method implicitly does), you could just run the loop on unique date-hour-second combinations, because they will all return the same time window. This should speed things up roughly twenty or thirty times:
dat.time=unique(as.POSIXct(as.character(index(times)))) # Cheesy method to drop the ms.
system.time(dist.list.2<-lapply(dat.time,function(x) window(times,start=x-lookback-1,end=x)))
# user system elapsed
# 0.37 0.00 0.39
How does the price filter option in the Magento sidebar work? I went through all the template and block files under my custom design.
I am getting these ranges by default:
1. $0.00 - $10,000.00 (1027)
2. $10,000.00 - $20,000.00 (3)
3. $20,000.00 - $30,000.00 (1)
These limits are taken automatically, but I want to give my own ranges. They use only one template file called filter.phtml, and if I touch that then all the other filter options have problems. How can I customize this price filter with my own set of ranges?
I need something like this:
# $40.00 - $60.00 (155)
# $60.00 - $80.00 (150)
# $80.00 - $100.00 (153)
# $100.00 - $200.00 (248)
# $200.00 - $300.00 (100)
# $300.00 - $400.00 (43)
# $400.00 - $500.00 (20)
# $500.00 - $600.00 (6)
# $600.00 - $700.00 (6)
# $700.00 - $800.00 (2)
If you look in filter.phtml, you will see that it uses the block Mage_Catalog_Block_Layer_Filter_xxx, where xxx is the attribute type, which in turn leads you to the model Mage_Catalog_Model_Layer_Filter_Price.
Inside app/code/core/Mage/Catalog/Model/Layer/Filter/Price.php, you will see the method getPriceRange() which calculates the price breaks.
You can override that model by copying it into app/code/local/Mage/Catalog/Model/Layer/Filter and adjusting that method so that it calculates the ranges per your requirements.
Good luck.
JD