Is it possible to display the local error of an integration step of a variable-step solver in Simulink? I would like to find out why Simulink is taking small integration steps. Since the step size depends on the local error of an integration step, it would be helpful to record the local error. Is this possible?
Have you tried using the Simulink Debugger? If you set breakpoints on the input and output of the block of interest and run the simulation up to a time when the steps start to become small, you may be able to determine what's happening.
You might also play around with zero crossing detection. A pretty good discussion of its basics can be found here.
This is possible from the debugger (as Phil Goddard hinted). Start the model with the debugger (from the MATLAB console):
>> sldebug mdl
Enable "Solver trace level 1"
>> strace 1
Enable "Break on failed integration step"
>> xbreak
Start simulation:
>> continue
The simulation will then break when the local error is too large. For example:
[TM = 0.035250948751817182 ] Start of Major Time Step
[Tm = 0.035250948751817182 ] [Hm = 0.0009443691112440444 ] Start of Solver Phase
[Tm = 0.03525094875181718 ] [Hm = 0.0009443691112440444 ] Begin Integration Step
[Tn = 0.03525094875181718 ] [Hn = 0.0009443691112440444 ] Begin Newton Iteration
[Tf = 0.03619531786306122 ] [Hf = 0.0009443691112440444 ] Fail [Er = 6.8210e+00 ] [Ix = 1]
Detected integration step failure. Interrupting model execution
The Er value is the local error and Ix is the state index. To find the corresponding block, type:
>> states
With output:
Continuous States:
Idx Value (system:block:element Name 'BlockName')
0 -7.96155746500428e-06 (0:0:0 CSTATE 'mdl/x')
1 1.630758262432841e-12 (0:1:0 CSTATE 'mdl/y')
So... I've been coding a GUI that shows the quantity of a player's currency. The DataStore API works perfectly, but the LocalScript doesn't. (It's local because otherwise it would update for every player each time any player's currency got updated, which would be a mess and the opposite of what I want.)
Well... sometimes it loads the currency into the GUI, but other times it just stays on the original text, "Label", instead of my current currency (4600).
Here's the proof.
What normally happens and should always happen:
What sometimes happens and shouldn't happen:
Here's the script. I've tried putting waits at the start, but the original code is inside the while true do loop:
wait(game.Players.LocalPlayer:WaitForChild("Data")
wait(game.Players.LocalPlayer.Data:WaitForChild("Bells"))
while true do
    script.Parent.TextLabel.Text = game.Players.LocalPlayer:WaitForChild("Data"):WaitForChild("Bells").Value
    wait() --wait is for not making the loop break and stop the whole script
end
Well... if you want to check whether the data is really in the player, here's the server script; it requires an API (DataStore2):
--[Animal Crossing Roblox Edition Data Store]--
--Bryan99354--
--Module not mine--
--Made with an AlvinBlox tutorial--
--·.·.*[Get Data Store, do not erase]*.·.·--
local DataStore2 = require(1936396537)
--[Default Values]--
local DefaultValue_Bells = 300
local DefaultValue_CustomClothes = 0
--[Data Store Functions]--
game.Players.PlayerAdded:Connect(function(player)
    --[Data stores]--
    local BellsDataStore = DataStore2("Bells", player)
    local Data = Instance.new("Folder", player)
    Data.Name = "Data"
    local Bells = Instance.new("IntValue", Data) -- local, so each player gets their own value
    Bells.Name = "Bells"
    local CustomClothesDataStore = DataStore2("CustomClothes", player)
    local CustomClothes = Instance.new("IntValue", Data)
    CustomClothes.Name = "CustomClothes"
    local function CustomClothesUpdate(UpdatedValue)
        CustomClothes.Value = CustomClothesDataStore:Get(UpdatedValue)
    end
    local function BellsUpdate(UpdatedValue)
        Bells.Value = BellsDataStore:Get(UpdatedValue)
    end
    BellsUpdate(DefaultValue_Bells)
    CustomClothesUpdate(DefaultValue_CustomClothes)
    BellsDataStore:OnUpdate(BellsUpdate)
    CustomClothesDataStore:OnUpdate(CustomClothesUpdate)
end)
--[test and reference functions]--
workspace.TestDevPointGiver.ClickDetector.MouseClick:Connect(function(player)
    local BellsDataStore = DataStore2("Bells", player)
    BellsDataStore:Increment(50, DefaultValue_Bells)
end)
workspace.TestDevCustomClothesGiver.ClickDetector.MouseClick:Connect(function(player)
    local CustomClothesDataStore = DataStore2("CustomClothes", player)
    CustomClothesDataStore:Increment(50, DefaultValue_CustomClothes)
end)
The code that creates "Data" and "Bells" is located under the --[Data stores]-- comment.
The only script that has the issue is the short one, for no apparent reason :<
I hope that you can help me :3
@Night94 I tried your script, but it also failed sometimes.
The syntax in your LocalScript is a little off with the waits. With that fixed, it works every time. Also, I would use an event handler instead of updating the value with a loop:
game.Players.LocalPlayer:WaitForChild("Data"):WaitForChild("Bells").Changed:Connect(function(value)
    script.Parent.TextLabel.Text = value
end)
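For completeness, a minimal sketch of the corrected LocalScript (assuming the same hierarchy as in the question, with the label at script.Parent.TextLabel): drop the wait() wrappers, since WaitForChild already yields, and set the label once on startup so it shows the current value even if Changed never fires after the script loads:

local bells = game.Players.LocalPlayer:WaitForChild("Data"):WaitForChild("Bells")

script.Parent.TextLabel.Text = tostring(bells.Value) -- show the value that is already there

bells.Changed:Connect(function(value)
    script.Parent.TextLabel.Text = tostring(value) -- keep the label in sync
end)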
This program is not complete but is a work in progress.
import speech_recognition as sr
import subprocess as sp
import time, os

r = sr.Recognizer()
print("Voice Recognition Software\n\n******************************************************************************\n")

while True:
    r.energy_threshold = 8000
    t = None
    with sr.Microphone() as source:
        success = False
        print(">")
        audio = r.listen(source)
    try:
        print("Processing...")
        t = r.recognize_google(audio)
        print(": " + t)
    except sr.UnknownValueError:
        print("Unknown input")
        continue
    except sr.RequestError as e:
        print("An error occurred at GAPI\nA common cause is lack of internet connection")
        continue
    if "open" in t:
        t = t.replace("open", "")
        t = t.replace(" ", "")
        t = t + ".exe"
        print(t)
        for a, d, f in os.walk("C:\\"):
            for files in f:
                if files == t.lower() or files == t.capitalize() or files == t.upper():
                    pat = os.path.join(a, files)
                    print(pat)
                    sp.call([pat])
                    success = True
        if success == True:
            continue
The problem I'm facing is that after ">" or "Processing..." the program sometimes stops responding. There are no error messages or anything; in the shell it just prints ">" or "Processing..." and stays there.
This happens randomly: the program can function continuously for a long time, but at any given moment, for whatever reason, it freezes. Usually after a minute or two it moves on to the next part and starts responding again, but that isn't always the case.
I've attempted to create a sort of fail-safe so that if it takes too long to respond the program closes and opens again, but I was unsuccessful with that, so now I'm trying to figure out the root cause of the problem.
Can someone with experience with this sort of thing help me understand why this is happening?
Edit:
I was able to solve the problem. It turns out r.listen(source) accepts a timeout parameter.
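For anyone else who hits this, a minimal sketch of that fix (the 5-second value is just an example), so listen gives up instead of blocking indefinitely while waiting for a phrase to start:

import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:
    try:
        # Raises sr.WaitTimeoutError if no speech starts within 5 seconds
        audio = r.listen(source, timeout=5)
    except sr.WaitTimeoutError:
        print("No speech detected before the timeout")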
I use the Spyder IDE. Usually, when I am running non-parallelized scripts, I tend to debug using print statements. Depending on which statements are printed (or not), I can see where errors are occurring.
For example:
print "Started while loop..."
doWhileLoop = False
while doWhileLoop == True:
print "Doing something important!"
time.sleep(5)
print "Finished while loop..."
Above, I am missing a line that changes doWhileLoop to False at some point, so I will be stuck perpetually in the while loop, but my print statements let me see where in my code I have hung up.
However, when running scripts that are parallelized, I get no output to the console until after the process has finished. Normally, what I do in this case is attempt to debug with a single process (i.e. temporarily deparallelize the program by running only one task, for instance), but currently, I am dealing with an error that seems to occur only when I am running more than one task.
So, I am having trouble figuring out what this error is using my usual methods -- how should I change my usual debugging practice in order to efficiently debug scripts employing multiprocessing?
As @roippi said, debugging parallel things is hard. Another tool is using logging instead of print. Logging gives you severity, timestamps, and, most importantly, which process is doing what.
Example code:
import logging, multiprocessing, Queue

def myproc(arg):
    return arg*2

def worker(inqueue, outqueue):
    mylog = multiprocessing.get_logger()
    mylog.info('start')
    for job in iter(inqueue.get, 'STOP'):
        mylog.info('got %s', job)
        try:
            outqueue.put( myproc(job), timeout=1 )
        except Queue.Full:
            mylog.error('queue full!')
    mylog.info('done')

def executive(inqueue):
    total = 0
    mylog = multiprocessing.get_logger()
    for num in iter(inqueue.get, 'STOP'):
        total += num
        mylog.info('got %s\ttotal %s', num, total)

logger = multiprocessing.log_to_stderr(
    level=logging.INFO,
)
logger.info('setup')

inqueue, outqueue = multiprocessing.Queue(), multiprocessing.Queue()
if 0:   # debug 'queue full!' issues
    outqueue = multiprocessing.Queue(maxsize=1)

# prefill with 3 jobs
for num in range(3):
    inqueue.put(num)
# signal end of jobs
inqueue.put('STOP')

worker_p = multiprocessing.Process(
    target=worker, args=(inqueue, outqueue),
    name='worker',
)
worker_p.start()
worker_p.join()

logger.info('done')
Example output:
[INFO/MainProcess] setup
[INFO/worker] child process calling self.run()
[INFO/worker] start
[INFO/worker] got 0
[INFO/worker] got 1
[INFO/worker] got 2
[INFO/worker] done
[INFO/worker] process shutting down
[INFO/worker] process exiting with exitcode 0
[INFO/MainProcess] done
[INFO/MainProcess] process shutting down
I want to run two MATLAB scripts in parallel for a project and communicate between them. The purpose of this is to have one script do image analysis and send the results to the other, which will use them for further calculations (time consuming, but not related to the task of finding things in the images). Since both tasks are time consuming, and should preferably be done in real time, I believe that parallelization is necessary.
To get a feel for how this should be done, I created a test script to find out how to communicate between the two scripts.
The first script takes a user input using the built-in function input, and then using labSend sends it to the other, which receives it and prints it.
function [blarg] = inputStuff(blarg)
    mpiInit(); % added because of error message, but does not work...
    for i = 1:2
        labBarrier; % added because of error message
        inp = input('Enter a number to write');
        labSend(inp);
        if (inp == 0)
            break;
        else
            i = 1;
        end
    end
end
function [blarg] = testWrite(blarg)
    mpiInit(); % added because of error message, but does not help
    par = 0;
    if (blarg == 0)
        par = 1;
    end
    for i = 1:10
        if (par == 1)
            labBarrier
            delta = labReceive();
            i = 1;
        else
            delta = input('Enter number to write');
        end
        if (delta == 0)
            break;
        end
        s = strcat('This lab no', num2str(labindex), '. Delta is = ')
        delta
    end
end
%% This is the file test_parfor.m
funlist = {@inputStuff, @testWrite};
matlabpool(2);
mpiInit(); % added because of error message, but does not help
parfor i = 1:2
    funlist{i}(0);
end
matlabpool close;
Then, when the code is run, the following error message appears:
Starting matlabpool using the 'local' profile ... connected to 2 labs.
Error using parallel_function (line 589)
The MPI implementation has not yet been loaded. Please
call mpiInit.
Error stack:
testWrite.m at 11
Error in test_parfor (line 8)
parfor i=1:2
Calling the method mpiInit does not help... (Called as shown in the code above.)
And nowhere in the examples that MathWorks provides in the documentation, or on their website, is this error shown or explained.
Any help is appreciated!
You would typically use constructs such as labSend, labReceive and labBarrier within an spmd block, rather than a parfor block.
parfor is intended for implementing embarrassingly parallel algorithms; in other words, algorithms that consist of multiple independent tasks that can be run in parallel and do not require communication between tasks.
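For contrast, here is a minimal sketch of the kind of loop parfor is designed for, where every iteration is independent and the workers never need to talk to each other:

results = zeros(1, 10);
parfor i = 1:10
    results(i) = i^2; % each iteration depends only on i
end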
I'm stretching my knowledge here (perhaps someone more expert can correct me), but as I understand things, parfor does not set up an MPI ring for communication between workers, which is probably the explanation for the (rather uninformative) error message you're getting.
An spmd block enables communication between workers using labSend, labReceive and labBarrier. There are quite a few examples of using them all in the documentation.
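For example, a minimal sketch of point-to-point communication inside an spmd block (assuming a pool of two workers is already open):

spmd
    if labindex == 1
        labSend(42, 2);       % send the value 42 to worker 2
    elseif labindex == 2
        data = labReceive(1); % block until worker 1's message arrives
        fprintf('Worker 2 received %d\n', data);
    end
end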
Sam is right that the MPI functionality is not enabled during parfor, only during spmd. You need to do something more like this:
spmd
    funlist{labindex}(0);
end
(Sam is also quite right that the error message you saw is pretty unhelpful)
I am currently trying to modify a plugin for posting metrics to New Relic via AWS. I have successfully managed to make the plugin post metrics from SWF to New Relic (not originally in the plugin), but I have encountered a problem when the program runs for too long.
When the program runs for about 10 minutes, I get the following error:
Error occurred in poll cycle: Rate exceeded
I believe this is coming from my polling of SWF for the workflow executions:
domain.workflow_executions.each do |execution|
  starttime = execution.started_at
  endtime = execution.closed_at
  isOpen = execution.open?
  status = execution.status
  if endtime != nil
    running_workflow_runtime_total += (endtime - starttime)
    number_of_completed_executions += 1
  end
  if status.to_s == "open"
    openCount = openCount + 1
  elsif status.to_s == "completed"
    completedCount = completedCount + 1
  elsif status.to_s == "failed"
    failedCount = failedCount + 1
  elsif status.to_s == "timed_out"
    timed_outCount = timed_outCount + 1
  end
end
This is called in a polling cycle every 60 seconds.
Is there a way to set the polling rate? Or is there another way to get the workflow executions?
Thanks. Here's a link to the Ruby SDK for SWF => link
The issue is likely that you are creating a large number of workflow executions, and each iteration through the loop in workflow_executions is causing a lookup, which eventually exceeds your rate limit.
This could also be getting a bit expensive, so be careful.
It's not clear what you're really trying to do, so I can't tell you how to fix it unless you post all your code (or the parts around calls to SWF).
You can see here:
https://github.com/aws/aws-sdk-ruby/blob/05d15cd1b6037e98f2db45f8c2597014ee376a59/lib/aws/simple_workflow/workflow_execution_collection.rb
that a call is made to SWF for each workflow in the collection.
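If you only need to stay under the limit, one crude workaround is client-side throttling between the per-execution calls; the 0.2-second delay below is an assumption to tune against your account's SWF throttling limits:

domain.workflow_executions.each do |execution|
  # ... the same per-execution bookkeeping as in the question ...
  sleep 0.2 # crude rate limiting between the underlying SWF lookups
end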