How to only animate the even frames in Abaqus?

I'm working on thermal simulations in Abaqus. I only need to animate the even frame numbers, so instead of the animation going 1, 2, 3, 4, 5, I need it to go 2, 4, 6, 8, 10.
Is there a way to only show the even frames? If so, how?

Go to Result --> Active Steps/Frames and deselect the frames that you don't want to be displayed and animated.

You can do this easily with an Abaqus Python script.
The following is an overview of the steps:
# get the current viewport object
vpName = session.currentViewportName
viewport = session.viewports[vpName]
# get the odb object displayed in the viewport
odb = viewport.displayedObject
# get all the step names available in the odb
stepNames = odb.steps.keys()
# create the animation object (AVI is a symbolic constant from abaqusConstants)
ani = session.ImageAnimation(fileName='animation', format=AVI)
# add only the required frames to the animation object
for stepName in stepNames:
    stpID = odb.steps[stepName].number - 1
    nfrm = len(odb.steps[stepName].frames)
    for frmID in range(0, nfrm, 2):  # step of 2 --> only even frames are added
        viewport.odbDisplay.setFrame(step=stpID, frame=frmID)
        ani.writeFrame(canvasObjects=(viewport,))
ani.close()
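To use this, first make sure the ODB you want to animate is displayed in the current viewport (for example, opened via session.openOdb and assigned with viewport.setValues(displayedObject=...)); then run the script in Abaqus/CAE via File → Run Script.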

Related

Python: update plot with image with slider widget

I loaded a .nii file in my application.
def show(self):
    image = SimpleITK.ReadImage(file_name)
    t1 = SimpleITK.GetArrayFromImage(image)
    t2 = color.rgb2gray(t1[w.get()])
    print(w.get())
    print(t2.shape)
    plt.ion()
    fig = plt.figure()
    ax = fig.add_subplot(111)
    line1 = ax.imshow(t2, cmap='gray')
This function is called when I move the slider, and it shows me the slice of the brain in a new figure (a screenshot of the application is attached here: https://i.stack.imgur.com/vzDJt.png).
I need to update the same figure/plot; is that possible?
That should work, but I would not be calling ReadImage and GetArrayFromImage every time show gets called. You don't want to be re-loading and converting the image each time your widget changes. Do those things once, when the application starts.
If you look at the SimpleITK-Notebooks, that's pretty much how images are displayed in Jupyter notebooks.
http://insightsoftwareconsortium.github.io/SimpleITK-Notebooks/Python_html/04_Image_Display.html
The section 'Inline display with matplotlib' uses imshow to display images.
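For illustration, here is a minimal sketch of that restructuring, assuming file_name and the slider w exist as in the question; everything else is standard SimpleITK/matplotlib:

import SimpleITK
import matplotlib.pyplot as plt
from skimage import color

# do the expensive work once, at application start-up
image = SimpleITK.ReadImage(file_name)
t1 = SimpleITK.GetArrayFromImage(image)

# create the figure and the AxesImage once
plt.ion()
fig = plt.figure()
ax = fig.add_subplot(111)
line1 = ax.imshow(color.rgb2gray(t1[0]), cmap='gray')

def show(self):
    # slider callback: only swap the displayed slice in the existing image
    line1.set_data(color.rgb2gray(t1[w.get()]))
    fig.canvas.draw_idle()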

How can I determine the starting time of each frame in a video?

It is critical for me to determine the start time of each frame of a video.
Currently I have to set the starting point manually (for example, 848 here) using the MATLAB code below:
counter = 0;
v = VideoReader('video1.avi','CurrentTime',848);
while hasFrame(v)
    video_frame = readFrame(v);
    counter = counter + 1;
    if counter == 1
        imshow(video_frame)
        imhist(video_frame(:,:,1))
    end
end
What I want is to distinguish some video frames from the others by using a histogram. In the end, my aim is to reach the exact showing time of the distinguished frames.
After editing:
These are the frame histogram outputs:
The histogram size of some frames is different from the previous one; do you know the reason?
difference=[difference sum(abs(histcounts(video_frame)-histcounts(lastframe)))];
Because of taking the difference, I had to remove the frames with a different histogram size, but that causes some frames to be missed.
I haven't found a video example that looks like what you describe; please consider always providing an example.
This example code calculates the differences in the histcounts. Passing explicit bin edges (0:255) to histcounts keeps the histogram length constant across frames, which avoids the size mismatch you observed: with automatic binning, histcounts may choose a different number of bins for each frame. Please notice that waitforbuttonpress is in the loop, so you have to click for each frame while testing, or remove it when the video is too long. Does this work on your file?
v = VideoReader('sample.avi','CurrentTime',1);
figure1 = figure('unit','normalized','Position',[0.2 0.2 0.4 0.6]);
axes1 = subplot(3,1,1);
axes2 = subplot(3,1,2);
axes3 = subplot(3,1,3);
counter = 0;
difference = [];
video_frame = readFrame(v);
while hasFrame(v)
    lastframe = video_frame;
    video_frame = readFrame(v);
    counter = counter + 1;
    imshow(video_frame,'Parent',axes1);
    [a,b] = histcounts(video_frame(:,:,1));
    plot(b(1:end-1),a,'Parent',axes2);
    % fixed bin edges (0:255) keep the two histograms the same length
    difference = [difference sum(abs(histcounts(video_frame,0:255)-histcounts(lastframe,0:255)))];
    bar(1:counter,difference,'Parent',axes3);
    waitforbuttonpress
end
[~,onedistinguished] = max(difference);
% defining a threshold, e.g. every value that is bigger than 4000
multidistinguished = find(difference>4000);
disp(['majorly changed at: ' num2str(multidistinguished)]);
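As for the exact showing time: once you have a distinguished frame index k from the difference array, its time is approximately the 'CurrentTime' you started reading at plus (k-1)/v.FrameRate seconds, using the FrameRate property of the VideoReader object (assuming a constant frame rate).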

How to extract frames at particular intervals from video using MATLAB

I am using MATLAB 2013a for my project.
I face a problem while splitting a video into individual frames.
I want to know how to grab frames from a video at specific intervals, i.e., at a rate of one frame per second (1 frame/sec). My input video has 50 frames/sec. In the code I have used step() to slice the video into frames.
The following is my code, basically a face detection code (it detects multiple faces in a video). This code captures every frame in the video (i.e. about 50 fps) and processes it. I want to process frames at the rate of 1 fps. Please help me.
clear classes;
videoFileReader = vision.VideoFileReader('C:\Users\Desktop\project\05.mp4');
videoFrame = step(videoFileReader);
frameSize = size(videoFrame); % needed below for the video player position
faceDetector = vision.CascadeObjectDetector(); % Finds faces by default
tracker = MultiObjectTrackerKLT;
videoPlayer = vision.VideoPlayer('Position',[200 100 fliplr(frameSize(1:2)+30)]);
bboxes = [];
while isempty(bboxes)
    framergb = step(videoFileReader);
    frame = rgb2gray(framergb);
    bboxes = faceDetector.step(frame);
end
tracker.addDetections(frame, bboxes);
frameNumber = 0;
keepRunning = true;
while keepRunning
    framergb = step(videoFileReader);
    frame = rgb2gray(framergb);
    frameNumber = frameNumber + 1;
    if mod(frameNumber, 10) == 0
        bboxes = 2 * faceDetector.step(imresize(frame, 0.5));
        if ~isempty(bboxes)
            tracker.addDetections(frame, bboxes);
        end
    else
        % Track faces
        tracker.track(frame);
    end
end
%% Clean up
release(videoPlayer);
But this actually processes every frame. I want to grab 1 fps.
It cannot be done directly in MATLAB 2013a, because the video access library does not provide the feature you want. Writing the necessary code to implement an efficient frame-skipping routine is not really possible using just MATLAB code (you would need to look inside the video libraries).
Working around it, you have two basic options:
Do as little work as possible on frames that you do not want to process.
Where you currently have
framergb = step(videoFileReader);
Instead do something like
for i = 1:49
    step(videoFileReader);
end
framergb = step(videoFileReader);
(NB: this does not guard against reading past the end of the input; check isDone(videoFileReader) if that matters.)
Pre-process your file with a tool like ffmpeg, and reduce the frame rate before you use MATLAB.
The ffmpeg command might look something like this:
ffmpeg -i 05.mp4 -r 1 05_at_1fps.mp4
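If you prefer to drive that pre-processing step from a script rather than the shell, a small Python wrapper is one option. A sketch, assuming ffmpeg is on your PATH (reduce_fps is a hypothetical helper name):

import subprocess

def reduce_fps(src, dst, fps=1):
    """Re-encode src at the given frame rate into dst using ffmpeg."""
    subprocess.run(['ffmpeg', '-i', src, '-r', str(fps), dst], check=True)

reduce_fps('05.mp4', '05_at_1fps.mp4')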

psychopy polygon on top of image

Using PsychoPy v1.81.03 on a Mac, I want to draw a polygon (e.g. a triangle) on top of an image.
So far, my image always stays on top and thus hides the polygon, no matter the order I put them in. This remains true even if the polygon starts a frame later than the image.
E.g., see in the code below (created with the Builder before compiling) how both a blue square and a red triangle are supposed to start at frame 0, but when you run it the blue square always covers the red triangle!?
Is there a way to have the polygon on top? Do I somehow need to merge the image and polygon before drawing them?
Thank you so much for your help!!
Sebastian
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
This experiment was created using PsychoPy2 Experiment Builder (v1.81.03), Sun Jan 18 20:44:26 2015
If you publish work using this script please cite the relevant PsychoPy publications
Peirce, JW (2007) PsychoPy - Psychophysics software in Python. Journal of Neuroscience Methods, 162(1-2), 8-13.
Peirce, JW (2009) Generating stimuli for neuroscience using PsychoPy. Frontiers in Neuroinformatics, 2:10. doi: 10.3389/neuro.11.010.2008
"""
from __future__ import division  # so that 1/3=0.333 instead of 1/3=0
from psychopy import visual, core, data, event, logging, sound, gui
from psychopy.constants import *  # things like STARTED, FINISHED
import numpy as np  # whole numpy lib is available, prepend 'np.'
from numpy import sin, cos, tan, log, log10, pi, average, sqrt, std, deg2rad, rad2deg, linspace, asarray
from numpy.random import random, randint, normal, shuffle
import os  # handy system and path functions

# Ensure that relative paths start from the same directory as this script
_thisDir = os.path.dirname(os.path.abspath(__file__))
os.chdir(_thisDir)

# Store info about the experiment session
expName = u'test_triangle_over_square'  # from the Builder filename that created this script
expInfo = {'participant':'', 'session':'001'}
dlg = gui.DlgFromDict(dictionary=expInfo, title=expName)
if dlg.OK == False: core.quit()  # user pressed cancel
expInfo['date'] = data.getDateStr()  # add a simple timestamp
expInfo['expName'] = expName

# Data file name stem = absolute path + name; later add .psyexp, .csv, .log, etc
filename = _thisDir + os.sep + 'data/%s_%s_%s' % (expInfo['participant'], expName, expInfo['date'])

# An ExperimentHandler isn't essential but helps with data saving
thisExp = data.ExperimentHandler(name=expName, version='',
    extraInfo=expInfo, runtimeInfo=None,
    originPath=None,
    savePickle=True, saveWideText=True,
    dataFileName=filename)
# save a log file for detail verbose info
logFile = logging.LogFile(filename+'.log', level=logging.EXP)
logging.console.setLevel(logging.WARNING)  # this outputs to the screen, not a file

endExpNow = False  # flag for 'escape' or other condition => quit the exp

# Start Code - component code to be run before the window creation

# Setup the Window
win = visual.Window(size=(1280, 800), fullscr=True, screen=0, allowGUI=False, allowStencil=False,
    monitor='testMonitor', color=[0,0,0], colorSpace='rgb',
    blendMode='avg', useFBO=True,
    )
# store frame rate of monitor if we can measure it successfully
expInfo['frameRate'] = win.getActualFrameRate()
if expInfo['frameRate'] != None:
    frameDur = 1.0/round(expInfo['frameRate'])
else:
    frameDur = 1.0/60.0  # couldn't get a reliable measure so guess

# Initialize components for Routine "trial"
trialClock = core.Clock()
ISI = core.StaticPeriod(win=win, screenHz=expInfo['frameRate'], name='ISI')
square = visual.ImageStim(win=win, name='square', units='pix',
    image=None, mask=None,
    ori=0, pos=[0, 0], size=[200, 200],
    color=u'blue', colorSpace='rgb', opacity=1,
    flipHoriz=False, flipVert=False,
    texRes=128, interpolate=True, depth=-1.0)
polygon = visual.ShapeStim(win=win, name='polygon', units='pix',
    vertices=[[-[200, 300][0]/2.0, -[200, 300][1]/2.0], [+[200, 300][0]/2.0, -[200, 300][1]/2.0], [0, [200, 300][1]/2.0]],
    ori=0, pos=[0, 0],
    lineWidth=1, lineColor=[1,1,1], lineColorSpace='rgb',
    fillColor=u'red', fillColorSpace='rgb',
    opacity=1, interpolate=True)

# Create some handy timers
globalClock = core.Clock()  # to track the time since experiment started
routineTimer = core.CountdownTimer()  # to track time remaining of each (non-slip) routine

#------Prepare to start Routine "trial"-------
t = 0
trialClock.reset()  # clock
frameN = -1
# update component parameters for each repeat
# keep track of which components have finished
trialComponents = []
trialComponents.append(ISI)
trialComponents.append(square)
trialComponents.append(polygon)
for thisComponent in trialComponents:
    if hasattr(thisComponent, 'status'):
        thisComponent.status = NOT_STARTED

#-------Start Routine "trial"-------
continueRoutine = True
while continueRoutine:
    # get current time
    t = trialClock.getTime()
    frameN = frameN + 1  # number of completed frames (so 0 is the first frame)
    # update/draw components on each frame

    # *square* updates
    if frameN >= 0 and square.status == NOT_STARTED:
        # keep track of start time/frame for later
        square.tStart = t  # underestimates by a little under one frame
        square.frameNStart = frameN  # exact frame index
        square.setAutoDraw(True)

    # *polygon* updates
    if frameN >= 0 and polygon.status == NOT_STARTED:
        # keep track of start time/frame for later
        polygon.tStart = t  # underestimates by a little under one frame
        polygon.frameNStart = frameN  # exact frame index
        polygon.setAutoDraw(True)

    # *ISI* period
    if t >= 0.0 and ISI.status == NOT_STARTED:
        # keep track of start time/frame for later
        ISI.tStart = t  # underestimates by a little under one frame
        ISI.frameNStart = frameN  # exact frame index
        ISI.start(0.5)
    elif ISI.status == STARTED:  # one frame should pass before updating params and completing
        ISI.complete()  # finish the static period

    # check if all components have finished
    if not continueRoutine:  # a component has requested a forced-end of Routine
        routineTimer.reset()  # if we abort early the non-slip timer needs reset
        break
    continueRoutine = False  # will revert to True if at least one component still running
    for thisComponent in trialComponents:
        if hasattr(thisComponent, "status") and thisComponent.status != FINISHED:
            continueRoutine = True
            break  # at least one component has not yet finished

    # check for quit (the Esc key)
    if endExpNow or event.getKeys(keyList=["escape"]):
        core.quit()

    # refresh the screen
    if continueRoutine:  # don't flip if this routine is over or we'll get a blank screen
        win.flip()
    else:  # this Routine was not non-slip safe so reset non-slip timer
        routineTimer.reset()

#-------Ending Routine "trial"-------
for thisComponent in trialComponents:
    if hasattr(thisComponent, "setAutoDraw"):
        thisComponent.setAutoDraw(False)
win.close()
core.quit()
As per Jonas' comment above, PsychoPy uses a layering system in which subsequent stimuli are drawn on top of previous stimuli (as in his code examples).
In the graphical Builder environment, drawing order is represented by the vertical order of stimulus components: stimuli at the top are drawn first, and ones lower down are progressively layered upon them.
You can change the order of stimulus components by right-clicking on them and selecting "Move up", "Move down", etc. as required.
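Outside of Builder, the same rule can be seen with manual drawing: whatever is drawn later in a frame ends up on top. A minimal sketch, reusing the square and polygon stimuli from the script above:

square.draw()   # drawn first, so it ends up underneath
polygon.draw()  # drawn second, so it should end up on top
win.flip()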
Sebastian has, however, identified a bug here, in that the intended drawing order is not honoured between ImageStim and ShapeStim components. As a work-around, you might be able to replace your ShapeStim with a bitmap representation, displayed using an ImageStim. Multiple ImageStims should draw correctly (as do multiple ShapeStims). To get it to draw correctly on top of another image, be sure to save it as a .png file, which supports transparency. That way, only the actual shape will be drawn on top, as its background pixels can be set to be transparent and will not mask the underlying image.
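A sketch of that work-around, where triangle.png is a hypothetical image you create yourself, with transparent pixels everywhere outside the red triangle:

triangle = visual.ImageStim(win=win, name='triangle', units='pix',
    image='triangle.png', pos=[0, 0], size=[200, 300],
    interpolate=True)

You would then use triangle in place of polygon in the trial components list.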
For a long-term solution, I've added your issue as a bug report to the PsychoPy GitHub project here:
https://github.com/psychopy/psychopy/issues/795
It turned out to be a bug in the Polygon component in Builder.
This is fixed in the upcoming release (1.82.00). The changes needed to make the fix can be seen at
https://github.com/psychopy/psychopy/commit/af1af9a7a85cee9b4ec8ad5e2ff1f03140bd1a36
which you can add to your own installation if you like.
cheers,
Jon

mayavi volume animation not updating

I’m trying to animate a Mayavi pipeline volume:
src = mlab.pipeline.volume(mlab.pipeline.scalar_field(data), vmin=.1*np.max(data), vmax=.2*np.max(data))
which is combined in the pipeline with another dataset represented as a cut plane.
However, I can't get the volume visualization to update; only the first frame shows up. The animation is stepping through the data correctly (I get different values of np.max(data[t]) below) but nothing in the visualization changes.
My understanding is that mlab_source.set should trigger a re-render, and there's nothing on the web anywhere that describes this (as far as I can tell).
The animation looks like:
@mlab.show
@mlab.animate(delay=250, ui=True)
def anim(src, data, tax, fig):
    """Animate."""
    t = 0
    nt = len(tax)
    while 1:
        vmin = .1*np.max(data[t])
        vmax = .2*np.max(data[t])
        print 'animation t = ', tax[t], ', max = ', np.max(data[t])
        src.mlab_source.set(scalar=mlab.pipeline.scalar_field(data[t]), vmin=vmin, vmax=vmax)
        t = np.mod(t+1, nt)
        yield
Any thoughts?
