I am trying to use the L293D Motor Shield for NodeMCU, controlling it with MicroPython. I've only found one code example for MicroPython, and it does not appear to work. Does anyone have a code example I can start with that works?
Can you tell why my code example does not function as expected? Should I conclude that the problem is that my motor shield is dead or defective?
I've connected everything and checked that the wire connections are ok, and that the motor works. I've uploaded my code and it runs without errors.
from machine import Pin, PWM
import time
print("hello")
""" nodemcu pins from the motor shield """
pin1 = Pin(5, Pin.OUT) # D1
pin2 = Pin(4, Pin.OUT) # D2
pin3 = Pin(0, Pin.OUT) # D3
pin4 = Pin(2, Pin.OUT) # D4
""" named after the L9110 h-bridge pins """
BIN1 = PWM(pin1, freq=750)
BIN2 = PWM(pin3, freq=750)
AIN1 = PWM(pin2, freq=750)
AIN2 = PWM(pin4, freq=750)
""" TODO: variable speed """
speed = 950
def stop_all():
    for each in (BIN1, BIN2, AIN1, AIN2):
        each.duty(0)

def forward():
    BIN1.duty(speed)
    BIN2.duty(speed)
    AIN1.duty(speed)
    AIN2.duty(speed)
    print("inside forward")
forward()
time.sleep(5)
stop_all()
It's just stone dead. No voltage on the output of the motor shield (with or without the motor connected) and not even the slightest hum from the motor.
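For what it's worth, one thing that stands out in the code above is that forward() drives both inputs of each H-bridge channel with the same duty cycle; with both motor terminals at the same potential the motor sees no differential voltage, which would produce exactly the "stone dead" behaviour described. Below is a minimal MicroPython sketch of how a single channel is usually driven, with one input carrying the PWM duty and the other held at 0. It reuses the pin numbers from the question, but the channel-to-motor-terminal mapping is an assumption to check against the shield's silkscreen, not a verified fix.

from machine import Pin, PWM
import time

# One H-bridge channel: D1/D3 on the NodeMCU silkscreen (mapping assumed -- verify on your shield)
BIN1 = PWM(Pin(5), freq=750)   # D1
BIN2 = PWM(Pin(0), freq=750)   # D3

def motor_b(speed):
    """speed: -1023..1023; the sign selects direction, 0 lets the motor coast."""
    if speed >= 0:
        BIN1.duty(speed)   # drive: the two inputs of a channel must differ
        BIN2.duty(0)
    else:
        BIN1.duty(0)
        BIN2.duty(-speed)

motor_b(950)      # run one direction for 5 seconds
time.sleep(5)
motor_b(0)        # stop

If driving one input with PWM while holding the other low still gives no voltage at the shield's output terminals, a dead or mis-powered shield becomes a much more plausible explanation.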
I'm running a Buildroot Linux environment on an STM32MP157 dev board. I have a button with an internal pull-up on pin B12, and I want to fire an interrupt once the line goes low. On other Linux boards like the RPi, I've been able to call gpio_to_irq(<gpio#>) and get the IRQ for that pin. Done, simple. However, on this board there are only 16 external interrupt lines connected to the EXTI peripheral; they are configurable in the sense that any port may be connected to the EXTI, but the pin numbers cannot overlap. For example, GPIO A12 and B12 may NOT be connected to the EXTI at the same time. I have ensured that no other devices are using pin 12 on any GPIO port.
I have edited my DTS file to reflect that I want my GPIO B12 connected to the EXTI controller. But so far I have had no luck in making that happen. Here is the documentation for the interrupts provided by ST. If someone can explain how to fix the device tree such that I can request the B12 interrupt from my driver I would really appreciate it.
Here's my DTS file:
/dts-v1/;
#include "stm32mp157.dtsi"
#include "stm32mp15xa.dtsi"
#include "stm32mp15-pinctrl.dtsi"
#include "stm32mp15xxac-pinctrl.dtsi"
#include "stm32mp15xx-dkx.dtsi"
/ {
    model = "STMicroelectronics STM32MP157A-DK1 Discovery Board";
    compatible = "st,stm32mp157a-dk1", "st,stm32mp157";

    chosen {
        stdout-path = "serial0:115200n8";
    };

    button {
        compatible = "test,button";
        input-gpios = <&gpiob 12 (GPIO_ACTIVE_LOW | GPIO_PULL_UP)>; // Works with pull-up once the driver is loaded.
        interrupts-extended = <&gpiob 12 IRQ_TYPE_EDGE_FALLING>;
        interrupt-names = "qwerty";
        status = "okay";
    };

    led {
        extern-led {
            compatible = "test,led";
            gpios = <&gpiob 10 GPIO_ACTIVE_HIGH>;
            linux,default-trigger = "cpu";
        };
    };
};
I have tried the following:
interrupts-extended = <&exti 28 IRQ_TYPE_EDGE_FALLING>; (This SOC only has 16 pins per GPIO bank, so B12 is global GPIO 28)
interrupts-extended = <&gpiob 12 IRQ_TYPE_EDGE_FALLING>;
interrupt-parent = <&gpiob>;
interrupts = <12 IRQ_TYPE_EDGE_FALLING>;
Lastly, my stretch goal is to be able to request the IRQ by name, using the interrupt-names property in the device tree. Something like request_irq("qwerty"). Is that possible?
EDIT: I have temporarily connected my pushbutton to GPIO A12, and it successfully fires the interrupt, confirming that the EXTI #12 interrupt is connected to GPIO bank A. How can I go about changing this from within the device tree? Thank you in advance.
Okay, I have solved this. Iterating through the GPIO pins with the gpio*_to_irq() functions was the problem: as soon as the function is called, the kernel immediately configures the EXTI interface for that pin. I thought the EXTI was defaulting to Port A, but that was actually caused by iterating through all the GPIO pins looking for the interrupt number starting at GPIO 0, a.k.a. Port A Pin 0. By only calling gpio_to_irq()/gpiod_to_irq() for the pins you actually need, the kernel will properly configure the EXTI interface for the requested pins.
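As a side note (separate from the kernel-driver solution above): if the libgpiod v1 Python bindings happen to be in your Buildroot image, a quick userspace sanity check that bank B can own EXTI line 12 looks roughly like the sketch below. The gpiochip1-is-GPIOB mapping is an assumption; confirm it with gpiodetect/gpioinfo first.

import gpiod

# Assumption: gpiochip1 is the GPIOB bank on this board -- confirm with `gpiodetect`.
chip = gpiod.Chip('gpiochip1')
line = chip.get_line(12)   # B12
line.request(consumer='button-test', type=gpiod.LINE_REQ_EV_FALLING_EDGE)

print('waiting for a falling edge on B12...')
if line.event_wait(sec=30):
    print('got event:', line.event_read())
else:
    print('timed out')
line.release()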
Introduction:
To start, I want to say that the sensor does send its data when commanded; I've tested this with Python connected to a COM port on a PC. I will include the Python code that works with the sensor so that all the information is available to you. I'll also include a link to the PJRC forum thread where I asked the same question, because I've already gotten responses there but the issue still persists, and I want you to have what they said at your disposal.
(Python Code & PJRC Link will be at the very bottom of the post)
Problem:
My problem is that I cannot figure out how to properly send ASCII commands from the Teensy 3.5 and then read the flowmeter's response on the Teensy 3.5. I'm afraid the hardware is connected wrong, or I'm just going about something wrong.
The serial console stays blank, meaning nothing is available to read.
What I've Tried - Software:
This is basic code I was given that should work for my use:
char s;
void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);
  while (!Serial && (millis() < 5000)) {};
  Serial1.begin(115200);
  delay(1000);
  Serial1.print("?\r\n");
}

void loop() {
  // put your main code here, to run repeatedly:
  while (Serial1.available()) {
    s = Serial1.read();
    Serial.print(s);
  }
}
What I've Tried - Hardware:
Image of TSI FlowMeter 5130 w/Cables
Black Wire - USB_C to USB_A - connected to a 5v power supply
Blue/White Wire - USB_A to MALE DB9
Image of Cables that connect the Flowmeter & Teensy 3.5
Blue/White Wire - Male DB9
Tan Serial Gender Converter - Female DB9 to Female DB9
Black Converter Board - Male DB9 to 4-Wire TTL (Red - VCC, Yellow - Transmit, Blue - Receive, Black - GND)
Image of RS232 to TTL Wiring
Yellow Wire - Teensy Transmit Pin 1
Blue Wire - Teensy Receive Pin 0
Red Wire - Currently Set to 5v, but I've tried 3.3v to no avail
Black Wire - GND
Image of LEDs Wired into Rx/Tx of Teensy to watch for data being sent
Blue LED - (Yellow - Teensy Receive Pin 0, Orange - GND)
Green LED - (Red - Teensy Transmit Pin 1, Brown - GND)
Image - 5v Power Supply
White Wire - Teensy 5v
Purple Wire - Teensy GND
Python Code:
import serial
import time
index = 0
total = 0
i = 0
avg = 0
# Serial Connection
time.sleep(.5)
ser = serial.Serial(
port="COM2", baudrate = 115200,
parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE,
bytesize=serial.EIGHTBITS, timeout=1)
# Write ASCII Commands To TSI 5300 Flow Sensor
ser.write(b'?\r\n') # Ask Sensor if it is getting a signal (Returns "OK")
ser.write(b'SUS\r\n') # Set Flow type to SLPM (Returns "OK")
ser.write(b'SG0\r\n') # Set Flow Gas to Air (Returns "OK")
ser.write(b'SSR0005\r\n') # Set Sample Rate to 5ms (Returns "OK")
ser.write(b'LPZ\r\n') # Zero Low Pressure Sensor
# Read serial output to remove all 'OK's from buffer
while (i <= 4):
    OK = ser.readline() # Read one line of serial and discard it
    print(OK)
    i += 1
# Ask for 5 Flow readings
ser.write(b'DAFxxxxx0005\r\n') # Read 5 sensor Flow Reading
ser.readline() # Read one line of serial data and discard it
byte = ser.readline() # Read one line of serial data and store it
print("Unfiltered Bytes: " + str(byte))
string = byte.decode('utf-8') # Convert from BYTE to STRING
array = string.split(',') # Convert from STRING to STRING ARRAY
print("String Array of all 5 readings: " + str(array))
# Convert each element of the ARRAY to FLOAT then add them together
for data in array:
    index += 1
    data = float(data)
    total += data
avg = total / index # Find the average Flow in LPM
print("Average Flow Rate: " + str(avg) + " LPM")
time.sleep(1)
ser.close()
PJRC LINK:
https://forum.pjrc.com/threads/69679-Sending-ASCII-Commands-to-a-Teensy-3-5-Via-RS232-to-TTL-Converter
Yes, you should be able to connect it to the second USB port of the Teensy. This port acts as a host. Whether it works depends, of course, on which USB interface your flowmeter implements. If it implements some standard (e.g. CDC, a.k.a. virtual serial, or some HID interface), the USB Host library can probably communicate with it. If they implemented a proprietary interface, you would need to write a corresponding driver first...
I assume they implemented a CDC interface. You can easily check: if you connect the flowmeter to a PC, a COM port (on Windows) should appear in Device Manager.
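A quick way to run that same check from Python, using pyserial (which the code above already depends on), is to list the serial ports before and after plugging the flowmeter in; this is just an illustrative sketch, not something from the original thread:

from serial.tools import list_ports

# Print every serial port the OS currently exposes; run once with the flowmeter
# unplugged and once with it plugged in to see whether it enumerates as a CDC/COM device.
for port in list_ports.comports():
    print(port.device, '-', port.description)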
I found the solution! It didn't matter which serial port it was on (Serial1 or Serial2); the problem was that I had to start the Teensy before the flowmeter and give the flowmeter 20 seconds to boot up before letting the Teensy send any commands. This sensor is so slow that it takes 50 seconds to fully boot up to the test screen! I just used a 5 V relay to delay the flowmeter turning on. Thanks for your help!
I am facing a problem. I have code that reads temperature and humidity from a DHT11 sensor. I uploaded the following code to the Arduino over USB serial, and I can read the temperature and humidity values as long as the Arduino stays connected to the laptop via USB.
Temperature and humidity are read as 0 when I power the Arduino from a 12 V DC, 700 mA adapter.
I want to deploy the Arduino with the DHT sensors in a greenhouse to read the environmental conditions, but when I power it from the DC adapter or a battery, it gives "0" output. Note: the values are verified when they are transferred via Ethernet to the web server.
Please help me solve this problem.
// Missing header and pin definitions added so the sketch compiles
#include "DHT.h"

#define DHTPIN 2        // data pin the DHT11 is wired to (adjust to your wiring)
#define DHTTYPE DHT11   // sensor type

DHT dht(DHTPIN, DHTTYPE);

void setup() {
  Serial.begin(9600);
  Serial.println("DHTxx test!");
  dht.begin();
}

void loop() {
  // Wait a few seconds between measurements.
  delay(2000);
  // Reading temperature or humidity takes about 250 milliseconds!
  // Sensor readings may also be up to 2 seconds 'old' (it's a very slow sensor)
  float h = dht.readHumidity();
  // Read temperature as Celsius (the default)
  float t = dht.readTemperature();
  // Read temperature as Fahrenheit (isFahrenheit = true)
  float f = dht.readTemperature(true);
  // Print the readings so they can be checked on the serial monitor
  Serial.print("Humidity: "); Serial.print(h);
  Serial.print(" %  Temp: "); Serial.print(t); Serial.println(" *C");
}
I am developing an application in Python to control ONVIF-compatible cameras.
Software: Debian Wheezy, Python 2.7, Quatanium python-onvif client
Hardware: Raspberry Pi 2 B, IP camera on local router, wifi/VNC for development
The required PTZ functions include Absolute Move, Relative Move, Continuous Move, Stop and using Preset positions. With the extracted test code below, I have all of that working except Absolute and Relative Moves. All of this code executes without any errors but the camera does not move for the Absolute or Relative Moves. I hope someone can suggest the problem with those two functions. The example is a bit long but I have tried to include enough code to show the contrast between working and non-working (with upper-case comments) portions for reference and test.
A test sketch:
#!/usr/bin/python
#-------------------------------------------------------------------------------
#Test of Python and Quatanium Python-ONVIF with NETCAT camera PT-PTZ2087
#ONVIF Client implementation is in Python
#For IP control of PTZ, the camera should be compliant with ONVIF Profile S
#The PTZ2087 reports it is ONVIF 2.04 but is actually 2.4 (Netcat said text not changed after upgrade)
#------------------------------------------------------------------------------
import onvifconfig
if __name__ == '__main__':
    #Do all setup initializations
    ptz = onvifconfig.ptzcam()

    #*****************************************************************************
    # IP camera motion tests
    #*****************************************************************************
    print 'Starting tests...'

    #Set preset
    ptz.move_pan(1.0, 1) #move to a new home position
    ptz.set_preset('home')

    # move right -- (velocity, duration of move)
    ptz.move_pan(1.0, 2)
    # move left
    ptz.move_pan(-1.0, 2)
    # move down
    ptz.move_tilt(-1.0, 2)
    # Move up
    ptz.move_tilt(1.0, 2)
    # zoom in
    ptz.zoom(8.0, 2)
    # zoom out
    ptz.zoom(-8.0, 2)

    #Absolute pan-tilt (pan position, tilt position, velocity)
    #DOES NOT RESULT IN CAMERA MOVEMENT
    ptz.move_abspantilt(-1.0, 1.0, 1.0)
    ptz.move_abspantilt(1.0, -1.0, 1.0)

    #Relative move (pan increment, tilt increment, velocity)
    #DOES NOT RESULT IN CAMERA MOVEMENT
    ptz.move_relative(0.5, 0.5, 8.0)

    #Get presets
    ptz.get_preset()

    #Go back to preset
    ptz.goto_preset('home')

    exit()
The referenced class:
#*****************************************************************************
#IP Camera control
#Control methods:
# rtsp video streaming via OpenCV for frame capture
# ONVIF for PTZ control
# ONVIF for setup selections
#
# Starting point for this code was from:
# https://github.com/quatanium/python-onvif
#*****************************************************************************
import sys
sys.path.append('/usr/local/lib/python2.7/dist-packages/onvif')
from onvif import ONVIFCamera
from time import sleep
class ptzcam():
    def __init__(self):
        print 'IP camera initialization'

        #Several cameras that have been tried -------------------------------------
        #Netcat camera (on my local network) Port 8899
        self.mycam = ONVIFCamera('192.168.1.10', 8899, 'admin', 'admin', '/etc/onvif/wsdl/')
        #This is a demo camera that anyone can use for testing
        #Toshiba IKS-WP816R
        #self.mycam = ONVIFCamera('67.137.21.190', 80, 'toshiba', 'security', '/etc/onvif/wsdl/')
        print 'Connected to ONVIF camera'

        # Create media service object
        self.media = self.mycam.create_media_service()
        print 'Created media service object'
        print

        # Get target profile
        self.media_profile = self.media.GetProfiles()[0]
        # Use the first profile and Profiles have at least one
        token = self.media_profile._token

        #PTZ controls -------------------------------------------------------------
        print
        # Create ptz service object
        print 'Creating PTZ object'
        self.ptz = self.mycam.create_ptz_service()
        print 'Created PTZ service object'
        print

        #Get available PTZ services
        request = self.ptz.create_type('GetServiceCapabilities')
        Service_Capabilities = self.ptz.GetServiceCapabilities(request)
        print 'PTZ service capabilities:'
        print Service_Capabilities
        print

        #Get PTZ status
        status = self.ptz.GetStatus({'ProfileToken': token})
        print 'PTZ status:'
        print status
        print 'Pan position:', status.Position.PanTilt._x
        print 'Tilt position:', status.Position.PanTilt._y
        print 'Zoom position:', status.Position.Zoom._x
        print 'Pan/Tilt Moving?:', status.MoveStatus.PanTilt
        print

        # Get PTZ configuration options for getting option ranges
        request = self.ptz.create_type('GetConfigurationOptions')
        request.ConfigurationToken = self.media_profile.PTZConfiguration._token
        ptz_configuration_options = self.ptz.GetConfigurationOptions(request)
        print 'PTZ configuration options:'
        print ptz_configuration_options
        print

        self.requestc = self.ptz.create_type('ContinuousMove')
        self.requestc.ProfileToken = self.media_profile._token

        self.requesta = self.ptz.create_type('AbsoluteMove')
        self.requesta.ProfileToken = self.media_profile._token
        print 'Absolute move options'
        print self.requesta
        print

        self.requestr = self.ptz.create_type('RelativeMove')
        self.requestr.ProfileToken = self.media_profile._token
        print 'Relative move options'
        print self.requestr
        print

        self.requests = self.ptz.create_type('Stop')
        self.requests.ProfileToken = self.media_profile._token
        self.requestp = self.ptz.create_type('SetPreset')
        self.requestp.ProfileToken = self.media_profile._token
        self.requestg = self.ptz.create_type('GotoPreset')
        self.requestg.ProfileToken = self.media_profile._token

        print 'Initial PTZ stop'
        print
        self.stop()
    #Stop pan, tilt and zoom
    def stop(self):
        self.requests.PanTilt = True
        self.requests.Zoom = True
        print 'Stop:'
        #print self.requests
        print
        self.ptz.Stop(self.requests)
        print 'Stopped'

    #Continuous move functions
    def perform_move(self, timeout):
        # Start continuous move
        ret = self.ptz.ContinuousMove(self.requestc)
        print 'Continuous move completed', ret
        # Wait a certain time
        sleep(timeout)
        # Stop continuous move
        self.stop()
        sleep(2)
        print

    def move_tilt(self, velocity, timeout):
        print 'Move tilt...', velocity
        self.requestc.Velocity.PanTilt._x = 0.0
        self.requestc.Velocity.PanTilt._y = velocity
        self.perform_move(timeout)

    def move_pan(self, velocity, timeout):
        print 'Move pan...', velocity
        self.requestc.Velocity.PanTilt._x = velocity
        self.requestc.Velocity.PanTilt._y = 0.0
        self.perform_move(timeout)

    def zoom(self, velocity, timeout):
        print 'Zoom...', velocity
        self.requestc.Velocity.Zoom._x = velocity
        self.perform_move(timeout)
    #Absolute move functions --NO ERRORS BUT CAMERA DOES NOT MOVE
    def move_abspantilt(self, pan, tilt, velocity):
        self.requesta.Position.PanTilt._x = pan
        self.requesta.Position.PanTilt._y = tilt
        self.requesta.Speed.PanTilt._x = velocity
        self.requesta.Speed.PanTilt._y = velocity
        print 'Absolute move to:', self.requesta.Position
        print 'Absolute speed:', self.requesta.Speed
        ret = self.ptz.AbsoluteMove(self.requesta)
        print 'Absolute move pan-tilt requested:', pan, tilt, velocity
        sleep(2.0)
        print 'Absolute move completed', ret
        print

    #Relative move functions --NO ERRORS BUT CAMERA DOES NOT MOVE
    def move_relative(self, pan, tilt, velocity):
        self.requestr.Translation.PanTilt._x = pan
        self.requestr.Translation.PanTilt._y = tilt
        self.requestr.Speed.PanTilt._x = velocity
        self.requestr.Speed.PanTilt._y = velocity
        ret = self.ptz.RelativeMove(self.requestr)
        print 'Relative move pan-tilt', pan, tilt, velocity
        sleep(2.0)
        print 'Relative move completed', ret
        print

    #Preset set, query and go to
    def set_preset(self, name):
        self.requestp.PresetName = name
        self.requestp.PresetToken = '1'
        self.preset = self.ptz.SetPreset(self.requestp) #returns the PresetToken
        print 'Set Preset:'
        print self.preset
        print

    def get_preset(self):
        self.ptzPresetsList = self.ptz.GetPresets(self.requestc)
        print 'Got preset:'
        print self.ptzPresetsList[0]
        print

    def goto_preset(self, name):
        self.requestg.PresetToken = '1'
        self.ptz.GotoPreset(self.requestg)
        print 'Going to Preset:'
        print name
        print
@Ottavio, sorry that I did not make it clear that the camera I used for this test, a Netcat PT-PTZ2084XM-A, reported via ONVIF query that it supports Absolute and Relative moves. I have since verified via the onvif.org site that this camera has not been tested and certified to meet ONVIF standards. I have also verified that the above code works correctly with an Amcrest IP2M-841B PTZ camera. The upshot of all of this is to never trust the claim that a camera is ONVIF 2.x compatible without testing it. Even the Amcrest has problems with both ONVIF and CGI commands for zoom. Neither Netcat nor Amcrest have been very helpful in resolving these technical problems.
AbsoluteMove and RelativeMove in the Profile S specification are CONDITIONAL MANDATORY, thus it is not guaranteed a priori that they are supported.
You need to check the camera's features.
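For example, with the same python-onvif objects used in the question, the advertised coordinate spaces can be inspected before attempting a move; an absent AbsolutePanTiltPositionSpace or RelativePanTiltTranslationSpace strongly suggests the corresponding move is unsupported. This is only a sketch: the attribute names follow the ONVIF PTZ schema and should be checked against what your camera actually returns.

from onvif import ONVIFCamera

# Connection parameters copied from the question; adjust for your camera.
mycam = ONVIFCamera('192.168.1.10', 8899, 'admin', 'admin', '/etc/onvif/wsdl/')
media = mycam.create_media_service()
profile = media.GetProfiles()[0]
ptz = mycam.create_ptz_service()

request = ptz.create_type('GetConfigurationOptions')
request.ConfigurationToken = profile.PTZConfiguration._token
options = ptz.GetConfigurationOptions(request)
spaces = options.Spaces   # PTZSpaces element from the ONVIF schema (name assumed)

# An absent or empty space list means the camera does not advertise that move type.
if getattr(spaces, 'AbsolutePanTiltPositionSpace', None):
    print 'AbsoluteMove (pan/tilt) is advertised'
else:
    print 'AbsoluteMove (pan/tilt) is not advertised -- do not expect it to work'
if getattr(spaces, 'RelativePanTiltTranslationSpace', None):
    print 'RelativeMove (pan/tilt) is advertised'
else:
    print 'RelativeMove (pan/tilt) is not advertised -- do not expect it to work'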
Using PsychoPy version 1.81.03 on a Mac, I want to draw a polygon (e.g. a triangle) on top of an image.
So far, my image always stays on top and thus hides the polygon, no matter which order I put them in. This stays true even if I have the polygon start a frame later than the image.
For example, see in the code below (created with the Builder before compiling) how both a blue square and a red triangle are supposed to start at frame 0, but when you run it the blue square always covers the red triangle!?
Is there a way to have the polygon on top? Do I somehow need to merge the image and polygon before drawing them?
Thank you so much for your help!!
Sebastian
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
This experiment was created using PsychoPy2 Experiment Builder (v1.81.03), Sun Jan 18 20:44:26 2015
If you publish work using this script please cite the relevant PsychoPy publications
Peirce, JW (2007) PsychoPy - Psychophysics software in Python. Journal of Neuroscience Methods, 162(1-2), 8-13.
Peirce, JW (2009) Generating stimuli for neuroscience using PsychoPy. Frontiers in Neuroinformatics, 2:10. doi: 10.3389/neuro.11.010.2008
"""
from __future__ import division # so that 1/3=0.333 instead of 1/3=0
from psychopy import visual, core, data, event, logging, sound, gui
from psychopy.constants import * # things like STARTED, FINISHED
import numpy as np # whole numpy lib is available, prepend 'np.'
from numpy import sin, cos, tan, log, log10, pi, average, sqrt, std, deg2rad, rad2deg, linspace, asarray
from numpy.random import random, randint, normal, shuffle
import os # handy system and path functions
# Ensure that relative paths start from the same directory as this script
_thisDir = os.path.dirname(os.path.abspath(__file__))
os.chdir(_thisDir)
# Store info about the experiment session
expName = u'test_triangle_over_square' # from the Builder filename that created this script
expInfo = {'participant':'', 'session':'001'}
dlg = gui.DlgFromDict(dictionary=expInfo, title=expName)
if dlg.OK == False: core.quit() # user pressed cancel
expInfo['date'] = data.getDateStr() # add a simple timestamp
expInfo['expName'] = expName
# Data file name stem = absolute path + name; later add .psyexp, .csv, .log, etc
filename = _thisDir + os.sep + 'data/%s_%s_%s' %(expInfo['participant'], expName, expInfo['date'])
# An ExperimentHandler isn't essential but helps with data saving
thisExp = data.ExperimentHandler(name=expName, version='',
extraInfo=expInfo, runtimeInfo=None,
originPath=None,
savePickle=True, saveWideText=True,
dataFileName=filename)
#save a log file for detail verbose info
logFile = logging.LogFile(filename+'.log', level=logging.EXP)
logging.console.setLevel(logging.WARNING) # this outputs to the screen, not a file
endExpNow = False # flag for 'escape' or other condition => quit the exp
# Start Code - component code to be run before the window creation
# Setup the Window
win = visual.Window(size=(1280, 800), fullscr=True, screen=0, allowGUI=False, allowStencil=False,
monitor='testMonitor', color=[0,0,0], colorSpace='rgb',
blendMode='avg', useFBO=True,
)
# store frame rate of monitor if we can measure it successfully
expInfo['frameRate']=win.getActualFrameRate()
if expInfo['frameRate'] != None:
    frameDur = 1.0/round(expInfo['frameRate'])
else:
    frameDur = 1.0/60.0 # couldn't get a reliable measure so guess
# Initialize components for Routine "trial"
trialClock = core.Clock()
ISI = core.StaticPeriod(win=win, screenHz=expInfo['frameRate'], name='ISI')
square = visual.ImageStim(win=win, name='square',units='pix',
image=None, mask=None,
ori=0, pos=[0, 0], size=[200, 200],
color=u'blue', colorSpace='rgb', opacity=1,
flipHoriz=False, flipVert=False,
texRes=128, interpolate=True, depth=-1.0)
polygon = visual.ShapeStim(win=win, name='polygon',units='pix',
vertices = [[-[200, 300][0]/2.0,-[200, 300][1]/2.0], [+[200, 300][0]/2.0,-[200, 300][1]/2.0], [0,[200, 300][1]/2.0]],
ori=0, pos=[0, 0],
lineWidth=1, lineColor=[1,1,1], lineColorSpace='rgb',
fillColor=u'red', fillColorSpace='rgb',
opacity=1,interpolate=True)
# Create some handy timers
globalClock = core.Clock() # to track the time since experiment started
routineTimer = core.CountdownTimer() # to track time remaining of each (non-slip) routine
#------Prepare to start Routine "trial"-------
t = 0
trialClock.reset() # clock
frameN = -1
# update component parameters for each repeat
# keep track of which components have finished
trialComponents = []
trialComponents.append(ISI)
trialComponents.append(square)
trialComponents.append(polygon)
for thisComponent in trialComponents:
    if hasattr(thisComponent, 'status'):
        thisComponent.status = NOT_STARTED
#-------Start Routine "trial"-------
continueRoutine = True
while continueRoutine:
    # get current time
    t = trialClock.getTime()
    frameN = frameN + 1 # number of completed frames (so 0 is the first frame)
    # update/draw components on each frame

    # *square* updates
    if frameN >= 0 and square.status == NOT_STARTED:
        # keep track of start time/frame for later
        square.tStart = t # underestimates by a little under one frame
        square.frameNStart = frameN # exact frame index
        square.setAutoDraw(True)

    # *polygon* updates
    if frameN >= 0 and polygon.status == NOT_STARTED:
        # keep track of start time/frame for later
        polygon.tStart = t # underestimates by a little under one frame
        polygon.frameNStart = frameN # exact frame index
        polygon.setAutoDraw(True)

    # *ISI* period
    if t >= 0.0 and ISI.status == NOT_STARTED:
        # keep track of start time/frame for later
        ISI.tStart = t # underestimates by a little under one frame
        ISI.frameNStart = frameN # exact frame index
        ISI.start(0.5)
    elif ISI.status == STARTED: #one frame should pass before updating params and completing
        ISI.complete() #finish the static period

    # check if all components have finished
    if not continueRoutine: # a component has requested a forced-end of Routine
        routineTimer.reset() # if we abort early the non-slip timer needs reset
        break
    continueRoutine = False # will revert to True if at least one component still running
    for thisComponent in trialComponents:
        if hasattr(thisComponent, "status") and thisComponent.status != FINISHED:
            continueRoutine = True
            break # at least one component has not yet finished

    # check for quit (the Esc key)
    if endExpNow or event.getKeys(keyList=["escape"]):
        core.quit()

    # refresh the screen
    if continueRoutine: # don't flip if this routine is over or we'll get a blank screen
        win.flip()
    else: # this Routine was not non-slip safe so reset non-slip timer
        routineTimer.reset()
#-------Ending Routine "trial"-------
for thisComponent in trialComponents:
    if hasattr(thisComponent, "setAutoDraw"):
        thisComponent.setAutoDraw(False)
win.close()
core.quit()
As per Jonas' comment above, PsychoPy uses a layering system in which subsequent stimuli are drawn on top of previous stimuli (as in his code examples).
In the graphical Builder environment, drawing order is represented by the vertical order of stimulus components: stimuli at the top are drawn first, and ones lower down are progressively layered upon them.
You can change the order of stimulus components by right-clicking on them and selecting "Move up", "move down", etc as required.
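Outside Builder, the same layering principle can be shown with explicit draw() calls: within a frame, whatever is drawn later lands on top. Here is a minimal Coder-style sketch (my own illustration, not taken from the original scripts), using the same kind of blue ImageStim square and red ShapeStim triangle as the question:

from psychopy import visual, core

win = visual.Window(size=(800, 600), units='pix', color=[0, 0, 0])
square = visual.ImageStim(win, image=None, size=(200, 200), color='blue', colorSpace='rgb')
triangle = visual.ShapeStim(win, vertices=[(-100, -150), (100, -150), (0, 150)],
                            fillColor='red', lineColor='white')

square.draw()     # drawn first, so it sits underneath
triangle.draw()   # drawn last, so it should appear on top
win.flip()
core.wait(2.0)
win.close()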
Sebastian has, however, identified a bug here, in that the intended drawing order is not honoured between ImageStim and ShapeStim components. As a work-around, you might be able to replace your ShapeStim with a bitmap representation, displayed using an ImageStim. Multiple ImageStims should draw correctly (as do multiple ShapeStims). To get it to draw correctly on top of another image, be sure to save it as a .png file, which supports transparency. That way, only the actual shape will be drawn on top, as its background pixels can be set to be transparent and will not mask the underlying image.
For a long-term solution, I've added your issue as a bug report to the PsychoPy GitHub project here:
https://github.com/psychopy/psychopy/issues/795
It turned out to be a bug in the Polygon component in Builder.
This is fixed in the upcoming release (1.82.00). The changes needed to make the fix can be seen at
https://github.com/psychopy/psychopy/commit/af1af9a7a85cee9b4ec8ad5e2ff1f03140bd1a36
which you can add to your own installation if you like.
cheers,
Jon