Python-ONVIF PTZ control -- Absolute and Relative Move

I am developing an application in Python to control ONVIF-compatible cameras.
Software: Debian Wheezy, Python 2.7, Quatanium python-onvif client
Hardware: Raspberry Pi 2 B, IP camera on local router, wifi/VNC for development
The required PTZ functions include Absolute Move, Relative Move, Continuous Move, Stop and the use of preset positions. With the extracted test code below I have all of that working except the Absolute and Relative Moves. All of the code executes without errors, but the camera does not move for the Absolute or Relative Moves. I hope someone can suggest what is wrong with those two functions. The example is a bit long, but I have tried to include enough code to show the contrast between the working and non-working portions (marked with upper-case comments) for reference and testing.
A test sketch:
#!/usr/bin/python
#-------------------------------------------------------------------------------
#Test of Python and Quatanium Python-ONVIF with NETCAT camera PT-PTZ2087
#ONVIF Client implementation is in Python
#For IP control of PTZ, the camera should be compliant with ONVIF Profile S
#The PTZ2087 reports it is ONVIF 2.04 but is actually 2.4 (Netcat says the version text was not changed after a firmware upgrade)
#------------------------------------------------------------------------------
import onvifconfig
if __name__ == '__main__':
#Do all setup initializations
ptz = onvifconfig.ptzcam()
#*****************************************************************************
# IP camera motion tests
#*****************************************************************************
print 'Starting tests...'
#Set preset
ptz.move_pan(1.0, 1) #move to a new home position
ptz.set_preset('home')
# move right -- (velocity, duration of move)
ptz.move_pan(1.0, 2)
# move left
ptz.move_pan(-1.0, 2)
# move down
ptz.move_tilt(-1.0, 2)
# Move up
ptz.move_tilt(1.0, 2)
# zoom in
ptz.zoom(8.0, 2)
# zoom out
ptz.zoom(-8.0, 2)
#Absolute pan-tilt (pan position, tilt position, velocity)
#DOES NOT RESULT IN CAMERA MOVEMENT
ptz.move_abspantilt(-1.0, 1.0, 1.0)
ptz.move_abspantilt(1.0, -1.0, 1.0)
#Relative move (pan increment, tilt increment, velocity)
#DOES NOT RESULT IN CAMERA MOVEMENT
ptz.move_relative(0.5, 0.5, 8.0)
#Get presets
ptz.get_preset()
#Go back to preset
ptz.goto_preset('home')
exit()
The referenced class:
#*****************************************************************************
#IP Camera control
#Control methods:
# rtsp video streaming via OpenCV for frame capture
# ONVIF for PTZ control
# ONVIF for setup selections
#
# Starting point for this code was from:
# https://github.com/quatanium/python-onvif
#*****************************************************************************
import sys
sys.path.append('/usr/local/lib/python2.7/dist-packages/onvif')
from onvif import ONVIFCamera
from time import sleep
class ptzcam():
def __init__(self):
print 'IP camera initialization'
#Several cameras that have been tried -------------------------------------
#Netcat camera (on my local network) Port 8899
self.mycam = ONVIFCamera('192.168.1.10', 8899, 'admin', 'admin', '/etc/onvif/wsdl/')
#This is a demo camera that anyone can use for testing
#Toshiba IKS-WP816R
#self.mycam = ONVIFCamera('67.137.21.190', 80, 'toshiba', 'security', '/etc/onvif/wsdl/')
print 'Connected to ONVIF camera'
# Create media service object
self.media = self.mycam.create_media_service()
print 'Created media service object'
print
# Get target profile
self.media_profile = self.media.GetProfiles()[0]
# Use the first profile; devices report at least one
token = self.media_profile._token
#PTZ controls -------------------------------------------------------------
print
# Create ptz service object
print 'Creating PTZ object'
self.ptz = self.mycam.create_ptz_service()
print 'Created PTZ service object'
print
#Get available PTZ services
request = self.ptz.create_type('GetServiceCapabilities')
Service_Capabilities = self.ptz.GetServiceCapabilities(request)
print 'PTZ service capabilities:'
print Service_Capabilities
print
#Get PTZ status
status = self.ptz.GetStatus({'ProfileToken':token})
print 'PTZ status:'
print status
print 'Pan position:', status.Position.PanTilt._x
print 'Tilt position:', status.Position.PanTilt._y
print 'Zoom position:', status.Position.Zoom._x
print 'Pan/Tilt Moving?:', status.MoveStatus.PanTilt
print
# Get PTZ configuration options for getting option ranges
request = self.ptz.create_type('GetConfigurationOptions')
request.ConfigurationToken = self.media_profile.PTZConfiguration._token
ptz_configuration_options = self.ptz.GetConfigurationOptions(request)
print 'PTZ configuration options:'
print ptz_configuration_options
print
self.requestc = self.ptz.create_type('ContinuousMove')
self.requestc.ProfileToken = self.media_profile._token
self.requesta = self.ptz.create_type('AbsoluteMove')
self.requesta.ProfileToken = self.media_profile._token
print 'Absolute move options'
print self.requesta
print
self.requestr = self.ptz.create_type('RelativeMove')
self.requestr.ProfileToken = self.media_profile._token
print 'Relative move options'
print self.requestr
print
self.requests = self.ptz.create_type('Stop')
self.requests.ProfileToken = self.media_profile._token
self.requestp = self.ptz.create_type('SetPreset')
self.requestp.ProfileToken = self.media_profile._token
self.requestg = self.ptz.create_type('GotoPreset')
self.requestg.ProfileToken = self.media_profile._token
print 'Initial PTZ stop'
print
self.stop()
#Stop pan, tilt and zoom
def stop(self):
self.requests.PanTilt = True
self.requests.Zoom = True
print 'Stop:'
#print self.requests
print
self.ptz.Stop(self.requests)
print 'Stopped'
#Continuous move functions
def perform_move(self, timeout):
# Start continuous move
ret = self.ptz.ContinuousMove(self.requestc)
print 'Continuous move completed', ret
# Wait a certain time
sleep(timeout)
# Stop continuous move
self.stop()
sleep(2)
print
def move_tilt(self, velocity, timeout):
print 'Move tilt...', velocity
self.requestc.Velocity.PanTilt._x = 0.0
self.requestc.Velocity.PanTilt._y = velocity
self.perform_move(timeout)
def move_pan(self, velocity, timeout):
print 'Move pan...', velocity
self.requestc.Velocity.PanTilt._x = velocity
self.requestc.Velocity.PanTilt._y = 0.0
self.perform_move(timeout)
def zoom(self, velocity, timeout):
print 'Zoom...', velocity
self.requestc.Velocity.Zoom._x = velocity
self.perform_move(timeout)
#Absolute move functions --NO ERRORS BUT CAMERA DOES NOT MOVE
def move_abspantilt(self, pan, tilt, velocity):
self.requesta.Position.PanTilt._x = pan
self.requesta.Position.PanTilt._y = tilt
self.requesta.Speed.PanTilt._x = velocity
self.requesta.Speed.PanTilt._y = velocity
print 'Absolute move to:', self.requesta.Position
print 'Absolute speed:',self.requesta.Speed
ret = self.ptz.AbsoluteMove(self.requesta)
print 'Absolute move pan-tilt requested:', pan, tilt, velocity
sleep(2.0)
print 'Absolute move completed', ret
print
#Relative move functions --NO ERRORS BUT CAMERA DOES NOT MOVE
def move_relative(self, pan, tilt, velocity):
self.requestr.Translation.PanTilt._x = pan
self.requestr.Translation.PanTilt._y = tilt
self.requestr.Speed.PanTilt._x = velocity
self.requestr.Speed.PanTilt._y = velocity
ret = self.ptz.RelativeMove(self.requestr) #capture the response so 'ret' below is meaningful
print 'Relative move pan-tilt', pan, tilt, velocity
sleep(2.0)
print 'Relative move completed', ret
print
#Preset functions: set, query and go to
def set_preset(self, name):
self.requestp.PresetName = name
self.requestp.PresetToken = '1'
self.preset = self.ptz.SetPreset(self.requestp) #returns the PresetToken
print 'Set Preset:'
print self.preset
print
def get_preset(self):
self.ptzPresetsList = self.ptz.GetPresets(self.requestc) #GetPresets only needs a ProfileToken, so reusing the ContinuousMove request works
print 'Got preset:'
print self.ptzPresetsList[0]
print
def goto_preset(self, name):
self.requestg.PresetToken = '1'
self.ptz.GotoPreset(self.requestg)
print 'Going to Preset:'
print name
print
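As an aside, the __init__ above fetches GetConfigurationOptions "for getting option ranges" but never uses the result. If a camera does support absolute moves, requesting a position or speed outside the advertised range is a common way to get exactly this symptom (request accepted, no motion). A minimal sketch of clamping against those ranges; it assumes the options object is kept on the instance (e.g. as self.ptz_configuration_options, which the class above does not currently do), and the helper name is hypothetical:

def clamped_abs_move(cam, pan, tilt, velocity):
    #Clamp the requested pan/tilt to the advertised absolute position range
    #before issuing the move; cam is a ptzcam instance as defined above
    space = cam.ptz_configuration_options.Spaces.AbsolutePanTiltPositionSpace[0]
    pan = max(space.XRange.Min, min(space.XRange.Max, pan))
    tilt = max(space.YRange.Min, min(space.YRange.Max, tilt))
    cam.move_abspantilt(pan, tilt, velocity)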
@Ottavio: Sorry that I did not make it clear that the camera I used for this test, a Netcat PT-PTZ2084XM-A, reported via ONVIF query that it supports Absolute and Relative moves. I have since verified via the onvif.org site that this camera has not been tested and certified to meet ONVIF standards. I have also verified that the above code works correctly with an Amcrest IP2M-841B PTZ camera. The upshot of all of this: never trust the claim that a camera is ONVIF 2.x compatible without testing it. Even the Amcrest has problems with both ONVIF and CGI commands for zoom. Neither Netcat nor Amcrest has been very helpful in resolving these technical problems.

AbsoluteMove and RelativeMove are CONDITIONAL MANDATORY in the Profile S specification, so it is not guaranteed a priori that they are supported.
You need to check the camera's capabilities.
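A minimal sketch of such a check with the same quatanium python-onvif client as in the question (the attribute names follow the ONVIF PTZ WSDL; treat this as a sketch to adapt, not something verified against this particular camera):

from onvif import ONVIFCamera

mycam = ONVIFCamera('192.168.1.10', 8899, 'admin', 'admin', '/etc/onvif/wsdl/')
ptz = mycam.create_ptz_service()
node = ptz.GetNodes()[0] # first (usually the only) PTZ node
spaces = node.SupportedPTZSpaces
#If a space list is missing or empty, the camera does not support that move
#type, even when it accepts the request without raising an error.
print 'Absolute pan/tilt space:', getattr(spaces, 'AbsolutePanTiltPositionSpace', None)
print 'Relative pan/tilt space:', getattr(spaces, 'RelativePanTiltTranslationSpace', None)
print 'Continuous velocity space:', getattr(spaces, 'ContinuousPanTiltVelocitySpace', None)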

Related

Open a video with alpha and the webcam input in the same window, using PyQt

I have a video with an alpha channel; it is in .flv or .mov format.
The code below can open this video and control its execution:
from PyQt5.QtWidgets import QApplication, QWidget, QPushButton, QHBoxLayout, QVBoxLayout
from PyQt5.QtWidgets import QSizePolicy, QFileDialog, QLabel, QSlider, QStyle
from PyQt5.QtGui import QIcon, QPalette, QPixmap
from PyQt5.QtMultimedia import QMediaPlayer, QMediaContent
from PyQt5.QtMultimediaWidgets import QVideoWidget
from PyQt5.QtCore import Qt, QUrl
import sys
class Window(QWidget):
def __init__(self, windowsize):
super().__init__()
self.setWindowTitle("Rodolfo Nogueira Player")
# Window position and size
self.setGeometry(350, 100, 700, 500)
self.setWindowIcon(QIcon('player.png'))
self.windowsize = windowsize
p = self.palette()
p.setColor(QPalette.Window, Qt.black)
self.setPalette(p)
self.init_ui()
self.show()
def init_ui(self):
# Create the media player object
self.mediaPlayer = QMediaPlayer(None, QMediaPlayer.VideoSurface)
#Create the video widget object
videowidget = QVideoWidget()
#Create the open button
openBtn = QPushButton('Abrir Video')
openBtn.clicked.connect(self.open_file)
# Create the play button
self.playBtn = QPushButton()
self.playBtn.setEnabled(False)
self.playBtn.setIcon(self.style().standardIcon(QStyle.SP_MediaPlay))
self.playBtn.clicked.connect(self.play_video)
# Create the slider
self.slider =QSlider(Qt.Horizontal)
self.slider.setRange(0, 0)
self.slider.sliderMoved.connect(self.set_position)
# Create a label - it will receive the error message, if one occurs
self.label = QLabel()
# Set the size policy
self.label.setSizePolicy(QSizePolicy.Preferred, QSizePolicy.Maximum)
# Create an hbox layout
hboxLayout = QHBoxLayout()
hboxLayout.setContentsMargins(0, 0, 0, 0)
# Add the widgets to the hbox
hboxLayout.addWidget(openBtn)
hboxLayout.addWidget(self.playBtn)
hboxLayout.addWidget(self.slider)
# Create a vbox layout
vboxLayout = QVBoxLayout()
vboxLayout.addWidget(videowidget)
vboxLayout.addLayout(hboxLayout)
vboxLayout.addWidget(self.label) # will receive the error message, if one occurs
self.setLayout(vboxLayout)
self.mediaPlayer.setVideoOutput(videowidget)
# mediaPlayer signals
self.mediaPlayer.stateChanged.connect(self.media_state_change)
self.mediaPlayer.positionChanged.connect(self.position_changed)
self.mediaPlayer.durationChanged.connect(self.duration_changed)
self.mediaPlayer.error.connect(self.handle_errors) # so handle_errors below actually runs
def open_file(self):
filename, _ = QFileDialog.getOpenFileName(self, "Open Video")
filename = str(filename).replace('/', '\\')
if filename != '':
self.mediaPlayer.setMedia(QMediaContent(QUrl.fromLocalFile(filename)))
self.playBtn.setEnabled(True)
def play_video(self):
if self.mediaPlayer.state() == QMediaPlayer.PlayingState:
print('trying to pause')
self.mediaPlayer.pause()
else:
print('trying to play')
self.mediaPlayer.play()
def media_state_change(self, state):
if self.mediaPlayer.state() == QMediaPlayer.PlayingState:
self.playBtn.setIcon(self.style().standardIcon(QStyle.SP_MediaPause))
else:
self.playBtn.setIcon(self.style().standardIcon(QStyle.SP_MediaPlay))
def position_changed(self, position):
self.slider.setValue(position)
def duration_changed(self, duration):
self.slider.setRange(0, duration)
def set_position(self, position):
self.mediaPlayer.setPosition(position)
def handle_errors(self):
self.playBtn.setEnabled(False)
self.label.setText("Error: " + self.mediaPlayer.errorString())
app = QApplication(sys.argv)
screensize = app.desktop().availableGeometry().size()
window = Window(screensize)
sys.exit(app.exec_())
Now I want to open the input of a webcam in the background, in the same window. Preferably when I play the video I want the cam input to open in the background.
I even got this code that uses PyQt to open the cam, but I can't get the two together in the same window:
# Imports needed to run this snippet stand-alone
import os
import sys
import time
from PyQt5.QtWidgets import (QMainWindow, QStatusBar, QToolBar, QAction,
QComboBox, QErrorMessage, QFileDialog)
from PyQt5.QtMultimedia import QCamera, QCameraInfo, QCameraImageCapture
from PyQt5.QtMultimediaWidgets import QCameraViewfinder
class MainWindow(QMainWindow):
# constructor
def __init__(self):
super().__init__()
# setting geometry
self.setGeometry(100, 100,
800, 600)
# setting style sheet
self.setStyleSheet("background : lightgrey;")
# getting available cameras
self.available_cameras = QCameraInfo.availableCameras()
# if no camera found
if not self.available_cameras:
# exit the code
sys.exit()
# creating a status bar
self.status = QStatusBar()
# setting style sheet to the status bar
self.status.setStyleSheet("background : white;")
# adding status bar to the main window
self.setStatusBar(self.status)
# path to save
self.save_path = ""
# creating a QCameraViewfinder object
self.viewfinder = QCameraViewfinder()
# showing this viewfinder
self.viewfinder.show()
# making it central widget of main window
self.setCentralWidget(self.viewfinder)
# Set the default camera.
self.select_camera(0)
# creating a tool bar
toolbar = QToolBar("Camera Tool Bar")
# adding tool bar to main window
self.addToolBar(toolbar)
# creating a photo action to take photo
click_action = QAction("Click photo", self)
# adding status tip to the photo action
click_action.setStatusTip("This will capture picture")
# adding tool tip
click_action.setToolTip("Capture picture")
# adding action to it
# calling take_photo method
click_action.triggered.connect(self.click_photo)
# adding this to the tool bar
toolbar.addAction(click_action)
# similarly creating action for changing save folder
change_folder_action = QAction("Change save location",
self)
# adding status tip
change_folder_action.setStatusTip("Change the folder where pictures will be saved.")
# adding tool tip to it
change_folder_action.setToolTip("Change save location")
# setting calling method to the change folder action
# when triggered signal is emitted
change_folder_action.triggered.connect(self.change_folder)
# adding this to the tool bar
toolbar.addAction(change_folder_action)
# creating a combo box for selecting camera
camera_selector = QComboBox()
# adding status tip to it
camera_selector.setStatusTip("Choose camera to take pictures")
# adding tool tip to it
camera_selector.setToolTip("Select Camera")
camera_selector.setToolTipDuration(2500)
# adding items to the combo box
camera_selector.addItems([camera.description()
for camera in self.available_cameras])
# adding action to the combo box
# calling the select camera method
camera_selector.currentIndexChanged.connect(self.select_camera)
# adding this to tool bar
toolbar.addWidget(camera_selector)
# setting tool bar stylesheet
toolbar.setStyleSheet("background : white;")
# setting window title
self.setWindowTitle("PyQt5 Cam")
# showing the main window
self.show()
# method to select camera
def select_camera(self, i):
# getting the selected camera
self.camera = QCamera(self.available_cameras[i])
# setting view finder to the camera
self.camera.setViewfinder(self.viewfinder)
# setting capture mode to the camera
self.camera.setCaptureMode(QCamera.CaptureStillImage)
# if any error occur show the alert
self.camera.error.connect(lambda: self.alert(self.camera.errorString()))
# start the camera
self.camera.start()
# creating a QCameraImageCapture object
self.capture = QCameraImageCapture(self.camera)
# showing alert if error occur
self.capture.error.connect(lambda error_msg, error,
msg: self.alert(msg))
# when image captured showing message
self.capture.imageCaptured.connect(lambda d,
i: self.status.showMessage("Image captured : "
+ str(self.save_seq)))
# getting current camera name
self.current_camera_name = self.available_cameras[i].description()
# initial save sequence
self.save_seq = 0
# method to take photo
def click_photo(self):
# time stamp
timestamp = time.strftime("%d-%b-%Y-%H_%M_%S")
# capture the image and save it on the save path
self.capture.capture(os.path.join(self.save_path,
"%s-%04d-%s.jpg" % (
self.current_camera_name,
self.save_seq,
timestamp
)))
# increment the sequence
self.save_seq += 1
# change folder method
def change_folder(self):
# open the dialog to select path
path = QFileDialog.getExistingDirectory(self,
"Picture Location", "")
# if path is selected
if path:
# update the path
self.save_path = path
# update the sequence
self.save_seq = 0
# method for alerts
def alert(self, msg):
# error message
error = QErrorMessage(self)
# setting text to the error message
error.showMessage(msg)
Can anyone give me a hint on how to put these two codes together?

GODOT - Full Attack Animation Not Playing

I have a simple game I made in the Godot game engine, and I have implemented some animations for my game's main character.
1. Run
A simple run animation, played when the character is moving.
2. Idle
More like a single image, played when the character is not moving.
3. Attack
Played when the user presses the Left Mouse Button.
My issue is with the attack animation: when I press the Left Mouse Button the animation doesn't play; instead I get the first frame of the animation and then the character goes back to the idle animation.
This is my Character's Code:
extends KinematicBody2D
var _inputVec = Vector2.ZERO
var VELOCITY = Vector2.ZERO
var LAST_INPUT = Vector2.ZERO
const MAX_SPEED = 70
const ACCELERATION = 500
const FRICTION = 500
onready var animationPlayer = $AnimationPlayer
func _ready():
print("game started!")
func _physics_process(delta):
_inputVec.x = Input.get_action_strength("ui_right") - Input.get_action_strength("ui_left")
_inputVec.y = Input.get_action_strength("ui_down") - Input.get_action_strength("ui_up")
_inputVec = _inputVec.normalized()
if _inputVec != Vector2.ZERO:
if _inputVec.x > 0:
animationPlayer.play("playerRunRight")
elif _inputVec.y < 0:
animationPlayer.play("playerRunUp")
elif _inputVec.y > 0:
animationPlayer.play("playerRunDown")
else:
animationPlayer.play("playerRunLeft")
VELOCITY = VELOCITY.move_toward(_inputVec * MAX_SPEED, ACCELERATION * delta)
LAST_INPUT = _inputVec
else:
VELOCITY = VELOCITY.move_toward(Vector2.ZERO, FRICTION * delta)
if Input.is_action_just_pressed("ui_lmb"):
if LAST_INPUT.x > 0:
animationPlayer.play("playerAttackRight")
elif LAST_INPUT.y < 0:
animationPlayer.play("playerAttackUp")
elif LAST_INPUT.y > 0:
animationPlayer.play("playerAttackDown")
else:
animationPlayer.play("playerAttackLeft")
else:
if LAST_INPUT.x > 0:
animationPlayer.play("playerIdleRight")
elif LAST_INPUT.y < 0:
animationPlayer.play("playerIdleUp")
elif LAST_INPUT.y > 0:
animationPlayer.play("playerIdleDown")
else:
animationPlayer.play("playerIdleLeft")
VELOCITY = move_and_slide(VELOCITY)
The full project is available at my GitHub repo.
Remember that _physics_process runs once per (physics) frame.
So, one frame you pressed the left mouse button, and this line got to execute:
animationPlayer.play("playerAttackRight")
But next (physics) frame, you had not just pressed the left mouse button, so this conditional is false:
if Input.is_action_just_pressed("ui_lmb"):
And then this line gets to execute:
animationPlayer.play("playerIdleRight")
As a result, you only see about one frame of the "playerAttackRight" animation.
You are going to need to keep track of the current state (running, attacking, idle). The rule is that you can change from running to idle immediately, but you can only change from attacking to idle when the attack animation ends.
You can keep track of the current state with a variable, of course. Combine the input with the current value of the state variable to decide the new state. Then, separately, read the state variable and decide which animation to play. You may also want to set the state variable when some animations end, as in the sketch below.
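A minimal GDScript sketch of that idea (the attacking flag, the early return and the single attack animation name are assumptions; it uses the yield approach explained next):

var attacking = false # true while the attack animation is playing

func _physics_process(delta):
    if attacking:
        return # don't let the run/idle branches interrupt the attack
    if Input.is_action_just_pressed("ui_lmb"):
        attacking = true
        animationPlayer.play("playerAttackRight")
        yield(animationPlayer, "animation_finished")
        attacking = false
        return
    # ... the existing run/idle animation selection and move_and_slide() go here ...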
And to do something when an animation ends, you can either resort to yield:
yield(animationPlayer, "animation_finished")
Which will have your code resume after it receives the "animation_finished" signal.
Or, otherwise you can connect to the "animation_finished" signal.
By the way, you can also queue animations:
animationPlayer.queue("name_of_some_animation")
While using AnimationPlayer the way you do is OK, when it gets complex there is another tool you should consider: AnimationTree.
Create an AnimationTree node, give it your animation player, and set the root to a new AnimationNodeStateMachine. There you can create your state machine, and configure if the transition between them is immediate or at the end.
Then, from code, you can get the state machine playback object, like this:
var state_machine = $AnimationTree.get("parameters/playback")
You can ask it what the current state is with:
var state:String = state_machine.get_current_node()
Which you can use as part of the decision of which state you want to go to. And then tell it you want it to go to a different state, like this:
state_machine.travel("name_of_some_state")
And using that it will respect the transitions you set, so you do not have to worry about it in code.
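Putting those pieces together, a short sketch (the state name "attack" is an assumption; use whatever you named the states in your AnimationNodeStateMachine):

onready var state_machine = $AnimationTree.get("parameters/playback")

func _physics_process(delta):
    # Ask the state machine where it is; only request the attack if we are
    # not already attacking. The transition rules you configured decide
    # when the travel actually happens.
    var state = state_machine.get_current_node()
    if state != "attack" and Input.is_action_just_pressed("ui_lmb"):
        state_machine.travel("attack")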
You can find more information about using AnimationTree at:
Using AnimationTree
Controlling Animation States

Pepper robot: on which body frame is "LandmarkDetection" based?

For a while I have tried to figure out which coordinate or body frame the "LandmarkDetection" module is based on. Because I need the (0, 0, 0) position for the calculation that achieves self-localization for the robot with the help of landmark detection, I think this is important...
I have it working together with the Pepper robot under NAOqi 2.5.5, which I hope is compatible.
You can read the whole code of "LandmarkDetection.py" here:
"LandmarkDetection.py":
#! /usr/bin/env python
# -*- encoding: UTF-8 -*-
"""Example: Demonstrates a way to localize the robot with
ALLandMarkDetection"""
import qi
import time
import sys
import argparse
import math
import almath
class LandmarkDetector(object):
"""
We first instantiate a proxy to the ALLandMarkDetection module
Note that this module should be loaded on the robot's naoqi.
The module output its results in ALMemory in a variable
called "LandmarkDetected".
We then read this ALMemory value and check whether we get
interesting things.
After that we get the related position of the landmark compared to robot.
"""
def __init__(self, app):
"""
Initialisation of qi framework and event detection.
"""
super(LandmarkDetector, self).__init__()
app.start()
session = app.session
# Get the service ALMemory.
self.memory = session.service("ALMemory")
# Connect the event callback.
# Get the services ALMotion & ALRobotPosture.
self.motion_service = session.service("ALMotion")
self.posture_service = session.service("ALRobotPosture")
self.subscriber = self.memory.subscriber("LandmarkDetected")
print "self.subscriber = self.memory.subscriber(LandmarkDetected)"
self.subscriber.signal.connect(self.on_landmark_detected)
print "self.subscriber.signal.connect(self.on_landmark_detected)"
# Get the services ALTextToSpeech, ALLandMarkDetection and ALMotion.
self.tts = session.service("ALTextToSpeech")
self.landmark_detection = session.service("ALLandMarkDetection")
# print "self.landmark_detection" is repr(self.landmark_detection)
self.motion_service = session.service("ALMotion")
self.landmark_detection.subscribe("LandmarkDetector", 500, 0.0 )
print "self.landmark_detection.subscribe(LandmarkDetector, 500, 0.0 )"
self.got_landmark = False
# Set here the size of the landmark in meters.
self.landmarkTheoreticalSize = 0.06 #in meters; 0.05 or 0.06?
# Set here the current camera ("CameraTop" or "CameraBottom").
self.currentCamera = "CameraTop"
def on_landmark_detected(self, markData):
"""
Callback for event LandmarkDetected.
"""
while markData == [] : # empty value when the landmark disappears
self.got_landmark = False
self.motion_service.moveTo(0, 0, 0.1 * math.pi)
if not self.got_landmark: # only speak the first time a landmark appears
self.got_landmark = True
#stop.motion_service.moveTo
print "Ich sehe eine Landmarke! "
self.tts.say("Ich sehe eine Landmarke! ")
# Retrieve landmark center position in radians.
wzCamera = markData[1][0][0][1]
wyCamera = markData[1][0][0][2]
# Retrieve landmark angular size in radians.
angularSize = markData[1][0][0][3]
# Compute distance to landmark.
distanceFromCameraToLandmark = self.landmarkTheoreticalSize / ( 2 * math.tan( angularSize / 2))
# Get current camera position in NAO space.
transform = self.motion_service.getTransform(self.currentCamera, 2, True)
transformList = almath.vectorFloat(transform)
robotToCamera = almath.Transform(transformList)
# Compute the rotation to point towards the landmark.
cameraToLandmarkRotationTransform = almath.Transform_from3DRotation(0, wyCamera, wzCamera)
# Compute the translation to reach the landmark.
cameraToLandmarkTranslationTransform = almath.Transform(distanceFromCameraToLandmark, 0, 0)
# Combine all transformations to get the landmark position in NAO space.
robotToLandmark = robotToCamera * cameraToLandmarkRotationTransform *cameraToLandmarkTranslationTransform
# robotTurnAroundAngle = almath.rotationFromAngleDirection(0, 1, 1, 1)
# print "robotTurnAroundAngle = ", robotTurnAroundAngle
print "x " + str(robotToLandmark.r1_c4) + " (in meters)"
print "y " + str(robotToLandmark.r2_c4) + " (in meters)"
print "z " + str(robotToLandmark.r3_c4) + " (in meters)"
def run(self):
"""
Loop on, wait for events until manual interruption.
"""
# Wake up robot
self.motion_service.wakeUp()
# Send robot to Pose Init
self.posture_service.goToPosture("StandInit", 0.5)
# Example showing how to get a simplified robot position in world.
useSensorValues = False
result = self.motion_service.getRobotPosition(useSensorValues)
print "Robot Position", result
# Example showing how to use this information to know the robot's displacement.
useSensorValues = False
# initRobotPosition = almath.Pose2D(self.motion_service.getRobotPosition(useSensorValues))
# Make the robot move
for i in range(1, 12, 1):
self.motion_service.moveTo(0, 0, 0.1 * math.pi)
print "self.motion_service.moveTo(0, 0, (0.1)*math.pi)"
print "Starting LandmarkDetector"
try:
while True:
time.sleep(1)
except KeyboardInterrupt:
print "Interrupted by user, stopping LandmarkDetector"
self.landmark_detection.unsubscribe("LandmarkDetector")
#stop
sys.exit(0)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--ip", type=str, default="10.0.0.10",
help="Robot IP address. On robot or Local Naoqi: use
'10.0.0.10'.")
parser.add_argument("--port", type=int, default=9559,
help="Naoqi port number")
args = parser.parse_args()
try:
# Initialize qi framework.
connection_url = "tcp://" + args.ip + ":" + str(args.port)
app = qi.Application(["LandmarkDetector", "--qi-url=" + connection_url])
except RuntimeError:
print ("Can't connect to Naoqi at ip \"" + args.ip + "\" on port " + str(args.port) +".\n"
"Please check your script arguments. Run with -h option for help.")
sys.exit(1)
landmark_detector = LandmarkDetector(app)
landmark_detector.run()
At line 69, as the picture shows, it says NAO space, but after googling and looking for this concept on the official Aldebaran site, I didn't find it.
At line 70, for the function getTransform() in "transform = self.motion_service.getTransform(self.currentCamera, 2, True)", combining this with the information on the Aldebaran website (Cartesian control API), the parameter "2" could represent FRAME_ROBOT.
According to the official documentation, FRAME_ROBOT (I think for NAO) is one of the three spatial references used by ALMotion. It is the average of the two feet positions projected around a vertical z axis. This space is useful because the x axis is always forwards, so it provides a natural ego-centric reference.
But which from the following body frames is the FRAME coordinate for Pepper?
Body frames - Aldebaran
And basically the sensor values should be based on a coordinate frame like the one in this picture:
I have just tracked the values of the transformation:
('wzCamera = markData[1][0][0][1] = :', 0.20727014541625977)
('wyCamera = markData[1][0][0][2] = :', 0.13570936024188995)
robotToCamera = [[0.984479, 0.00163947, -0.175494, 0.0191678]
[-0.00241521, 0.999988, -0.00420686, -0.00231434]
[0.175485, 0.00456542, 0.984471, 1.16607]]
cameraToLandmarkRotationTransform =[[0.969599, -0.205789, 0.132397, 0]
[0.203897, 0.978596, 0.0278419, 0]
[-0.135293, 0, 0.990806, 0]]
cameraToLandmarkTranslationTransform=[[1, 0, 0, 1.32545]
[0, 1, 0, 0]
[0, 0, 1, 0]]
robotToLandmark = [[0.978627, -0.200991, -0.0434926, 1.31629]
[0.202122, 0.979082, 0.0233536, 0.265588]
[0.0378889, -0.0316453, 0.998781, 1.21629]]
Combining these with the information in the following picture from the video Robotics: 3D World to First Person Transformation,
I am really wondering whether the "robot" in "robotToCamera" means the original camera point... Does anyone have an idea how to interpret the values of the "robotToCamera" matrix?
Observing the values of "robotToCamera": it changes a little when the robot rotates itself (the whole body + head; it doesn't nod at the same time) to look for the landmarks on the walls, like this:
1. robotToCamera =
[[0.979136, 0.0719852, -0.190025, 0.0157459]
[-0.0700786, 0.997401, 0.0167426, -0.00282726]
[0.190737, -0.00307653, 0.981636, 1.16688]]
2. robotToCamera =
[[0.981156, 0.0583434, -0.184198, 0.0168686]
[-0.057959, 0.998291, 0.00747475, -0.00518941]
[0.184319, 0.00334202, 0.982861, 1.16652]]
3. robotToCamera =
[[0.981628, 0.0598889, -0.18116, 0.0196037]
[-0.0594456, 0.9982, 0.0078808, -0.00525115]
[0.181306, 0.00303314, 0.983422, 1.16649]]
4. robotToCamera =
[[0.982268, 0.0583533, -0.178169, 0.0191384]
[-0.0579817, 0.998291, 0.0072969, -0.00522032]
[0.17829, 0.00316301, 0.983973, 1.16627] ]
5. robotToCamera =
[[0.984, 0.021, -0.178, 0.016]
[-0.021, 1.00, 0.0027, -0.0019]
[0.178, 0.001, 0.984, 1.17] ]
Here are a couple of my questions:
Could we just calculate the position of the "Robot" with the help of "robotToCamera"?
Could we get a value for "CameraToRobot" using the values of "robotToCamera", e.g. by calculating the inverse matrix or the adjoint matrix?
For the fifth "robotToCamera" matrix, if we calculate the inverse of its rotation part
[[0.984, 0.021, -0.178]
[-0.021, 1.00, 0.0027]
[0.178, 0.001, 0.984]]
we get (up to rounding)
[[0.984, -0.021, 0.178]
[0.021, 1.00, 0.001]
[-0.178, 0.0027, 0.984]]
which equals its transpose, so it should be an orthogonal matrix, according to Wikipedia.
What info can I get from this?
At 6:10 of the video, the teacher goes on to explain something about the lens:
In my situation, is there any need to take the lens (L) into account, or not?
Landmark detection detects the NAOmarks with the robot's top camera. The doc explains that you get a "ShapeInfo" in the event value, such as:
ShapeInfo = [1, alpha, beta, sizeX, sizeY, heading]
alpha and beta represent the Naomark's location in terms of camera angles
sizeX and sizeY are the mark's size in camera angles
the heading angle describes how the Naomark is oriented about the vertical axis with regard to the robot's head.
The code then calls getTransform which will give you the direction of the camera, so you can relate the detected mark angles to an actual direction (X,Y,Z) in front of the robot (becoming the robot frame).
marks are here if needed: http://doc.aldebaran.com/2-5/naoqi/vision/allandmarkdetection.html
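On question 2 above: yes, cameraToRobot is simply the inverse of robotToCamera, and because the rotation block of a rigid transform is orthogonal, the inverse has the closed form [R | t]^-1 = [R^T | -R^T t]. A quick numpy sketch (numpy assumed available) using the first robotToCamera printout from the question:

import numpy as np

robotToCamera = np.array([
    [0.984479, 0.00163947, -0.175494, 0.0191678],
    [-0.00241521, 0.999988, -0.00420686, -0.00231434],
    [0.175485, 0.00456542, 0.984471, 1.16607],
    [0.0, 0.0, 0.0, 1.0],
])

R = robotToCamera[:3, :3] # rotation block
t = robotToCamera[:3, 3]  # translation column

cameraToRobot = np.eye(4)
cameraToRobot[:3, :3] = R.T          # inverse of an orthogonal rotation is its transpose
cameraToRobot[:3, 3] = -R.T.dot(t)   # map the translation back

print cameraToRobot

Note also that the translation column of robotToCamera (about [0.019, -0.002, 1.166]) is the camera's position expressed in the robot frame: roughly 1.17 m straight up, which matches Pepper's head-mounted top camera and suggests the robot frame origin sits on the ground below the robot.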

How to set the gravity on a GTK3+ window in Python

I run Python 2.7.13 on Windows 7.
I am creating a window with GTK (from PyGObject 3.18.2).
I am running windows 7 with a custom shell and I am trying to make a toolbar at the bottom of the screen.
I use a grid to divide the window in a top and a bottom part.
The bottom part is always visible.
The top part must show above the bottom part on mouse enter and hide on mouse leave without moving the bottom part.
The default positioning of a window uses the top-left corner of the window, but this will cause the bottom part to shift up to the position of the top part when the top part is hidden.
I think I understand that I have to use
set_gravity(Gdk.Gravity.SOUTH_WEST)
to change this behaviour
I do not get errors, but it seems this setting is ignored. The placement of the window is not affected at all.
What am I missing?
Anything wrong in the way I call set_gravity()?
Is set_gravity the right way to achieve this?
I read Set window gravity in PyGObject?, but this question is still not answered
Here is the code I am trying to get working:
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, Gdk
class MyWindow(Gtk.Window):
def __init__(self):
Gtk.Window.__init__(self, title="Test")
self.set_decorated(0)
self.screen = Gdk.Screen.get_default()
self.connect("destroy", self.destroy)
self.connect("enter-notify-event", self.mouseenter)
self.connect("leave-notify-event", self.mouseleave)
self.label1 = Gtk.Label("Label1\n line1\n line2")
self.label2 = Gtk.Label("Label2")
self.label1.set_hexpand(True)
self.label2.set_hexpand(True)
self.maingrid = Gtk.Grid()
self.add(self.maingrid)
self.maingrid.attach(self.label1, 0, 0, 1, 1)
self.maingrid.attach(self.label2, 0, 1, 1, 1)
self.set_gravity(Gdk.Gravity.SOUTH_WEST) # looks like this is ignored
print self.get_gravity()
def mouseleave(self, widget, data=None):
print "mouse leave"
self.label1.hide()
label2_height = self.label2.get_allocation().height
self.resize(self.screen.width(), label2_height)
def mouseenter(self, widget, data=None):
print "mouse enter"
label1_height = self.label1.get_allocation().height
label2_height = self.label2.get_allocation().height
self.resize(self.screen.width(), label1_height + label2_height)
self.label1.show()
# Here I expect label2 to stay where it is at the bottom of the screen and label1 to be drawn above label2.
# But label2 is pushed down to make space for label1
# (normal behaviour if Gdk.Gravity.SOUTH_WEST is not set)
def destroy(self, widget, data=None):
print "destroy signal occurred"
Gtk.main_quit()
win = MyWindow()
win.show_all()
win.label1.hide()
height = win.label2.get_allocation().height
win.resize(win.screen.width(), height)
#win.move(0, win.screen.height()) # I expect this to place the window at the bottom of the screen
# if Gdk.Gravity.SOUTH_WEST is set, but it is placed offscreen
# (normal behaviour if Gdk.Gravity.SOUTH_WEST is not set)
win.move(0, win.screen.height() - 200) # shift it up 200 pixels to see what is happening
Gtk.main()
Here is a working version where I move the window to its proper position after resizing. Moving the window makes it flicker, and it also generates the leave-notify-event and the enter-notify-event.
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, Gdk
class MyWindow(Gtk.Window):
def __init__(self):
Gtk.Window.__init__(self, title="Test")
self.set_decorated(0)
self.screen = Gdk.Screen.get_default()
# self.set_gravity(Gdk.Gravity.SOUTH_WEST)
self.connect("destroy", self.destroy)
self.connect("enter-notify-event", self.mouseenter)
self.connect("leave-notify-event", self.mouseleave)
self.label1 = Gtk.Label("Label1\n line1\n line2")
self.label2 = Gtk.Label("Label2")
self.label1.set_hexpand(True)
self.label2.set_hexpand(True)
self.maingrid = Gtk.Grid()
self.add(self.maingrid)
self.maingrid.attach(self.label1, 0, 0, 1, 1)
self.maingrid.attach(self.label2, 0, 1, 1, 1)
self.ismoving = 0
def mouseleave(self, widget, data=None):
print "mouse leave"
if self.ismoving:
print "window is moving"
else:
self.label1.hide()
label2_height = self.label2.get_allocation().height
self.resize(self.screen.width(), label2_height)
self.move(0, self.screen.height() - label2_height)
def mouseenter(self, widget, data=None):
print "mouse enter"
if self.ismoving: # moving the window generates a leave-notify-event and a enter-notify-event
self.ismoving = 0 # ignore these events when moving the window
else:
self.ismoving = 1
label1_height = self.label1.get_allocation().height
label2_height = self.label2.get_allocation().height
self.resize(self.screen.width(), label1_height + label2_height)
self.move(0, self.screen.height()-label1_height - label2_height)
self.label1.show()
def destroy(self, widget, data=None):
print "destroy signal occurred"
Gtk.main_quit()
win = MyWindow()
win.show_all()
win.label1.hide()
height = win.label2.get_allocation().height
win.resize(win.screen.width(), height)
win.move(0, win.screen.height() - height)
Gtk.main()
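One possible way to reduce the flicker, sketched here as an assumption rather than a tested fix: Gdk.Window.move_resize() applies the move and the resize in a single request on the underlying Gdk window, so the intermediate wrongly-positioned frame is never drawn. A variant of the mouseenter handler above (the ismoving guard may still be needed, since the window still moves and still generates crossing events):

def mouseenter(self, widget, data=None):
    label1_height = self.label1.get_allocation().height
    label2_height = self.label2.get_allocation().height
    total = label1_height + label2_height
    gdk_win = self.get_window()  # None until the window is realized
    if gdk_win is not None:
        gdk_win.move_resize(0, self.screen.height() - total,
                            self.screen.width(), total)
    self.label1.show()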
Based on AlexB's comment I assume my code is correct, but it is not working for me. I don't see any reason why it would not run under Python 2. Maybe there is an issue with the window manager. I'll investigate.
Did anyone successfully use set_gravity() on Windows?
The documentation indicates it may or may not work, depending on the window manager. It doesn't for me on Xubuntu 18.04.

PsychoPy polygon on top of image

Using PsychoPy v1.81.03 on a Mac, I want to draw a polygon (e.g. a triangle) on top of an image.
So far, my image stays always on top and thus hides the polygon, no matter the order I put them in. This also stays true if I have the polygon start a frame later than the image.
e.g. see in the code below (created with the Builder before compiling) how both a blue square and a red triangle are supposed to start at frame 0, but when you run it the blue square always covers the red triangle!?
Is there a way to have the polygon on top? Do I somehow need to merge the image and polygon before drawing them?
Thank you so much for your help!!
Sebastian
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
This experiment was created using PsychoPy2 Experiment Builder (v1.81.03), Sun Jan 18 20:44:26 2015
If you publish work using this script please cite the relevant PsychoPy publications
Peirce, JW (2007) PsychoPy - Psychophysics software in Python. Journal of Neuroscience Methods, 162(1-2), 8-13.
Peirce, JW (2009) Generating stimuli for neuroscience using PsychoPy. Frontiers in Neuroinformatics, 2:10. doi: 10.3389/neuro.11.010.2008
"""
from __future__ import division # so that 1/3=0.333 instead of 1/3=0
from psychopy import visual, core, data, event, logging, sound, gui
from psychopy.constants import * # things like STARTED, FINISHED
import numpy as np # whole numpy lib is available, prepend 'np.'
from numpy import sin, cos, tan, log, log10, pi, average, sqrt, std, deg2rad, rad2deg, linspace, asarray
from numpy.random import random, randint, normal, shuffle
import os # handy system and path functions
# Ensure that relative paths start from the same directory as this script
_thisDir = os.path.dirname(os.path.abspath(__file__))
os.chdir(_thisDir)
# Store info about the experiment session
expName = u'test_triangle_over_square' # from the Builder filename that created this script
expInfo = {'participant':'', 'session':'001'}
dlg = gui.DlgFromDict(dictionary=expInfo, title=expName)
if dlg.OK == False: core.quit() # user pressed cancel
expInfo['date'] = data.getDateStr() # add a simple timestamp
expInfo['expName'] = expName
# Data file name stem = absolute path + name; later add .psyexp, .csv, .log, etc
filename = _thisDir + os.sep + 'data/%s_%s_%s' %(expInfo['participant'], expName, expInfo['date'])
# An ExperimentHandler isn't essential but helps with data saving
thisExp = data.ExperimentHandler(name=expName, version='',
extraInfo=expInfo, runtimeInfo=None,
originPath=None,
savePickle=True, saveWideText=True,
dataFileName=filename)
#save a log file for detail verbose info
logFile = logging.LogFile(filename+'.log', level=logging.EXP)
logging.console.setLevel(logging.WARNING) # this outputs to the screen, not a file
endExpNow = False # flag for 'escape' or other condition => quit the exp
# Start Code - component code to be run before the window creation
# Setup the Window
win = visual.Window(size=(1280, 800), fullscr=True, screen=0, allowGUI=False, allowStencil=False,
monitor='testMonitor', color=[0,0,0], colorSpace='rgb',
blendMode='avg', useFBO=True,
)
# store frame rate of monitor if we can measure it successfully
expInfo['frameRate']=win.getActualFrameRate()
if expInfo['frameRate']!=None:
frameDur = 1.0/round(expInfo['frameRate'])
else:
frameDur = 1.0/60.0 # couldn't get a reliable measure so guess
# Initialize components for Routine "trial"
trialClock = core.Clock()
ISI = core.StaticPeriod(win=win, screenHz=expInfo['frameRate'], name='ISI')
square = visual.ImageStim(win=win, name='square',units='pix',
image=None, mask=None,
ori=0, pos=[0, 0], size=[200, 200],
color=u'blue', colorSpace='rgb', opacity=1,
flipHoriz=False, flipVert=False,
texRes=128, interpolate=True, depth=-1.0)
polygon = visual.ShapeStim(win=win, name='polygon',units='pix',
vertices = [[-[200, 300][0]/2.0,-[200, 300][1]/2.0], [+[200, 300][0]/2.0,-[200, 300][1]/2.0], [0,[200, 300][1]/2.0]],
ori=0, pos=[0, 0],
lineWidth=1, lineColor=[1,1,1], lineColorSpace='rgb',
fillColor=u'red', fillColorSpace='rgb',
opacity=1,interpolate=True)
# Create some handy timers
globalClock = core.Clock() # to track the time since experiment started
routineTimer = core.CountdownTimer() # to track time remaining of each (non-slip) routine
#------Prepare to start Routine "trial"-------
t = 0
trialClock.reset() # clock
frameN = -1
# update component parameters for each repeat
# keep track of which components have finished
trialComponents = []
trialComponents.append(ISI)
trialComponents.append(square)
trialComponents.append(polygon)
for thisComponent in trialComponents:
if hasattr(thisComponent, 'status'):
thisComponent.status = NOT_STARTED
#-------Start Routine "trial"-------
continueRoutine = True
while continueRoutine:
# get current time
t = trialClock.getTime()
frameN = frameN + 1 # number of completed frames (so 0 is the first frame)
# update/draw components on each frame
# *square* updates
if frameN >= 0 and square.status == NOT_STARTED:
# keep track of start time/frame for later
square.tStart = t # underestimates by a little under one frame
square.frameNStart = frameN # exact frame index
square.setAutoDraw(True)
# *polygon* updates
if frameN >= 0 and polygon.status == NOT_STARTED:
# keep track of start time/frame for later
polygon.tStart = t # underestimates by a little under one frame
polygon.frameNStart = frameN # exact frame index
polygon.setAutoDraw(True)
# *ISI* period
if t >= 0.0 and ISI.status == NOT_STARTED:
# keep track of start time/frame for later
ISI.tStart = t # underestimates by a little under one frame
ISI.frameNStart = frameN # exact frame index
ISI.start(0.5)
elif ISI.status == STARTED: #one frame should pass before updating params and completing
ISI.complete() #finish the static period
# check if all components have finished
if not continueRoutine: # a component has requested a forced-end of Routine
routineTimer.reset() # if we abort early the non-slip timer needs reset
break
continueRoutine = False # will revert to True if at least one component still running
for thisComponent in trialComponents:
if hasattr(thisComponent, "status") and thisComponent.status != FINISHED:
continueRoutine = True
break # at least one component has not yet finished
# check for quit (the Esc key)
if endExpNow or event.getKeys(keyList=["escape"]):
core.quit()
# refresh the screen
if continueRoutine: # don't flip if this routine is over or we'll get a blank screen
win.flip()
else: # this Routine was not non-slip safe so reset non-slip timer
routineTimer.reset()
#-------Ending Routine "trial"-------
for thisComponent in trialComponents:
if hasattr(thisComponent, "setAutoDraw"):
thisComponent.setAutoDraw(False)
win.close()
core.quit()
As per Jonas' comment above, PsychoPy uses a layering system in which subsequent stimuli are drawn on top of previous stimuli (as in his code examples).
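For reference, a minimal Coder-style sketch of that layering rule (the window size and stimulus parameters here are arbitrary): whatever is drawn later in a frame ends up on top.

from psychopy import visual, core

win = visual.Window(size=(800, 600), units='pix', color=[0, 0, 0])
square = visual.ImageStim(win, image=None, size=(200, 200), color='blue')
triangle = visual.ShapeStim(win, vertices=[(-100, -150), (100, -150), (0, 150)],
                            fillColor='red', lineColor='white')
for frameN in range(120): # about two seconds at 60 Hz
    square.draw()   # drawn first, ends up behind
    triangle.draw() # drawn last, ends up on top
    win.flip()
core.quit()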
In the graphical Builder environment, drawing order is represented by the vertical order of stimulus components: stimuli at the top are drawn first, and ones lower down are progressively layered upon them.
You can change the order of stimulus components by right-clicking on them and selecting "Move up", "move down", etc as required.
Sebastian has, however, identified a bug here: the intended drawing order is not honoured between ImageStim and ShapeStim components. As a work-around, you might be able to replace your ShapeStim with a bitmap representation, displayed using an ImageStim. Multiple ImageStims should draw correctly (as do multiple ShapeStims). To get it to draw correctly on top of another image, be sure to save it as a .png file, which supports transparency. That way, only the actual shape will be drawn on top, as its background pixels can be set to be transparent and will not mask the underlying image.
For a long-term solution, I've added your issue as a bug report to the PsychoPy GitHub project here:
https://github.com/psychopy/psychopy/issues/795
It turned out to be a bug in the Polygon component in Builder.
This is fixed in the upcoming release (1.82.00). The changes needed to make the fix can be seen at
https://github.com/psychopy/psychopy/commit/af1af9a7a85cee9b4ec8ad5e2ff1f03140bd1a36
which you can add to your own installation if you like.
cheers,
Jon
