I am making screenshots of videos with an HTML5 canvas. The video is hosted elsewhere, and everything works except toDataURL(), because the canvas is "dirty" (tainted by the cross-origin video). So I am wondering: is there any way I can save this canvas image to the computer?
I assume the answer is no, but I'm hoping for some hack to get this done; any idea apart from downloading the video to my server and serving it from there...
The short answer is "No."
The longer answer might be yes.
Maybe your server can download the video and host it, then play it from your same domain?
If you control the server that is hosting the video, you could enable CORS.
(Or you could combine the two and have the video uploaded to a cors-enabled site that is not your own.)
Otherwise, you're out of luck.
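For the CORS route, the idea is just to serve the video with an Access-Control-Allow-Origin header and load it with <video crossorigin="anonymous">. A minimal sketch with the Python standard library (the port and the permissive origin are placeholder choices, not from the original post):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

class CORSRequestHandler(SimpleHTTPRequestHandler):
    """Serve files from the current directory with a permissive CORS header,
    so a page using <video crossorigin="anonymous"> can draw the video to a
    canvas without tainting it."""
    def end_headers(self):
        # Allow any origin; in practice, restrict this to your page's origin.
        self.send_header('Access-Control-Allow-Origin', '*')
        SimpleHTTPRequestHandler.end_headers(self)

# Usage (blocks forever):
# HTTPServer(('127.0.0.1', 8000), CORSRequestHandler).serve_forever()
```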
What about (I haven't tried it yet) redrawing the original canvas onto a second canvas, which you then save to an image? (And using CSS to position the canvases over each other to "hide" the second one.)
Will the second canvas be dirty too?
(I'm thinking of a technique like this one.)
I tried copying the canvas, but this just returned the same dirty-canvas error.
In the end, to get this to work, I implemented a small service that extracts remote sources (videos) and makes them look as though they were local, i.e. by reading the source server-side and writing it out to my HTML/JS page. Once this was done, it all worked fine.
I used Python/Flask to do this; here is the snippet. It is not perfect with regard to handling partial-content (Range) requests, but it should get someone going.
To use it, I access my videos using: /remote?url=
from datetime import timedelta
from flask import make_response, request, current_app, Flask, url_for, render_template, Response
from functools import update_wrapper
import requests
import logging
import json
from werkzeug.datastructures import Headers
import httplib
import os
import subprocess
import base64

httplib.HTTPConnection.debuglevel = 1

app = Flask(__name__)

logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
requests_log = logging.getLogger("requests.packages.urllib3")
requests_log.setLevel(logging.DEBUG)
requests_log.propagate = True

def crossdomain(origin=None, methods=None, headers=None,
                max_age=21600, attach_to_all=True,
                automatic_options=True):
    if methods is not None:
        methods = ', '.join(sorted(x.upper() for x in methods))
    if headers is not None and not isinstance(headers, basestring):
        headers = ', '.join(x.upper() for x in headers)
    if not isinstance(origin, basestring):
        origin = ', '.join(origin)
    if isinstance(max_age, timedelta):
        max_age = max_age.total_seconds()

    def get_methods():
        if methods is not None:
            return methods
        options_resp = current_app.make_default_options_response()
        return options_resp.headers['allow']

    def decorator(f):
        def wrapped_function(*args, **kwargs):
            if automatic_options and request.method == 'OPTIONS':
                resp = current_app.make_default_options_response()
            else:
                resp = make_response(f(*args, **kwargs))
            if not attach_to_all and request.method != 'OPTIONS':
                return resp

            h = resp.headers
            h['Access-Control-Allow-Origin'] = origin
            h['Access-Control-Allow-Methods'] = get_methods()
            h['Access-Control-Max-Age'] = str(max_age)
            if headers is not None:
                h['Access-Control-Allow-Headers'] = headers
            return resp

        f.provide_automatic_options = False
        return update_wrapper(wrapped_function, f)
    return decorator

def stream_remote(url, headers=None):
    logging.debug(headers)
    range = headers.get("Range")  # forward the browser's Range header, if any
    logging.debug(range)
    r = requests.get(url, stream=True, headers={"range": range})
    logging.debug(r.headers)
    for block in r.iter_content(1024):
        if not block:
            break
        yield block

@app.route('/remote/')
def get_remote():
    # Gets a remote file to make it look like it is local for CORS purposes
    url = request.args.get("url", None)
    resp_headers = Headers()
    resp_headers.add('Accept-Ranges', 'bytes')
    if url is None:
        return "Error. No URL provided"
    else:
        headers = request.headers
        logging.debug(headers)
        return Response(stream_remote(url, headers), mimetype='video/mp4', headers=resp_headers)

if __name__ == '__main__':
    app.debug = True
    app.run(host="127.0.0.1", port=9001)
Related
I have a Flask API that responds with a picture:
FORMAT = {'image/jpeg': 'JPEG', 'image/bmp': 'BMP', 'image/png': 'PNG', 'image/gif': 'GIF'}

@app.route('/api/image/<id>/<str_size>', methods=['get'])
def show_thumbnail(id, str_size):
    size = int(str_size)
    with get_db().cursor() as cur:
        cur.callproc('getimage', (id,))
        result = cur.fetchone()
        buf = BytesIO(result[1])
        if (size > 0):
            im = Image.open(buf)
            im.thumbnail((size, size))
            buf = BytesIO(b'')
            im.save(buf, format=FORMAT[result[0].lower()])
            fw = open('w03.jpg', 'wb')
            fw.write(buf.getbuffer())
            fw.close()
    resp = Response(buf)
    resp.headers.set('Content-Type', result[0].lower())
    return resp
PS:
result[0] = 'image/jpeg'
result[1] is the byte array of the JPEG picture.
If I set the size (str_size) to 0, meaning the PIL Image thumbnail part does not run, I get the correct picture in the response.
If I set the size (str_size) to 256, for instance, I find that 'w03.jpg' is correct and the resized image is fine, but the response is black because the image it contains is broken.
im.save(buf) leaves the buffer positioned at the end. You need to rewind it before building resp; do this with buf.seek(0). buf.getbuffer() returns the whole buffer regardless of the stream position, which would explain why w03.jpg is correct in the second test:
You can also use a with block to minimize some of the code (this auto closes the file):
# ...
with open('w03.jpg', 'wb') as fw:
    fw.write(buf.getbuffer())
buf.seek(0)
resp = Response(buf)
# ...
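The effect is easy to reproduce with a plain BytesIO, no PIL needed (the bytes below just stand in for whatever im.save would write):

```python
from io import BytesIO

buf = BytesIO()
buf.write(b'fake jpeg bytes')   # stands in for im.save(buf, format=...)

# The stream position is now at the end, so a plain read returns nothing,
# which is why the Response body came out empty/black:
assert buf.read() == b''

# getbuffer() ignores the position, which is why w03.jpg was still correct:
assert bytes(buf.getbuffer()) == b'fake jpeg bytes'

# Rewinding fixes the read, so Response(buf) would see the data:
buf.seek(0)
assert buf.read() == b'fake jpeg bytes'
```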
I'm working on an application that uses the Microsoft Cognitive Services Speech-to-Text API. I'm trying to create a GUI where the transcribed text shows up in a textbox once a start button is pushed, and transcription stops once a stop button is pressed. I'm pretty new to creating GUIs and have been using PyQt5. I have divided the application according to MVC (Model-View-Controller). The code for the GUI is as follows:
import sys
import time
from functools import partial

import azure.cognitiveservices.speech as speechsdk
from PyQt5.QtCore import *
from PyQt5.QtWidgets import *
from PyQt5.QtGui import *

class test_view(QMainWindow):
    def __init__(self):
        super().__init__()
        self.generalLayout = QVBoxLayout()
        self._centralWidget = QWidget(self)
        self.setCentralWidget(self._centralWidget)
        self._centralWidget.setLayout(self.generalLayout)
        self._createApp()

    def _createApp(self):
        self.startButton = QPushButton('Start')
        self.stopButton = QPushButton('Stop')
        buttonLayout = QHBoxLayout()
        self.startButton.setFixedWidth(220)
        self.stopButton.setFixedWidth(220)
        buttonLayout.addWidget(self.startButton)
        buttonLayout.addWidget(self.stopButton)

        self.text_box = QTextEdit()
        self.text_box.setReadOnly(True)
        self.text_box.setFixedSize(1500, 400)
        layout_text = QHBoxLayout()
        layout_text.addWidget(self.text_box)
        layout_text.setAlignment(Qt.AlignCenter)

        self.generalLayout.addLayout(buttonLayout)
        self.generalLayout.addLayout(layout_text)

    def appendText(self, text):
        self.text_box.append(text)
        self.text_box.setFocus()

    def clearText(self):
        return self.text_box.setText('')

class test_ctrl:
    def __init__(self, view):
        self._view = view

def main():
    application = QApplication(sys.argv)
    view = test_view()
    view.showMaximized()
    test_ctrl(view=view)
    sys.exit(application.exec_())

if __name__ == "__main__":
    main()
The Speech-to-Text Transcribe code is:
import azure.cognitiveservices.speech as speechsdk
import time

def setupSpeech():
    speech_key, service_region = "speech_key", "service_region"
    speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
    speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
    return speech_recognizer

def main():
    speech_recognizer = setupSpeech()

    done = False

    def stop_cb(evt):
        print('CLOSING on {}'.format(evt))
        speech_recognizer.stop_continuous_recognition()
        nonlocal done
        done = True

    all_results = []

    def handle_final_result(evt):
        all_results.append(evt.result.text)

    speech_recognizer.recognizing.connect(lambda evt: print(evt))
    speech_recognizer.recognized.connect(handle_final_result)
    speech_recognizer.session_stopped.connect(stop_cb)
    speech_recognizer.canceled.connect(stop_cb)

    speech_recognizer.start_continuous_recognition()
    while not done:
        time.sleep(.5)

    print(all_results)

if __name__ == "__main__":
    main()
I know for sure that both pieces of code work on their own, but I'm not sure how to build the speech-to-text code into the MVC code. I think it should become the model and be connected through the controller to the view. I tried doing this in multiple ways, but I just can't figure it out. I also figured I need some kind of threading to keep the code from freezing the GUI. I hope someone can help me with this.
You need to replace this part
print(all_results)
and push all_results asynchronously to your code for processing the text.
Alternatively, expose a button in the UI that invokes speech_recognizer.start_continuous_recognition() as a separate function on its own thread, and pick up the results to process. This way you can avoid freezing the UI.
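A minimal, framework-free sketch of that "push results asynchronously" pattern: the recognizer runs on its own thread and puts each final result on a queue, which the GUI side can drain. The speech SDK callback is simulated here; in the real app, handle_final_result would be wired to speech_recognizer.recognized:

```python
import threading
import queue

results = queue.Queue()  # thread-safe hand-off between recognizer and GUI

def handle_final_result(text):
    # In the real app this is the `recognized` callback; it must not touch
    # Qt widgets directly from the worker thread, only enqueue the result.
    results.put(text)

def fake_recognizer():
    # Stand-in for start_continuous_recognition() producing final results.
    for phrase in ("hello", "world"):
        handle_final_result(phrase)

worker = threading.Thread(target=fake_recognizer, daemon=True)
worker.start()
worker.join()

# The GUI side (e.g. on a timer tick in the controller) drains the queue:
collected = []
while not results.empty():
    collected.append(results.get())
```

On the Qt side you would drain the queue from a QTimer in the controller and call view.appendText for each item; with PyQt specifically, a QThread emitting a signal connected to appendText achieves the same thing more idiomatically.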
I'm having trouble getting this to work. I'd like to be able to place an animated GIF in a specific spot on my QSplashScreen.
The GIF has to be animated using multiprocessing and the onNextFrame method so that it will play during the initial load (otherwise it just freezes on the first frame).
I've tried inserting self.move(500,500) everywhere, but nothing is working (well, not working well enough). Right now the GIF plays in the spot I want, but then it snaps back to screen center on the next frame, then back to the spot I want, and so on. Inserting the move call in every possible place hasn't fixed this issue.
Here's the code:
from PySide import QtCore
from PySide import QtGui
from multiprocessing import Pool

class Form(QtGui.QDialog):
    def __init__(self, parent=None):
        super(Form, self).__init__(parent)
        self.browser = QtGui.QTextBrowser()
        self.setWindowTitle('Just a dialog')
        self.move(500, 500)

class MySplashScreen(QtGui.QSplashScreen):
    def __init__(self, animation, flags):
        # run event dispatching in another thread
        QtGui.QSplashScreen.__init__(self, QtGui.QPixmap(), flags)
        self.movie = QtGui.QMovie(animation)
        self.movie.frameChanged.connect(self.onNextFrame)
        # self.connect(self.movie, SIGNAL('frameChanged(int)'), SLOT('onNextFrame()'))
        self.movie.start()
        self.move(500, 500)

    def onNextFrame(self):
        pixmap = self.movie.currentPixmap()
        self.setPixmap(pixmap)
        self.setMask(pixmap.mask())
        self.move(500, 500)

# Put your initialization code here
def longInitialization(arg):
    time.sleep(arg)
    return 0

if __name__ == "__main__":
    import sys, time

    app = QtGui.QApplication(sys.argv)

    # Create and display the splash screen
    # splash_pix = QPixmap('a.gif')
    splash = MySplashScreen(r'S:\_Studio\_ASSETS\Tutorials\Maya\Coding\Python\_PySide\GIF\dragonGif.gif',
                            QtCore.Qt.WindowStaysOnTopHint)
    # splash.setMask(splash_pix.mask())
    # splash.raise_()
    splash.move(500, 500)
    splash.show()

    # this event loop is needed for dispatching of Qt events
    initLoop = QtCore.QEventLoop()
    pool = Pool(processes=1)
    pool.apply_async(longInitialization, [2], callback=lambda exitCode: initLoop.exit(exitCode))
    initLoop.exec_()

    form = Form()
    form.show()
    splash.finish(form)
    app.exec_()
I'm looking for a way to refresh camera miniature (thumbnail) images from snapshots. I have this piece of code, but after the first refresh (not the one in the reloadMiniatures thread) I get nothing (a black screen).
I have tried other solutions, but showing six MJPEG streams was too heavy (and I don't really need a high frame rate). I had some success with AsyncImage and saving images to a file, but it wasn't very efficient and I had a loading_image to get rid of.
from kivy.app import App
from kivy.uix.image import Image
import time
import threading
import urllib
from kivy.core.image import Image as CoreImage
from io import BytesIO

class TestApp(App):
    def reloadMiniatures(self):
        while True:
            data = BytesIO(urllib.urlopen("http://10.0.13.206:9000/?action=snapshot").read())
            time.sleep(3)
            self.image.texture = CoreImage(data, ext='jpg').texture

    def build(self):
        data = BytesIO(urllib.urlopen("http://10.0.13.206:9000/?action=snapshot").read())
        self.image = Image()
        self.image.texture = CoreImage(data, ext='jpg').texture

        miniatures = threading.Thread(target=self.reloadMiniatures)
        miniatures.daemon = True
        miniatures.start()
        return self.image

TestApp().run()
You could try using Loader instead:
from kivy.clock import Clock
from kivy.loader import Loader

class TestApp(App):
    def load_miniatures(self, *args):
        proxy = Loader.image('http://10.0.13.206:9000/?action=snapshot')
        proxy.bind(on_load=self.receive_miniatures)

    def receive_miniatures(self, proxy):
        if proxy.image.texture:
            self.image.texture = proxy.image.texture
        Clock.schedule_once(self.load_miniatures, 0.1)

    def build(self):
        self.image = Image()
        self.load_miniatures()
        return self.image
I am building an application to continuously display an image fetched from an IP camera. I have figured out how to fetch the image and how to display it using Tkinter, but I cannot get it to refresh the image continuously. Using Python 2.7+.
Here is the code I have so far.
import urllib2, base64
from PIL import Image, ImageTk
import StringIO
import Tkinter

URL = 'http://myurl.cgi'
USERNAME = 'myusername'
PASSWORD = 'mypassword'

def fetch_image(url, username, password):
    # this code works fine
    request = urllib2.Request(url)
    base64string = base64.encodestring('%s:%s' % (username, password)).replace('\n', '')
    request.add_header("Authorization", "Basic %s" % base64string)
    result = urllib2.urlopen(request)
    imgresp = result.read()
    img = Image.open(StringIO.StringIO(imgresp))
    return img

root = Tkinter.Tk()
img = fetch_image(URL, USERNAME, PASSWORD)
tkimg = ImageTk.PhotoImage(img)
Tkinter.Label(root, image=tkimg).pack()
root.mainloop()
How should I edit the code so that the fetch_image is called repeatedly and its output updated in the Tkinter window?
Note that I am not using any button-events to trigger the image refresh, rather it should be refreshed automatically, say, every 1 second.
Here is a solution that uses Tkinter's Tk.after function, which schedules future calls to functions. If you replace everything after your fetch_image definition with the snippet below, you'll get the behavior you described:
root = Tkinter.Tk()
label = Tkinter.Label(root)
label.pack()

img = None
tkimg = [None]  # This, or something like it, is necessary: if you do not keep a reference to PhotoImage instances, they get garbage collected.
delay = 500  # in milliseconds

def loopCapture():
    print "capturing"
    # img = fetch_image(URL, USERNAME, PASSWORD)
    img = Image.new('1', (100, 100), 0)
    tkimg[0] = ImageTk.PhotoImage(img)
    label.config(image=tkimg[0])
    root.update_idletasks()
    root.after(delay, loopCapture)

loopCapture()
root.mainloop()