I'm using Python 3.5 and OpenCV 3.4.1 to connect to my IP camera over the RTSP protocol via Wi-Fi on a Raspberry Pi 3 B.
Here's my code
import cv2
from imutils.video import WebcamVideoStream

# Threaded capture from the RTSP stream
vs = WebcamVideoStream(src="rtsp://192.168.1.4").start()
while True:
    frame = vs.read()
    cv2.imshow("video", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
vs.stop()  # WebcamVideoStream is stopped with stop(), not release()
cv2.destroyAllWindows()
This raises the following error:
OpenCV(3.4.1) Error: Assertion failed (size.width>0 && size.height>0) in imshow, file /home/pi/opencv-3.4.1/modules/highgui/src/window.cpp, line 356
Traceback (most recent call last):
File "ip_cam.py", line 8, in <module>
cv2.imshow("video",frame)
cv2.error: OpenCV(3.4.1) /home/pi/opencv-3.4.1/modules/highgui/src/window.cpp:356: error: (-215) size.width>0 && size.height>0 in function imshow
From the error it's clear that frames are not being captured from the supplied source, which means the connection is not being established properly. This code worked perfectly when I ran it on Ubuntu, but on the Raspberry Pi (Raspbian Stretch, kernel 4.14) I get this error.
Can anyone guide me on how to read and process frames from an IP camera over Wi-Fi using Python and OpenCV?
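As a first step in debugging, it may help to bypass imutils and open the stream directly with cv2.VideoCapture, checking that a frame actually arrives before calling imshow. This is only a minimal sketch, assuming the RTSP URL from the question; many cameras also require credentials, a port, and a stream path.

import cv2

# Hypothetical RTSP URL; many cameras need credentials and a path,
# e.g. rtsp://user:pass@192.168.1.4:554/stream1
url = "rtsp://192.168.1.4"

cap = cv2.VideoCapture(url)
if not cap.isOpened():
    raise RuntimeError("Could not open the RTSP stream - check the URL and network")

while True:
    ok, frame = cap.read()
    if not ok or frame is None:
        print("No frame received, stopping")
        break
    cv2.imshow("video", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()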
Related
I am trying to use pyarrow on Windows, but I'm getting the following error with fs.HadoopFileSystem():
OSError Traceback (most recent call last)
Cell In[1], line 2
1 from pyarrow import fs
----> 2 hdfs = fs.HadoopFileSystem(host='localhost', port=9870)
File c:\prj\study\.venv\lib\site-packages\pyarrow\_hdfs.pyx:96, in pyarrow._hdfs.HadoopFileSystem.__init__()
File c:\prj\study\.venv\lib\site-packages\pyarrow\error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()
File c:\prj\study\.venv\lib\site-packages\pyarrow\error.pxi:115, in pyarrow.lib.check_status()
OSError: Unable to load libhdfs: The specified module could not be found.
I followed the steps on this site to install Hadoop using binaries from Apache, and I am able to use it through cmd. However, when I checked libhdfs.so in lib/native, it shows as a 0 KB file. Is this normal, or do I have to compile the Hadoop source myself to get a correct libhdfs.so?
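For reference, pyarrow resolves libhdfs through environment variables (HADOOP_HOME / ARROW_LIBHDFS_DIR for the native library, JAVA_HOME for the JVM, and CLASSPATH for the Hadoop jars). Below is a minimal sketch of setting these before constructing the filesystem; the install paths and the NameNode port are placeholders, and on Windows the native library would be a DLL (e.g. hdfs.dll) rather than a .so, if your distribution ships one at all.

import os
import subprocess

# Placeholder paths: point these at your actual Hadoop and JDK installations.
os.environ["HADOOP_HOME"] = r"C:\hadoop"
os.environ["JAVA_HOME"] = r"C:\Program Files\Java\jdk1.8.0_301"
# Directory containing the native HDFS library (hdfs.dll on Windows builds
# that ship one, libhdfs.so on Linux).
os.environ["ARROW_LIBHDFS_DIR"] = os.path.join(os.environ["HADOOP_HOME"], "bin")
# pyarrow also needs the Hadoop jars on the classpath.
os.environ["CLASSPATH"] = subprocess.check_output(
    "hadoop classpath --glob", shell=True
).decode().strip()

from pyarrow import fs
# Use the NameNode RPC port (fs.defaultFS), not the 9870 web UI port.
hdfs = fs.HadoopFileSystem(host="localhost", port=9000)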
I'm currently working on an IoT project requiring the transfer of sensor data between an ESP32 (a wESP32, to be exact) and a Raspberry Pi configured as a broker. From what I've read so far, the MQTT protocol seems to fit my needs perfectly, so I'm running a Mosquitto broker on the Pi as well as the simple MQTT client library provided in the MicroPython GitHub repository.
The first tests performed in the MicroPython WebREPL have been successful as I've been able to receive data published from the ESP using the following code:
Welcome to MicroPython!
Password:
WebREPL connected
>>> from umqtt.simple import MQTTClient
>>> c = MQTTClient("umqtt_client", "rapsberrypi")
>>> c.connect()
0
>>> c.publish(b"sensors/temperature", "{:.1f}".format(21.35))
>>> c.disconnect()
>>>
However, as soon as I try running the same code on boot from the main.py file, or over the serial port using either screen or rshell, I get the following error.
Started webrepl in normal mode
MicroPython v1.12 on 2019-12-20; ESP32 module with ESP32
Type "help()" for more information.
>>> I (4379) ethernet: LAN cable connected
I (5359) event: eth ip: 192.168.1.62, mask: 255.255.255.0, gw: 192.168.1.1
I (5359) ethernet: Got IP
from umqtt.simple import MQTTClient
>>> c = MQTTClient("umqtt_client", "rapsberrypi")
>>> c.connect()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "umqtt/simple.py", line 57, in connect
IndexError: list index out of range
>>>
For some context, here is line 57 of the umqtt/simple.py file (with its surrounding lines):
55 def connect(self, clean_session=True):
56 self.sock = socket.socket()
57 addr = socket.getaddrinfo(self.server, self.port)[0][-1]
58 self.sock.connect(addr)
If you have any clue what's going on here, please let me know!
In this line of code:
c = MQTTClient("umqtt_client", "rapsberrypi")
the second argument to the constructor - "raspberrypi" - identifies the broker (server).
This is likely to be defined in such a way that only software running on your Raspberry Pi will be able to resolve it. The name won't be visible to software running in places other than the Pi.
In the code running on your ESP32, replace "raspberrypi" with the IP address or a resolvable name (fully qualified domain name) for your Raspberry Pi. Note that 127.0.0.1 is the loopback interface's IP address and is not accessible to software that isn't running on the Pi.
You can use the ifconfig command to list your network interfaces; look for one named something like wlan0 for wifi or eth0 for wired ethernet and use the IP address associated with that interface.
Once you've done that, if you still can't reach the broker, then the broker is likely configured not to respond to requests that don't originate on the machine it's running on.
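For example, a minimal sketch of the ESP32 side using an explicit broker address (192.168.1.10 is a placeholder; substitute the address reported by ifconfig on the Pi):

from umqtt.simple import MQTTClient

BROKER_IP = "192.168.1.10"  # placeholder: the Raspberry Pi's LAN address

c = MQTTClient("umqtt_client", BROKER_IP)
c.connect()
c.publish(b"sensors/temperature", b"21.4")
c.disconnect()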
I'm a newbie in this matter, and I don't know why my application only works and runs on an open network; when it is behind a proxy I get an error.
I'm using a Raspberry Pi Zero with Raspbian Stretch, the azure-iot-sdk-python SDK, and a Squid proxy.
I have already tried these things:
The proxy allows HTTPS connections, all ports are open without any restriction, and the address *****.azure-devices.net is whitelisted in
$ nano /etc/squid/whitelist
Beyond that, I set the proxy in the operating system (Raspbian Stretch) in
$ nano /etc/environment
with the following configuration:
export http_proxy="http://192.168.2.254:3128/"
export https_proxy="https://192.168.2.254:3128/"
export no_proxy="localhost, 127.0.0.1"
And also in
$ nano ~/.bashrc
export http_proxy=http://192.168.2.254:3128
export https_proxy=https://192.168.2.254:3128
export no_proxy=localhost,127.0.0.1
And finally in
$ nano /etc/apt/apt.conf.d/90proxy
Acquire::http::Proxy "http://192.168.2.254:3128/";
Acquire::https::Proxy "https://192.168.2.254:3128/";
from iothub_client import IoTHubClient, IoTHubTransportProvider, IoTHubMessage
import time

CONNECTION_STRING = "HostName=******.azure-devices.net;DeviceId=***;SharedAccessKey=*********"
PROTOCOL = IoTHubTransportProvider.MQTT

def send_confirmation_callback(message, result, user_context):
    print("Confirmation received for message with result = %s" % (result))

if __name__ == '__main__':
    client = IoTHubClient(CONNECTION_STRING, PROTOCOL)
    message = IoTHubMessage("test message")
    client.send_event_async(message, send_confirmation_callback, None)
    print("Message transmitted to IoT Hub")
    while True:
        time.sleep(1)
Error: File: /usr/sdk/src/c/c-utility/adapters/socketio_berkeley.c Func: lookup_address_and_initiate_socket_connection Line: 282 Failure: getaddrinfo failure -3.
Error: File: /usr/sdk/src/c/c-utility/adapters/socketio_berkeley.c Func: socketio_open Line: 765 lookup_address_and_connect_socket failed
Error: File: /usr/sdk/src/c/c-utility/adapters/tlsio_openssl.c Func: on_underlying_io_open_complete Line: 760 Invalid tlsio_state. Expected state is TLSIO_STATE_OPENING_UNDERLYING_IO.
Error: File: /usr/sdk/src/c/c-utility/adapters/tlsio_openssl.c Func: tlsio_openssl_open Line: 1258 Failed opening the underlying I / O.
Error: File: /usr/sdk/src/c/umqtt/src/mqtt_client.c Func: mqtt_client_connect Line: 1000 Error: io_open failed
Error: File: /usr/sdk/src/c/iothub_client/src/iothubtransport_mqtt_common.c Func: SendMqttConnectMsg Line: 2122 failure connecting
You cannot use an HTTP proxy with (native) MQTT; they are two totally separate protocols.
If you can use MQTT over WebSockets, then you should be able to use an HTTP proxy, as WebSocket connections are initially established by upgrading an HTTP connection.
If you have a SOCKS proxy available on your network, then you may be able to use that with native MQTT. The following question has hints on how to use a SOCKS proxy with Python: How can I use a SOCKS 4/5 proxy with urllib2?
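To illustrate the SOCKS route in Python (a sketch only, not the Azure IoT SDK's own configuration; the SDK's networking runs in C and is not affected by patching Python's socket module), PySocks can route a pure-Python MQTT client such as paho-mqtt through a SOCKS proxy. The proxy address/port and broker hostname below are placeholders.

# Hypothetical sketch: native MQTT through a SOCKS proxy using PySocks + paho-mqtt.
import socket
import socks                      # pip install PySocks
import paho.mqtt.client as mqtt   # pip install paho-mqtt

# Placeholder SOCKS proxy address and port
socks.set_default_proxy(socks.SOCKS5, "192.168.2.254", 1080)
socket.socket = socks.socksocket  # route newly created sockets through the proxy

client = mqtt.Client()
client.connect("broker.example.com", 1883)  # placeholder broker
client.publish("sensors/test", "hello")
client.disconnect()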
Hi, I am getting the following error while connecting the Sense HAT module to a Raspberry Pi.
File "/usr/lib/python3/dist-packages/sense_hat/sense_hat.py", line 39, in __init__
raise OSError('Cannot detect %s device' % self.SENSE_HAT_FB_NAME)
OSError: Cannot detect RPi-Sense FB device
Kindly help out.
Check if you have enabled I2C via raspi-config.
Update the Raspberry Pi with the rpi-update command.
If you still get the same error, try editing /boot/config.txt and adding this line at the end of the file: dtoverlay=rpi-sense. Save, reboot, and try again.
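After the reboot, a quick check with the standard sense_hat API (just a sketch) can confirm the HAT is detected:

from sense_hat import SenseHat

sense = SenseHat()              # still raises OSError if the HAT isn't detected
print(sense.get_temperature())  # read a sensor value
sense.show_message("OK")        # scroll a message on the LED matrix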
I've been trying to install a few ports (wget, autoconf, coreutils, etc.), but it seems impossible! Here's what I have done, step by step:
I'm using OS X 10.9.1 Mavericks and I downloaded and installed MacPorts using the installation package (.pkg) from the MacPorts website. I already had Xcode 5.0.2 installed, so I logged in to my Apple iOS developer account, downloaded command_line_tools_os_x_mavericks_for_xcode__late_october_2013.dmg, and installed the package.
When I use
sudo port install coreutils
I get the following error:
Error: Port coreutils not found
I thought (and Googled, of course) that it must be because I hadn't updated MacPorts. So I tried a self-update using sudo port -v selfupdate, which was not successful either; I got the following error log:
---> Updating MacPorts base sources using rsync
rsync: failed to connect to rsync.macports.org: Operation timed out (60)
rsync error: error in socket IO (code 10) at /SourceCache/rsync/rsync42/rsync/clientserver.c(105) [receiver=2.6.9]
Command failed: /usr/bin/rsync -rtzv --delete-after rsync://rsync.macports.org/release/tarballs/base.tar /opt/local/var/macports/sources/rsync.macports.org/release/tarballs
Exit code: 10
Error: Error synchronizing MacPorts sources: command execution failed
To report a bug, follow the instructions in the guide:
http://guide.macports.org/#project.tickets
Error: /opt/local/bin/port: port selfupdate failed: Error synchronizing MacPorts sources: command execution failed
Based on the "failed to connect to server" message, I thought it might be caused by restrictions and sanctions applied to my IP address, which is currently from Iran (I figured that out because I cannot even open the MacPorts website directly without using a proxy server). I used the instructions at the following URL to reroute the connection and make MacPorts connect through a proxy server:
http://samkhan13.wordpress.com/2012/06/15/make-macports-work-behind-proxy/
The instructions above fetch the ports tree as a .tar.gz archive over HTTP. I no longer got the connection error, but I got a "Could not access the file" error instead, so I downloaded the file manually, set up an Apache web server locally, and replaced that HTTP URL with my localhost link.
Everything seemed to be fine by using
sudo port -v sync instead of sudo port -v selfupdate
Here's how the log started :
---> Updating the ports tree
Synchronizing local ports tree from http://localhost/ports.tar.gz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 24.6M 100 24.6M 0 0 98.9M 0 --:--:-- --:--:-- --:--:-- 99.1M
x ports/
x ports/gnome/
x ports/gnome/gnofract4d/
x ports/gnome/gnofract4d/Portfile
x ports/gnome/gnofract4d/files/
x ports/gnome/gnofract4d/files/patch-setup.py.diff
x ports/gnome/gnofract4d/files/patch-win.diff
x ports/gnome/gnofract4d/files/patch-fract4d_fractconfig.py.diff
x ports/gnome/gnofract4d/files/patch-fract4d-c-imageIO.cpp.diff
x ports/gnome/libchamplain/
x ports/gnome/libchamplain/Portfile
x ports/gnome/gconf/
x ports/gnome/gconf/Portfile
x ports/gnome/goocanvas/
x ports/gnome/goocanvas/Portfile
x ports/gnome/gstreamer1-gst-libav/
.
.
.
But in the end, I got some errors :
.
.
.
x ports/net/daemonlogger/Portfile
x ports/net/dibbler/
x ports/net/dibbler/Portfile
x ports/net/dibbler/files/
x ports/net/dibbler/files/0-enable-prefix.patch
x ports/net/dibbler/files/1-correct-man-pages.patch
x ports/PortIndex_darwin_11_i386/
x ports/PortIndex_darwin_11_i386/PortIndex.quick: gzip decompression failed
tar: Error exit delayed from previous errors.
Command failed: cd /opt/local/var/macports/sources/localhost/ports/.. && /usr/bin/tar -v -z -xf ports.tar.gz
Exit code: 1
Error: Extracting http://localhost/ports.tar.gz failed (command execution failed)
port sync failed: Synchronization of 1 source(s) failed
Now I still cannot install any ports, and if I revert the default entry in /opt/local/etc/macports/sources.conf to its original rsync one, everything returns to the way it was (all the errors and messages described above).
If I don't revert and keep using the file I put on my localhost (or use file:// to address the file directly), here's what happens when I try to install a port (for example, using sudo port install coreutils):
Port extract failed: ports/PortIndex_darwin_11_i386/PortIndex.quick: gzip decompression failed
tar: Error exit delayed from previous errors.
while executing
"macports::fetch_port $path 1"
(procedure "macports::getportdir" line 12)
invoked from within
"macports::getportdir $source"
(procedure "macports::getindex" line 4)
invoked from within
"macports::getindex $source"
(procedure "_mports_load_quickindex" line 11)
invoked from within
"_mports_load_quickindex"
(procedure "mportinit" line 577)
invoked from within
"mportinit ui_options global_options global_variations"
Error: /opt/local/bin/port: Failed to initialize MacPorts, Port extract failed: ports/PortIndex_darwin_11_i386/PortIndex.quick: gzip decompression failed
tar: Error exit delayed from previous errors.
I have Googled and read almost every suggested solution, but NONE has worked and I'm really stuck with this :(
Any NEW solution is really appreciated.
No replies, and I found the solution myself!
The only way I found to route rsync requests through the proxy was to tunnel over an L2TP VPN connection (not PPTP). That's what finally made MacPorts work behind a proxy server for me.
Hope this helps others who are stuck behind this kind of connection.
Instead of the main MacPorts mirror (which is hosted by MacOSForge, run by Apple, and thus bound by US law and export restrictions on Iran), you can use an alternate rsync mirror from the list at http://trac.macports.org/wiki/Mirrors.
If none of the rsync mirrors work for you, also read the FAQ entry for this very question: http://trac.macports.org/wiki/FAQ#selfupdatefails.
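Switching mirrors just means editing the default entry in sources.conf; a sketch of what that looks like (the mirror hostname below is a placeholder; pick a real one from the list above and keep the same path layout shown in your logs):

$ sudo nano /opt/local/etc/macports/sources.conf
# Replace the default entry with the mirror of your choice:
rsync://MIRROR-HOST/release/tarballs/ports.tar [default]

After saving, run sudo port -v selfupdate again.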