ROS and Gazebo simulation with a WebSocket UI to simulate map streaming

In actual production, a robot equipped with a camera captures a real-time map and streams it to a website through a WebSocket.
Can I simulate the map streaming without the real robot and camera?
I am thinking of a ROS Docker container, and perhaps Gazebo or some other simulation tool? I am completely new to simulation.
Then how do I link the simulator to the WebSocket?
Please guide me on how to achieve this.
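One ready-made route is Gazebo's camera sensor plugin plus rosbridge_suite (a WebSocket server for ROS topics, with roslibjs on the browser side). If you want to see the moving parts, here is a minimal hand-rolled sketch, assuming ROS 1 with rospy, cv_bridge, the third-party `websockets` package, and a Gazebo camera publishing on /camera/image_raw (your topic name may differ):

```python
# Minimal sketch: bridge a (simulated) ROS camera topic to browser clients
# over a WebSocket. Assumes ROS 1 (rospy), cv_bridge, OpenCV, and the
# third-party `websockets` package. The topic name /camera/image_raw is an
# assumption; use whatever your Gazebo camera plugin actually publishes.
import asyncio
import base64
import threading

import cv2
import rospy
import websockets
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()
latest_jpeg = None          # most recent frame, JPEG-encoded
lock = threading.Lock()

def on_image(msg):
    """rospy callback: convert the incoming ROS image to JPEG bytes."""
    global latest_jpeg
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    ok, buf = cv2.imencode(".jpg", frame)
    if ok:
        with lock:
            latest_jpeg = buf.tobytes()

async def serve(ws, path=None):
    """Push the latest frame to each connected browser ~10 times a second."""
    while True:
        with lock:
            data = latest_jpeg
        if data is not None:
            # base64 so the browser can drop it straight into an <img> src
            await ws.send(base64.b64encode(data).decode("ascii"))
        await asyncio.sleep(0.1)

async def main():
    async with websockets.serve(serve, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    rospy.init_node("ws_camera_bridge")
    rospy.Subscriber("/camera/image_raw", Image, on_image, queue_size=1)
    asyncio.run(main())
```

This is a sketch, not a production design: rosbridge_suite (or web_video_server for plain MJPEG over HTTP) already does the heavy lifting, and the same browser page would work unchanged when you swap the Gazebo-simulated camera for the real robot's.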

Related

How to get image data with the DJI OSDK using a DJI Matrice 600 Pro

I'm trying to do some image processing using a Matrice 600 Pro (drone) with a Jetson Xavier (mini computer) attached. A camera with an HDMI output is mounted on a Ronin-MX (gimbal), and the data is transmitted through an SRW-60G (a wireless video link using the HDMI port). I thought functions in the Onboard SDK such as "/dji_sdk/main_camera_images (sensor_msgs/Image)"
(http://wiki.ros.org/dji_sdk)
would get image data easily, but I found that those topics are only available for the M210, so I may not be able to use them on my Matrice 600 Pro.
Using an HDMI-to-USB converter might solve this problem (making the camera-transmitter-receiver chain appear as a USB camera), but the converter is quite expensive and I'm not sure if there's a better way to do this.
Any clue will be very helpful. Thank you!
As far as I know, the OSDK does not support video streaming for the M600 series. What you can do is use the Ronin gimbal with a third-party camera (e.g. IDS/FLIR/MatrixVision) connected directly to the Xavier, and then do your processing based on that stream.
If you need to stream the third-party video source down to the ground, it's easier: use OpenCV's imshow in full screen for the current frame, which outputs to the desktop HDMI. Connect that HDMI output to the M600's video port and it will be streamed down to your remote controller. This is like a cheap version of the PSDK; a sketch of the idea is below.
Hope this will help you with your development work
Regards
Dr Yuan Shenghai
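A minimal sketch of the full-screen imshow trick described in that answer, assuming the third-party camera shows up as a standard video device (index 0 here, which may differ on your setup):

```python
# Sketch: grab frames from the third-party camera (assumed to appear as
# /dev/video0, i.e. index 0) and show them full screen, so the desktop
# HDMI output carries the video down the M600's link.
import cv2

cap = cv2.VideoCapture(0)
cv2.namedWindow("stream", cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty("stream", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # ... any per-frame processing on the Xavier goes here ...
    cv2.imshow("stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```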

Am I able to use an external RFID reader over USB-C?

Does someone have experience with external devices, especially RFID readers? I ordered an external one and still couldn't test it, because I cannot connect it to my tablet (I would need a USB-A to USB-C adapter), but I need to know whether it's possible at all. If it is not, I won't waste my time and will send the external reader back.
The reader has a Java development kit (which could theoretically be used in NativeScript, but I don't know if NS works with external devices), and there are also some example applications, though none based on NativeScript.
Thank you guys in advance.
NativeScript is essentially a JavaScript runtime; in a nutshell, it allows you to write the same Java or Kotlin / Objective-C code in JavaScript.
So if your RFID reader exposes a Java interface, then it's very much possible to read from it on an Android device, and therefore from NativeScript too.

Computer vision with Google Tango

Tango is developed by Google and has an API used for motion tracking on mobile devices. I was wondering if it could be applied to a standalone Java application without Android (Java SE). If not, are there any APIs out there similar to Tango that track motion and depth perception?
I am trying to capture the motion data from a video, not a camera/webcam, if that is possible at all.
Google's Tango API is only compatible with Tango-enabled devices. It does not work on all mobile devices, only on devices that are Tango-enabled; if you try to use the API on a device that is not Tango-enabled, it won't work.
I think you should research OpenCV a bit. It's an open-source computer vision library that is compatible with Java and many other languages, and it lets you analyze videos without needing that many sensors (like the raw depth sensors primarily used on Tango-enabled devices); a minimal sketch follows.
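For the "motion data from a video, not a camera" part, here is a minimal sketch of sparse optical flow on a prerecorded file with OpenCV. The filename is a placeholder, and the same calls exist under similar names in OpenCV's Java bindings:

```python
# Sketch: track per-frame motion in a prerecorded video using
# Lucas-Kanade sparse optical flow. "input.mp4" is a placeholder.
import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
# pick corners worth tracking in the first frame
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)
if pts is None:
    raise SystemExit("no trackable corners found in the first frame")

while True:
    ok, frame = cap.read()
    if not ok or len(pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # track the corners from the previous frame into this one
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_new = new_pts[status.flatten() == 1]
    good_old = pts[status.flatten() == 1]
    # per-point motion vectors between consecutive frames
    motion = (good_new - good_old).reshape(-1, 2)
    print("mean motion (px):", motion.mean(axis=0))
    prev_gray, pts = gray, good_new.reshape(-1, 1, 2)

cap.release()
```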
The Tango API is only available on Tango-enabled devices, which there aren't that many of. That being said, it is possible to create your own motion-tracking and depth-sensitive app with standard Java.
For motion-tracking, all you need is an accelerometer and a gyroscope, which most phones come equipped with as standard nowadays. All you basically do then is integrate those readings over time, and you should have an idea of the device's position and orientation. Note that the accuracy will depend on your hardware and implementation, but be ready for it to be fairly inaccurate thanks to sensor drift and integration errors (see the answer here); a toy integration sketch follows.
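To make the "integrate over time" step concrete, here is a toy one-axis example. The readings are fabricated; on a real device they would come from the sensor API, and the drift mentioned above grows quickly because every error is integrated twice:

```python
# Toy dead-reckoning: numerically integrate accelerometer samples into
# velocity, then position. Sample values below are made up.
dt = 0.02                      # 50 Hz sample period, in seconds
accel_x = [0.0, 0.5, 1.0, 1.0, 0.5, 0.0, -0.5, -1.0]  # m/s^2 (fabricated)

velocity = 0.0   # m/s
position = 0.0   # m
for a in accel_x:
    velocity += a * dt         # first integration: accel -> velocity
    position += velocity * dt  # second integration: velocity -> position

print(f"velocity={velocity:.3f} m/s, position={position:.4f} m")
```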
Depth-perception is more complex and will depend on your hardware setup. I'd recommend looking into the excellent OpenCV library, which already has Java bindings for you, and making sure you have a good grasp of the basics of computer vision (calibration, the camera matrix, the pinhole model, etc.). The first two answers in this SO question should get you started on determining depth using a single camera; a rough two-view sketch follows.
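A rough sketch of depth from two views with OpenCV's block matcher. This assumes the two images are already calibrated and rectified (row-aligned), which is exactly the hard part the answer points to; the filenames, focal length, and baseline below are placeholders, not real calibration values:

```python
# Sketch: disparity from a rectified image pair (e.g. two views from a
# single moving camera after rectification), converted to metric depth.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# block matcher: numDisparities must be a multiple of 16
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
# compute() returns fixed-point disparities scaled by 16
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# with focal length f (px) and baseline B (m) from calibration:
# depth = f * B / disparity, elementwise where disparity > 0
f, B = 700.0, 0.06             # example values, not real calibration
depth = (f * B) / np.maximum(disparity, 0.001)
h, w = depth.shape
print("center-pixel depth (m):", depth[h // 2, w // 2])
```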

Raspberry Pi Embedded application

I am developing a computer vision system to control the orientation of two mirrors to track stimuli in the field of view. We are sending coordinates to the motor over the network and trying to track as smoothly as possible.
I have two questions regarding this:
1. Is Python suitable for this kind of project? I have already coded it in Python and find it very easy to use.
2. I am running Raspbian on the Raspberry Pi but found that it's not a real-time OS. We are sending a command every 20 ms to the server built on the Raspberry Pi. Should I switch to an Arduino or patch the Linux kernel for this application?
Python, combined with OpenCV, is one of the best candidates for this task.
As mentioned in the comment above, the "real-time" issue is OS-related. I personally recommend an Arduino-based solution, even though that puts more burden on the hardware design. You could also check out the newer IoT solutions from Intel, which offer a wide range of boards.
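For comparison, here is a sketch of what the 20 ms loop looks like in plain Python, paced against a monotonic clock so timing errors don't accumulate. `send_command()` is a placeholder for however your coordinates reach the motor, and Raspbian can still preempt the process at any point, so this is soft real-time at best:

```python
# Soft-real-time 20 ms command loop on a non-RTOS: schedule against
# time.monotonic() instead of sleeping a fixed amount each iteration.
import time

PERIOD = 0.020  # seconds

def send_command():
    pass  # placeholder: write coordinates to the motor over the network

next_tick = time.monotonic()
while True:
    send_command()
    next_tick += PERIOD
    delay = next_tick - time.monotonic()
    if delay > 0:
        time.sleep(delay)
    else:
        # we overran the deadline; resynchronize instead of bursting
        next_tick = time.monotonic()
```

If occasional multi-millisecond jitter from the OS scheduler is unacceptable, that is the point where moving the timing-critical part to an Arduino (as recommended above) makes sense.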

Real-time Android application

I would like to build an Android app that processes acceleration data and returns a result every 0.5 seconds. Is there any way to deal with this problem without using native code?
P.S.: I'm a newbie, so please go easy on me!
Currently, there is no official support for real-time Java on Android.
But there have been some research/academic projects focused on bringing real-time capabilities into the Android world; you should check out:
RTDroid: A Design for Real-Time Android
Non-Blocking Garbage Collection for Real-Time Android
