OSDK Interconnection problems - dji-sdk

I'm trying to connect from the OSDK (Raspberry Pi 4 with Linux) to the MSDK, but when the following statement is executed, nothing happens:
vehicle->mopServer->accept((PipelineID)MOBILE_PIPELINE_ID, UNRELIABLE, mobile_Pipeline)
The console shows this:
[1185439.581]STATUS/1 # accept, L50: /*! 0.Find whether the pipeline object is existed or not */
[1185439.581]STATUS/1 # accept, L56: /*! 1.Create handler for binding */
[1185439.581]STATUS/1 # accept, L64: /*! 2.Do binding */
[1185441.581]STATUS/1 # accept, L77: /*! 3.Do accepting */
[1185441.581]STATUS/1 # accept, L78: Do accepting blocking for channel [49154]
But it never finishes; the program just stays there as if sleeping. I waited many minutes and nothing happened.
Can anyone help me, please?
Thanks.

Related

How to log MaxHeaderBytes header issue in golang [closed]

I run an HTTP server in Go with this code:
// Setup HTTP server.
var server = &http.Server{
    Addr:           ":" + viper.GetString("SERVER_PORT"),
    Handler:        routes.Routers(),
    MaxHeaderBytes: 1, // 1 byte
}
Here I set MaxHeaderBytes to 1 byte. It works fine and I get a 413 error code if I pass too many headers, but I am unable to log this error. Even middleware is not working for logging.
I'm having difficulty understanding the question. Do you want to somehow log the fact that a client's request was rejected because its header was larger than the configured limit?
If yes, look at the ErrorLog and ConnState callbacks of the net/http.Server type.
I do not quite get what you mean by
Even middleware is not working for logging
but if you mean none of the HTTP request handlers you installed (the object returned by routes.Routers() in your example) get called, this can be easily explained: a request is only dispatched to the user-provided handlers if it's completely correct from the point of view of the net/http machinery.
Hence if you specifically tell the server to reject requests not meeting certain criteria, such requests will never be considered correct and will never reach the user-defined handlers, so your "middleware" has no chance to act on them.
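For illustration, here is a minimal sketch of wiring up those two hooks; the address, handler and log destination are placeholders, not the asker's code. ConnState at least lets you observe connections that close without a request ever reaching your handlers, which is a strong hint the request was rejected before dispatch.

package main

import (
    "log"
    "net"
    "net/http"
    "os"
)

func main() {
    server := &http.Server{
        Addr:           ":8080",
        Handler:        http.DefaultServeMux,
        MaxHeaderBytes: 1, // deliberately tiny, as in the question
        // Server-internal errors are written here instead of the default global logger.
        ErrorLog: log.New(os.Stderr, "http: ", log.LstdFlags),
        // Fired on every connection state change (New, Active, Idle, Closed, ...).
        ConnState: func(c net.Conn, state http.ConnState) {
            log.Printf("conn %s: %s", c.RemoteAddr(), state)
        },
    }
    log.Fatal(server.ListenAndServe())
}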

usb: why doesn't my f_uvc answer the GET_DEF request?

I need to implement the UVC 1.5 spec in my device. I chose Linux 3.4 as my kernel and I want to use drivers/usb/gadget/webcam.c
as my function driver, but it doesn't work properly.
According to the traffic captured with Wireshark, when the host sends a GET_DEF request to the device, my device answers -ENOENT, which makes the enumeration fail.
I found out that when composite.c receives this kind of request, it forwards it to f->setup to continue.
The main part of f->setup is:
struct v4l2_event v4l2_event;
struct uvc_event *uvc_event = (void *)&v4l2_event.u.data;

uvc->event_setup_out = !(ctrl->bRequestType & USB_DIR_IN);
uvc->event_length = le16_to_cpu(ctrl->wLength);
memset(&v4l2_event, 0, sizeof(v4l2_event));
v4l2_event.type = UVC_EVENT_SETUP;
memcpy(&uvc_event->req, ctrl, sizeof(uvc_event->req));
v4l2_event_queue(&uvc->vdev, &v4l2_event);
The call to v4l2_event_queue is what puzzles me: who will handle this event?
I don't see any code doing the related event initialization work.
So my question is: how do I handle this event properly, so that I can answer the GET_DEF request?
It's a V4L2 event that you are supposed to handle somewhere else, in userspace. You can receive V4L2 events through
rt = ioctl(dev->fd, VIDIOC_DQEVENT, &v4l2_event);
Then you can parse this v4l2_event; it may correspond to GET_CUR, GET_LEN, etc., and you can respond to those requests with values you define yourself.
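For illustration, here is a rough userspace sketch of that event loop, modeled on the uvc-gadget example application. UVC_EVENT_SETUP, UVCIOC_SEND_RESPONSE and struct uvc_request_data come from the gadget UVC header (linux/usb/g_uvc.h, or the in-tree uvc.h on older kernels such as 3.4), not from core V4L2, and the GET_DEF branch is only a stub you would fill in for your controls.

/* Sketch: fd is the open V4L2 node of the UVC function. */
#include <errno.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <linux/usb/ch9.h>
#include <linux/usb/g_uvc.h>

static void handle_setup(int fd, const struct usb_ctrlrequest *ctrl)
{
    struct uvc_request_data resp;

    memset(&resp, 0, sizeof(resp));
    resp.length = -EL2HLT;            /* negative length => stall unhandled requests */

    if (ctrl->bRequest == 0x87) {     /* GET_DEF */
        resp.data[0] = 0;             /* default value of the addressed control */
        resp.length  = ctrl->wLength; /* must not exceed sizeof(resp.data) */
    }

    ioctl(fd, UVCIOC_SEND_RESPONSE, &resp);
}

void uvc_event_loop(int fd)
{
    struct v4l2_event_subscription sub;
    struct v4l2_event ev;
    struct uvc_event *uvc_ev = (void *)&ev.u.data;

    memset(&sub, 0, sizeof(sub));
    sub.type = UVC_EVENT_SETUP;
    ioctl(fd, VIDIOC_SUBSCRIBE_EVENT, &sub);

    for (;;) {
        /* Blocks until an event is queued (on a blocking fd). */
        if (ioctl(fd, VIDIOC_DQEVENT, &ev) < 0)
            break;
        if (ev.type == UVC_EVENT_SETUP)
            handle_setup(fd, &uvc_ev->req);
    }
}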

Windows compatibility with the Sony Camera Remote API

In the "How to develop an app using the Camera Remote API" toturial it states "The Camera Remote API uses JSON-RPC over HTTP. You can therefore use the Camera Remote APIs with any operating system, such as Android, IOS or Microsoft® Windows®." This stands to reason since the protocols are platform-agnostic. However, in the camera compatibility chart on this page:http://developer.sony.com/develop/cameras/ it states that the Sony Smart Remote Control App must be installed in order to "enable the use of the APIs." Since that app is only iOS and Android, does that mean that the APIs cannot be used on Windows?
I am keenly interested in developing a remote control app for Windows 8 tablets, and then for the Windows 8 phone. But if I cannot control the A5000, A7R, A7, NEX-6, NEX-5R, or NEX-5T, then it becomes far less interesting.
Is it possible to control those cameras with the plain HTTP JSON communication?
Thank you
I don't know if you have solved your problem, but I had the same issue and managed to make it work somehow with C++. It took me some time to figure out what I had to do; I had never done any HTTP work, let alone developed plug-and-play drivers, so I will explain how I did it step by step, the way I wish it had been explained to me.
At the end of the message I have given a link to my entire file, feel free to try it.
I am using the Boost.Asio library for everything network related, and more (everything asynchronous, really; it is a great library but very hard to grasp for ignorant people like me...). Most of my functions are partially copy-pasted from the examples in the documentation, which explains why my code is awkward in places. Here is my main function; nothing fancy: I instantiate an asio::io_service, create my object (which I wrongly named multicast_manager) and then run the service:
#include <bunch_of_stuff>

using namespace std;
namespace basio = boost::asio;

int main(int argc, char* argv[]) {
    try {
        basio::io_service io_service;
        multicast_manager m(io_service, basio::ip::address::from_string("239.255.255.250"));
        io_service.run();
        m.parse_description();
        m.start_liveview();
        io_service.reset();
        io_service.run();
        m.get_live_image();
        io_service.reset();
        io_service.run();
    } catch (const std::exception& e) {
        std::cerr << "Exception: " << e.what() << "\n";
    }
    return 0;
}
Discovering the camera over SSDP
First, we have to connect to the camera using its UPnP (Universal Plug and Play) feature. The principle is that every UPnP device listens on the multicast address 239.255.255.250:1900 for M-SEARCH requests. It means that if you send the proper message to this address, the device will answer, telling you it exists and giving you the information you need to use it. The proper message is given in the documentation. I ran into two pitfalls doing this: first, I omitted the final newline at the end of my message, which the HTTP standard requires. So the message you want to send can be built like this:
multicast_manager(basio::io_service& io_service, const basio::ip::address& multicast_address)
    : endpoint_(multicast_address, 1900),
      socket_(io_service, endpoint_.protocol())
{
    stringstream os;
    os << "M-SEARCH * HTTP/1.1\r\n";
    os << "HOST: 239.255.255.250:1900\r\n";
    os << "MAN: \"ssdp:discover\"\r\n";
    os << "MX: 4\r\n";
    os << "ST: urn:schemas-sony-com:service:ScalarWebAPI:1\r\n";
    os << "\r\n";
    message_ = os.str();
    // ...
The second important thing in this part is to check that the message goes out on the right network interface. In my case it went out through my Ethernet card, even when that card was disabled, until I set the right option on the socket; I solved the issue with the following code:
    // ...
    socket_.set_option(basio::ip::multicast::outbound_interface(
        basio::ip::address_v4::from_string("10.0.1.1")));
    socket_.async_send_to(
        basio::buffer(message_), endpoint_,
        boost::bind(&multicast_manager::handle_send_to, this,
                    basio::placeholders::error));
}
Now we listen. Listen where, you might ask, if you are like me? What port, what address? Well, we don't care. When we sent our message, we defined a destination IP and port (in the endpoint constructor). We didn't need to define any local address: it is our own IP address (as a matter of fact, we did set it, but only so the socket would know which network interface to use); and we didn't define any local port: one is chosen automatically (by the OS, I guess). Anyway, the important part is that anyone listening on the multicast group will get our message, know its source, and respond directly to the correct IP and port. So there is no need to specify anything here and no need to create a new socket; we just listen on the same socket from which we sent our message in a bottle:
void handle_send_to(const boost::system::error_code& error)
{
    if (!error) {
        socket_.async_receive(basio::buffer(data_),
            boost::bind(&multicast_manager::handle_read_header, this,
                        basio::placeholders::error,
                        basio::placeholders::bytes_transferred));
    }
}
If everything goes right, the answer goes along the lines of:
HTTP/1.1 200 OK
CACHE-CONTROL: max-age=1800
EXT:
LOCATION: http://10.0.0.1:64321/DmsRmtDesc.xml
SERVER: UPnP/1.0 SonyImagingDevice/1.0
ST: urn:schemas-sony-com:service:ScalarWebAPI:1
USN: uuid:00000000-0005-0010-8000-10a5d09bbeda::urn:schemas-sony-com:service:ScalarWebAPI:1
X-AV-Physical-Unit-Info: pa=""; pl=;
X-AV-Server-Info: av=5.0; hn=""; cn="Sony Corporation"; mn="SonyImagingDevice"; mv="1.0";
To parse this message, I reused the parsing from the boost HTTP client example, except that I did it in one go because for some reason I couldn't do an async_read_until on a UDP socket. Anyway, the important part is that the camera received our message. The other important part is the location of the description file, DmsRmtDesc.xml.
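As a small illustration (this helper is not part of the original code), pulling the LOCATION header out of the raw SSDP response text can be as simple as:

#include <sstream>
#include <string>

std::string extract_location(const std::string& ssdp_response)
{
    std::istringstream is(ssdp_response);
    std::string line;
    while (std::getline(is, line)) {
        // The camera replies with an upper-case "LOCATION:" header.
        if (line.compare(0, 9, "LOCATION:") == 0) {
            std::string url = line.substr(9);
            // Trim the leading space and the trailing CR.
            url.erase(0, url.find_first_not_of(" \t"));
            url.erase(url.find_last_not_of("\r\n") + 1);
            return url;  // e.g. "http://10.0.0.1:64321/DmsRmtDesc.xml"
        }
    }
    return {};
}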
Retrieving and reading the description file
We need to get DmsRmtDesc.xml. This time we send a GET request directly to the camera, at the IP address and port specified. The request is something like:
GET /DmsRmtDesc.xml HTTP/1.1
Host: 10.0.0.1
Accept: */*
Connection: close
Don't forget the extra empty line. The Connection: close line tells the server to close the connection once it has sent its response. The Accept line specifies which content types you accept in the answer; here we take anything. I got the file using the boost HTTP client example: basically I open a socket to 10.0.0.1:64321 and receive the HTTP header, followed by the content of the file. Now we have an XML file with the address of the web service we want to use. Let's parse it using boost again; we want to retrieve the camera service address, and maybe the liveview stream address:
namespace bpt = boost::property_tree;

bpt::ptree pt;
bpt::read_xml(content, pt);
liveview_url = pt.get<string>("root.device.av:X_ScalarWebAPI_DeviceInfo.av:X_ScalarWebAPI_ImagingDevice.av:X_ScalarWebAPI_LiveView_URL");
for (bpt::ptree::value_type &v : pt.get_child("root.device.av:X_ScalarWebAPI_DeviceInfo.av:X_ScalarWebAPI_ServiceList")) {
    string service = v.second.get<string>("av:X_ScalarWebAPI_ServiceType");
    if (service == "camera")
        camera_service_url = v.second.get<string>("av:X_ScalarWebAPI_ActionList_URL");
}
Once this is done, we can start sending actual commands to the camera, and using the API.
Sending a command to the camera
The idea is quite simple: we build our command using the JSON format given in the documentation and send it with a POST HTTP request to the camera service. We will launch liveview mode, so we send our POST request (we will eventually want to use boost property_tree to build the JSON string; here I did it manually):
POST /sony/camera HTTP/1.1
Accept: application/json-rpc
Content-Length: 70
Content-Type: application/json-rpc
Host:http://10.0.0.1:10000/sony
{"method": "startLiveview","params" : [],"id" : 1,"version" : "1.0"}
We send it to 10.0.0.1:10000 and wait for the answer:
HTTP/1.1 200 OK
Connection: close
Content-Length: 119
Content-Type: application/json
{"id":1,"result":["http://10.0.0.1:60152/liveview.JPG?%211234%21http%2dget%3a%2a%3aimage%2fjpeg%3a%2a%21%21%21%21%21"]}
We get the liveview URL a second time; I don't know which one is better, as they are identical...
Anyway, now we know how to send a command to the camera and retrieve its answer; we still have to fetch the image stream.
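As a small illustration of the request above (not taken from the original code; the function name is mine), the request string can be assembled like this. The important detail is that Content-Length must match the JSON body exactly:

#include <sstream>
#include <string>

std::string build_startliveview_request(const std::string& host)
{
    // JSON-RPC body exactly as in the documentation example.
    std::string body =
        "{\"method\": \"startLiveview\",\"params\" : [],\"id\" : 1,\"version\" : \"1.0\"}";

    std::ostringstream os;
    os << "POST /sony/camera HTTP/1.1\r\n";
    os << "Host: " << host << "\r\n";
    os << "Accept: application/json-rpc\r\n";
    os << "Content-Type: application/json-rpc\r\n";
    os << "Content-Length: " << body.size() << "\r\n";
    os << "\r\n";
    os << body;
    return os.str();
}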
Fetching an image from the liveview stream
We have the liveview URL, and we have the specification in the API reference guide. First things first, we ask the camera to send us the stream, so we send a GET request to 10.0.0.1:60152:
GET /liveview.JPG?%211234%21http%2dget%3a%2a%3aimage%2fjpeg%3a%2a%21%21%21%21%21 HTTP/1.1
Accept: image/jpeg
Host: 10.0.0.1
And we wait for the answer, which should not take long. The answer begins with the usual HTTP header:
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Pragma: no-cache
CACHE-CONTROL: no-cache
Content-Type: image/jpeg
transferMode.dlna.org: Interactive
Connection: Keep-Alive
Date: Wed, 09 Jul 2014 14:13:13 GMT
Server: UPnP/1.0 SonyImagingDevice/1.0
According to the documentation, this should be directly followed by the liveview data stream, which in theory consists of:
8 bytes of common header specifying if we are indeed in liveview mode.
128 bytes of payload data giving the size of the jpg data.
n bytes of jpeg data.
And then we get the common header again, indefinitely until we close the socket.
In my case, the common header started with "88\r\n", so I had to discard it, and the JPEG data was followed by 10 extra bytes before switching to the next frame, so I had to take that into account. I also had to detect the start of the JPEG image automatically, because the JPEG data started with some text containing a number whose meaning I don't know. Most probably these quirks come from something I did wrong or did not understand; in hindsight they look like the chunked transfer encoding announced in the response header ("88\r\n" would be a chunk-size line, 0x88 being 136 = 8 + 128 bytes, and the extra bytes around the JPEG data further chunk delimiters).
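For reference, here is a rough sketch of pulling one frame of the documented layout apart, assuming the chunk framing has already been stripped. The offsets are taken from the Camera Remote API reference, and read_exact stands in for whatever blocking read loop you use (boost::asio::read, recv in a loop, ...):

#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

using ReadExact = std::function<std::vector<uint8_t>(size_t)>;

std::vector<uint8_t> read_one_jpeg_frame(const ReadExact& read_exact)
{
    // 1. Common header: 8 bytes (start byte, payload type, sequence number, timestamp).
    std::vector<uint8_t> common = read_exact(8);

    // 2. Payload header: 128 bytes. Bytes 4..6 hold the JPEG size as a
    //    big-endian 24-bit integer, byte 7 the padding size (per the reference).
    std::vector<uint8_t> payload = read_exact(128);
    size_t jpeg_size = (size_t(payload[4]) << 16) |
                       (size_t(payload[5]) << 8)  |
                        size_t(payload[6]);
    size_t padding   = payload[7];

    // 3. The JPEG data itself, then discard the padding bytes.
    std::vector<uint8_t> jpeg = read_exact(jpeg_size);
    read_exact(padding);
    return jpeg;
}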
My code works right now, but the last bits are very ad hoc and definitely need better checking.
It also needs a lot of refactoring to be usable, but it shows how each step works, I guess...
Here is the entire file if you want to try it out.
And here is a working VS project on github.
Thank you for your inquiry.
On the A5000, A7R, A7, NEX-6, NEX-5T and NEX-5R cameras, install the app below.
https://www.playmemoriescameraapps.com/portal/usbdetail.php?eid=IS9104-NPIA09014_00-F00002
This app has to be installed in the camera itself and started.
You can then use the Camera Remote API to control the above cameras from any OS.

Libevent does not echo properly when there is a delay

Based on the following code, I built a version of an echo server, but with a threaded delay. This was built because I've noticed that upon initial connection, my first send is sent back to the client, but the client does not receive it until a second send. My real-world use case is that I need to send messages to the server, do a lot of processing, and then send the result back... say 10-30 seconds later (could be hours in some cases).
http://www.wangafu.net/~nickm/libevent-book/Ref8_listener.html
So here is my code. For brevity's sake, I have only included the libevent-related code, not the threading code or other stuff. When debugging, a new connection is set up, the string buffer is filled properly, and the writes appear to succeed.
http://pastebin.com/g02S2RTi
But I only receive the echo of the send before last. To validate this I send numbers from the client: when I send a 1, I receive nothing back from the server, even though the server is definitely writing to the buffer using evbuffer_add (I have also tried bufferevent_write_buffer).
When I then send a 2 from the client, I receive the 1 from the previous send. It's like my writes are being cached... I have turned off Nagle.
So, my question is: Does libevent cache sends using the following method?
evbuffer_add( outputBuffer, buffer, length );
Is there a way to flush this cache? Is there some other method to mark the cache as finished or complete? Can I force a send? It never sends on its own, even when I put in delays. Replacing evbuffer_add with "send" works perfectly every time.
Most likely you are affected by the Nagle algorithm: basically, it buffers outgoing data before sending it to the network. Take a look at this article: TCP/IP options for high-performance data transmission.
Here is an example of how to disable that buffering:
int flag = 1;
int result = setsockopt(sock,          /* socket affected */
                        IPPROTO_TCP,   /* set option at TCP level */
                        TCP_NODELAY,   /* name of option */
                        (char *)&flag, /* the cast is historical cruft */
                        sizeof(int));  /* length of option value */
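If you are using bufferevents, as in the linked example, the same option can be applied to the descriptor behind the bufferevent; here is a minimal sketch, assuming the bufferevent already has a socket attached:

#include <event2/bufferevent.h>
#include <event2/util.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static void disable_nagle(struct bufferevent *bev)
{
    /* bufferevent_getfd() returns the underlying descriptor, or -1 if none yet. */
    evutil_socket_t fd = bufferevent_getfd(bev);
    int flag = 1;

    if (fd >= 0)
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY,
                   (const char *)&flag, sizeof(flag));
}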

APNs error handling in Ruby

I want to send notifications to Apple devices in batches (1,000 device tokens per batch, for example). And it seems that I can't know for sure that a message was delivered to APNs.
Here is the code sample:
ssl_connection(bundle_id) do |ssl, socket|
  device_tokens.each do |device_token|
    ssl.write(apn_message_for device_token)
    # I can check if there is an error response from APNs
    response_has_an_error = IO.select([socket], nil, nil, 0) != nil
    # ...
  end
end
The main problem is that if the network goes down after the ssl_connection is established,
ssl.write(...)
will never raise an error. Is there any way to check that the connection still works?
The second problem is the delay between ssl.write and a ready error answer from APNs. I can pass a timeout parameter to IO.select after the last message is sent. Maybe it's OK to wait a few seconds for a batch of 1,000, but what if I have to send 1,000 messages for different bundle_ids?
At https://zeropush.com, we use a gem named grocer to handle our communication with Apple, and we had a similar problem. The solution we found was to use the socket's read_nonblock method before each write to check for incoming data on the socket, which would indicate an error.
It makes the logic a bit funny because read_nonblock raises IO::WaitReadable if there is no data to read. So we call read_nonblock and rescue IO::WaitReadable before continuing as normal; in our case, catching the exception is the happy path. You may be able to use a similar approach rather than using IO.select(...).
One issue to be aware of is that Apple may not respond immediately, and any notifications sent between a failing notification and the read from the socket will be lost.
You can see the code we are using in production at https://github.com/SymmetricInfinity/grocer/blob/master/lib/grocer/connection.rb#L30.
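A minimal sketch of that approach, assuming ssl is the OpenSSL socket you write notifications to and reusing device_tokens and apn_message_for from the question; the 6-byte layout is the APNs binary error response, and this is illustrative rather than the exact grocer code:

def check_for_error(ssl)
  error_bytes = ssl.read_nonblock(6) # APNs error responses are 6 bytes
  command, status, identifier = error_bytes.unpack('CCN')
  { command: command, status: status, identifier: identifier }
rescue IO::WaitReadable
  # No data waiting: the happy path, nothing has been rejected so far.
  nil
end

device_tokens.each do |device_token|
  if (error = check_for_error(ssl))
    # Notifications written after the failing one were dropped by Apple;
    # resend everything after error[:identifier].
    break
  end
  ssl.write(apn_message_for device_token)
end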
