I'm using the oscP5 library in Processing. I've already looked in the javadoc for oscP5 and I've browsed through the source but I can't figure it out.
When I get debug info like this:
### new Client # netP5.TcpClient#2515
What does the value 2515 represent? I know it is not the port the client is using. Is it a unique id for the client? Is it a variable I can access in the TcpClient class?
Thanks.
It is the TcpClient object's default String representation, which ends in the object's hash code (on most JVMs derived from its address in memory). You can find the relevant source code in
src/netP5/AbstractTcpServer.java
TcpClient t = new TcpClient(this, _myServerSocket.accept(),
_myTcpPacketListener, _myPort, _myMode);
if (NetP5.DEBUG) {
System.out.println("### new Client # " + t);
}
This means that the number is part of the String representation of the TcpClient. Since TcpClient does not override toString(), you get the default behaviour: the class name plus the object's hash code. You can access this TcpClient object and its members as shown in the following example. For simplicity, I assume here that we look at the first object in the clients list.
if (oscP5tcpServer.tcpServer().getClients().length > 0) {
TcpClient tcpClient = (TcpClient) oscP5tcpServer.tcpServer().getClient(0);
print(tcpClient); // default representation - same as your printed output
print(tcpClient.netAddress()); // string with "ip:port"
print(tcpClient.socket()); // Socket object
}
Note that most of the interesting information lives in the base class AbstractTcpClient (as shown in the example).
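The default representation can be reproduced with any plain Java class, independent of oscP5. The following standalone sketch (the `Client` class here is made up for illustration) shows where the number after the class name comes from:

```java
public class DefaultToStringDemo {
    // A plain class that, like TcpClient, does not override toString()
    static class Client {}

    public static void main(String[] args) {
        Client c = new Client();
        // Default Object.toString(): class name + "@" + hex hash code
        System.out.println(c);
        // The same number, computed explicitly
        System.out.println(c.getClass().getName() + "@"
                + Integer.toHexString(c.hashCode()));
    }
}
```

Both lines print the same string, which is all the debug output in the question contains: an identifier of the object instance, not a port or a user-assigned id.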
I'm implementing a policy enforcement point between a client and a server. It receives messages from the server and, if the client doesn't have adequate authorization to see some parts of a message, deletes those parts before forwarding it to the client.
message {
string not_sensitive = 1;
optional string sensitive = 2;
}
Pseudocode:
from_server >> my_msg;
if (!authorized) {
my_msg.delete("sensitive");
}
to_client << my_msg;
Yes.
As I understand the current v3 protobuf schema language, all fields are optional. But regardless of that, a field marked optional in v2 is something that need not be present. Expanding your pseudocode to, say, C++ (see here), one can see that the generated class ends up with a has_sensitive() method and a clear_sensitive() method. Calling the latter and then serialising the object results in wire-format data that omits the sensitive field.
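The "omits the field" part is visible on the wire itself. The sketch below is not the protobuf API; it is a hand-rolled encoder for short string fields (under 128 bytes), written only to illustrate that an absent/cleared field contributes no bytes at all to the serialized message. Field numbers follow the schema in the question:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class WireOmissionDemo {
    // Minimal length-delimited encoder for short (<128 byte) strings.
    // Wire key = (fieldNumber << 3) | wireType; wireType 2 = length-delimited.
    static void writeStringField(ByteArrayOutputStream out, int fieldNumber, String value) {
        if (value == null) return; // a cleared/absent field emits nothing at all
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        out.write((fieldNumber << 3) | 2); // key byte
        out.write(bytes.length);           // length byte
        out.write(bytes, 0, bytes.length); // payload
    }

    public static void main(String[] args) {
        ByteArrayOutputStream full = new ByteArrayOutputStream();
        writeStringField(full, 1, "public"); // not_sensitive = 1
        writeStringField(full, 2, "secret"); // sensitive = 2

        ByteArrayOutputStream redacted = new ByteArrayOutputStream();
        writeStringField(redacted, 1, "public");
        writeStringField(redacted, 2, null); // as after clear_sensitive()

        System.out.println("full:     " + full.size() + " bytes"); // 16 bytes
        System.out.println("redacted: " + redacted.size() + " bytes"); // 8 bytes
    }
}
```

The redacted message is simply shorter; there is no "deleted" marker, which is exactly why a client without the schema's sensitive field sees nothing amiss.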
For example, if I changed from:
message Request {
int32 foo = 1;
}
to
message Request {
int32 bar = 1;
int32 foo = 2;
}
Is it safe to change foo from 1 to 2? The docs say not to ("These numbers are used to identify your fields in the message binary format, and should not be changed once your message type is in use."), but I'd like to know why.
If you have a message serialized with the first version of the schema, you will not be able to deserialize it correctly with the second version: on the wire, fields are identified by their numbers, not their names, so bytes that were written for foo (field 1) would now be read back as bar.
If you use protobuf to generate a model that is stored in a DB or published to a broker like Apache Kafka, you need to follow this convention. If you use protocol buffers only to generate models and services for online usage, and you regenerate all the models on both sides, you should not break anything.
See also the reserved keyword, which lets you make sure an old number is not reused. Further reading also here.
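The "why" is visible in the encoding itself: every value on the wire is preceded by a key derived from the field number, so renumbering changes which field old bytes are attributed to. This hand-computed sketch (not the protobuf API) shows the key bytes involved:

```java
public class FieldTagDemo {
    // The wire key for a field is (fieldNumber << 3) | wireType;
    // wireType 0 = varint, the encoding used for int32.
    static int key(int fieldNumber, int wireType) {
        return (fieldNumber << 3) | wireType;
    }

    public static void main(String[] args) {
        // Old schema: foo = 1 -> every encoded foo starts with key byte 0x08
        System.out.printf("foo as field 1: 0x%02X%n", key(1, 0));
        // New schema: foo = 2 -> foo is now written with key byte 0x10,
        // while old data tagged 0x08 is silently decoded as bar
        System.out.printf("foo as field 2: 0x%02X%n", key(2, 0));
    }
}
```

Old messages still parse without error after the renumbering, which is the dangerous part: foo's stored values quietly show up in bar.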
I am using Akka in a Play controller and performing ask() on an actor named publish; internally, the publish actor performs ask on multiple actors and passes along the sender reference. The controller needs to wait for the responses from multiple actors and build a list of them.
Please find the code below, but this code only waits for one response and then terminates. Please suggest a fix.
// Performs ask to publish actor
Source<Object,NotUsed> inAsk = Source.fromFuture(ask(publishActor,service.getOfferVerifyRequest(request).getPayloadData(),1000));
final Sink<String, CompletionStage<String>> sink = Sink.head();
final Flow<Object, String, NotUsed> f3 = Flow.of(Object.class).map(elem -> {
log.info("Data in Graph is " +elem.toString());
return elem.toString();
});
RunnableGraph<CompletionStage<String>> result = RunnableGraph.fromGraph(
GraphDSL.create(
sink , (builder , out) ->{
final Outlet<Object> source = builder.add(inAsk).out();
builder
.from(source)
.via(builder.add(f3))
.to(out); // to() expects a SinkShape
return ClosedShape.getInstance();
}
));
ActorMaterializer mat = ActorMaterializer.create(aSystem);
CompletionStage<String> fin = result.run(mat);
fin.toCompletableFuture().thenApply(a->{
log.info("Data is "+a);
return true;
});
log.info("COMPLETED CONTROLLER ");
If you have several responses, ask won't cut it; it is only for a single request-response, where the response ends up in a Future/CompletionStage.
There are a few different strategies to wait for all answers:
One is to create an intermediate actor whose only job is to collect all answers and then, when all partial responses have arrived, respond to the original requester; that way you can still use ask to get a single aggregate response back.
Another option would be to use Source.actorRef to get an ActorRef that you could use as the sender together with tell (and skip using ask). Inside the stream you would then take elements until some criterion is met (time has passed or enough elements have been seen). You may have to add an operator to mimic the ask response timeout, to make sure the stream fails if the actor never responds.
There are some other issues with the code shared. One is creating a materializer on each request: these have a lifecycle and will fill up your heap over time; you should instead have a materializer injected by Play.
With the given logic there is no need whatsoever to use the GraphDSL; that is only needed for complex graphs with multiple inputs and outputs or cycles. You should be able to compose operators using the Flow API alone (see for example https://doc.akka.io/docs/akka/current/stream/stream-flows-and-basics.html#defining-and-running-streams ).
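The collect-all-then-reply idea behind the intermediate actor can be sketched without Akka at all. The plain-Java analogue below (all names are made up for illustration) gathers a fixed number of partial responses into one list before completing a single future, which is exactly what the collecting actor would hand back to the controller:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class CollectAllDemo {
    // Stand-in for asking one downstream actor; in Akka this would be ask(...)
    static CompletableFuture<String> askOne(int i) {
        return CompletableFuture.supplyAsync(() -> "response-" + i);
    }

    // Wait until all partial responses have arrived, then complete once
    // with the full list - the role of the intermediate collector actor
    static CompletableFuture<List<String>> askAll(int n) {
        List<CompletableFuture<String>> parts = IntStream.range(0, n)
                .mapToObj(CollectAllDemo::askOne)
                .collect(Collectors.toList());
        return CompletableFuture.allOf(parts.toArray(new CompletableFuture[0]))
                .thenApply(ignored -> parts.stream()
                        .map(CompletableFuture::join) // all done; join() won't block
                        .collect(Collectors.toList()));
    }

    public static void main(String[] args) {
        System.out.println(askAll(3).join());
    }
}
```

In the actor version, "all have arrived" would be tracked by counting replies (plus a receive timeout), but the aggregation shape is the same.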
I have been playing around lately with GRPC and Protocol Buffers in order to get familiar with both frameworks in C++.
I wanted to experiment with the reflection functionality, so I have set up a very simple service where the (reflection-enabled) server exposes the following interface file:
syntax = "proto3";
package helloworld;
service Server {
rpc Add (AddRequest) returns (AddReply) {}
}
message AddRequest {
int32 arg1 = 1;
int32 arg2 = 2;
}
message AddReply {
int32 sum = 1;
}
On the client side I have visibility of the previous method thanks to the grpc::ProtoReflectionDescriptorDatabase. Therefore, I am able to create a message by means of a DynamicMessageFactory. However, I haven't been able to actually send the message to the server, nor find any specific details in the documentation. Maybe it's too obvious and I'm completely lost...
Any hints will be deeply appreciated!
using namespace google::protobuf;
void demo()
{
std::shared_ptr<grpc::Channel> channel = grpc::CreateChannel("localhost:50051", grpc::InsecureChannelCredentials());
// Inspect exposed method
grpc::ProtoReflectionDescriptorDatabase reflection_database(channel);
std::vector<std::string> output;
reflection_database.GetServices(&output);
DescriptorPool reflection_database_pool(&reflection_database);
const ServiceDescriptor* service = reflection_database_pool.FindServiceByName(output[0]);
const MethodDescriptor* method = service->method(0);
// Create request message
const Descriptor* input_descriptor = method->input_type();
FileDescriptorProto input_proto;
input_descriptor->file()->CopyTo(&input_proto);
DescriptorPool pool;
const FileDescriptor* input_file_descriptor = pool.BuildFile(input_proto);
const Descriptor* input_message_descriptor = input_file_descriptor->FindMessageTypeByName(input_descriptor->name());
DynamicMessageFactory factory;
Message* request = factory.GetPrototype(input_message_descriptor)->New();
// Fill request message (sum 1 plus 2)
const Reflection* reflection = request->GetReflection();
// Use the descriptor the message was actually built from
// (input_message_descriptor), not the one from the reflection database pool
const FieldDescriptor* field1 = input_message_descriptor->field(0);
reflection->SetInt32(request, field1, 1);
const FieldDescriptor* field2 = input_message_descriptor->field(1);
reflection->SetInt32(request, field2, 2);
// Create response message
const Descriptor* output_descriptor = method->output_type();
FileDescriptorProto output_proto;
output_descriptor->file()->CopyTo(&output_proto);
const FileDescriptor* output_file_descriptor = pool.BuildFile(output_proto);
const Descriptor* output_message_descriptor = output_file_descriptor->FindMessageTypeByName(output_descriptor->name());
Message* response = factory.GetPrototype(output_message_descriptor)->New();
// How to create a call...?
// ...is grpc::BlockingUnaryCall the way to proceed?
}
It's been a few years, but since you didn't get an answer, I'll make an attempt. You also didn't tag your question with a specific language, but it looks like you are using C++. I can't provide a solution for C++, but I can for JVM languages.
First of all, the following is taken from an open-source library I'm developing called okgrpc. It's the first of its kind attempt to create a dynamic gRPC client/CLI in Java.
Here are the general steps to make a call using DynamicMessage:
Get all the DescriptorProtos.FileDescriptorProto for the service you want to call using gRPC reflection.
Create indices for all types and methods in that service.
Find the Descriptors.MethodDescriptor corresponding to the method you want to call.
Convert your input to DynamicMessage. How to do this will depend on the input, of course. If JSON string, you can use the JsonFormat class.
Build an io.grpc.MethodDescriptor with method name, type (unary etc), request and response marshallers. You'll need to write your own DynamicMessage marshaller.
Use ClientCalls API to execute the RPC.
Obviously, the devil is in the details. If using Java, you can use my library, and let it be my problem. If using another language, good luck.
I would like to write different types of messages to a chronicle-queue, and process messages in consumers depending on their types.
How can I do that?
Chronicle Queue provides low-level building blocks you can use to write any kind of message, so it is up to you to choose the right data structure.
For example, you can prefix the data you write to a chronicle with a small header carrying some metadata, and then use that header as a discriminator for processing.
To achieve this I use Wire:
try (DocumentContext dc = appender.writingDocument())
{
final Wire wire = dc.wire();
final ValueOut valueOut = wire.getValueOut();
valueOut.typePrefix(m.getClass());
valueOut.marshallable(m);
}
When reading back:
try (DocumentContext dc = tailer.readingDocument())
{
final Wire wire = dc.wire();
final ValueIn valueIn = wire.getValueIn();
final Class clazz = valueIn.typePrefix();
// msgPool is a preallocated map containing the messages I read
final ReadMarshallable readObject = msgPool.get(clazz);
valueIn.readMarshallable(readObject);
// readObject can now be used
}
You can also write/read a generic object. This will be slightly slower than using your own scheme, but it is a simple way to always read back the type you wrote.