protoc command not generating all base classes (java) - protocol-buffers

I have been trying to generate the basic gRPC client and server interfaces from a .proto service definition in the official gRPC repo.
The relevant service defined in that file (from the link above) is below:
service RouteGuide {
  rpc GetFeature(Point) returns (Feature) {}
  rpc ListFeatures(Rectangle) returns (stream Feature) {}
  rpc RecordRoute(stream Point) returns (RouteSummary) {}
  rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
}
The command I run is protoc --java_out=${OUTPUT_DIR} path/to/proto/file
According to the gRPC site, the command above is supposed to generate a RouteGuideGrpc.java file containing a base class, RouteGuideGrpc.RouteGuideImplBase, with all the methods defined in the RouteGuide service; however, that file does not get generated for me.
Has anyone faced similar issues? Is the official documentation simply incorrect? And does anyone have a suggestion as to how I can generate that missing class?

This may help someone else in the future so I'll answer my own question.
I believe the Java documentation for gRPC code generation is not fully up to date, and the information is scattered across different official repositories.
It turns out that in order to generate all the gRPC Java service base classes as expected, you need to pass an additional flag to the protoc CLI: --grpc-java_out=${OUTPUT_DIR}. But for that additional flag to work, you need a few extra things:
The protoc plugin binary for gRPC Java, protoc-gen-grpc-java: you can get the relevant one for your system from Maven Central (the linked artifact is v1.17.1). If there isn't a prebuilt binary available for your system, you can compile one yourself by following the instructions in the grpc-java GitHub repo.
Make sure the binary's location is added to your PATH environment variable and that the binary is named exactly "protoc-gen-grpc-java" (that is the name the protoc CLI expects to find on the path).
Finally, you are ready to run the full command protoc --java_out=${OUTPUT_DIR} --grpc-java_out=${OUTPUT_DIR} path/to/proto/file, and the service base classes such as RouteGuideGrpc.RouteGuideImplBase, which were previously missing, should now be generated.
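For reference, once generation succeeds, a service implementation typically extends the generated base class. Here is a minimal sketch; the class and method names follow the standard grpc-java codegen for the RouteGuide proto, but treat the exact signatures as an assumption and check your own generated sources:
import io.grpc.stub.StreamObserver;

// Sketch only: assumes RouteGuideGrpc, Point and Feature were generated by
// protoc with --java_out and --grpc-java_out as described above.
public class RouteGuideService extends RouteGuideGrpc.RouteGuideImplBase {

    @Override
    public void getFeature(Point request, StreamObserver<Feature> responseObserver) {
        // Unary call: build a response and complete the stream.
        Feature feature = Feature.newBuilder()
                .setName("example feature")
                .setLocation(request)
                .build();
        responseObserver.onNext(feature);
        responseObserver.onCompleted();
    }
}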
I hope this explanation helps someone else out in the future.

Thank you very much for this investigation. Indeed, the doc is incomplete, and people use Maven to compile everything without understanding how it really works.

Is systemProp.https.nonProxyHosts a valid setting for Gradle?

I ran into some issues recently configuring the proxy settings for a Gradle project... specifically with the systemProp.https.nonProxyHosts.
Going through the Gradle documentation, this setting is valid: systemProp.https.nonProxyHosts = *.nonproxyrepos.com|localhost. But the Networking Properties section of the Java SE documentation states that https.nonProxyHosts does not exist and that the non-HTTPS sibling (http.nonProxyHosts) is used instead.
Bottom line: systemProp.https.nonProxyHosts didn't work for me. As soon as I deleted that line, the script worked (I tested several times, toggling it on/off).
So can anyone confirm whether the Gradle documentation is outdated, or whether Gradle does indeed use it and map it internally but the version I was using (Gradle 6.1.1) had a bug in that regard? Is there a task (or something similar by default) to test/visualize whether Gradle is indeed connecting via HTTP or HTTPS, so I can corroborate that it's working as intended?
It's really hard to tell in retrospect what the issue was in your case.
but the Networking Properties section of the Java SE documentation states that https.nonProxyHosts does not exist and that the non-HTTPS sibling is used instead.
First and foremost, this is true. For reference, the DefaultProxySelector in JDK 8 reads
static class NonProxyInfo {
    static final String defStringVal = "localhost|127.*|[::1]|0.0.0.0|[::0]";
    static NonProxyInfo httpNonProxyInfo = new NonProxyInfo("http.nonProxyHosts", null, null, defStringVal);
}

public java.util.List<Proxy> select(URI uri) {
    // ....
    } else if ("https".equalsIgnoreCase(protocol)) {
        // HTTPS uses the same property as HTTP, for backward
        // compatibility
        pinfo = NonProxyInfo.httpNonProxyInfo;
    }
    // ....
}
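If you want to observe this fallback yourself, a small check against the JDK's default ProxySelector will show it. This is just a sketch; the proxy host and URIs below are made-up values:
import java.net.ProxySelector;
import java.net.URI;

public class NonProxyCheck {
    public static void main(String[] args) {
        // Run with, for example:
        //   java -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=8080 \
        //        -Dhttp.nonProxyHosts='*.nonproxyrepos.com|localhost' NonProxyCheck
        // The first URI should print [DIRECT] because http.nonProxyHosts is also
        // applied to https connections; the second should print the configured proxy.
        System.out.println(ProxySelector.getDefault()
                .select(URI.create("https://repo.nonproxyrepos.com")));
        System.out.println(ProxySelector.getDefault()
                .select(URI.create("https://example.org")));
    }
}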
So can anyone confirm whether the Gradle documentation is outdated, or whether Gradle does indeed use it and map it internally but the version I was using (Gradle 6.1.1) had a bug in that regard?
Gradle indeed reads https.nonProxyHosts and maps it internally; see JavaSystemPropertiesSecureHttpProxySettings for more details. I'm not aware of any bugs, and the commit log's last relevant change was from 2014. The property is used, for example, when resolving dependencies from remote Maven repositories.
Is there a task (or something similar by default) to test/visualize whether Gradle is indeed connecting via HTTP or HTTPS, so I can corroborate that it's working as intended?
I'm not aware of any such task, be it built-in or contributed by third-party plugins. You may get more details on what's going on by inspecting the --info and --debug output.

How to configure a BGP-LS peer

I have brought up an OpenDaylight instance in order to establish a BGP-LS session with our network and get topology information.
So the objective is to configure one peering with a router and get node, link, and network information.
I have concluded that the documentation is more accurate over here: https://github.com/opendaylight/docs/blob/master/docs/user-guide/bgpcep-guide/bgp/bgp-user-guide-linkstate-family.rst
I have done this step https://github.com/opendaylight/docs/blob/master/docs/user-guide/bgpcep-guide/bgp/bgp-user-guide-running-bgp.rst and it works.
Unfortunately, there are no such files as described here: https://github.com/opendaylight/docs/blob/stable/lithium/manuals/user-guide/src/main/asciidoc/bgpcep/odl-bgpcep-bgp-all-user.adoc
Meaning, I cannot locate 31-bgp.xml and 41-bgp-example.xml.
I would like to ask whether .xml files exist (or can somehow be derived) that fully describe all the available parameters, so that I can configure OpenDaylight using them.
The examples mentioned in the first link target only a few specific parameters and are far from a complete example.
Could you please advise how to get the necessary BGP-LS configuration .xml files that fully configure ODL as a BGP speaker and a peering?
Many Thanks.
These files have been removed as part of task BGPCEP-685.
You can still find them in previous releases.
There is also a wiki page about configuring a BGP peer.

Idiomatic approach to a Go plugin-based system

I have a Go project I would like to open source, but there are certain elements which are not suitable for OSS, e.g. company-specific logic.
I have conceived of the following approach:
Interfaces are defined in the core repository.
Plugins can then be standalone repositories whose types implement the interfaces defined in core. This allows the plugins to be housed in completely separate modules and therefore have their own CI jobs etc.
Plugins are compiled into the final binary via symlinks.
This would result in a directory structure something like the following:
|- $GOPATH
   |- src
      |- github.com
         |- jabclab
         |  |- core-system
         |     |- plugins <------|
         |- xxx                  |
         |  |- plugin-a -------->| ln -s
         |- yyy                  |
            |- plugin-b -------->|
With an example workflow of:
$ go get git@github.com:jabclab/core-system.git
$ go get git@github.com:xxx/plugin-a.git
$ go get git@github.com:yyy/plugin-b.git
$ cd $GOPATH/src/github.com
$ ln -s ./xxx/plugin-a/*.go ./jabclab/core-system/plugins
$ ln -s ./yyy/plugin-b/*.go ./jabclab/core-system/plugins
$ cd jabclab/core-system
$ go build
The one issue I'm not sure about is how to make the types defined in plugins available at runtime in core. I'd rather not use reflect but can't think of a better way at the moment. If I were doing the code in one repo, I would use something like:
package plugins

type Plugin interface {
    Exec(chan<- string) error
}

// Registry must be initialized; the init() functions below would
// otherwise panic when writing to a nil map.
var Registry = make(map[string]Plugin)

// plugin_a.go (symlinked in from its own repository)
func init() { Registry["plugin_a"] = PluginA{} }

// plugin_b.go (symlinked in from its own repository)
func init() { Registry["plugin_b"] = PluginB{} }
In addition to the above question, would this overall approach be considered idiomatic?
This is one of my favorite issues in Go. I have an open source project that has to deal with this as well (https://github.com/cpg1111/maestrod); it has pluggable DB and runtime (Docker, k8s, Mesos, etc.) clients. Prior to the plugin package that is in the master branch of Go (so it should be coming to a stable release soon), I just compiled all of the plugins into the binary and let configuration decide which to use.
As of the plugin package, https://tip.golang.org/pkg/plugin/, you can use dynamic linking for plugins, similar to C's dlopen() in its loading, and the behavior of Go's plugin package is pretty well outlined in the documentation.
Additionally, I recommend taking a look at how HashiCorp addresses this by doing RPC over a local Unix socket: https://github.com/hashicorp/go-plugin
The added benefit of running a plugin as a separate process, as HashiCorp's model does, is stability: if the plugin fails, the main process is able to handle that failure.
I should also mention that Docker does its plugins in Go similarly, except Docker uses HTTP instead of RPC. Additionally, a Docker engineer has posted in the past about embedding a JavaScript interpreter for dynamic logic: http://crosbymichael.com/category/go.html
The issue I wanted to point out with the sql package's pattern that was mentioned in the comments is that it isn't really a plugin architecture: you're still limited to whatever is in your imports. You could have multiple main.go files, but that's not a plugin; the point of a plugin is that the same program can run either one piece of code or another. What you have with things like the sql package is flexibility, where a separate package determines which DB driver to use. Nonetheless, you end up modifying code to change which driver you are using.
I want to add that with all of these plugin patterns, aside from compiling into the same binary and using configuration to choose, each plugin can have its own build, test, and deployment (i.e. its own CI/CD), but it does not have to.

How can I batch Kafka reads to Elasticsearch

I'm not too familiar with Kafka, but I would like to know the best way to read data in batches from Kafka so I can use the Elasticsearch Bulk API to load the data faster and more reliably.
By the way, I am using Vert.x for my Kafka consumer.
Thank you.
I cannot tell if this is the best approach or not, but when I started looking for similar functionality I could not find any readily available frameworks. I found this project:
https://github.com/reachkrishnaraj/kafka-elasticsearch-standalone-consumer/tree/branch2.0
and started contributing to it as it was not doing everything I wanted, and was also not easily scalable. Now the 2.0 version is quite reliable and we use it in production in our company processing/indexing 300M+ events per day.
This is not a self-promotion :) - just sharing how we do the same type of work. There might be other options right now as well, of course.
https://github.com/confluentinc/kafka-connect-elasticsearch
Or you can try this project:
https://github.com/reachkrishnaraj/kafka-elasticsearch-standalone-consumer
Running as a standard Jar:
1. Download the code into a $INDEXER_HOME dir.
2. cp $INDEXER_HOME/src/main/resources/kafka-es-indexer.properties.template /your/absolute/path/kafka-es-indexer.properties and update all relevant properties as explained in the comments.
3. cp $INDEXER_HOME/src/main/resources/logback.xml.template /your/absolute/path/logback.xml, specify the directory you want to store logs in, and adjust the max sizes and number of log files as needed.
4. Build the app jar (make sure you have Maven installed):
cd $INDEXER_HOME
mvn clean package
The kafka-es-indexer-2.0.jar will be created in $INDEXER_HOME/bin. All dependencies will be placed into $INDEXER_HOME/bin/lib. All JAR dependencies are linked via the kafka-es-indexer-2.0.jar manifest.
5. Edit your $INDEXER_HOME/run_indexer.sh script: make it executable if needed (chmod a+x $INDEXER_HOME/run_indexer.sh) and update the properties marked with "CHANGE FOR YOUR ENV" comments according to your environment.
6. Run the app [use JDK 1.8]:
./run_indexer.sh
I used Spark Streaming, and it was quite a simple implementation using Scala.
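For completeness, the basic pattern the question asks about (poll a batch of records from Kafka, then send them to Elasticsearch in a single bulk request) can be sketched in plain Java roughly as follows. This is only a sketch under assumptions: it uses the standard Kafka consumer and the Elasticsearch 7.x high-level REST client rather than the Vert.x client, and the broker address, topic, and index names are made-up values:
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.http.HttpHost;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.xcontent.XContentType;

public class KafkaToEsBulk {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "es-indexer");
        props.put("enable.auto.commit", "false"); // commit offsets only after a successful bulk
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             RestHighLevelClient es = new RestHighLevelClient(
                     RestClient.builder(new HttpHost("localhost", 9200, "http")))) {

            consumer.subscribe(Collections.singletonList("events"));
            while (true) {
                // poll() already returns a batch; max.poll.records controls its maximum size
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                if (records.isEmpty()) {
                    continue;
                }
                BulkRequest bulk = new BulkRequest();
                for (ConsumerRecord<String, String> record : records) {
                    // assumes each Kafka message value is already a JSON document
                    bulk.add(new IndexRequest("events-index")
                            .source(record.value(), XContentType.JSON));
                }
                es.bulk(bulk, RequestOptions.DEFAULT);
                consumer.commitSync(); // mark the batch as processed only after indexing succeeded
            }
        }
    }
}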

How to install applications to a WebSphere 7.0 cluster using wsadmin?

I want to deploy to all four processes on a WebSphere cluster with two nodes. Is there a way of doing this with one Jython command, or do I have to call 'AdminControl.invoke' on each one?
The easiest way to install an application using wsadmin is with AdminApp, not AdminControl.
I suggest you download wsadminlib.py.
It has a lot of functions; one of them is installApplication, which also works with clusters.
Edit:
Lately I found out about AdminApplication, which is a script library included in WAS 7 (/opt/IBM/WebSphere/AppServer/scriptLibraries/application/V70).
The documentation in the Info Center is not great, but it's a .py file you can look inside to see what it does.
It is imported automatically into wsadmin and you can use it without any imports or other configuration.
Worth a check.
@aviram-segal is right; wsadminlib is really helpful for this.
I use the following syntax:
arg = ["-reloadEnabled", "-reloadInterval '0'", "-cell "+self.cellName, "-node "+self.nodeName, "-server '"+ self.serverName+"'", "-appname "+ name, '-MapWebModToVH',[['.*', '.*', self.virtualHost]]]
AdminApp.install(path, arg)
Where path is the location of your EAR/WAR file.
You can find the documentation here.
