Is systemProp.https.nonProxyHosts a valid setting for Gradle? - gradle

I ran into some issues recently configuring the proxy settings for a Gradle project, specifically with systemProp.https.nonProxyHosts.
According to the Gradle documentation, this setting is valid: systemProp.https.nonProxyHosts=*.nonproxyrepos.com|localhost. However, the Networking Properties section of the Java SE documentation states that https.nonProxyHosts does not exist and that the non-HTTPS sibling (http.nonProxyHosts) is used instead.
Bottom line: systemProp.https.nonProxyHosts didn't work for me. As soon as I deleted that line the script worked (I tested several times, toggling it on and off).
So can anyone confirm whether the Gradle documentation is outdated, or whether Gradle does indeed use the property and maps it internally, but the version I was using (Gradle 6.1.1) had a bug in that regard? And is there a task (or something similar available by default) to test/visualize whether Gradle is connecting via HTTP or HTTPS, so I can corroborate that it works as intended?

It's really hard to tell in retrospect what the issue was in your case.
but the Networking Properties section of the Java SE documentation states that https.nonProxyHosts does not exist and the non-HTTPS sibling will be used instead.
First and foremost, this is true. For reference, the DefaultProxySelector in JDK 8 reads:
static class NonProxyInfo {
    static final String defStringVal = "localhost|127.*|[::1]|0.0.0.0|[::0]";
    static NonProxyInfo httpNonProxyInfo = new NonProxyInfo("http.nonProxyHosts", null, null, defStringVal);
}

public java.util.List<Proxy> select(URI uri) {
    // ...
    } else if ("https".equalsIgnoreCase(protocol)) {
        // HTTPS uses the same property as HTTP, for backward
        // compatibility
        pinfo = NonProxyInfo.httpNonProxyInfo;
    }
    // ...
}
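You can observe this JDK-level fallback outside Gradle with a minimal, self-contained sketch against the default ProxySelector (the host names below are placeholders):

```java
import java.net.Proxy;
import java.net.ProxySelector;
import java.net.URI;
import java.util.List;

public class ProxyCheck {
    public static void main(String[] args) throws Exception {
        // Configure an HTTPS proxy...
        System.setProperty("https.proxyHost", "proxy.example.com");
        System.setProperty("https.proxyPort", "8080");
        // ...but note: we set http.nonProxyHosts, not https.nonProxyHosts.
        // The JDK consults the http variant even for https URIs.
        System.setProperty("http.nonProxyHosts", "internal.example.com");

        ProxySelector selector = ProxySelector.getDefault();
        List<Proxy> direct = selector.select(new URI("https://internal.example.com/repo"));
        List<Proxy> proxied = selector.select(new URI("https://external.example.org/repo"));

        System.out.println(direct);  // bypassed via http.nonProxyHosts
        System.out.println(proxied); // routed through proxy.example.com:8080
    }
}
```

Run it once with and once without the http.nonProxyHosts line to see the first URI flip between a direct connection and the proxy.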
So can anyone confirm if that's outdated in the Gradle documentation, or if Gradle does indeed use it and maps it internally, but the version I was using (Gradle 6.1.1) had a bug in that regard?
Gradle indeed reads https.nonProxyHosts and maps it internally. See JavaSystemPropertiesSecureHttpProxySettings for more details. I'm not aware of any bugs; the last relevant change in the commit log was from 2014. The property is used, for example, when resolving dependencies from remote Maven repositories.
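For reference, a typical gradle.properties proxy block looks like the following (host names and port are placeholders). Gradle accepts both variants; because of the JDK-level fallback shown above, the http variant alone also covers HTTPS:

```properties
# gradle.properties -- proxy settings picked up by Gradle at startup
systemProp.http.proxyHost=proxy.example.com
systemProp.http.proxyPort=8080
systemProp.https.proxyHost=proxy.example.com
systemProp.https.proxyPort=8080
# The JDK itself only reads http.nonProxyHosts, applying it to HTTPS as well
systemProp.http.nonProxyHosts=*.nonproxyrepos.com|localhost
```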
Is there a task (or something similar by default) to test/visualize if Gradle is indeed connecting via HTTP or HTTPS so I can corroborate that's working as intended?
I'm not aware of any such task, either built-in or contributed by third-party plugins. You may get more detail on what's going on by inspecting the --info and --debug output.

Related

Custom ListenTCP processor works fine on 1.11.4 but fails to deploy on version 1.11.4.2.0.4.0-80

I created a custom ListenTCP processor by creating new socket handlers and referencing them in the custom ListenTCP. I was able to deploy it on my Mac (version 1.11.4) and tested it with a sample file that has a different incoming delimiter; it works great.
However, my org is using Cloudera Flow Management (CFM) 2.0.4.0, version 1.11.4.2.0.4.0-80, tagged nifi-1.11.4-RC1.
So I changed the version appropriately on my Mac to deploy the NAR file into our Cloudera cluster, but it fails with a ClientAuth class not found error in SSLContextService (version 1.11.4.2.0.4.0-80).
Here is the link for 1.11.4 on my Mac, which works fine.
Modified to 1.11.4.2.0.4.0-80, it fails with $ClientAuth not found.
I looked at the source code: ClientAuth was deprecated and is somehow not found in your CFM jar. Maybe putting this enum in your custom code solves your problem:
enum ClientAuth {
    WANT,
    REQUIRED,
    NONE
}

Why is Jenkins.get().getRootUrl() not available when generating DSL?

I'm debugging a problem with atlassian-bitbucket-server-integration-plugin. The behavior occurs when generating a multi-branch pipeline job, which requires a Bitbucket webhook. The plugin works fine when creating the pipeline job from the Jenkins UI. However, when using DSL to create an equivalent job, the plugin errors out attempting to create the webhook.
I've tracked this down to a line in RetryingWebhookHandler:
String jenkinsUrl = jenkinsProvider.get().getRootUrl();
if (isBlank(jenkinsUrl)) {
    throw new IllegalArgumentException("Invalid Jenkins base url. Actual - " + jenkinsUrl);
}
The jenkinsUrl is used as the target for the webhook. When the pipeline job is created from the UI, the jenkinsUrl is set as expected. When the pipeline job is created by my DSL in a freeform job, the jenkinsUrl is always null. As a result, the webhook can't be created and the job fails.
I've tried various alternative ways to get the Jenkins root URL, such as static references like Jenkins.get().getRootUrl() and JenkinsLocationConfiguration.get().getUrl(). However, all values come up empty. It seems like the Jenkins context is not available at this point.
I'd like to submit a PR to fix this behavior in the plugin, but I can't come up with anything workable. I am looking for suggestions about the root cause and potential workarounds. For instance:
Is there something specific about the way my freeform job is executed that could cause this?
Is there anything specific to the way jobs are generated from DSL that could cause this?
Is there another mechanism I should be looking at to get the root URL from configuration, which might work better?
Is it possible that this behavior points to a misconfiguration in my Jenkins instance?
If needed, I can share the DSL I'm using to generate the job, but I don't think it's relevant. By commenting out the webhook code that fails, I've confirmed that the DSL generates a job with the correct config.xml underneath. So, the only problem is how to get the right configuration to the plugin so it can set up the webhook.
It turns out that this behavior was caused by a partial misconfiguration of Jenkins.
While debugging problems with broken build links in Bitbucket (pointing me at unconfigured-jenkins-location instead of the real Jenkins URL), I discovered a yellow warning message on the front page of Jenkins which I had missed before, telling me that the root server URL was not set:
Jenkins root URL is empty but is required for the proper operation of many Jenkins features like email notifications, PR status update, and environment variables such as BUILD_URL.
Please provide an accurate value in Jenkins configuration.
This error message had a link to Manage Jenkins > Configure System > Jenkins Location. The correct Jenkins URL actually was set there (I had already double-checked this), but the system admin email address in the same section was not set. When I added a valid email address, the yellow warning went away.
This change fixed both the broken build URL in BitBucket, as well as the problems with my DSL. So, even though it doesn't make much sense, it seems like the missing system admin email address was the root cause of this behavior.

How can I tell when a gradle plugin property's evaluation will be deferred?

I'm using the docker compose plugin from Avast. Below is the relevant stanza. How can I tell whether mandatoryDockerWebTag() will be called during the configuration phase? Is the only way to inspect the plugin code to figure out when the closures will be called?
Many times I have information that I only want to provide if a task is in the task graph, but that information may be expensive to get, unavailable, or may need to validate a project parameter when it's fetched. For instance, I don't want someone bringing up the preprod docker image instance of our stack with the "latest" tag, so mandatoryDockerWebTag() throws an exception if the tag is "latest" and otherwise returns the current tag.
dockerCompose {
    preprod {
        useComposeFiles = ['docker-compose.yml']
        environment.putAll([
            WEB_DOCKER_IMAGE_VERSION: mandatoryDockerWebTag()
        ])
        tcpPortsToIgnoreWhenWaiting = [33333]
    }
}
How can I tell if mandatoryDockerWebTag() will be called during the configuration phase?
I do not believe there is a way to explicitly tell how or when a task or configuration is evaluated in Gradle without either:
Examining the source of the plugin you are using.
Examining the build scan report.
For instance I don't want someone bringing up the preprod docker image instance of our stack
Unfortunately, you do not have control over what a plugin author does to your Gradle configuration. As far as I know, they have complete access to your project and can configure or alter it at will.
Good/effective plugin authors (IMO) use configuration avoidance. It applies not only to tasks, but to configurations as well.
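This doesn't change when the plugin itself calls your closure (only the plugin source or a build scan shows that), but in your own script you can defer the expensive, validating lookup by wrapping it in a Provider. A sketch, with hypothetical property and task names:

```groovy
// Hypothetical sketch: defer an expensive or validating lookup with a Provider.
// Nothing inside the closure runs until .get() is called.
def webTag = project.provider {
    def tag = project.findProperty('dockerWebTag') ?: 'latest'
    if (tag == 'latest') {
        // The same validation mandatoryDockerWebTag() performs in the question
        throw new GradleException('An explicit docker web tag is mandatory for preprod')
    }
    tag
}

// Wiring the provider into a task action keeps evaluation (and the
// validation failure) inside the execution phase:
tasks.register('printWebTag') {
    doLast {
        println "Deploying web image tag: ${webTag.get()}"
    }
}
```

Whether a given plugin property accepts a Provider (and so resolves it lazily) still depends on that plugin's implementation.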

protoc command not generating all base classes (java)

I have been trying to generate the basic gRPC client and server interfaces from the .proto service definition here, from the official gRPC repo.
The relevant service defined in that file (from the link above) is below:
service RouteGuide {
    rpc GetFeature(Point) returns (Feature) {}
    rpc ListFeatures(Rectangle) returns (stream Feature) {}
    rpc RecordRoute(stream Point) returns (RouteSummary) {}
    rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
}
The command I run is protoc --java_out=${OUTPUT_DIR} path/to/proto/file
According to the gRPC site (specifically here), a RouteGuideGrpc.java file containing a base class RouteGuideGrpc.RouteGuideImplBase, with all the methods defined in the RouteGuide service, is supposed to be generated by the protoc command above, but that file does not get generated for me.
Has anyone faced similar issues? Is the official documentation simply incorrect? And would anyone have any suggestion as to what I can do to generate that missing class?
This may help someone else in the future so I'll answer my own question.
I believe the java documentation for gRPC code generation is not fully up to date and the information is scattered amongst different official repositories.
So it turns out that in order to generate all the gRPC Java service base classes as expected, you need to pass an additional flag to the protoc CLI: --grpc-java_out=${OUTPUT_DIR}. But in order for that flag to work, you need a few extra things:
The protoc plugin binary for gRPC Java, protoc-gen-grpc-java: you can get the relevant one for your system from Maven Central here (the link is for v1.17.1). If there isn't a prebuilt binary available for your system, you can compile one yourself from the GitHub repo instructions here.
Make sure the binary's location is added to your PATH environment variable and that the binary is renamed to exactly "protoc-gen-grpc-java" (that is the name the protoc CLI expects to find on the path).
Finally, you are ready to run the correct command: protoc --java_out=${OUTPUT_DIR} --grpc-java_out=${OUTPUT_DIR} path/to/proto/file. The service base classes like RouteGuideGrpc.RouteGuideImplBase should now be generated where they previously were not.
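Putting the steps together as a shell sketch (the download filename, install directory, and .proto path are assumptions; adjust for your platform and version):

```shell
# Install the grpc-java plugin under a name protoc recognizes,
# on a directory that is in PATH.
mkdir -p "$HOME/bin"
mv protoc-gen-grpc-java-1.17.1-linux-x86_64.exe "$HOME/bin/protoc-gen-grpc-java"
chmod +x "$HOME/bin/protoc-gen-grpc-java"
export PATH="$HOME/bin:$PATH"

# Generate both the message classes (--java_out) and the gRPC
# service base classes (--grpc-java_out).
protoc --java_out="$OUTPUT_DIR" --grpc-java_out="$OUTPUT_DIR" path/to/route_guide.proto
```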
I hope this explanation helps someone else out in the future.
Thank you very much for this investigation. Indeed, the doc is incomplete, and people use Maven to compile everything without understanding how it really works.

SonarQube 6.7 failed to start because CONFIG_SECCOMP not compiled into kernel

I've just upgraded SonarQube from 6.0 to 6.7 LTS running on a CentOS 6 box, and noticed that ElasticSearch (ES) failed to start because the kernel (2.6.32-696.3.1.el6.x86_64) doesn't have seccomp available.
This is officially documented at System call filter check and a correct workaround for systems without this feature is to configure bootstrap.system_call_filter to false in elasticsearch.yml.
The issue here is that Sonar creates the ES configuration at startup, writing it to $SONAR_HOME/temp/conf/es/elasticsearch.yml, and I haven't found a way to set the bootstrap.system_call_filter property.
I tried a natural (undocumented) approach, introducing sonar.search.bootstrap.system_call_filter and bootstrap.system_call_filter properties in sonar.properties, but it doesn't work.
We had the same problem. At first we used the above solution, but after searching the Sonar code on GitHub we found the place where this setting should go:
Edit the sonar.properties file and change the line:
#sonar.search.javaAdditionalOpts=
to
sonar.search.javaAdditionalOpts=-Dbootstrap.system_call_filter=false
For the SonarQube docker image, set an additional environment variable to disable this feature when running docker run:
-e SONAR_SEARCH_JAVAADDITIONALOPTS="-Dbootstrap.system_call_filter=false"
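For example, a full docker run invocation might look like this (the image tag and port mapping are assumptions; adapt to your deployment):

```shell
docker run -d --name sonarqube \
  -e SONAR_SEARCH_JAVAADDITIONALOPTS="-Dbootstrap.system_call_filter=false" \
  -p 9000:9000 \
  sonarqube:6.7
```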
Hi, I tried echoing bootstrap.system_call_filter: 'false' into temp/conf/es/elasticsearch.yml. I see the line in that file, but I got the same error when starting SonarQube 6.7 on CentOS 6.
Has someone tested this with success?
First of all: don't even try to update elasticsearch.yml. SonarQube self-manages its ElasticSearch component config, so any attempt at manual intervention will be harmful. (Reminder: the only config file that should ever be modified to operate SonarQube is sonar.properties.)
More interestingly regarding that seccomp component:
the seccomp requirement comes from the underlying ElasticSearch requirement, and transitively applies to operating SonarQube
if you run SonarQube locally with default config (specifically: default sonar.search.host), then the seccomp check may not be fatal (i.e. just a warning)
if you did override sonar.search.host, then the first thing you should wonder is: does the ElasticSearch JVM really need to listen on interfaces other than loopback? (SonarQube uses ES locally, except with the Data Center Edition.) If there is no good answer to that, keep sonar.search.host at its default value.
Last but not least, the golden path here is obviously to follow the requirement (i.e. have seccomp available on your OS), even if that involves upgrading to a more recent Linux kernel. And to wrap it all up: we've edited SonarQube Requirements to transparently share this situation.
You could really cheat and edit ${SONAR_HOME}/elasticsearch/bin/elasticsearch.
Add
echo "bootstrap.system_call_filter: false" >> ${SONAR_HOME}/temp/conf/es/elasticsearch.yml
before the "daemonized" variable is set.
