IntelliJ IDEA - Maven Repository glitch?

OK, let me put everything in a simple explanation:
When I try to add a new Java library from Maven, I do everything as usual, but in the "Download Library from Maven Repository" dialog, after the spinning dotted circle finishes animating, I get no results. For example, when I try to find the HttpClient library in the group org.apache.httpcomponents, searching for just HttpClient or httpcomponents:HttpClient finds nothing. The library is only found if I search for the whole groupId and then narrow down by library name. Moreover, when versions do appear, they are not always the latest.
On the other hand, it does find some libraries, like NekoHTML.
Importing manually by adding the dependency to pom.xml works fine, but I'm not on good terms with XML; my eyes just can't parse the XML view.

There is a related issue on YouTrack: https://youtrack.jetbrains.com/issue/IDEA-179857. As a workaround, you could use "HttpClient" as the search string.
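For reference, since adding the dependency to pom.xml by hand is the working fallback, a minimal entry for HttpClient looks like this (the version shown is only an example; use whatever the repository lists as current):
<!-- Apache HttpClient; version below is illustrative -->
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpclient</artifactId>
  <version>4.5.13</version>
</dependency>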


Aurelia CLI and braintree-web

I am trying to use https://www.npmjs.com/package/braintree-web with Aurelia (using the aurelia-cli and RequireJS). I am stuck trying to get all the many dependencies to resolve.
To use a 3rd party library in Aurelia, the library must be defined in the aurelia.json file.
If I add "braintree-web" to that file, then Aurelia complains that "braintree-web" requires the modules "american-express", "apple-pay", etc.
If I manually add the "american-express" and "apple-pay" dependencies, then each one in turn refers to "braintree-web/lib" and a bunch of other sub-directory dependencies.
In short, I can't get the "braintree-web" module to load, because I would have to manually declare all the sub-dependencies and it's too complex to get working.
As I state above, I am using RequireJS; should these dependencies all resolve correctly?
Any ideas as to how I can get this working?
Thanks
If all dependencies are what you need, then with RequireJS + aurelia-cli you'll have to declare them all. There is an experimental version of the CLI being developed, which you can find here, where you won't have to declare any dependencies in aurelia.json anymore.
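For illustration, declaring a package in aurelia.json usually means adding an entry to the vendor bundle's dependencies array, roughly like the sketch below; the path and main values for braintree-web here are assumptions, not verified settings:
{
  "name": "braintree-web",
  "path": "../node_modules/braintree-web",
  "main": "index"
}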
With webpack you also don't need to declare any dependencies by the way.
Do you really need everything, though? The docs mention, for example, that you could import just the client. That still looks like a whole heap of dependencies, but at least a lot fewer than importing the main index.js.
You could also just include their pre-bundled client which I believe is https://js.braintreegateway.com/web/3.32.1/js/client.min.js
On a side note, the person developing the aforementioned experimental CLI is actually looking for people to test it with non-trivial apps. Several others and I have tried it with great results, so I can recommend you try it. If you could report back in the PR, that would be really awesome.

Firefox source code analysis; lines of code per component

I am currently trying to analyse Bugzilla in order to find the ratio of bugs to lines of code for each Firefox component. However, I have never worked with Bugzilla before and have no knowledge of Firefox's codebase.
How would I go about finding lines of code per Firefox component (as they appear on Bugzilla under the Comp header)? I have made an attempt at looking through mozilla-central, but have no idea which source files relate to which components.
EDIT: Dexter pointed out that there is a BUG_COMPONENT directive in the mozilla-central tree, but this directive seems extremely incomplete and is not helpful. Any other advice, or pointers as to where I could get such advice, would be much appreciated.
Great question! We recently added the BUG_COMPONENT directive (see the meta bug) to the Firefox code: it's in the moz.build file contained in each directory in the source. This directive allows linking each file in the repository to the related Bugzilla component.
For example, the following directive, found here, says that all the files in test/browser whose names contain the word Telemetry belong to the Toolkit::Telemetry component on Bugzilla.
with Files("test/browser/*Telemetry*"):
    BUG_COMPONENT = ("Toolkit", "Telemetry")
You can use either DXR or searchfox to quickly search the Firefox repository.
Updated the answer to account for the questions in the comments.
As noted in the comments, some components are tracked on Bugzilla (e.g. Activity Stream) but do not have a direct mapping to source files within the mozilla-central repository (the one Firefox is built from). That's because some newer components do not ride "the trains" (the ~6-week development cycle), but are rather updated more frequently and deployed as add-ons.
The code for these components usually lives under the Mozilla GitHub account, along with other projects. Since there are quite a number of projects, one way to identify the ones you might be interested in is to restrict them to JavaScript ones. If you follow this last link, you'll see the repositories for both test-pilot and Activity Stream (plus other add-ons).
I'm afraid the only way to match GitHub projects to Bugzilla components is to look at the name of the repository on GitHub and find the matching component in Bugzilla: you can type the name here to get some component suggestions. If you want to get fancy, you might also leverage the Bugzilla REST API:
1. Get a list of the JS GitHub projects.
2. Extract the name of each project.
3. Use the REST API to get the component suggestion.
I would personally just consider the mozilla-central repository as a starting point, as it is mostly annotated: scrape the BUG_COMPONENT from the source files, map them to the paths, then use the REST API to get the list of bugs (a rough sketch of this follows below).
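A rough sketch of that last approach, assuming a local mozilla-central checkout and the requests library (the checkout path is a placeholder, plain regex scraping of moz.build is a simplification, and BMO may cap the number of results returned per query):
# Sketch: collect BUG_COMPONENT pairs from moz.build files in a local
# mozilla-central checkout, then count bugs per component via the documented
# Bugzilla REST search endpoint (GET /rest/bug).
import os
import re
import requests

CHECKOUT = "/path/to/mozilla-central"  # placeholder path to your local checkout
BUG_COMPONENT_RE = re.compile(r'BUG_COMPONENT\s*=\s*\(\s*"([^"]+)"\s*,\s*"([^"]+)"\s*\)')

components = set()
for root, _dirs, files in os.walk(CHECKOUT):
    if "moz.build" in files:
        with open(os.path.join(root, "moz.build"), encoding="utf-8") as f:
            # only catches the simple literal form of the directive
            components.update(BUG_COMPONENT_RE.findall(f.read()))

for product, component in sorted(components):
    resp = requests.get(
        "https://bugzilla.mozilla.org/rest/bug",
        params={"product": product, "component": component, "include_fields": "id"},
    )
    bugs = resp.json().get("bugs", [])
    print("%s::%s: %d bugs" % (product, component, len(bugs)))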
Sidenote: the Download Panel seems to be correctly annotated in the main repo.

Using SemanticGraphCoreAnnotations in Stanford CoreNLP

I want to use the example code from http://nlp.stanford.edu/software/openie.html but the line
System.out.println(sentence.get(SemanticGraphCoreAnnotations.EnhancedDependenciesAnnotation.class).toString(SemanticGraph.OutputFormat.LIST));
gives the error
SemanticGraphCoreAnnotations.EnhancedDependenciesAnnotation cannot be
resolved to a type
even though I imported edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations.
Eclipse suggests following quick fixes:
Change to "AlternativeDependenciesAnnotation"
(edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations;)
Change to "BasicDependenciesAnnotation"
(edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations;)
Applying the first fix leads to a java.lang.NullPointerException, while the second fix gives the following, rather unsatisfying, result for the first sentence:
Loading clause searcher from edu/stanford/nlp/models/naturalli/clauseSearcherModel.ser.gz...
1.0 Obama be bear in Hawaii
The import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations.EnhancedDependenciesAnnotation cannot be resolved, and the import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations does not help either.
I imported the JARs
joda-time
stanford-corenlp-3.6.0
stanford-corenlp-3.6.0-models
ejml-0.23
jollyday
xom
slf4j
slf4j-simple
I use CoreNLP 3.6.0. I checked SemanticGraphCoreAnnotations.java in edu.stanford.nlp.semgraph and it contains the EnhancedDependenciesAnnotation class. How can I fix this issue?
It seems the example code was written for the latest version of OpenIE, which so far is only on GitHub. The easiest way to get this to run would probably be to clone the repository and compile CoreNLP yourself.
Use the stanford-corenlp-3.9.2 jar. This worked for me.
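If you pull CoreNLP in through Maven rather than managing the jars by hand, the equivalent dependency entries would be along these lines (the models artifact is published with a classifier):
<dependency>
  <groupId>edu.stanford.nlp</groupId>
  <artifactId>stanford-corenlp</artifactId>
  <version>3.9.2</version>
</dependency>
<!-- the models jar, pulled in via its classifier -->
<dependency>
  <groupId>edu.stanford.nlp</groupId>
  <artifactId>stanford-corenlp</artifactId>
  <version>3.9.2</version>
  <classifier>models</classifier>
</dependency>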

maven-javadoc-plugin sourceFileExcludes not working

I'm not too sure what the right way to use this tag is, but I use it like this:
<sourceFileExcludes>
  <exclude></exclude>
  <exclude></exclude>
</sourceFileExcludes>
It doesn't work at all. It seems that there was a known bug in Maven that kept this tag from working, as I found in this thread:
https://stackoverflow.com/a/26223872/3209177
But that was a while ago, and I didn't find much useful information on the Maven website either.
So how can we exclude certain source files / classes while we build javadoc using maven?
Finally figured it out.
First, there was a known bug, tracked on this page: https://issues.apache.org/jira/browse/MJAVADOC-365
The patch went into version 2.10.2 of the plugin, so from that version on the bug is fixed. However, I was using an earlier version.
Second, use this schema to exclude files:
<sourceFileExcludes>
  <sourceFileExclude></sourceFileExclude>
  <sourceFileExclude></sourceFileExclude>
</sourceFileExcludes>
Third, in the sourceFileExclude I had used someClass.java, which is probably not right. I used someClass.* instead, and that works for me.
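Putting the pieces together, the configuration ends up looking roughly like this (the plugin version and the exclude pattern below are only illustrative):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <version>2.10.4</version> <!-- illustrative; anything from 2.10.2 on has the fix -->
  <configuration>
    <sourceFileExcludes>
      <sourceFileExclude>**/SomeClass.java</sourceFileExclude>
    </sourceFileExcludes>
  </configuration>
</plugin>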
You can use this pattern. It excludes the files regardless of their package structure. More information here.
<sourceFileExcludes>
  <exclude>**/AppTest.java</exclude>
  <exclude>**/Tester.java</exclude>
</sourceFileExcludes>

Maven site + search capabilities

Recently in our organisation we've decided to work with the maven site plugin and maintain all the documentation about our project in the site generated by Maven.
However, I haven't found any way to add search functionality. The only thing I've come across is that some skins provide integration with the Google search engine, but I can't use that because we're running on our own network and there is no way to make the site indexable from outside.
So, my question is whether someone can suggest a decent solution for this.
I thought about developing a kind of Maven plugin that would run Lucene, index everything by itself and then provide an API to use this search from within the site, but I hope I won't need to reinvent the wheel :) So any suggestion will be welcome here.
Thanks in advance
Just an idea: you could try to use a JavaScript-based full-text search engine, e.g. http://jssindex.sourceforge.net/
We are using constellio to index the published site on a schedule. That works well so far.
I've raised http://jira.codehaus.org/browse/MSKINS-88 to cover adding a generic search form to the Fluido skin, which we use to build our Maven sites. Hopefully that'll be progressed and we can have the search form baked into the documentation.
I know this is an old question, but a very easy (and admittedly ugly) way to accomplish what you want is simply to generate a PDF with the site contents and let your users search the PDF. The advantage over searching the generated site is that any PDF reader can search the whole document.
mvn pdf:pdf
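If the PDF plugin isn't configured yet, a minimal declaration that binds PDF generation to the site phase looks roughly like this (the version is an example):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-pdf-plugin</artifactId>
  <version>1.4</version> <!-- example version -->
  <executions>
    <execution>
      <id>pdf</id>
      <phase>site</phase>
      <goals>
        <goal>pdf</goal>
      </goals>
      <configuration>
        <outputDirectory>${project.reporting.outputDirectory}</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>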
If you cannot use Google Site Search, you're dependent on local search implementations. Hence, you either need to build the index during the site build (and have it available as part of your site) or do both indexing and searching in the browser.
Besides JSSindex, which appears to be somewhat dated, there's http://www.tipue.com/search/, which is based on jQuery.
The Maven site plugin approach is not widely used, so there is nothing specific for indexing yet.
You should look at non-Maven tools.
