Bintray's JCenter has copied my Maven Central artifacts into its repository. I would like to assume ownership of the package in Bintray so that I can edit the description, hyperlinks, and other metadata, all of which are currently empty there. However, the form to request ownership does not work: when you try to send it, it gives an error stating that some field is missing (without saying which field). Any ideas?
You can't link packages if you are a trial user, so this might also be an issue when trying to take ownership. If that is not the case, I would contact Bintray support.
This is now resolved. In the end I needed to create a user on Bintray and then create a repository for that user; that allowed me to select both of those things in the form requesting ownership.
As a newbie to Bintray/JCenter, none of this was clear or obvious. They don't do these things for you, nor do they let you request ownership first so that they can tell you about these necessary steps. So it's not particularly user-friendly.
After we updated JFrog Artifactory, we noticed that uploaded artifacts now only appear together with their related .pom file.
Previously, the Maven directory contained 4 files:
1. our uploaded .jar/.war file
2. the related .pom file
3. a .sha1 file
4. a .md5 file
Files 3 and 4 are missing now.
Is there a setting I've overlooked? All the documentation from JFrog tells me they should be generated automatically.
Artifactory started 'hiding' these files as part of RTFACT-6962, where they were deemed mostly unnecessary, since only a handful of legacy clients even care about them (i.e. old Maven versions, which also use the browsing API they appear in).
If they matter to you, they can be 'brought back' by adding the property artifactory.ui.hideChecksums=false to your system.properties file.
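A minimal sketch of that change, assuming a typical on-prem layout where the file lives under the Artifactory home directory (the exact path may differ for your installation):

# $ARTIFACTORY_HOME/etc/system.properties
artifactory.ui.hideChecksums=false

Changes to system.properties typically require a restart of Artifactory to take effect.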
As @DarthFennec mentioned, these are not actually files; rather, they are checksum strings generated from the artifact's stored checksum each time you hit the .md5, .sha1, or .sha256 endpoint for a given path.
These files are sort of "phantom" files. They don't show up in the directory, but if you request them using the REST API you'll get the expected response. For any existing file foo.bar, requesting foo.bar.md5, foo.bar.sha1, or foo.bar.sha256 will provide the appropriate checksum, even though those files don't actually exist.
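For example, with a hypothetical Artifactory host and repository path, the following requests return the checksum strings directly, even though no .sha1 or .md5 file is stored:

curl https://artifactory.example.com/artifactory/libs-release-local/com/example/foo/1.0/foo-1.0.jar.sha1
curl https://artifactory.example.com/artifactory/libs-release-local/com/example/foo/1.0/foo-1.0.jar.md5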
I think this makes more sense than autogenerating these files for every artifact. Since they exist for every artifact in every repository anyway, they don't provide useful information in the UI; they just become needless clutter.
We are hosting one of our binary packages on Bintray in a private repository and give users a signed URL when they download from our website.
If we open the Bintray download statistics (live log), we see very strange records for one and the same file (it is a normal file of ours):
time           IP            file          size   user
1500912829000  114.4.79.235  /bla-bla.exe  72016  anonymous
1500912828000  114.4.79.235  /bla-bla.exe  56756  anonymous
1500912828000  114.4.79.235  /bla-bla.exe  24049  anonymous
...
There are a lot of downloads with the same IP and different file sizes.
It seems that Bintray counts partial downloads as unique download attempts. When we open the statistics graphs we see really big download numbers, but now we suspect that these numbers are inflated.
Does anybody know how Bintray counts partial downloads?
Bintray displays partial download transactions in its download statistics, since there is no reliable way to tell whether multiple partial downloads from a single origin add up to one or more full downloads.
The total number of bytes consumed by partial downloads against your account is calculated correctly, however.
One possible explanation for what you are seeing is customers using a download-manager browser extension.
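For illustration, a download manager typically splits a file into HTTP range requests, each of which would show up in the live log as a separate record whose size is just that chunk (the URL and byte ranges below are made up):

curl -H "Range: bytes=0-24048" https://dl.bintray.com/acme/private/bla-bla.exe
curl -H "Range: bytes=24049-80804" https://dl.bintray.com/acme/private/bla-bla.exe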
Disclaimer: I work for JFrog, the company behind Bintray.
I'm new to Maven, and I'm learning how to find and use libraries from the Maven repository. I see the Maven Central Repository (MCR) as similar to CPAN, except that MCR is for Java and CPAN is for Perl.
I see a big difference between MCR and CPAN: when I search for something (for example "ssh") on the CPAN website, I get a brief description of the packages found (what they are and what they do). And if I click on a package's link, I get the full description (name, synopsis, description, examples, etc.).
Now, if I search for something (for example "ssh") on the MCR website, I get the list of artifacts found with their groupId, version, and date, but there is no description of what an artifact is or what it does. Even if I click on the links (the version link is the only one that gives some information), I don't get any description of what it is or what it does, or any examples.
Is there any way (e.g. some other site) to browse the repository artifacts in a friendlier way, something similar to CPAN?
You can use the mvnrepository website.
It is linked to the Maven Central Repository and provides a more detailed view of each artifact, including descriptions*.
For example, for the commons-httpclient artifact it shows the following description:
The HttpClient component supports the client-side of RFC 1945 (HTTP/1.0) and RFC 2616 (HTTP/1.1) , several related specifications (RFC 2109 (Cookies) , RFC 2617 (HTTP Authentication) , etc.), and provides a framework by which new request types (methods) or HTTP extensions can be created easily.
*Note: the descriptions shown are taken from the <description> tag in the artifact's pom. This tag is optional, which means not every project actually defines it, so unfortunately you might not always see a description.
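For reference, this is roughly where the tag sits in a pom.xml (the coordinates here are made up):

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>my-lib</artifactId>
  <version>1.0.0</version>
  <!-- Optional; indexers such as mvnrepository display it when present -->
  <description>A short summary of what this library does.</description>
</project>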
Take a look at Bintray's JCenter. It is a superset of Maven Central and adds metadata to packages, such as author, license, description, release notes, ratings, reviews, etc. You can also register to get updates on new version releases.
With Maven you can easily specify the settings.xml location, e.g.:
mvn -s custom/dir/settings.xml package
Is there a similar way to specify a custom settings-security.xml?
The reasoning behind this is simple: I want to easily distribute it via a local repository. Security is NOT a concern here; it's all on the intranet.
This was requested as MNG-4853. Although it was not implemented directly, a workaround is suggested there:
I've verified that -Dsettings.security=path/to/security-settings.xml works
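Combined with the custom settings.xml from the question, the full invocation would look like this (the paths are illustrative):

mvn -s custom/dir/settings.xml -Dsettings.security=custom/dir/settings-security.xml package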
As detailed in the announcement blog post, you can also use the master password relocation feature. Write a ~/.m2/settings-security.xml file with a redirection in it:
<settingsSecurity>
  <relocation>/Volumes/mySecureUsb/secret/settings-security.xml</relocation>
</settingsSecurity>
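For context, the relocated file is the one holding the encrypted master password, which you can generate with Maven's standard encryption command (shown here with a placeholder password):

mvn --encrypt-master-password myMasterPassword

The output goes into the <master> element of the settings-security.xml that the relocation points to.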
I understand that a plugin registered for pre-validation executes outside of the database transaction, but I'm not sure I can think of a scenario where this would be preferable to pre-operation. Can someone give me an example of where pre-validation registration might be useful?
We have a few plugins registered on the 'PreValidation' event, although this is on-premise, not online.
I did not write these specific plugins myself, but I can describe one and give the justification for using 'PreValidation' rather than 'PreOperation'.
Entity: Account
Event: Delete
Logic: The plugin runs pre-validation. It checks that there are no contacts referencing any of the account's addresses. If any are found, execution is stopped; if not, the account is deleted.
e.g.
Account 'StackOverflow' has the address 'Jeff Atwood's House' and the Contact 'glosrob'. 'glosrob' references 'Jeff Atwood's House' through a customisation. If a user chooses to delete 'StackOverflow', we should detect that 'glosrob' is referencing an address and prevent the delete.
The reasoning behind this was that the developer found that at the PreOperation stage, some aspects of the delete had already happened, namely the cascade deletes. The logic of the plugin requires us to check all contacts; by registering at PreOperation, the contacts under the account had already been deleted, rendering the check useless.
In our scenario, when the user chose to delete the 'StackOverflow' account, the contact 'glosrob' would be deleted before the plugin ran, so when the plugin did run, it would allow the delete.
As with most things in CRM, it all comes down to requirements and solutions, but I think that gives you an idea of why and when you might use the PreValidation stage. We have a few other plugins with similar reasoning that run on the 'Delete' event.
I know it's a very old post; I came here while digging for an answer to the same question.
Later I found one key point from MSDN on the same topic, and I thought it would be helpful to post the information here for everyone.
A pre-validation plugin runs prior to the security checks. For example: if an account is a 'VIP' account and you don't want that account record to be deleted (no matter whether the user is a super user or admin), then this is better done in pre-validation, because at that point you are not concerned with who the user is or what permissions they have (they may not even have permission to delete any records in the system). CRM checks the database for the user's security roles during pre-operation, and that is where the first database hit happens; before that, we can stop execution of the plugin based on our own validation rules.
I hope that makes sense.
Thank you
Regards
Srikanth