Cannot configure Vaadin addons maven repository in Artifactory - maven

I'm trying to configure the Vaadin add-ons repository in Artifactory, but the repository's connection test always fails with an error.
When I change the URL to just http://maven.vaadin.com, the connection test succeeds, but the artifacts cannot be resolved. Other remote repositories I have configured work fine.
Currently I have to resort to downloading and deploying the jar files manually.
Any ideas about what I could be doing wrong here?
Edit: Specific addon example (although all have failed for me, so far):
<dependency>
    <groupId>org.vaadin.addons</groupId>
    <artifactId>loginform</artifactId>
    <version>0.5.2</version>
</dependency>
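For completeness, when Maven resolves an add-on like this directly (without Artifactory in between), the dependency is normally paired with the Vaadin add-ons repository in the pom.xml; the URL below is the same repository the Artifactory remote is meant to proxy:

```xml
<repositories>
    <repository>
        <id>vaadin-addons</id>
        <url>http://maven.vaadin.com/vaadin-addons</url>
    </repository>
</repositories>
```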

The test is failing because the Vaadin add-ons Maven repository does not support directory listing (browsing).
When Artifactory sends a test request to the root of the repository, it gets a 404 status:
$ curl -vv http://maven.vaadin.com/vaadin-addons/
* Trying 54.86.23.48...
* Connected to maven.vaadin.com (54.86.23.48) port 80 (#0)
> GET /vaadin-addons/ HTTP/1.1
> Host: maven.vaadin.com
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: nginx
< Date: Thu, 17 Sep 2015 12:50:53 GMT
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Connection: keep-alive
< x-amz-request-id: 2F7E32DADE9E2C20
< x-amz-id-2: EjRvUE7kv4GOdPE0ry+VsmXvmva4QgBptK/CcnSESZbe2AqotmXpAuM3AuChq2Gd
<
<?xml version="1.0" encoding="UTF-8"?>
* Connection #0 to host maven.vaadin.com left intact
<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>vaadin-addons/</Key><RequestId>2F7E32DADE9E2C20</RequestId><HostId>EjRvUE7kv4GOdPE0ry+VsmXvmva4QgBptK/CcnSESZbe2AqotmXpAuM3AuChq2Gd</HostId></Error>
However, you can still use Artifactory to proxy this repository; you will be able to download artifacts from it.
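Even with browsing disabled, individual artifacts resolve through the proxy as long as the full Maven 2 layout path is requested. A small sketch of how that path is derived from the coordinates in the question (the path convention is standard Maven; nothing Vaadin-specific is assumed):

```python
def maven_artifact_path(group_id, artifact_id, version, extension="jar"):
    """Build the standard Maven 2 repository path for an artifact:
    dots in the groupId become directory separators."""
    return "{}/{}/{}/{}-{}.{}".format(
        group_id.replace(".", "/"), artifact_id, version,
        artifact_id, version, extension,
    )

# The loginform add-on from the question:
print(maven_artifact_path("org.vaadin.addons", "loginform", "0.5.2"))
# org/vaadin/addons/loginform/0.5.2/loginform-0.5.2.jar
```

Appending that path to the remote repository URL (or to the Artifactory proxy URL) yields a concrete file request, which succeeds even though the repository root returns 404.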


How to handle compression of static resources over HTTPS in Quarkus

Problem
I want to compress static resources in Quarkus, such as JS, CSS, and images. I activated compression with the configuration quarkus.http.enable-compression: true. It works perfectly in HTTP mode but does not work over HTTPS.
Expected behavior
Content will be compressed as GZIP over HTTPS
Actual behavior
No GZIP compression over HTTPS
To Reproduce
Steps to reproduce the behavior:
Git pull from the quarkus-demo I made earlier
Create a certificate with mkcert to enable SSL on localhost
Compile with the command mvn clean package -Dquarkus.profile=prod to run over HTTPS. If you want to test over HTTP, run mvn quarkus:dev instead
Run the Quarkus app with the command java -jar target\quarkus-app\quarkus-run.jar
Finally, open your browser at https://localhost or http://localhost:8080, then inspect the loaded resources in the Network tab of the developer tools
application.yml
quarkus:
  application:
    version: 1.0.0-SNAPSHOT
  http:
    port: 8080
    enable-compression: true
application-prod.yml
quarkus:
  http:
    port: 8080
    ssl-port: 443
    ssl:
      certificate:
        file: D:\system\server\localhost.pem
        key-file: D:\system\server\localhost-key.pem
    insecure-requests: redirect
    enable-compression: true
HTTP
Request URL: http://localhost:8080/js/chunk-vendors.e96189d0.js
Request Method: GET
Status Code: 200 OK
Remote Address: 127.0.0.1:8080
Referrer Policy: strict-origin-when-cross-origin
accept-ranges: bytes
content-encoding: gzip
content-type: text/javascript;charset=UTF-8
date: Thu, 10 Mar 2022 07:41:46 GMT
transfer-encoding: chunked
HTTPS
Request URL: https://localhost/js/chunk-vendors.e96189d0.js
Request Method: GET
Status Code: 200
Remote Address: [::1]:443
Referrer Policy: strict-origin-when-cross-origin
accept-ranges: bytes
cache-control: public, immutable, max-age=86400
content-length: 880682
content-type: text/javascript;charset=UTF-8
date: Thu, 10 Mar 2022 07:45:07 GMT
last-modified: Thu, 10 Mar 2022 07:45:07 GMT
vary: accept-encoding
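The telling difference between the two traces is the header set: the HTTP response carries content-encoding: gzip, while the HTTPS response has only a content-length and a vary: accept-encoding. A minimal sketch of that check, with the header dictionaries transcribed (abridged) from the traces above:

```python
def is_gzipped(headers):
    """True if the response headers advertise gzip content encoding."""
    return headers.get("content-encoding", "").lower() == "gzip"

# Headers transcribed from the two traces above (abridged).
http_response = {"content-encoding": "gzip", "transfer-encoding": "chunked"}
https_response = {"content-length": "880682", "vary": "accept-encoding"}

print(is_gzipped(http_response))   # True
print(is_gzipped(https_response))  # False
```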
FYI: I've also tried using a Vert.x filter, but it doesn't help :(
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import io.quarkus.vertx.http.runtime.filters.Filters;

@ApplicationScoped
public class FilterRegistrator {
    void setUpFilter(@Observes Filters filters) {
        filters.register(rc -> {
            rc.next();
            if (rc.normalizedPath().matches("^.*\\.(js|css|svg|png)$")) {
                rc.response().headers().add("content-encoding", "gzip");
            }
        }, 0);
    }
}
This issue was solved in Quarkus 2.7.5.Final.

Why does Tomcat return "400 Bad Request" for URLs similar to "something.com-xyz"?

Why does Tomcat version 7.0.88 give a "400 Bad Request" error code if the hostname ends with something like xyz.com-abc?
For testing purposes, let's assume we have the following entry in the hosts file:
127.0.0.1 hello.hello.hello-erq
And we try to access this URL from curl:
curl -v hello.hello.hello-er:8080
We get the following output
* Rebuilt URL to: hello.hello.hello-er:8080/
* Trying 127.0.0.1...
* Connected to hello.hello.hello-er (127.0.0.1) port 8080 (#0)
> GET / HTTP/1.1
> Host: hello.hello.hello-er:8080
> User-Agent: curl/7.49.0
> Accept: */*
>
< HTTP/1.1 400 Bad Request
< Server: Apache-Coyote/1.1
< Transfer-Encoding: chunked
< Date: Thu, 20 Dec 2018 19:53:09 GMT
< Connection: close
<
* Closing connection 0
While using the localhost in the url we get
C:\playground\apache-tomcat-7.0.88-windows-x64\apache-tomcat-7.0.88\bin>curl -v localhost:8080
* Rebuilt URL to: localhost:8080/
* Trying ::1...
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET / HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.49.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: Apache-Coyote/1.1
< Content-Type: text/html;charset=ISO-8859-1
< Transfer-Encoding: chunked
< Date: Thu, 20 Dec 2018 20:00:07 GMT
<
<!DOCTYPE html>
All the Tomcat configurations are the same for both tests, and the same issue is reproduced on a vanilla, out-of-the-box Tomcat server too.
I tried to reproduce the issue on Tomcat 8, but both URLs worked fine there. How can I dig deeper and find the root cause of this issue in 7.0.88?
Is there some additional logging I can enable to get more detail on this issue?
Or is the only thing left to pull my hair out and upgrade?
Tomcat was attempting to enforce the domain-name specification by refusing your hostname, which has a hyphen in the TLD. This was deemed a bug in Tomcat and fixed in 7.0.89 (and in the versions of Tomcat 8.0.x, 8.5.x, and 9.0.x released around the same time).
So it seems all you need is a small version bump.
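To see why such a check trips on this hostname, here is a sketch of a strict reading of the old host-name grammar (RFC 952 as updated by RFC 1123): labels may contain alphanumerics and interior hyphens, but the top-level label is traditionally expected to be purely alphabetic. This is an illustration of that kind of validation, not a reproduction of Tomcat's actual parser:

```python
import re

# Labels: alphanumeric, interior hyphens allowed, no leading/trailing hyphen.
LABEL = re.compile(r"^[a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?$")
# Strict interpretation: the top-level label is alphabetic only.
TLD = re.compile(r"^[a-zA-Z]+$")

def strict_hostname_ok(host):
    labels = host.split(".")
    if not all(LABEL.match(label) for label in labels):
        return False
    return bool(TLD.match(labels[-1]))

print(strict_hostname_ok("localhost"))             # True
print(strict_hostname_ok("example.com"))           # True
print(strict_hostname_ok("hello.hello.hello-er"))  # False: hyphen in last label
```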

Gitlab CI - Failed to register runner

I've set up my GitLab installation from source, secured it with Let's Encrypt, and deployed it under https://gitlab.mydomain.com. I can access the website and create repositories, etc., but I can't find a way to register a GitLab CI runner for the installation.
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/ci):
https://gitlab.mydomain.com/ci
Please enter the gitlab-ci token for this runner:
xxxxxxxx-xxxxxxxx
Please enter the gitlab-ci description for this runner:
[server]: test
Please enter the gitlab-ci tags for this runner (comma separated):
test
ERROR: Registering runner... failed runner=xxxxxxx
status=couldn't execute POST against https://gitlab.mydomain.com/ci/api/v1/runners/register.json:
Post https://gitlab.mydomain.com/ci/api/v1/runners/register.json:
read tcp [ipv6address]:33518->[ipv6address]:443: read: connection reset by peer
PANIC: Failed to register this runner. Perhaps you are having network problems
My GitLab system is working fine, and I have really run out of explanations for why there would be a connection reset by peer. When I curl the address from the error message directly, it returns a correct response.
curl -v https://gitlab.mydomain.com/ci/api/v1/runners/register.json
* Trying ipv6address...
* Connected to gitlab.mydomain.com (ipv6address) port 443 (#0)
* found 174 certificates in /etc/ssl/certs/ca-certificates.crt
* found 700 certificates in /etc/ssl/certs
* ALPN, offering h2
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_256_GCM_SHA384
* server certificate verification OK
* server certificate status verification SKIPPED
* common name: mydomain.com (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: CN=mydomain.com
* start date: Wed, 18 May 2016 14:35:00 GMT
* expire date: Tue, 16 Aug 2016 14:35:00 GMT
* issuer: C=US,O=Let's Encrypt,CN=Let's Encrypt Authority X3
* compression: NULL
* ALPN, server did not agree to a protocol
> GET /ci/api/v1/runners/register.json HTTP/1.1
> Host: gitlab.mydomain.com
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 405 Method Not Allowed
< Server: nginx
< Date: Sun, 29 May 2016 09:14:09 GMT
< Content-Type: application/json
< Content-Length: 2
< Connection: keep-alive
< Allow: OPTIONS, POST
< Cache-Control: no-cache
< Status: 405 Method Not Allowed
If the runner and GitLab are running on the same host, you can get around this problem by answering the first question with the following instead of what is given in the docs:
http://gitlab:port
where gitlab is the container name and port is the host-side (left-hand) port number of the container's port mapping. If you are using GitLab's internal SSL certificates, specify https instead of http. This has always solved the problem for me.
For those who are using Docker: the issue is about the Docker network.
If you run
docker container inspect $id
you will see the IPAddress of the GitLab container.
Point to that IP address in the first question and it works fine.
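The address can also be pulled out of the inspect output programmatically. A small sketch, using a trimmed sample of the JSON that docker container inspect emits (the field names match Docker's output; the address value here is made up for illustration):

```python
import json

# Trimmed sample of `docker container inspect` output;
# the IP address is a made-up example.
inspect_output = """
[{"NetworkSettings": {"IPAddress": "172.17.0.2"}}]
"""

containers = json.loads(inspect_output)
ip = containers[0]["NetworkSettings"]["IPAddress"]
print(ip)  # 172.17.0.2
```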
The problem went away after updating GitLab to 8.8.3 and gitlab-ci-multi-runner to the most recent version.
I also started my gitlab nginx configuration files from scratch.
In the end, I can't tell which change exactly solved the problem.
So far I had many errors and issues, starting with errors 404 and 403 and ending with problems with the POST request.
For me, the problem seems to be an incompatibility between GitLab and the CI runner.
The solution, the same as in the issue linked in the post, was to install an older version of the runner:
sudo apt install gitlab-ci-multi-runner=1.11.1

artifact is not uploaded to Nexus repository via curl command

Following the instructions on this page:
https://support.sonatype.com/entries/22189106-How-can-I-programatically-upload-an-artifact-into-Nexus-
I was able to upload an artifact to the repository Australia with this command:
curl -v -u admin:admin123 --upload-file RE_0.0.0.19.tar.gz http://nexus1.ccorp.com/nexus/content/repositories/Australia/RE_0.0.0.19.tar.gz
That doesn't create a POM file or associate an artifactId with the upload, which means I won't be able to query for the latest build in that repository.
I then tried this command:
curl -v -F r=releases -F hasPom=false -F e=tar.gz -F g=Australia -F a=RE -F v=0.0.0.19 -F p=tar.gz -F file=RE_0.0.0_19.tar.gz -u admin:admin123 http://nexus1.ccorp.com/nexus/service/local/artifact/maven/content
I got this log with no error, but the artifact is not uploaded:
Hostname was NOT found in DNS cache
Trying 10.10.5.92...
Connected to nexus1.ccorp.com (10.10.5.92) port 80 (#0)
Server auth using Basic with user 'admin'
POST /nexus/service/local/artifact/maven/content HTTP/1.1
Authorization: Basic YWRtaW46YWRtaW4xMjM=
User-Agent: curl/7.35.0
Host: nexus1.ccorp.com
Accept: */*
Content-Length: 852
Expect: 100-continue
Content-Type: multipart/form-data; boundary=------------------------929d6986ddb3024d
HTTP/1.1 100 Continue
HTTP/1.1 201 Created
Date: Tue, 20 Oct 2015 20:29:11 GMT
Server Nexus/2.11.4-01 Noelios-Restlet-Engine/1.1.6-SONATYPE-5348-V8 is not blacklisted
Server: Nexus/2.11.4-01 Noelios-Restlet-Engine/1.1.6-SONATYPE-5348-V8
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
Content-Type: text/html;charset=UTF-8
Content-Length: 85
Connection #0 to host nexus1.ccorp.com left intact
{"groupId":"Australia","artifactId":"RE","version":"0.0.0.19","packaging":"tar.gz"}
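One detail worth checking in the second command (my observation, not something stated in the log): curl's -F option only reads a file when the value starts with @, as in -F file=@RE_0.0.0_19.tar.gz. Without the @, curl sends the literal string as the field value, which is consistent with the tiny Content-Length of 852 for the whole multipart body. A sketch of the rule:

```python
def form_field_value(value):
    """Mimic curl's -F semantics: '@path' means "attach the contents
    of this file"; anything else is sent as a literal string."""
    if value.startswith("@"):
        with open(value[1:], "rb") as f:
            return f.read()
    return value.encode()

# Without '@', only these 18 bytes are sent, not the archive:
print(form_field_value("RE_0.0.0_19.tar.gz"))  # b'RE_0.0.0_19.tar.gz'
```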

Webrick proxy + cURL returns bad request

I'm going nuts: I have a very simple WEBrick proxy (WEBrick 1.3.1, with Ruby 1.9.3), and I want to try it out with curl. Here is the proxy:
require 'webrick'
require 'webrick/httpproxy'
server = WEBrick::HTTPProxyServer.new(:BindAddress => "localhost", :Port => 8888)
trap('INT') { server.shutdown }
server.start
And here is the curl command:
curl --proxy localhost:8888 http://www.google.de -v
But the curl command always returns a bad request:
* About to connect() to proxy localhost port 8888 (#0)
* Trying ::1...
* connected
* Connected to localhost (::1) port 8888 (#0)
> GET http://www.google.de HTTP/1.1
> User-Agent: curl/7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 OpenSSL/0.9.8r zlib/1.2.5
> Host: www.google.de
> Accept: */*
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 400 Bad Request
< Content-Type: text/html; charset=ISO-8859-1
< Server: WEBrick/1.3.1 (Ruby/1.9.3/2012-10-12)
< Date: Mon, 18 Mar 2013 13:44:27 GMT
< Content-Length: 295
< Connection: close
<
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN">
<HTML>
<HEAD><TITLE>Bad Request</TITLE></HEAD>
<BODY>
<H1>Bad Request</H1>
bad URI `http://www.google.de'.
<HR>
<ADDRESS>
WEBrick/1.3.1 (Ruby/1.9.3/2012-10-12) at
23tuxmb.local:8888
</ADDRESS>
</BODY>
</HTML>
* Closing connection #0
curl --version returns, on my Mac OS X 10.8:
curl 7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 OpenSSL/0.9.8r zlib/1.2.5
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IPv6 Largefile NTLM NTLM_WB SSL libz
I can't figure out where the error is. I tried this piece of code a couple of weeks ago, and I remember that it worked.
The weird thing is that when I configure my Mac to use that proxy globally (in System Settings -> Network -> Advanced -> Proxies -> Web Proxy), everything works. A request with Chrome or any other application is routed through the proxy (and I see it in the terminal log of the proxy).
So, can anyone reproduce this issue? Is it a curl bug? Or a WEBrick-related issue?
EDIT
More information: the output of the Ruby script itself when curl tries to connect is
[2013-03-18 17:16:32] ERROR bad URI `http://www.amazon.de'.
localhost - - [18/Mar/2013:17:16:32 CET] "GET http://www.amazon.de HTTP/1.1" 400 286
- -> http://www.amazon.de
thx!
If you add a trailing slash to the URL you request, does it work then? I.e., a command line like this:
curl --proxy localhost:8888 http://www.google.de/ -v
(This is a curl bug, reported in bug #1206 and fixed in git; the fix will be in the next release.)
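The failing request line in the trace is GET http://www.google.de HTTP/1.1, an absolute URI with an empty path component, which a strict URI parser can reject; with the trailing slash the path becomes /. A small illustration of the difference (using Python's urllib here purely for brevity; WEBrick's URI parser presumably applies a similar grammar):

```python
from urllib.parse import urlsplit

# Without a trailing slash the absolute URI has an empty path
# component; with it, the path is "/".
print(repr(urlsplit("http://www.google.de").path))   # ''
print(repr(urlsplit("http://www.google.de/").path))  # '/'
```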
