I'm working with an application that interacts with Google Cloud Pub/Sub. It works fine in the normal scenario, but I want to enable proxy support, so I was going through the Publisher.Builder and Subscriber classes and their APIs to see whether any of them can enable proxy support. I managed to find only setChannelProvider, but I'm not sure whether that will work.
The following code snippet is what I'm thinking of using, but it doesn't seem to work:
ManagedChannel channel = ManagedChannelBuilder.forAddress(proxyHost, proxyPort).build();
TransportChannelProvider channelProvider = FixedTransportChannelProvider.create(GrpcTransportChannel.create(channel));
publisherBuilder.setChannelProvider(channelProvider);
I wasn't able to successfully publish messages to or pull messages from the cloud service. I get the following error:
java.util.concurrent.ExecutionException: io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: deadline exceeded after 9978300322ns
So I wanted to know: does the Pub/Sub client support configuring a proxy through its APIs, or does it only support proxy settings (i.e. host and port) provided through the environment?
You can specify the proxy host/port directly using the JVM args https.proxyHost and https.proxyPort:
mvn clean install -Dhttps.proxyHost=localhost -Dhttps.proxyPort=3128 exec:java
Then just directly create a client of your choice:
TopicAdminSettings topicAdminSettings = TopicAdminSettings.newBuilder().build();
TopicAdminClient topicAdminClient = TopicAdminClient.create(topicAdminSettings);
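If you prefer not to pass the flags on the command line, the same properties can also be set programmatically before the first client is created. A minimal sketch, with placeholder values standing in for your actual proxy:
// Must run before any Pub/Sub client is created; host and port are placeholders.
System.setProperty("https.proxyHost", "localhost");
System.setProperty("https.proxyPort", "3128");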
FYI: setting ManagedChannelBuilder.forAddress() here overrides the final target for Pub/Sub, which should be pubsub.googleapis.com:443, not the proxy.
Here is a medium post I put together, as well as a gist, specifically for Pub/Sub and Pub/Sub behind a proxy that requires basic auth headers.
Finally, just note that it's https.proxyHost even if you're using an HTTP proxy; ref grpc#9561.
Proxy authentication via HTTP is not supported by Google Pub/Sub, but it can be configured by using the GRPC_PROXY_EXP environment variable.
I ran into the same error that you got (that's why I assume you are using an HTTP proxy), and it got fixed by using what I said.
You need to set the JVM args https.proxyHost and https.proxyPort.
For proxy authentication, an additional configuration is needed before any client creation:
Authenticator.setDefault(new Authenticator() {
    @Override
    protected PasswordAuthentication getPasswordAuthentication() {
        // proxyUsername is a String; PasswordAuthentication expects the password as a char[]
        return new PasswordAuthentication(proxyUsername, proxyPassword.toCharArray());
    }
});
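If you go the GRPC_PROXY_EXP route mentioned above, it is an environment variable set before the JVM starts, in host:port form. For example (values are placeholders for your proxy):
export GRPC_PROXY_EXP=localhost:3128
mvn clean install exec:java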
I am using pybliometrics, a Python interface to the Scopus API, to download the abstracts of some papers.
Unfortunately Scopus only works inside the network of the university that subscribed to it. I am currently at home and whenever I try to download something using pybliometrics it gives me the following error:
pybliometrics.scopus.exception.Scopus401Error: The requestor is not authorized to access the requested view or fields of the resource
I need to use my university's proxy in order to access the internet with my university's IP address. The proxy has a WPAD configuration file available, but I don't see how to use it with pybliometrics. The pybliometrics documentation says to add a block like this to the configuration file:
[Proxy]
ftp = socks5://127.0.0.1:1234
http = socks5://127.0.0.1:1234
https = socks5://127.0.0.1:1234
But this proxy requires authentication. How can I specify the proxy username and password?
EDIT: I have tried setting up the block in config.ini like:
[Proxy]
ftp = http://username:password@proxy.thing.it:8080
http = http://username:password@proxy.thing.it:8080
https = http://username:password@proxy.thing.it:8080
but it still fails with the following error message:
requests.exceptions.ProxyError: HTTPSConnectionPool(host='api.elsevier.com', port=443): Max retries exceeded with url: /content/abstract/scopus_id/84983158344?view=META_ABS (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 407 Proxy Authentication Required')))
From our perspective the API will work via a proxy as long as the proxy is configured correctly. I would suggest you speak to the provider of the proxy to see if they can help.
We don't have specific instructions on how to use the APIs with a proxy (as there are many different proxies and potential configurations); however, the general instructions are here:
https://service.elsevier.com/app/answers/detail/a_id/29026/supporthub/elsevieraccess/
To me, your new proxy block looks suspicious: it funnels ftp and https requests through http as well. Maybe try ftp and https as the protocols in the corresponding sections.
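For illustration, the corrected block might look roughly like this (host, port and credentials are placeholders):
[Proxy]
ftp = ftp://username:password@proxy.thing.it:8080
http = http://username:password@proxy.thing.it:8080
https = https://username:password@proxy.thing.it:8080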
The other solution is to ask Scopus Integration Support for an InstToken, which you use instead of a proxy. You then specify the InstToken in the configuration file as well.
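If you go the InstToken route, the entry sits next to your API key in the configuration file. A sketch, assuming the section and key names used by pybliometrics (both values are placeholders):
[Authentication]
APIKey = your_api_key
InstToken = your_insttoken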
The problem was that my proxy requires DigestAuth rather than BasicAuth.
As the title says, how do I consume an HTTP service that requires a certificate from KrakenD?
Is it possible with KrakenD-CE? How do I implement it with the KrakenD framework?
You can configure TLS using the tls block in the service configuration:
"tls": {
"public_key": "/etc/krakend/certs/tls.crt",
"private_key": "/etc/krakend/certs/tls.key",
"min_version": "TLS12",
"max_version": "TLS13"
}
Based on this config, KrakenD will serve TLS and handle the requests.
https://www.krakend.io/docs/service-settings/tls/
Regarding implementation, it's quite simple.
You can check out: https://github.com/devopsfaith/krakend-ce
Create the build using Make:
make build, or if you are on Docker, make docker
make build will give you an executable binary of the KrakenD API gateway, which requires a config file in JSON or YAML format for the API config, rate limiting, and whatever other configuration you want to set for your API.
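As a rough sequence, assuming a working Go toolchain and a config file named krakend.json (the name is a placeholder):
git clone https://github.com/devopsfaith/krakend-ce.git
cd krakend-ce
make build
./krakend run -c krakend.json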
I was trying to enable an AMQP 1.0 connection with Ditto running on my local virtual Ubuntu machine, following the instructions from the website. So I created the twin on my instance, verified it exists, and the following step was to create a connection to the endpoint.
First, my question: is it mandatory to use Hono to create an AMQP connection? Because I would prefer to use a simple Mosquitto client. So I tried to execute the PUT cURL request:
{
    "targetActorSelection": "/system/sharding/connection",
    "headers": { "aggregate": false },
    "piggybackCommand": {
        "type": "connectivity.commands:createConnection",
        "connection": {}
    }
}
against the address where my instance of Eclipse Ditto is running, http://localhost/devops/piggyback/connectivity, but I'm getting a 401 Authorization error.
I tried the basic authentication used in the example, devops:devopsPw1!, but it fails as well.
Meanwhile, sending the same command to the Ditto sandbox instance works fine. What did I miss in my configuration?
Thanks a lot in advance, Mila
Regarding the first question: no, it is not mandatory to use Hono to create an AMQP connection. You can establish an AMQP connection to whatever URI you define in your connection.
This leads me to the next point: the JSON you provided in your question is missing the description of the actual connection.
I see that we should clarify this more explicitly in the documentation, as we did for the testConnection command.
You could have a look at the connection model to see how to configure the connection; a rough sketch follows below.
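For illustration only, a connection object for an AMQP 1.0 endpoint might look roughly like this; the id, uri, addresses and authorization subjects are placeholders, and the connection model documentation is the authoritative reference. This object replaces the empty "connection": {} in the piggyback command above:
{
    "id": "my-amqp-connection",
    "connectionType": "amqp-10",
    "connectionStatus": "open",
    "failoverEnabled": true,
    "uri": "amqp://user:password@my-amqp-broker:5672",
    "sources": [{
        "addresses": ["telemetry"],
        "authorizationContext": ["nginx:ditto"]
    }],
    "targets": [{
        "address": "events",
        "topics": ["_/_/things/twin/events"],
        "authorizationContext": ["nginx:ditto"]
    }]
}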
Regarding your second question (the 401 response): the problem is that the default devops password is "foobar". You can configure a password of your choice by setting the environment variable DEVOPS_PASSWORD of the gateway container.
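For example, when starting Ditto with docker-compose, you could set it in the gateway service definition (the service name depends on your compose file):
gateway:
  environment:
    - DEVOPS_PASSWORD=devopsPw1!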
I hope I could help you.
I'm running a Micronaut microservice on Windows 7.
My GET request looks like: http://localhost:8080/maps/myreq.
The controller uses an HTTP client to send a request to an external website: image.maps.api.here.com.
When running without a proxy, everything goes fine and the response is OK (an image).
But when running behind the proxy, the connection times out. The proxy works fine for any other application or browser.
How do I set up the Micronaut server behind a proxy to properly route requests?
Edit: when sending a request, the Netty server responds with an error: unable to connect to image.maps.api.here.com:xx.xx.xx.xx:xxxx, where xx.xx.xx.xx:xxxx is the proxy.
How do I set up the Micronaut server behind a proxy to properly route requests?
You can set the https.proxyHost, https.proxyPort, http.proxyUser and http.proxyPassword system properties. A common place to do that is in the MN_OPTS environment variable. For example, you could set MN_OPTS to have a value like "-Dhttps.proxyHost=127.0.0.1 -Dhttps.proxyPort=3128 -Dhttp.proxyUser=test -Dhttp.proxyPassword=test".
See https://docs.micronaut.io/1.1.0/guide/index.html#proxy for more info.
I hope that helps.
I fixed the problem by setting the proxy for the CLI, but also by setting the proxy in application.yml, as shown here:
https://github.com/micronaut-projects/micronaut-core/issues/1611
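For reference, the application.yml variant looks roughly like this (property names as I understand them from the linked issue; host, port and credentials are placeholders):
micronaut:
  http:
    client:
      proxy-type: http
      proxy-address: 127.0.0.1:3128
      proxy-username: test
      proxy-password: test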
I am running a RabbitMQ instance that provides MQTT over websockets via the rabbitmq_web_mqtt plugin.
For legacy reasons, I need to support a non-default WebSocket URL.
I saw in the documentation that it is possible to change the port via the { port, 1234 } config, but I could not find any way to change the WebSocket URL. It is currently set to the default path of /ws.
Is it possible to change the WebSocket URL without modifying the plugin?
This has been made configurable back in September 2018. See the already mentioned ticket.
Add the line and restart:
# echo 'web_mqtt.ws_path = /mqtt' >> /etc/rabbitmq/rabbitmq.conf
# service rabbitmq-server restart
It is now accessible by (compliant) MQTT clients, for instance at:
ws://192.168.210.84:15675/mqtt
UPDATE: RabbitMQ now allows configuration of the WebSocket URL. See this answer.
After some research, I found out that the path is not configurable