Spring Data Couchbase - Connection problem with single server - spring-boot

I am getting started with Spring Boot and Spring Data Couchbase, and I am having problems connecting to my Couchbase server.
I am using IntelliJ, and I used the Spring Initializr to create my project.
Here's my configuration (I am using Kotlin):
import java.util.Collections
import org.springframework.context.annotation.Configuration
import org.springframework.data.couchbase.config.AbstractCouchbaseConfiguration

@Configuration
class Config : AbstractCouchbaseConfiguration() {
    override fun getBootstrapHosts(): List<String> = Collections.singletonList("10.0.0.10")
    override fun getBucketName(): String = "cwp"
    override fun getBucketPassword(): String = "password"
}
But instead of "just connecting" to the given IP, there seems to be some reverse DNS resolution in place that resolves the wrong IPs (due to my routers and VPN), so I am getting the following error:
[CWSRV.fritz.box:8091][ConfigEndpoint]: Socket connect took longer than specified timeout: connection timed out: CWSRV.fritz.box/10.0.0.112:8091
The name of my server is CWSRV, and I am using a VPN between my routers (Fritzboxes).
To avoid such problems I want to use just the IP address, without any name resolution involved.
Any help would be appreciated!

I figured it out myself:
It seems that the Java SDK does a reverse DNS lookup when it is given an IP address. Since I had no reverse zone created in my DNS server, the lookup was answered by the router on the server side, which returned cwsrv.fritz.box. That name resolved to 10.0.0.112 (instead of 10.0.0.10; my server could have been assigned that IP by the router at some point in the past), where no Couchbase server responded.
I created an entry for the server in my DNS and it works.
Resolution: Since the Couchbase (Java) SDK seems to rely on properly configured DNS, make sure that forward and reverse lookups work as expected! :)
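Both directions can be checked quickly from the JVM's point of view; a minimal sketch (the hostname is a placeholder for your own zone, and canonicalHostName triggers the same kind of reverse lookup the SDK relies on):

import java.net.InetAddress

fun main() {
    // Forward lookup: name -> address (placeholder name, adjust to your zone)
    val forward = InetAddress.getByName("cwsrv.example.com")
    println("forward: cwsrv.example.com -> ${forward.hostAddress}")

    // Reverse lookup: address -> name; with a missing reverse zone this is
    // where a wrong name like cwsrv.fritz.box would show up
    val reverse = InetAddress.getByName("10.0.0.10")
    println("reverse: 10.0.0.10 -> ${reverse.canonicalHostName}")
}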

Related

Is typing the DB URL as localhost faster than giving the remote server address?

I was wondering whether using the remote server URL instead of localhost in the Spring Boot database URL property (spring.datasource.url) is slower. Let's say I am running the Spring Boot application on a server with IPv4 address 123.123.12.12; will using
jdbc:mariadb://123.123.12.12:3306/dbname
make it slower than
jdbc:mariadb://localhost:3306/dbname ?
When you access localhost, your /etc/hosts file tells your computer not to look any further and resolves the name to your own machine. When you access the IP address, your computer asks the router to fetch the data, and the router then points back to your computer.
Directly using the IP address of any interface on the local host - either the loopback interface (127.0.0.1) or any other - is the option with the best performance. The packets will actually be routed through the loopback interface (no matter which IP is used) at - practically - CPU speed.
There are three reasons, however, to prefer 127.0.0.1 over the IPs of the other interfaces:
The loopback interface is crucial to the operation of the system, and as such it is initialized very early in the boot process and is nearly always available.
It is not affected by external factors: while removing the eth0 cable will not by itself interrupt localhost's access to itself via eth0's IP, it will mess things up if you have any of the many "autoconfiguration" systems that will happily shut down the interface on link loss.
If you have a firewall set up, it is quite possible that the rule chain is longer (and thus slightly worse performance-wise) when the IPs of the public interfaces are involved.
Yes, using an IP or DNS name is slower than localhost. In the case of localhost, the application does not need to resolve anything; it will directly try to connect to the database on the same server. With an IP or DNS name, it first needs to resolve the provided URL and only then connects to the database.
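The resolution cost itself is easy to observe from the JVM; a minimal sketch using the example address from the question (a literal IP skips DNS entirely, while localhost is answered from the hosts file):

import java.net.InetAddress
import kotlin.system.measureNanoTime

fun main() {
    // "123.123.12.12" is the example address from the question above
    for (host in listOf("localhost", "123.123.12.12")) {
        val ns = measureNanoTime { InetAddress.getByName(host) }
        println("resolving $host took ${ns / 1000} µs")
    }
}

Note that this only measures the one-off lookup; once the TCP connection is established, both URLs end up on the same machine over the loopback path.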

Postgresql: No connection could be made because the target machine actively refused it

Running PostgreSQL 9.5 on Windows Server 2012 R2 in Azure.
While running some load tests against my application, I get errors about not being able to connect to the PostgreSQL server. In the PostgreSQL logs I see the following message:
could not receive data from client: No connection could be made
because the target machine actively refused it.
This only happens when the load test goes to the next scenario, hitting a different part of the code, so new connections to the database are required. But after 10-20 seconds the rest of the scenario works flawlessly without hitting any other hiccups. So the problem seems to be with the TCP connections. (My code retries a couple of times, but it is not feasible to let it retry for 20 seconds.)
I'm using the following settings in the config files
postgresql.conf
listen_addresses = '*'
max_connections = 500
shared_buffers = 1024MB
temp_buffers = 2MB
work_mem = 2MB
maintenance_work_mem = 128MB
pg_hba.conf
host all all 0.0.0.0/0 trust
host all all ::/0 trust
I know, I know... It is not safe to accept connections from everyone, but this is just for testing purposes and to make sure these settings are not blocking any connection. So this answer is void.
I've been monitoring the number of connections on the server, and under load it is stable at 75. PostgreSQL is using around 350 MB of RAM, so given the config and the VM specs (7 GB RAM) there should be plenty of room to create more connections. However, when the next scenario is spinning up, the number of connections does not increase; it stays level, and the log messages about no connection being possible start appearing.
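For reference, the connection count being monitored can be read from pg_stat_activity; a minimal JDBC sketch (placeholder URL and credentials; assumes the PostgreSQL JDBC driver is on the classpath):

import java.sql.DriverManager

fun main() {
    // Placeholder URL and credentials; adjust to your setup.
    val url = "jdbc:postgresql://localhost:5432/mydb"
    DriverManager.getConnection(url, "postgres", "mypwd").use { conn ->
        conn.createStatement().use { st ->
            st.executeQuery("SELECT count(*) FROM pg_stat_activity").use { rs ->
                rs.next()
                println("current connections: ${rs.getInt(1)}")
            }
        }
    }
}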
What could be the problem here?
It does sound like this isn't really a Postgres problem (hence no changes in DB stats you're checking), rather that the traffic is being stopped by the server. Possibly because traffic on that port is saturated while handling your load testing queries?
It doesn't sound like you're hitting any of the Azure resource limits (including the database limits if that applies to your setup?), but without more detail on your load tests it's hard to say exactly what is needed.
Solutions from around the web and other SO answers suggest:
Disable TCP autotuning and tweak the TCP/IP registry keys on the server, e.g. set TcpAckFrequency - see this article for details
Make TCP setting adjustments (like WinsockListenBacklog) - which may be affected by whether connection pooling is in use or not - see this MS support article, which is for SQL Server 2005 but has some great tips on troubleshooting rejected TCP/IP connections (using Network Monitor, but applies to newer tools)
Faster request processing if you have enough control of the server - source
Disabling network proxying (in your load testing app): <defaultProxy> <proxy usesystemdefault="False"/> </defaultProxy> - source
The most likely reason is a firewall/anti-virus:
Software/Personal Firewall Settings
Multiple Software/Personal Firewalls
Anti-virus Software
LSP Layer
(Virtual) Router Firmware
Does your current Azure infrastructure contain a firewall or anti-virus?
Additionally, after some searching, it looks like this is a standard Windows "connection refused" message, which suggests that PostgreSQL is trying to reach something and being refused.
It is also possible that one network element in your network - assuming that you are still connected to the server - delays or drops some DB login/authentication packets (considered, for example, as a fake auth replay)...
You may also use a packet analyzer (like Wireshark) to record and inspect the network flow when the error appears.
I was facing the same issue in my ASP.NET Core application while trying to connect to PostgreSQL. The error was thrown in the Program.cs file when I called the Migrate function.
public static void Main(string[] args)
{
    try
    {
        var host = BuildWebHost(args);
        using (var scope = host.Services.CreateScope())
        {
            // Migrate once after the app is started.
            scope.ServiceProvider.GetService<MyDatabaseContext>().Database.Migrate();
        }
        host.Run();
    }
    catch (Exception e)
    {
        // NLog: catch setup errors
        _logger?.Error(e, "Stopped program because of exception: ");
        throw;
    }
}
To fix this problem I did the following steps.
Checked whether the PostgreSQL service was running in services.msc
Tried to log in to pgAdmin with the user and password I provided in the database context
Everything was fine, and as you know 5432 is the default PostgreSQL port; somehow I was using a different port in my application connection string, and changing it to 5432 fixed the issue for me.
"ConnectionString": "User Id=postgres;Password=mypwd;Host=localhost;Port=5432;Database=mydb;"
I came across a similar issue while load testing my API, where I was seeing Npgsql.NpgsqlException No connection could be made because the target machine actively refused it.
However, my issue was down to the fact that I was re-creating my NpgsqlConnection for each query rather than re-using it and keeping it alive.
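The same fix in JVM terms would be to let a pool own the connections instead of opening one per query; a hedged sketch using HikariCP (URL and credentials are placeholders):

import com.zaxxer.hikari.HikariConfig
import com.zaxxer.hikari.HikariDataSource

// One pool for the whole application: queries borrow a connection and
// return it, instead of opening a fresh TCP connection every time.
val pool = HikariDataSource(HikariConfig().apply {
    jdbcUrl = "jdbc:postgresql://localhost:5432/mydb" // placeholder
    username = "postgres"                             // placeholder
    password = "mypwd"                                // placeholder
    maximumPoolSize = 10
})

fun ping(): Boolean = pool.connection.use { conn ->
    conn.createStatement().use { st -> st.execute("SELECT 1") }
}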

How to set up EC2 with public IP for connections from itself?

I have an EC2 instance (running kafka) which needs to access itself via public IPs, but I would like to not open the network ACLs to the whole world.
The rationale is that when a connection is made to a kafka broker, the broker advertises which kafka nodes are available. As kafka will be used inside and outside EC2, the only common option is for the broker to advertise its public IP.
My setup:
an instance, with public IP (not an elastic IP)
a vpc
a security group, allowing access to the kafka ports from my work network
an internet gateway
a route allowing external access via the gateway
The security group is as follow:
Custom TCP Rule, proto=TCP, port=9092, src=<my office network>
Custom TCP Rule, proto=TCP, port=2181, src=<my office network>
In short, all works fine inside the instance if I use localhost.
All works fine outside the instance if I use the public IP.
What I now want is to use kafka from inside the instance with the public IP.
If I open the kafka ports to the whole world:
Custom TCP Rule, proto=TCP, port=9092, src=0.0.0.0/0
Custom TCP Rule, proto=TCP, port=2181, src=0.0.0.0/0
It works, as expected, but it does not feel safe.
How could I set up the network ACL to accept inbound traffic from my local instance/subnet/vpc (does not matter which) without opening up too much?
Well, this is not clean, but it has the added advantage of not having to pay for external bandwidth.
I did not find a way to do it as I expected (via the security groups), but just by updating /etc/hosts on my EC2 instance and actually using a hostname instead of an IP, everything works as expected.
For instance, if I give the instance the hostname kafka.example.com, then by having the following line in /etc/hosts:
127.0.0.1 kafka.example.com
I can use the name kafka.example.com everywhere, even if it actually points to a different IP depending on where the call is made.
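If you prefer to make this explicit on the broker side as well, the advertised address can be set to that same name - a sketch, and an assumption about your setup, since the exact key depends on the Kafka version (newer brokers use advertised.listeners, older ones advertised.host.name):

# server.properties - let the broker advertise the name, not a raw IP
advertised.listeners=PLAINTEXT://kafka.example.com:9092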

WebAuthenticationDetails getRemoteAddress() not returning real ip address of client

I am using WebAuthenticationDetails in my application. The WebAuthenticationDetails.getRemoteAddress() method returns the same IP address even when I log in to the application from different client machines. This may be due to a proxy server. Can anybody help me resolve this issue?
If your app is working behind a reverse proxy (for example nginx, Apache, etc.), then you'll always see the IP of the reverse proxy machine in the WebAuthenticationDetails object. To solve this problem you can configure your reverse proxy in such a way that it sends the client's IP address to your application server in an HTTP header. Then, in your webapp, get the client's IP from this header.
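For instance, with nginx forwarding the client address via proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;, the application side could look roughly like this (a sketch; the helper name is made up, and the header should only be trusted when it is set by your own proxy):

import javax.servlet.http.HttpServletRequest

// Hypothetical helper: prefer the proxy-supplied header, fall back to the
// socket address the container saw.
fun clientIp(request: HttpServletRequest): String {
    // X-Forwarded-For may hold a chain "client, proxy1, proxy2"; take the first hop.
    val forwarded = request.getHeader("X-Forwarded-For")
    return forwarded?.substringBefore(',')?.trim() ?: request.remoteAddr
}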

Starting multiple remote servers with Akka

I'm running into some deployment issues using Akka remoting to implement a small search application.
I want to deploy my ActorSystem on a set of local cluster machines to use them as workers, but I'm a bit confused about what to put into my application.conf to make this happen. For example, I can use:
akka.remote {
  transport = "akka.remote.netty.NettyRemoteTransport"
  netty {
    hostname = "0.0.0.0"
    port = 2552
  }
}
Each worker just runs the ActorSystem at startup.
This allows my worker machines to bind to their address when they start up, but then they refuse to listen to messages:
beaker-24: [ERROR] ... dropping message DaemonMsgWatch for non-local recipient akka://SearchService#beaker-24:2552/remote at akka://SearchService#0.0.0.0:2552
The documentation I've found for this so far only discusses deployment on localhost, which is not so useful. :) I'm hoping there is a way to do this without generating a separate configuration for each host.
Update:
Using an empty string as the hostname allows for contacting the host via the normal IP address. Addressing using the hostname itself doesn't work at the moment.
Setting “0.0.0.0” as the host name will currently basically disable remoting, because that is not a legal IP to send to. Background: actor references get the configured IP (or host name) inserted into their address part when they leave the local system, and that is exactly their “pointer home” for other systems to send messages back.
There has been an effort by Scott which would enable a system to receive replies at a different address here, but that is not included yet - and we may well choose a different solution to this problem.
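One way to keep a single shared application.conf is to rely on Typesafe Config's system-property overrides and inject each worker's own hostname at startup - a sketch (worker.jar is a placeholder for your deployment artifact):

# shared application.conf: keep the common settings, leave the hostname out
akka.remote.netty.port = 2552

# per-worker startup, e.g. on beaker-24:
java -Dakka.remote.netty.hostname=beaker-24 -jar worker.jar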
