How to configure Prosody IM for communication between two computers using dnsmasq

I installed Prosody IM successfully and it works on localhost. Now I have two computers connected by a crossover cable with fixed IP addresses (I checked by sending a ping). The Jabber server is installed on one of these computers, and an XMPP-based client on both.
However, the clients cannot resolve the name of my server, even on the same host. For example, with a virtual host 'lti.loc', my client (based on aioxmpp) shows this error when trying to connect:
aioxmpp.errors.MultiOSError: failed to connect to XMPP domain 'lti.loc': multiple errors: [Errno -2] Name or service not known
Is there a tool or a way to publish SRV records for this service on the local network only?
UPDATE: I found a tool called dnsmasq, and I am now working on configuring it properly. If anyone knows more about this configuration, please answer.

I found a solution:
First, install dnsmasq.
Next, it is important to disable the system's own name resolution service. On Ubuntu-based systems the commands are:
$ sudo systemctl disable systemd-resolved
$ sudo systemctl stop systemd-resolved
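With systemd-resolved stopped, /etc/resolv.conf may still point at the (now dead) stub resolver. A sketch of pointing it at the local dnsmasq instance instead - this assumes dnsmasq will listen on 127.0.0.1 on this machine:

```shell
# /etc/resolv.conf is often a symlink into /run/systemd/resolve; replace it
# with a plain file naming the local dnsmasq instance as the resolver.
sudo rm /etc/resolv.conf
echo "nameserver 127.0.0.1" | sudo tee /etc/resolv.conf
```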
Then edit the dnsmasq configuration file so clients can locate the Prosody server. The command to open the file is:
$ sudo gedit /etc/dnsmasq.conf
The configuration example is:
local=/localnet/
address=/lti.loc/192.168.1.1
srv-host=_xmpp-client._tcp.lti.loc,lti.loc,5222
srv-host=_xmpp-server._tcp.lti.loc,lti.loc,5269
Finally, start dnsmasq with:
$ sudo systemctl start dnsmasq
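Once dnsmasq is running, you can check from either machine that both the A record and the SRV records resolve (192.168.1.1 is the address used in the config above; adjust it to wherever dnsmasq listens):

```shell
# A record for the XMPP domain
dig +short lti.loc @192.168.1.1

# SRV records published by dnsmasq for client and server-to-server connections
dig +short SRV _xmpp-client._tcp.lti.loc @192.168.1.1
dig +short SRV _xmpp-server._tcp.lti.loc @192.168.1.1
```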
Some applications require an encrypted (TLS) connection to Prosody. Prosody provides commands to generate certificates automatically. The final Prosody config file /etc/prosody/prosody.cfg.lua looks like this:
-- Prosody Example Configuration File
modules_enabled = {
-- Generally required
"roster"; -- Allow users to have a roster. Recommended ;)
"saslauth"; -- Authentication for clients and servers. Recommended if you want to log in.
"tls"; -- Add support for secure TLS on c2s/s2s connections
"dialback"; -- s2s dialback support
"disco"; -- Service discovery
-- Not essential, but recommended
"carbons"; -- Keep multiple clients in sync
"pep"; -- Enables users to publish their mood, activity, playing music and more
"private"; -- Private XML storage (for room bookmarks, etc.)
"blocklist"; -- Allow users to block communications with other users
"vcard"; -- Allow users to set vCards
-- Nice to have
"version"; -- Replies to server version requests
"uptime"; -- Report how long server has been running
"time"; -- Let others know the time here on this server
"ping"; -- Replies to XMPP pings with pongs
"register"; -- Allow users to register on this server using a client and change passwords
--"mam"; -- Store messages in an archive and allow users to access it
--"smacks"; -- Stream manager
-- Admin interfaces
"admin_adhoc"; -- Allows administration via an XMPP client that supports ad-hoc commands
--"admin_telnet"; -- Opens telnet console interface on localhost port 5582
-- HTTP modules
-- "bosh"; -- Enable BOSH clients, aka "Jabber over HTTP"
--"websocket"; -- XMPP over WebSockets
--"http_files"; -- Serve static files from a directory over HTTP
-- Other specific functionality
--"limits"; -- Enable bandwidth limiting for XMPP connections
--"groups"; -- Shared roster support
--"server_contact_info"; -- Publish contact information for this service
--"announce"; -- Send announcement to all online users
--"welcome"; -- Welcome users who register accounts
--"watchregistrations"; -- Alert admins of registrations
--"motd"; -- Send a message to users when they log in
--"legacyauth"; -- Legacy authentication. Only used by some old clients and bots.
--"proxy65"; -- Enables a file transfer proxy service which clients behind NAT can use
}
-- These modules are auto-loaded, but should you want
-- to disable them then uncomment them here:
modules_disabled = {
-- "offline"; -- Store offline messages
-- "c2s"; -- Handle client connections
-- "s2s"; -- Handle server-to-server connections
-- "posix"; -- POSIX functionality, sends server to background, enables syslog, etc.
}
-- Disable account creation by default, for security
-- For more information see https://prosody.im/doc/creating_accounts
allow_registration = true
-- Force clients to use encrypted connections? This option will
-- prevent clients from authenticating unless they are using encryption.
c2s_require_encryption = false
-- Force servers to use encrypted connections? This option will
-- prevent servers from authenticating unless they are using encryption.
-- Note that this is different from authentication
s2s_require_encryption = false
-- Force certificate authentication for server-to-server connections?
-- This provides ideal security, but requires servers you communicate
-- with to support encryption AND present valid, trusted certificates.
-- NOTE: Your version of LuaSec must support certificate verification!
-- For more information see https://prosody.im/doc/s2s#security
s2s_secure_auth = false
allow_unencrypted_plain_auth = false
disable_sasl_mechanisms = { "DIGEST-MD5" }
-- Some servers have invalid or self-signed certificates. You can list
-- remote domains here that will not be required to authenticate using
-- certificates. They will be authenticated using DNS instead, even
-- when s2s_secure_auth is enabled.
--s2s_insecure_domains = { "insecure.example" }
-- Even if you leave s2s_secure_auth disabled, you can still require valid
-- certificates for some domains by specifying a list here.
--s2s_secure_domains = { "jabber.org" }
-- Select the authentication backend to use. The 'internal' providers
-- use Prosody's configured data storage to store the authentication data.
-- To allow Prosody to offer secure authentication mechanisms to clients, the
-- default provider stores passwords in plaintext. If you do not trust your
-- server please see https://prosody.im/doc/modules/mod_auth_internal_hashed
-- for information about using the hashed backend.
authentication = "internal_hashed"
-- Select the storage backend to use. By default Prosody uses flat files
-- in its configured data directory, but it also supports more backends
-- through modules. An "sql" backend is included by default, but requires
-- additional dependencies. See https://prosody.im/doc/storage for more info.
--storage = "sql" -- Default is "internal"
-- For the "sql" backend, you can uncomment *one* of the below to configure:
--sql = { driver = "SQLite3", database = "prosody.sqlite" } -- Default. 'database' is the filename.
--sql = { driver = "MySQL", database = "prosody", username = "prosody", password = "secret", host = "localhost" }
--sql = { driver = "PostgreSQL", database = "prosody", username = "prosody", password = "secret", host = "localhost" }
-- Archiving configuration
-- If mod_mam is enabled, Prosody will store a copy of every message. This
-- is used to synchronize conversations between multiple clients, even if
-- they are offline. This setting controls how long Prosody will keep
-- messages in the archive before removing them.
archive_expires_after = "1w" -- Remove archived messages after 1 week
-- You can also configure messages to be stored in-memory only. For more
-- archiving options, see https://prosody.im/doc/modules/mod_mam
-- Logging configuration
-- For advanced logging see https://prosody.im/doc/logging
log = {
info = "prosody.log"; -- Change 'info' to 'debug' for verbose logging
error = "prosody.err";
-- "*syslog"; -- Uncomment this for logging to syslog
-- "*console"; -- Log to the console, useful for debugging with daemonize=false
}
-- Uncomment to enable statistics
-- For more info see https://prosody.im/doc/statistics
-- statistics = "internal"
-- Certificates
-- Every virtual host and component needs a certificate so that clients and
-- servers can securely verify its identity. Prosody will automatically load
-- certificates/keys from the directory specified here.
-- For more information, including how to use 'prosodyctl' to auto-import certificates
-- (from e.g. Let's Encrypt) see https://prosody.im/doc/certificates
-- Location of directory to find certificates in (relative to main config file):
--certificates = "certs"
----------- Virtual hosts -----------
-- You need to add a VirtualHost entry for each domain you wish Prosody to serve.
-- Settings under each VirtualHost entry apply *only* to that host.
pidfile = "prosody.pid" -- stores prosody.pid in the current directory
VirtualHost "lti.loc"
ssl = {
key = "/var/lib/prosody/lti.loc.key";
certificate = "/var/lib/prosody/lti.loc.crt";
}
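The certificates referenced in the ssl section above can be generated with prosodyctl; this is a sketch of the self-signed route (prosodyctl writes the key and certificate into Prosody's data directory, /var/lib/prosody/ on Debian/Ubuntu):

```shell
# Generate a self-signed key and certificate for the virtual host
sudo prosodyctl cert generate lti.loc
# Restart Prosody so it picks up the new certificate
sudo prosodyctl restart
```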
Finally, it is important to correctly set up the hosts in /etc/hosts!
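For example, each machine's /etc/hosts can map the server's fixed IP to the XMPP domain (the IP matches the dnsmasq example above):

```
192.168.1.1	lti.loc
```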


how can I prevent systemd-networkd from sending client identifier?

I have a machine with CoreOS 1800 (or 1855) installed on disk, with the following systemd-networkd config (there is only one network interface in the machine):
$ cat /etc/systemd/network/zz-default.network
[Network]
DHCP=yes
[DHCP]
ClientIdentifier=mac
UseMTU=true
UseDomains=true
Another notable thing: this machine is also configured for PXE boot, but the PXE server rejects the boot, so it finally boots from disk.
When I restart the machine, two DHCP IPs are allocated for it; I confirmed this by checking /var/lib/dhcpd.leases on the DHCP server:
lease 100.79.223.152 {
starts 5 2018/09/28 02:34:00; ends 6 2018/09/29 02:33:59; tstp 6 2018/09/29 02:33:59; cltt 5 2018/09/28 02:34:00;
binding state active; next binding state free; rewind binding state free;
hardware ethernet 08:9e:01:d9:28:64;
option agent.circuit-id 0:5:8:b9:1:0:29;
}
lease 100.79.223.150 {
starts 5 2018/09/28 02:34:29; ends 6 2018/09/29 02:34:28; tstp 6 2018/09/29 02:34:28; cltt 5 2018/09/28 02:34:29;
binding state active; next binding state free; rewind binding state free;
hardware ethernet 08:9e:01:d9:28:64; uid "\001\010\236\001\331(d";
option agent.circuit-id 0:5:8:b9:1:0:29;
}
The lease record 100.79.223.152 is requested by the PXE loader, though rejected by the DHCP server.
The lease record 100.79.223.150 is requested by systemd-networkd of CoreOS. (I can confirm this by running systemctl restart systemd-networkd and watching the leases file.)
All seems fine, but the PXE lease record 100.79.223.152 causes another problem (when the machine really is PXE-booted and another OS is deployed to it, it gets 100.79.223.152 instead of .150, which then causes other, private problems).
If I install another OS that does not use systemd-networkd, a reboot causes only one lease record.
You can see the lease 100.79.223.150 has a field uid "\001\010\236\001\331(d", which tells the DHCP server to allocate the IP by the uid (client identifier); currently its content is actually the same as the MAC address, just printed in octal.
This is the root cause of the two IPs.
To prevent this two-IP problem, I tried setting deny duplicates in /etc/dhcp/dhcpd.conf on the DHCP server, but nothing changed.
I was wondering whether it is possible to tell systemd-networkd not to send the uid (client identifier). According to the systemd source, it is intentionally implemented to "always send client identifier".
Given that, how can I prevent systemd-networkd from sending the client identifier?
EDIT 2019/02/17: I found that I had misunderstood the meaning of deny duplicates; it does not help with this problem.
I remembered that I had tested another option first, but it did not work:
ignore-client-uids on;
The ignore-client-uids statement
ignore-client-uids flag;
If the ignore-client-uids statement is present and has a value of true
or on, the UID for clients will not be recorded. If this statement is
not present or has a value of false or off, then client UIDs will be
recorded.
https://www.isc.org/wp-content/uploads/2017/08/dhcp43.html
The DHCP server version is isc-dhcpd-4.2.4
EDIT 2019-03-12: I had made some mistakes and found something new, so I answered this question myself. The short answer: ignore-client-uids true; on the server side works well; ClientIdentifier=mac on the client side does not.
Have you tried setting the client identifier to (empty)?
$ cat /etc/systemd/network/zz-default.network
[Network]
DHCP=yes
[DHCP]
ClientIdentifier=
UseMTU=true
UseDomains=true
After many experiments, I found that only ignore-client-uids true; works consistently; with it, all the mystery disappeared. When you set it, you can confirm that no uid "....." entries appear in the /var/lib/dhcp/dhcpd.leases file; the server completely ignores the client identifier sent by the client and just uses the MAC to determine how to allocate the IP.
If you insist on using ClientIdentifier=mac, you can take a look at what I found:
Specifying ClientIdentifier=mac (in the client *.network file) does let me get the same IP as before. The reason I said it did not work is probably that another NIC also had DHCP enabled by default, which caused a new IP:
/lib/systemd/network/zz-default.network
[Network]
DHCP=yes
[DHCP]
UseMTU=true
UseDomains=true
After I changed the above file to
[Network]
DHCP=no
I got only one IP, the same as before.
The client identifier is the string "\0x1" + MAC; you can confirm it by grepping for uid "..." in the /var/lib/dhcp/dhcpd.leases file, e.g. uid "\001\304TDD\210\272"; any non-printable character is encoded as a 3-digit octal escape such as \304. Some clients automatically generate a client identifier like "\0x1" + "MAAS" + MAC ...
The most unfortunate thing is: once a client sends a client identifier, then for the same MAC, if the client sends another request WITHOUT a client identifier, it will get a new IP.
Considering DDNS: for the same MAC, DHCP requests with and without a client identifier are treated as different clients when the DHCP server composes the DNS update request for them. Simply speaking:
for a DHCP request without a client identifier -> the server sends a DDNS request with a hash of the MAC -> DNS server: OK
for a DHCP request with a client identifier -> the server sends a DDNS request with a hash of the client identifier -> DNS server: rejected, for security, because the hash is not the same.
That is all I found; I hope it is helpful.
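The 3-digit octal encoding can be reproduced in the shell, which helps when matching MAC bytes against what dhcpd prints in the leases file (the example bytes are arbitrary):

```shell
# dhcpd prints each non-printable byte of the uid as a 3-digit octal escape.
# The identifier-type byte 0x01 appears as \001:
printf '\\%03o\n' 1
# A MAC byte such as 0xC4 (decimal 196) appears as \304:
printf '\\%03o\n' 196
```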
You can also check whether there is configuration under /run/.../systemd/network/*.network. I had the same issue because netplan put a network configuration file in /run, which was applied instead of the one in /etc or /lib.
The solution in this case is to add dhcp-identifier: mac to the netplan YAML configuration.

Concept of remote controling several consul stacks securely

Introduction
I am running multiple of what I call consul-stacks. They always look like this:
- 1 consul server
- 9 consul nodes
Each node offers some services - just a classic web stack and more (not relevant to this question).
Gossip encryption is used to protect the server from being queried by arbitrary nodes and revealing data.
Several consul-template / tiller "watchers" are waiting to dynamically configure the nodes/services on KV changes.
Goal
Let's say I have 10 of those stacks (the number is dynamic) and I want to build a web app controlling the consul KV of each stack using specific logic.
What I have right now
I have created a thor+diplomat tool to wrap the logic I need to create specific KV entries. I implemented it while running it in the "controller" container in the stack, talking to localhost:8500 - which then authenticates with gossip and writes to the server.
Question
What concept would I use to move this tool to a remote server (not part of the consul-stack), while still being able to write into each consul-stack's KV?
Sure, I could use diplomat to connect to stack1.tld:8500 - but this would mean opening the HTTP port and securing it somehow (not protected by gossip? somehow, only RPC?) and also protecting the /ui.
Is there a better way to connect to each of those stacks?
- use an nginx proxy server with basic auth in front of 8500 to protect the access?
- also use ssl-interception on this port and keep using 8500, or rather use a configured HTTPS port (the Consul HTTPS API)?
- use ACLs to protect the access? (a lot of setup to allow access for the stack members - need for TLS?)
In general, without using TLS (which requires too much setup work on the clients), what concepts would fit the need of communicating with the stack server to write into its KV securely?
If I missed something, I am happy to add whatever you ask for.
The answer to this is:
Enable ACLs on the consul server:
{
"acl_datacenter": "stable",
"acl_default_policy": "deny",
"acl_down_policy": "deny"
}
Create a general ACL token with write/write/write:
consul-cli acl create --management=false --name="general_node" --rule "key::write" --rule "event::write" --rule "service::write" --token=<master-token>
Make sure to use your master token here, the one created during server start.
Optionally, also configure gossip encryption so your clients communicate encrypted (otherwise the ACLs hardly make sense).
Add the general token to the consul client you use remotely so it can talk to the remote consul - since this consul will no longer do anything publicly (without a token).
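With ACLs enabled, the remote tool no longer needs to live inside the stack; it can talk straight to the HTTP API and pass the token. A sketch with curl - stack1.tld, the token value, and the key path are placeholders:

```shell
# Write a KV entry on a remote stack using the general_node token.
# X-Consul-Token is the standard header for passing ACL tokens to the HTTP API.
curl -s -X PUT \
     -H "X-Consul-Token: <general_node-token>" \
     -d 'some-value' \
     "http://stack1.tld:8500/v1/kv/app/config/example-key"
```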

Advanced fail2ban parameters in filter/action

I have fail2ban running to protect our freeswitch servers against attack. When an IP address has too many failed logins, it gets banned.
I'd like to get notification of which account is being attacked - not just the IP address.
So a log line might be
2015-09-11 08:27:40.212155 [WARNING] sofia_reg.c:1477 SIP auth failure (REGISTER) on sofia profile 'internal' for [kloch#inbox.ru#004-2025.sb12.dmclub.org] from ip 78.31.75.181
I would like an email sent (or some php script run) that includes the [kloch#inbox.ru#004-2025.sb12.dmclub.org] bit (or even just the whole line)
Is that possible?
My guess is that it isn't, just due to the flow of data from many rows with a common host, so I'm not holding my breath! ;-)
If you configure fail2ban to use the action_mwl Action Shortcut, it will send you a mail with whois information and the full log line.
In /etc/fail2ban/jail.conf, make sure the action setting is set to:
action = %(action_mwl)s
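If you want the matched line handed to your own script instead, a custom action can pass fail2ban's <matches> tag (which expands to the log lines that triggered the ban) to any command. A sketch, with hypothetical file and script names:

```
# /etc/fail2ban/action.d/notify-script.conf (hypothetical)
[Definition]
actionstart =
actionstop =
actioncheck =
actionban = /usr/local/bin/notify.php "<ip>" "<matches>"
actionunban =
```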

HTTP load balancing with nginx

I have a configuration like this.
upstream servers {
    server localhost:port1;
    server localhost:port2;
    server localhost:port3;
}
server {
    listen nginx_port;
    server_name localhost;
    location / {
        proxy_pass http://servers;
    }
}
Now, what I want to know is how to keep a user's session alive while taking one server down temporarily for maintenance. Let's say I have a 3-page registration and a user connected to localhost:port1 is working on page 2. If, in the meantime, I want to close that server (localhost:port1) and forward the user to the next server (localhost:port2) while keeping the session alive - meaning the user can complete the registration without any trouble - what do I have to do in the nginx configuration file? Is it possible?
You can't do this with nginx, as nginx is not what's providing the session functionality. You need to do this with your upstream servers by configuring them to use session storage that's sharable by all the servers (like a database or memcache) instead of server-specific session storage (like files in a temp dir on the local hard drive.) How you do that will vary based on whatever your upstream servers are. For example, if you're using Zend, you might implement a database save handler.
(I'm assuming here that your config is just an example and that you don't actually have three identical upstream servers on the same machine.)
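For instance, if the upstream servers happen to run PHP, pointing all of them at one shared memcached instance is often enough (the host name and port here are placeholders, and the php-memcached extension is assumed to be installed):

```
; php.ini on every upstream server
session.save_handler = memcached
session.save_path = "sessions.internal:11211"
```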

how to change ProFTPd port without using "passive mode"

I just re-installed Ubuntu Server 10.04 and decided to change all of my default ports to get a little extra security. Everything works fine, except when I decided to change the FTP (ProFTPd) port from the standard 21 to 3521. There are no problems with firewalls or port forwarding. ProFTPd was restarted, but when I try to connect to it, even though it does respond, it throws the client (FileZilla) into "passive mode" and then never gets as far as listing a directory.
I don't really want to use "passive mode" and I have it disabled in proftpd.conf, but nevertheless I can't seem to change the default port otherwise and make it work. It does work fine on port 21. FYI, proftpd was installed as a standalone daemon, if that matters somehow.
OK, I think I figured this out after reading this page: link . It appears that most FTP connections are indeed "passive", and the problem with "active" connections comes from firewalls on the client side, since the FTP server initiates an outgoing "data" connection to the client on some random port. In passive mode the client initiates both the "command" and "data" connections to the server, so the firewall isn't a problem, but you should specify which "passive" ports to use on the server. I enabled 3520 and 3521 as PassivePorts and it's now working.
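The relevant proftpd.conf directives for that setup look like this (PassivePorts takes a low and a high bound; the narrow 3520-3521 range matches the ports described above, though a wider high-port range is more common):

```
# proftpd.conf
Port 3521
PassivePorts 3520 3521
```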
FTP active mode by definition requires the server to initiate its outgoing data connections from port L-1. Does your firewall allow outgoing connections from port 3520 as well?
From the FTP RFC:
3.2. ESTABLISHING DATA CONNECTIONS
The mechanics of transferring data consists of setting up the data
connection to the appropriate ports and choosing the parameters
for transfer. Both the user and the server-DTPs have a default
data port. The user-process default data port is the same as the
control connection port (i.e., U). The server-process default
data port is the port adjacent to the control connection port
(i.e., L-1).
...
3.3. DATA CONNECTION MANAGEMENT
Default Data Connection Ports: All FTP implementations must
support use of the default data connection ports, and only the
User-PI may initiate the use of non-default ports.
Negotiating Non-Default Data Ports: The User-PI may specify a
non-default user side data port with the PORT command. The
User-PI may request the server side to identify a non-default
server side data port with the PASV command. Since a connection
is defined by the pair of addresses, either of these actions is
enough to get a different data connection, still it is permitted
to do both commands to use new ports on both ends of the data
connection.
You might wish to take the opportunity to change your users to SFTP, a much nicer protocol.
