Setting up a TURN server on EC2 - amazon-ec2

I am using EC2 (Amazon Linux 2). I set up a TURN server (coturn); the startup log is below:
0: log file opened: /var/log/turn_8708_2020-07-08.log
0:
RFC 3489/5389/5766/5780/6062/6156 STUN/TURN Server
Version Coturn-4.5.0.3 'dan Eider'
0:
Max number of open files/sockets allowed for this process: 300000
0:
Due to the open files/sockets limitation,
max supported number of TURN Sessions possible is: 150000 (approximately)
0:
==== Show him the instruments, Practical Frost: ====
0: TLS supported
0: DTLS supported
0: DTLS 1.2 supported
0: TURN/STUN ALPN supported
0: Third-party authorization (oAuth) supported
0: GCM (AEAD) supported
0: OpenSSL compile-time version: OpenSSL 1.0.2k-fips 26 Jan 2017
0:
0: SQLite is not supported
0: Redis is not supported
0: PostgreSQL is not supported
0: MySQL is not supported
0: MongoDB is not supported
0:
0: Default Net Engine version: 3 (UDP thread per CPU core)
=====================================================
0: Config file found: /usr/local/etc/turnserver.conf
0: log file opened: /var/log/turnserver_2020-07-08.log
0: Config file found: /usr/local/etc/turnserver.conf
0: Domain name:
0: Default realm: meetmocha.com
0:
CONFIGURATION ALERT: you specified long-term user accounts, (-u option)
but you did not specify the long-term credentials option
(-a or --lt-cred-mech option).
I am turning --lt-cred-mech ON for you, but double-check your configuration.
0: WARNING: cannot find certificate file: turn_server_cert.pem (1)
0: WARNING: cannot start TLS and DTLS listeners because certificate file is not set properly
0: WARNING: cannot find private key file: turn_server_pkey.pem (1)
0: WARNING: cannot start TLS and DTLS listeners because private key file is not set properly
0: NO EXPLICIT LISTENER ADDRESS(ES) ARE CONFIGURED
0: ===========Discovering listener addresses: =========
0: Listener address to use: 127.0.0.1
0: Listener address to use: 172.26.193.9
0: Listener address to use: ::1
0: =====================================================
0: Total: 1 'real' addresses discovered
0: =====================================================
0: NO EXPLICIT RELAY ADDRESS(ES) ARE CONFIGURED
0: ===========Discovering relay addresses: =============
0: Relay address to use: 172.26.193.9
0: Relay address to use: ::1
0: =====================================================
0: Total: 2 relay addresses discovered
0: =====================================================
0: pid file created: /var/run/turnserver.pid
0: IO method (main listener thread): epoll (with changelist)
0: WARNING: I cannot support STUN CHANGE_REQUEST functionality because only one IP address is provided
0: Wait for relay ports initialization...
0: relay 172.26.193.9 initialization...
0: relay 172.26.193.9 initialization done
0: relay ::1 initialization...
0: relay ::1 initialization done
0: Relay ports initialization done
0: IO method (general relay thread): epoll (with changelist)
0: turn server id=1 created
0: IPv4. SCTP listener opened on : 127.0.0.1:3478
0: IPv4. TCP listener opened on : 127.0.0.1:3478
0: IPv4. TCP listener opened on : 172.26.193.9:3478
0: IPv6. SCTP listener opened on : ::1:3478
0: IPv6. TCP listener opened on : ::1:3478
0: IO method (general relay thread): epoll (with changelist)
0: turn server id=0 created
0: IPv4. TCP listener opened on : 127.0.0.1:3478
0: IPv4. TCP listener opened on : 172.26.193.9:3478
0: IPv6. TCP listener opened on : ::1:3478
0: IPv4. UDP listener opened on: 127.0.0.1:3478
0: IPv4. UDP listener opened on: 172.26.193.9:3478
0: IPv6. UDP listener opened on: ::1:3478
0: Total General servers: 2
0: IO method (admin thread): epoll (with changelist)
0: IPv4. CLI listener opened on : 127.0.0.1:5766
0: IO method (auth thread): epoll (with changelist)
0: IO method (auth thread): epoll (with changelist)
I have tested my TURN server on the Trickle ICE page.
But when I call the TURN server from my JavaScript code, I am still not able to make a screen-sharing connection across different networks, while a connection on the same network succeeds. For example, when I share my laptop screen to my mobile and both devices are connected to the same Wi-Fi, it works perfectly, but when their internet connections are different it does not work, which is why I set up the TURN server on my EC2 instance. Even after adding the TURN server to my JavaScript code, I am not able to share the screen across different networks.
Can anyone please help me with this?
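For reference, a minimal sketch of how a TURN server is typically passed to RTCPeerConnection (the hostname, port, and credentials here are placeholders, not my actual values; they must match what is configured in turnserver.conf):
// Minimal sketch: wiring a TURN server into a WebRTC peer connection.
// 'meetmocha.com', 3478, 'webrtcuser', and 'webrtcpass' are placeholders.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:meetmocha.com:3478' },
    {
      urls: [
        'turn:meetmocha.com:3478?transport=udp',
        'turn:meetmocha.com:3478?transport=tcp'
      ],
      username: 'webrtcuser',
      credential: 'webrtcpass'
    }
  ]
  // Temporarily forcing relay-only candidates is a quick way to verify
  // the relay itself: if the call fails in relay-only mode, the problem
  // is on the server/firewall side rather than in the signaling code.
  // iceTransportPolicy: 'relay'
});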

Related

I need to get my WAMP server working. Two of its three services are active. The error message is below (I ran a test).

***** Test which uses port 80 *****
===== Tested by command netstat filtered on port 80 =====
Test for TCP
Port 80 is not found associated with TCP protocol
Port 80 is not found associated with TCP protocol
===== Tested by attempting to open a socket on port 80 =====
Your port 80 seems not actually used.
Unable to initiate a socket connection
Error number: 10060 - Error string: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
--- Do you want to copy the results into Clipboard?
--- Press the Y key to confirm - Press ENTER to continue...
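For comparison, the same check can be done by hand from a command prompt; a minimal sketch (replace 1234 with whatever PID netstat actually reports):
REM List anything bound to port 80; the owning PID is in the last column
netstat -aon | findstr :80
REM Map that PID to a process name
tasklist /FI "PID eq 1234"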

When I started HAProxy under Windows Server, I got the following error

haproxy.exe -f haproxy.cfg -d
When I run this command, I get the following error:
'''
Available polling systems :
poll : pref=200, test result OK
select : pref=150, test result FAILED
Total: 2 (1 usable), will use poll.
Available filters :
[SPOE] spoe
[CACHE] cache
[FCGI] fcgi-app
[COMP] compression
[TRACE] trace
Using poll() as the polling mechanism.
[NOTICE] (1036) : haproxy version is 2.4.0-6cbbecf
[ALERT] (1036) : Starting proxy warelucent: cannot bind socket (Address already in use) [0.0.0.0:5672]
[ALERT] (1036) : [haproxy.main()] Some protocols failed to start their listeners! Exiting.
'''
No other services are running, but I do have the RabbitMQ service started.
My haproxy.cfg file is as follows:
'''
# logging options
global
    log 127.0.0.1 local0 info
    maxconn 1500
    daemon
    quiet
    nbproc 20

defaults
    log global
    mode tcp
    # if you set mode to tcp, then you must change tcplog into httplog
    option tcplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    timeout connect 10s
    timeout client 10s
    timeout server 10s

# front-end IP for consumers and producers
listen warelucent
    bind 0.0.0.0:5672
    # configure TCP mode
    mode tcp
    #balance url_param userid
    #balance url_param session_id check_post 64
    #balance hdr(User-Agent)
    #balance hdr(host)
    #balance hdr(Host) use_domain_only
    #balance rdp-cookie
    #balance leastconn
    #balance source //ip
    # simple round-robin
    balance roundrobin
    server one 1.1.1.1:5672 check inter 5000 rise 2 fall 2
    server two 2.2.2.2:5672 check inter 5000 rise 2 fall 2
    server three 3.3.3.3:5672 check inter 5000 rise 2 fall 2

listen stats
    bind 127.0.0.1:8100
    mode http
    option httplog
    stats enable
    stats uri /rabbitmq-stats
    stats refresh 5s
'''
Most answers on the Internet attribute this to the HAProxy version, but I checked the official website and my version is the latest. I have also started the RabbitMQ service, so I don't know where the error is at this point.
(Address already in use) [0.0.0.0:5672]
means that port 5672 (the RabbitMQ port) is already in use. Most likely you have a RabbitMQ node running on that machine.
So just bind HAProxy to a different port.
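For example, a minimal sketch of that change in haproxy.cfg (5673 is an arbitrary choice; any unused port works):
'''
listen warelucent
    # Front end moved off RabbitMQ's 5672; clients now connect to 5673
    bind 0.0.0.0:5673
    mode tcp
    balance roundrobin
    # Back-end RabbitMQ nodes still listen on their own 5672
    server one 1.1.1.1:5672 check inter 5000 rise 2 fall 2
'''
Clients should then connect to the HAProxy host on port 5673 instead of connecting to RabbitMQ directly.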

Docker service tasks stuck in preparing state after reboot on Windows

Restarting a Windows server that is a swarm worker causes Windows containers to get stuck in a "Preparing" state indefinitely once the server and Docker daemon are back online.
Image of tasks/containers stuck in preparing state:
https://user-images.githubusercontent.com/4528753/65180353-4e5d6e80-da22-11e9-8060-451150865177.png
Steps to reproduce the issue:
1. Create a swarm (in my case I have CentOS 7 managers and a few Windows Server 1903 workers).
2. Create a "global" docker service that only runs on the Windows machines (a sketch follows this list). They should start up fine initially and work just fine.
3. Drain one or more of the Windows nodes that are running the Windows container(s) from step 2 (docker node update --availability=drain nodename).
4. Restart one or more of the nodes that were drained in step 3 and wait for them to come back up.
5. Set the Windows node(s) back to active (docker node update --availability=active nodename).
At this point, just observe that the docker service created in step 2 will be "Preparing" the containers to start up on these nodes, and there it will stay (docker service ps servicename --no-trunc); you can observe this and run these commands from any manager node.
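A minimal sketch of what step 2 might look like (the service name, image, and command are placeholders, not the ones I actually use):
# One task per node, constrained to the Windows workers
docker service create --name winsvc --mode global \
  --constraint node.platform.os==windows \
  mcr.microsoft.com/windows/servercore:1903 ping -t localhost
Relevant log output from the worker while the tasks sit in "Preparing":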
memberlist: Refuting a suspect message (from: c9347e85405d)
memberlist: Failed to send ping: write udp 10.60.3.40:7946->10.60.3.110:7946: wsasendto: The requested address is not valid in its context.
grpc: addrConn.createTransport failed to connect to {10.60.3.110:2377 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.60.3.110:2377: connectex: A socket operation was attempted to an unreachable host.". Reconnecting... [module=grpc]
memberlist: Failed to send ping: write udp 10.60.3.40:7946->10.60.3.186:7946: wsasendto: The requested address is not valid in its context.
grpc: addrConn.createTransport failed to connect to {10.60.3.110:2377 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.60.3.110:2377: connectex: A socket operation was attempted to an unreachable host.". Reconnecting... [module=grpc]
agent: session failed [node.id=wuhifvg9li3v5zuq2xu7c6hxa module=node/agent error=rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 10.60.3.69:2377: connectex: A socket operation was attempted to an unreachable host." backoff=6.3s]
Failed to send gossip to 10.60.3.110: write udp 10.60.3.40:7946->10.60.3.110:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.69: write udp 10.60.3.40:7946->10.60.3.69:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.105: write udp 10.60.3.40:7946->10.60.3.105:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.69: write udp 10.60.3.40:7946->10.60.3.69:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.186: write udp 10.60.3.40:7946->10.60.3.186:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.105: write udp 10.60.3.40:7946->10.60.3.105:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.186: write udp 10.60.3.40:7946->10.60.3.186:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.69: write udp 10.60.3.40:7946->10.60.3.69:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.105: write udp 10.60.3.40:7946->10.60.3.105:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.109: write udp 10.60.3.40:7946->10.60.3.109:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.69: write udp 10.60.3.40:7946->10.60.3.69:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.110: write udp 10.60.3.40:7946->10.60.3.110:7946: wsasendto: The requested address is not valid in its context.
memberlist: Failed to send gossip to 10.60.3.105:7946: write udp 10.60.3.40:7946->10.60.3.105:7946: wsasendto: The requested address is not valid in its context.
memberlist: Failed to send gossip to 10.60.3.186:7946: write udp 10.60.3.40:7946->10.60.3.186:7946: wsasendto: The requested address is not valid in its context.
Many of these errors are odd; for example, port 7946 is completely open between the cluster nodes, and telnet tests confirm this.
I expect the docker service containers to start promptly, not get stuck in a Preparing state. The docker image is already pulled, so it should be fast.
docker version output
Client: Docker Engine - Enterprise
Version: 19.03.2
API version: 1.40
Go version: go1.12.8
Git commit: c92ab06ed9
Built: 09/03/2019 16:38:11
OS/Arch: windows/amd64
Experimental: false
Server: Docker Engine - Enterprise
Engine:
Version: 19.03.2
API version: 1.40 (minimum version 1.24)
Go version: go1.12.8
Git commit: c92ab06ed9
Built: 09/03/2019 16:35:47
OS/Arch: windows/amd64
Experimental: false
docker info output
Client:
Debug Mode: false
Plugins:
cluster: Manage Docker clusters (Docker Inc., v1.1.0-8c33de7)
Server:
Containers: 4
Running: 0
Paused: 0
Stopped: 4
Images: 4
Server Version: 19.03.2
Storage Driver: windowsfilter
Windows:
Logging Driver: json-file
Plugins:
Volume: local
Network: ics l2bridge l2tunnel nat null overlay transparent
Log: awslogs etwlogs fluentd gcplogs gelf json-file local logentries splunk syslog
Swarm: active
NodeID: wuhifvg9li3v5zuq2xu7c6hxa
Is Manager: false
Node Address: 10.60.3.40
Manager Addresses:
10.60.3.110:2377
10.60.3.186:2377
10.60.3.69:2377
Default Isolation: process
Kernel Version: 10.0 18362 (18362.1.amd64fre.19h1_release.190318-1202)
Operating System: Windows Server Datacenter Version 1903 (OS Build 18362.356)
OSType: windows
Architecture: x86_64
CPUs: 4
Total Memory: 8GiB
Name: SWARMWORKER1
ID: V2WJ:OEUM:7TUQ:WPIO:UOK4:IAHA:KWMN:RQFF:CAUO:LUB6:DJIJ:OVBX
Docker Root Dir: E:\docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: this node is not a swarm manager - check license status on a manager node
Additional Details
These nodes are not using Docker Desktop for Windows. I am provisioning Docker on the box primarily based on the PowerShell instructions here: https://docs.docker.com/install/windows/docker-ee/
Windows firewall is disabled
iptables/firewalld is disabled
Communication is completely open between the cluster nodes
Totally up-to-date on cumulative updates
I posted on the moby repo issues but never heard a peep:
https://github.com/moby/moby/issues/39955
The ONLY way I've found to temporarily fix the issue is to drain the node, remove it from the swarm, delete the Docker data files, reinstall the Windows "Containers" feature, and then rejoin it to the swarm (sketched below). But it happens again on reboot.
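A rough sketch of that workaround (the node name and Docker Root Dir come from the docker info output above; the join token is whatever docker swarm join-token worker prints on a manager):
# On a manager: stop scheduling tasks onto the node
docker node update --availability=drain SWARMWORKER1
# On the worker: leave the swarm and clear Docker state
docker swarm leave
Stop-Service docker
Remove-Item -Recurse -Force E:\docker
# ... reinstall the "Containers" Windows feature and reboot ...
# On the worker: rejoin via a manager address
docker swarm join --token <worker-join-token> 10.60.3.110:2377
# On a manager: allow tasks again
docker node update --availability=active SWARMWORKER1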
What's interesting is that when I see a swarm task in a "Preparing" state on the Windows worker, the server doesn't seem to be doing anything at all; it's as if the manager thinks the worker is preparing the container, but it isn't...
Anyone have any suggestions?

Telegram calls via Dante socks5 proxy server not working

I've configured Dante 1.4 on Ubuntu 16.04 as a SOCKS5 proxy for Telegram.
Chats are working, but voice calls are not; they fail at "Connecting".
Is there something special I need to configure in order to proxy Telegram voice traffic?
I'm using a single non-privileged (>1024) TCP/UDP port + login + password for the connection.
Thanks!
UPD: Here is a piece of the log from when I was trying to call somebody:
Apr 15 23:05:38 (1523736338.510915) danted[22977]: info: pass(1): udp/udpassociate [: username%USER#0.0.0.0.0 192.168.1.30.36562
Apr 15 23:08:33 (1523736513.020190) danted[22989]: info: pass(1): udp/udpassociate [: username%USER#0.0.0.0.0 192.168.1.30.49065
I can answer the call on the destination device, but the connection keeps cycling and fails with an error after 30 seconds.
Proxying UDP over SOCKS is a bit more complex than it might seem, so let's start from the beginning.
Telegram calls use UDP over SOCKS. SOCKS5 (RFC 1928) defines the following sequence for relaying UDP:
1. The client establishes a TCP SOCKS5 connection.
2. The client sends a UDP ASSOCIATE request containing the source address and port it will use to send UDP datagrams to the SOCKS5 server. These may be zeros (in Telegram they are) (section 4).
3. The SOCKS5 server binds a random UDP port for relaying datagrams for this TCP SOCKS5 connection and sends a UDP ASSOCIATE response containing the address and port where the client should send the datagrams to be relayed (section 6).
4. To send a datagram, the client must prepend a header to the payload containing the destination address and port to which the server should relay that datagram (section 7).
5. The server keeps the UDP port bound until the TCP SOCKS5 connection terminates.
As you can see, opening a single TCP port is not enough. For UDP to work correctly, the automatically bound UDP port must be reachable by the client. NATs and firewalls can further complicate the situation.
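For reference, the header from step 4 that the client prepends to each relayed datagram looks like this (layout from RFC 1928, section 7; field widths in octets):
+-----+------+------+----------+----------+----------+
| RSV | FRAG | ATYP | DST.ADDR | DST.PORT |   DATA   |
+-----+------+------+----------+----------+----------+
|  2  |  1   |  1   | Variable |    2     | Variable |
+-----+------+------+----------+----------+----------+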
UDP relaying configuration with Dante
Telegram calls are Peer-to-Peer, so the udpassociate command should be allowed to 0/0:
socks pass {
    from: 0.0.0.0/0
    to: 0.0.0.0/0
    # udp.portrange: 40000-45000
    command: udpassociate
    log: error connect disconnect
}
udpreply (that is what does the actual relaying, step 4 above) should be allowed for everyone as well:
socks pass {
    from: 0.0.0.0/0
    to: 0.0.0.0/0
    command: udpreply
    log: error connect disconnect
}
If your SOCKS5 server is behind a firewall, open a range of UDP ports (say 40000-45000) and add the udp.portrange: 40000-45000 line to the udpassociate block (see the commented-out example in the first block above). Dante will then bind UDP ports only in that range.
If your socks5 Server is behind a NAT, then the returned destination address in the response to UDP ASSOCIATE request would be a local IP, rather than the external one. That local IP is unlikely to be reachable by the client, so the sent datagrams would be silently dropped.
Unfortunately, Dante uses the destination address of the TCP connection as the one where the client should send UDP datagrams to (see the comment in the source code). NAT mangles this address from an external to a local one, so the Dante's assumption that the client can reach the proxy using that destination address is broken.
A possible solution, which doesn't involve patching Dante, would be to use iptables to change the destination address from a local to the external one (assuming that it's known and doesn't change):
# 203.0.113.12 – the external IP
# 1080/tcp - Dante TCP port
# 40000:45000 – Dante UDP portrange
iptables -t nat -A PREROUTING -p tcp --dport 1080 -j DNAT --to-destination 203.0.113.12
iptables -t nat -A PREROUTING -p udp --dport 40000:45000 -j DNAT --to-destination 203.0.113.12
# If external address is not added to any network device on that
# machine, then add it to the loopback interface, so the kernel
# would know where to route the DNATed packets:
ip addr add 203.0.113.12/32 dev lo
I had the same problem and found the solution.
You have to add the udpassociate, bindreply, and udpreply commands to the conf file. Here is my conf file that works with voice calls:
logoutput: syslog /var/log/danted.log
internal: ip port = 1080
external: ip
socksmethod: username
user.privileged: root
user.unprivileged: nobody

client pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
    log: error connect
}

socks pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
    command: bind connect udpassociate bindreply udpreply
    log: error connect
}
# Allow clients' voice traffic
socks pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
    command: udpreply
    log: connect disconnect error
    socksmethod: username
}
iptables -A INPUT -p udp -m multiport --dports 1024:65535 -j ACCEPT
You should also enable calls via proxy in your Telegram settings.

Windows keeps a listening socket for a non-existent process indefinitely

On Windows, after process 628 (my app) has exited, tcpview shows:
Process        PID  Proto  Local Address  Local Port  Remote Address  Rem Port  State
-------------- ---  -----  -------------  ----------  --------------  --------  -----------
<non-existent> 628  TCP    0.0.0.0        http        0.0.0.0         0         LISTENING
<non-existent> 628  TCP    0.0.0.0        https       0.0.0.0         0         LISTENING
<non-existent> 628  TCP    0.0.0.0        http        x.x.x.x         xxxxx     ESTABLISHED
I was able to kill the ESTABLISHED connection with tcpview, but can't kill the LISTENING ones (as admin) with tcpview or CurrPorts. The LISTENING connections remained indefinitely (>24 hours), preventing the app from binding to port 80 and 443 when restarted ("[10048] Only one usage of each socket address (protocol/network address/port) is normally permitted").
When I added the SO_REUSEADDR option before binding the listening socket, the app still couldn't bind the ports, this time with "[10013] An attempt was made to access a socket in a way forbidden by its access permissions".
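For reference, the equivalent check from an elevated command prompt (628 is the PID from the tcpview output above):
REM Show the lingering listeners; the owning PID is in the last column
netstat -aon | findstr LISTENING | findstr :80
REM Confirm that PID no longer maps to a running process
tasklist /FI "PID eq 628"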
My Questions:
Does it make any sense for the listening socket to be kept after the owning process is gone? It's not in half-closed state, since no connection has been established.
Is it expected that a socket of a non-existent process will linger indefinitely?
Are these known bugs in Windows/Winsock?
Thanks!
