I have a project that uses Channels. Locally everything worked well, but after deploying to Heroku I got a 403 every time I tried to connect. At first I thought the problem was with Heroku, since the app worked locally even when I pointed it at the database and Redis instance from Heroku.
But then, when I used ngrok to open a public tunnel to my localhost, I discovered the result was the same as on Heroku: every request gets a 403, and trying to debug it doesn't help much because the event loop sometimes suddenly takes control, or I get a timeout error. The setup is exactly the same except that one is accessed locally and the other remotely.
This is how I start Daphne:
daphne weout.asgi:application --port 8000 --bind 0.0.0.0 -v 3
My lib versions:
Django==2.0.7
channels==2.2.0
channels-redis==2.4.0
daphne==2.3.0
With Daphne's verbosity set to maximum, this is what I get when I try to connect:
Nov 22 07:16:34 weout-staging app/web.1 2019-11-22 15:16:33,489 [asyncio] DEBUG: poll 101.195 ms took 0.023 ms: 1 events
Nov 22 07:16:34 weout-staging app/web.1 10.12.43.130:10299 - - [22/Nov/2019:15:16:33] "WSCONNECTING /api/v1/ws/" - -
Nov 22 07:16:34 weout-staging app/web.1 2019-11-22 15:16:33,513 [daphne.http_protocol] DEBUG: Upgraded connection ['10.x.x.x', 10299] to WebSocket
Nov 22 07:16:34 weout-staging app/web.1 2019-11-22 15:16:33,648 [asyncio] WARNING: Executing <Task pending coro=<AsyncConsumer.__call__() running at /app/.heroku/python/lib/python3.6/site-packages/channels/consumer.py:59> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7fe01e6979d8>()] created at /app/.heroku/python/lib/python3.6/asyncio/base_events.py:276> created at /app/.heroku/python/lib/python3.6/site-packages/daphne/server.py:209> took 0.131 seconds
Nov 22 07:16:34 weout-staging app/web.1 2019-11-22 15:16:33,655 [daphne.server] INFO: failing WebSocket opening handshake ('Access denied')
Nov 22 07:16:34 weout-staging app/web.1 2019-11-22 15:16:33,656 [daphne.server] WARNING: dropping connection to peer tcp4:10.12.43.130:10299 with abort=False: Access denied
Nov 22 07:16:34 weout-staging app/web.1 2019-11-22 15:16:33,656 [daphne.ws_protocol] DEBUG: WebSocket ['10.x.x.x', 10299] rejected by application
Nov 22 07:16:34 weout-staging app/web.1 10.12.43.130:10299 - - [22/Nov/2019:15:16:33] "WSREJECT /api/v1/ws/" - -
Nov 22 07:16:34 weout-staging app/web.1 2019-11-22 15:16:33,660 [aioredis] DEBUG: Parsing Redis URI 'redis://xxxx#xxxxx'
Nov 22 07:16:34 weout-staging app/web.1 2019-11-22 15:16:33,660 [aioredis] DEBUG: Creating tcp connection to ('xxx.compute.amazonaws.com', 14059)
Nov 22 07:16:34 weout-staging app/web.1 2019-11-22 15:16:33,663 [asyncio] DEBUG: Get address info xxx, type=<SocketKind.SOCK_STREAM: 1>
Nov 22 07:16:34 weout-staging app/web.1 2019-11-22 15:16:33,667 [asyncio] DEBUG: Getting address info xxx.compute.amazonaws.com:14059, type=<SocketKind.SOCK_STREAM: 1> took 3.777 ms: [(<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 1>, 6, '', ('x.x.x.x', 14059))]
Nov 22 07:16:34 weout-staging app/web.1 2019-11-22 15:16:33,669 [daphne.ws_protocol] DEBUG: WebSocket closed for ['10.x.x.x', 10299]
Nov 22 07:16:34 weout-staging app/web.1 10.12.43.130:10299 - - [22/Nov/2019:15:16:33] "WSDISCONNECT /api/v1/ws/" - -
Nov 22 07:16:34 weout-staging app/web.1 2019-11-22 15:16:33,671 [asyncio] DEBUG: connect <socket.socket fd=16, family=AddressFamily.AF_INET, type=2049, proto=6, laddr=('0.0.0.0', 0)> to ('x.x.x.x', 14059)
I use Daphne to serve both my normal Django views and my WebSocket consumers. Everything works well for the Django views, so the problem occurs only when connecting to the consumers.
Has anyone had a similar issue while accessing Daphne remotely?
At first I tried Uvicorn served with Gunicorn, but they have a bug when the consumer is closed during the initial connection phase, so I switched back to Daphne.
Turns out it was the AllowedHostsOriginValidator! That wasted a whole lot of my time. By the way, any idea how that middleware behaves for requests from mobile apps or other clients that don't send an Origin header?
Anyway, I removed it for now, and that solved the issue.
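For anyone who lands here: AllowedHostsOriginValidator checks the handshake's Origin header against Django's ALLOWED_HOSTS, so the gentler fix is usually to add the deployed hostname (e.g. the Heroku domain) to ALLOWED_HOSTS instead of dropping the validator. As for clients with no Origin header at all: from reading the channels 2.x source, a connection without one appears to be rejected unless "*" is in the allowed origins, which does matter for mobile apps. A minimal routing sketch, assuming a consumer at /api/v1/ws/ (the consumer class and module are hypothetical, not my actual code):

# asgi routing sketch -- EventConsumer is a made-up name
from channels.routing import ProtocolTypeRouter, URLRouter
from channels.security.websocket import AllowedHostsOriginValidator
from django.urls import path

from weout.consumers import EventConsumer  # hypothetical consumer

application = ProtocolTypeRouter({
    # The validator denies any handshake whose Origin doesn't match
    # ALLOWED_HOSTS -- the "Access denied" in the Daphne log above.
    "websocket": AllowedHostsOriginValidator(
        URLRouter([
            path("api/v1/ws/", EventConsumer),
        ])
    ),
})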
Related
After upgrading via the mc command, I get this error when I try to log in to the (fairly new) MinIO Console:
Post "https://fqdn.org/": dial tcp 127.0.1.1:443: connect: connection refused
I have a signed, valid SSL certificate.
Downgrading MinIO (i.e. restoring a snapshot of the VM) solves the problem.
Any ideas?
This is my config:
MINIO_SERVER_URL="https://fqdn.org"
MINIO_ACCESS_KEY="key"
MINIO_VOLUMES="/mnt/hdd2/minio/"
MINIO_OPTS="-C /etc/minio --address :9000 --console-address :9001"
MINIO_SECRET_KEY="minio"
This is my minio startup log:
● minio.service - MinIO
Loaded: loaded (/etc/systemd/system/minio.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2021-11-11 08:41:14 CET; 4min 50s ago
Docs: https://docs.min.io
Process: 3567 ExecStartPre=/bin/bash -c if [ -z "${MINIO_VOLUMES}" ]; then echo "Variable MINIO_VOLUMES not set in /etc/default/minio"; exit 1; fi (code=exited, status=0/SUCCESS)
Main PID: 3568 (minio)
Tasks: 9 (limit: 2351)
Memory: 101.9M
CGroup: /system.slice/minio.service
└─3568 /home/minio/minio server -C /etc/minio --address :9000 --console-address :9001 /mnt/hdd2/minio/
Nov 11 08:41:14 pmit-minio-test systemd[1]: Starting MinIO...
Nov 11 08:41:14 pmit-minio-test systemd[1]: Started MinIO.
Nov 11 08:41:17 pmit-minio-test minio[3568]: WARNING: MINIO_ACCESS_KEY and MINIO_SECRET_KEY are deprecated.
Nov 11 08:41:17 pmit-minio-test minio[3568]: Please use MINIO_ROOT_USER and MINIO_ROOT_PASSWORD
Nov 11 08:41:17 pmit-minio-test minio[3568]: API: https://fqdn.org
Nov 11 08:41:17 pmit-minio-test minio[3568]: Console: https://191.164.213.7:9001 https://127.0.0.1:9001
Nov 11 08:41:17 pmit-minio-test minio[3568]: Documentation: https://docs.min.io
Please see the answer here:
https://github.com/minio/minio/issues/13639#issuecomment-966244704
I had to change this line to include the API port:
MINIO_SERVER_URL="https://fqdn.org:9000"
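That makes sense in hindsight: the Console dials whatever MINIO_SERVER_URL says in order to reach the API, and without an explicit port the https URL implies 443, where nothing is listening (hence dial tcp 127.0.1.1:443: connection refused). For anyone else, a corrected /etc/default/minio sketch, also switching to the non-deprecated credential variables the startup log warns about (all values are placeholders):

# /etc/default/minio -- sketch with placeholder values
MINIO_SERVER_URL="https://fqdn.org:9000"
MINIO_ROOT_USER="key"                # replaces deprecated MINIO_ACCESS_KEY
MINIO_ROOT_PASSWORD="minio"          # replaces deprecated MINIO_SECRET_KEY
MINIO_VOLUMES="/mnt/hdd2/minio/"
MINIO_OPTS="-C /etc/minio --address :9000 --console-address :9001"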
I'm trying to run a fairly simple Docker stack, but for some reason it fails to register certificates.
My compose file:
version: '2'

services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - dhparam:/etc/nginx/dhparam
      - certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    network_mode: bridge

  acme-companion:
    image: nginxproxy/acme-companion
    container_name: nginx-proxy-acme
    volumes_from:
      - nginx-proxy
    volumes:
      - certs:/etc/nginx/certs:rw
      - acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock:ro
    network_mode: bridge

volumes:
  conf:
  vhost:
  html:
  dhparam:
  certs:
  acme:
This is my log from acme-companion:
Info: running acme-companion version v2.1.0-25-g7f1b754
Generating a RSA private key
...................................................................++++
...........................................................................................................................................................................................++++
writing new private key to '/etc/nginx/certs/default.key.new'
-----
1996071824:error:0D0D90AD:asn1 encoding routines:ASN1_TIME_adj:error getting time:crypto/asn1/a_time.c:330:
Info: a default key and certificate have been created at /etc/nginx/certs/default.key and /etc/nginx/certs/default.crt.
Warning: /etc/nginx/certs/default.key does not exist. Skipping ownership and permissions check.
Warning: /etc/nginx/certs/default.crt does not exist. Skipping ownership and permissions check.
Info: Custom Diffie-Hellman group found, generation skipped.
Reloading nginx proxy (nginx-proxy)...
2021/09/13 08:54:28 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
2021/09/13 08:54:28 [notice] 91#91: signal process started
2021/09/13 08:54:29 Generated '/app/letsencrypt_service_data' from 4 containers
2021/09/13 08:54:29 Running '/app/signal_le_service'
2021/09/13 08:54:29 Watching docker events
2021/09/13 08:54:29 Contents of /app/letsencrypt_service_data did not change. Skipping notification '/app/signal_le_service'
[Thu Jan 1 00:00:00 UTC 1970] Please refer to https://curl.haxx.se/libcurl/c/libcurl-errors.html for error code: 6
[Thu Jan 1 00:00:00 UTC 1970] Can not init api.
[Thu Jan 1 00:00:00 UTC 1970] Registering account: https://acme-v02.api.letsencrypt.org/directory
[Thu Jan 1 00:00:00 UTC 1970] Please refer to https://curl.haxx.se/libcurl/c/libcurl-errors.html for error code: 6
[Thu Jan 1 00:00:00 UTC 1970] Could not get nonce, let's try again.
[Thu Jan 1 00:00:00 UTC 1970] Please refer to https://curl.haxx.se/libcurl/c/libcurl-errors.html for error code: 6
[Thu Jan 1 00:00:00 UTC 1970] Could not get nonce, let's try again.
So error code 6 should be CURLE_COULDNT_RESOLVE_HOST, but I'm not sure exactly what it can't resolve. This server has a connection, and everything else seems to work.
If anyone stumbles onto this issue, here is the fix. It seems to affect Alpine version 3.13 and probably other versions as well:
https://github.com/alpinelinux/docker-alpine/issues/135
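If you want to confirm it's container-side DNS before patching anything, exec into the companion container (the name comes from the compose file above) and try a lookup:

# should resolve; on affected Alpine 3.13 images it fails
docker exec nginx-proxy-acme nslookup acme-v02.api.letsencrypt.org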
I've set up an Ubuntu 16.04 EC2 t2.medium server and followed the instructions here http://docs.bigbluebutton.org/2.0/20install.html to install BigBlueButton 2.0-beta.
When I log into the Demo Meeting room and select Microphone, it says "calling...", then changes to "connecting...", and then I get a message saying:
WebRTC Audio Failure
Detected the following WebRTC issue: Error 1010: ICE negotiation timeout. Do you want to try Flash instead?
Here is the output from the console:
BigBlueButton call accepted
bbb_webrtc_bridge_sip.js?v=591:497 Waiting for ICE negotiation
sip.js?v=591:2900 Thu Sep 21 2017 11:27:19 GMT+0800 (WITA) | sip.invitecontext.mediahandler | stream added: fuEOgOt7p5aHrW58wGtGVszgTdGBcNKi
sip.js?v=591:2900 Thu Sep 21 2017 11:27:24 GMT+0800 (WITA) | sip.invitecontext.mediahandler | RTCIceChecking Timeout Triggered after 5000 milliseconds
bbb_webrtc_bridge_sip.js?v=591:499 5 seconds without ICE finishing
bbb_webrtc_bridge_sip.js?v=591:119 Stopping webrtc audio test
bbb_webrtc_bridge_sip.js?v=591:555 Hanging up current session
sip.js?v=591:2900 Thu Sep 21 2017 11:27:24 GMT+0800 (WITA) | sip.inviteclientcontext | terminating Session
sip.js?v=591:2900 Thu Sep 21 2017 11:27:25 GMT+0800 (WITA) | sip.transport | sending WebSocket message:
BYE sip:919673089@172.31.36.135:5060;transport=udp SIP/2.0
Via: SIP/2.0/WSS msk2aa46iuih.invalid;branch=z9hG4bK7275105
Max-Forwards: 70
To: <sip:919673089@staging.bigbluebutton.xxxxxx.com>;tag=SBBF64e6999Hm
From: "w_zqmgpmdukz39-bbbID-Mikhail" <sip:w_zqmgpmdukz39-bbbID-Mikhail@staging.bigbluebutton.xxxxxx.com>;tag=d6e9kj05rj
Call-ID: epjgsffnq2hi688jlvgl
CSeq: 9777 BYE
Supported: outbound
User-Agent: BigBlueButton
Content-Length: 0
bbb_webrtc_bridge_sip.js?v=591:465 call ended null
sip.js?v=591:2900 Thu Sep 21 2017 11:27:25 GMT+0800 (WITA) | sip.inviteclientcontext | closing INVITE session epjgsffnq2hi688jlvglj9cdnvthcr
sip.js?v=591:2900 Thu Sep 21 2017 11:27:25 GMT+0800 (WITA) | sip.invitecontext.mediahandler | closing PeerConnection
sip.js?v=591:2900 Thu Sep 21 2017 11:27:25 GMT+0800 (WITA) | sip.dialog | dialog epjgsffnq2hi688jlvgld6e9kj05rjSBBF64e6999Hm deleted
sip.js?v=591:2900 Thu Sep 21 2017 11:27:25 GMT+0800 (WITA) | sip.ua | user requested closure...
sip.js?v=591:2900 Thu Sep 21 2017 11:27:25 GMT+0800 (WITA) | sip.ua | closing registerContext
sip.js?v=591:2900 Thu Sep 21 2017 11:27:25 GMT+0800 (WITA) | sip.registercontext | already unregistered
LoggerFactory.print @ sip.js?v=591:2900
LoggerFactory.(anonymous function) @ sip.js?v=591:2917
Logger.(anonymous function) @ sip.js?v=591:2911
unregister @ sip.js?v=591:3579
close @ sip.js?v=591:3570
UA.stop @ sip.js?v=591:8929
(anonymous) @ bbb_webrtc_bridge_sip.js?v=591:505
setTimeout (async)
(anonymous) @ bbb_webrtc_bridge_sip.js?v=591:498
EventEmitter.emit @ sip.js?v=591:115
accepted @ sip.js?v=591:5641
onSuccess @ sip.js?v=591:6851
Promise resolved (async)
receiveInviteResponse @ sip.js?v=591:6832
receiveResponse @ sip.js?v=591:3784
InviteClientTransaction.receiveResponse @ sip.js?v=591:7832
onMessage @ sip.js?v=591:8566
ws.onmessage @ sip.js?v=591:8424
sip.js?v=591:2900 Thu Sep 21 2017 11:27:25 GMT+0800 (WITA) | sip.transport | received WebSocket text message:
SIP/2.0 200 OK
Via: SIP/2.0/WSS msk2aa46iuih.invalid;branch=z9hG4bK7275105;received=52.51.xx.xx;rport=38902
From: "w_zqmgpmdukz39-bbbID-Mikhail" <sip:w_zqmgpmdukz39-bbbID-Mikhail#staging.bigbluebutton.xxxxxx.com>;tag=d6e9kj05rj
To: <sip:919673089#staging.bigbluebutton.xxxxxx.com>;tag=SBBF64e6999Hm
Call-ID: epjgsffnq2hi688jlvgl
CSeq: 9777 BYE
User-Agent: FreeSWITCH-mod_sofia/1.9.0+git~20170822T213300Z~2ebdf42f2c~64bit
Allow: INVITE, ACK, BYE, CANCEL, OPTIONS, MESSAGE, INFO, UPDATE, REGISTER, REFER, NOTIFY
Supported: timer, path, replaces
Content-Length: 0
sip.js?v=591:2900 Thu Sep 21 2017 11:27:25 GMT+0800 (WITA) | sip.transport | closing WebSocket wss://staging.bigbluebutton.xxxxxx.com/ws
sip.js?v=591:2900 Thu Sep 21 2017 11:27:25 GMT+0800 (WITA) | sip.transport | WebSocket disconnected (code: 1000)
sip.js?v=591:2900 Thu Sep 21 2017 11:27:25 GMT+0800 (WITA) | sip.ua | connection state set to 1
sip.js?v=591:2900 Thu Sep 21 2017 11:27:25 GMT+0800 (WITA) | sip.transaction.ict | transport error occurred, deleting INVITE client transaction z9hG4bK9694442
I've searched for anything related to BigBlueButton Error 1010 but can't find anything.
Please check your firewall:
TCP ports 80, 443, and 1935 are accessible.
TCP port 7443 is accessible if you intend to configure SSL (recommended); otherwise, port 5066 is accessible.
UDP ports 16384-32768 are accessible.
Port 80 is not in use by another application.
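With ufw, for example, that checklist translates to something like this (a sketch only; adapt to your own firewall):

# open 5066/tcp instead of 7443/tcp if you are not configuring SSL
sudo ufw allow 80,443,1935,7443/tcp
sudo ufw allow 16384:32768/udp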
The problem is your SIP IP and port.
First, check the settings in /opt/freeswitch/etc/freeswitch/sip_profiles/external.xml:
ws-binding: :5066
wss-binding: :7443
Change them to:
ws-binding: your_external_ip_address:5066
wss-binding: your_external_ip_address:7443
Your external IP address is the same as the external_sip_ip address.
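In external.xml those bindings are param entries, so the edited lines should look roughly like this (203.0.113.10 is a placeholder for your external IP):

<param name="ws-binding" value="203.0.113.10:5066"/>
<param name="wss-binding" value="203.0.113.10:7443"/>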
Then, if you use a firewall, check that ports 5066 and 7443 are open.
And don't forget to restart BBB (bbb-conf --restart).
Today I configured a basic tinyproxy.
I expected it to act as a proxy for the Ubuntu repositories.
But when trying to download packages from the repositories, I got this in the tinyproxy log:
CONNECT Mar 27 17:30:46 [20348]: Connect (file descriptor 9): [unknown] [192.168.2.30]
CONNECT Mar 27 17:30:46 [20348]: Request (file descriptor 9): GET http://br.archive.ubuntu.com/ubuntu/pool/main/t/tdb/python-tdb_1.2.12-1_amd64.deb HTTP/1.1
INFO Mar 27 17:30:46 [20348]: No upstream proxy for br.archive.ubuntu.com
ERROR Mar 27 17:30:56 [20348]: opensock: Could not retrieve info for br.archive.ubuntu.com
INFO Mar 27 17:30:56 [20348]: no entity
I'm stuck on some misconception. Doesn't tinyproxy send requests to outside servers directly?
I supplied an external upstream proxy server to fix this:
upstream 117.79.64.29:80
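For what it's worth, tinyproxy does contact origin servers directly when no Upstream rule matches; "opensock: Could not retrieve info" means the getaddrinfo/DNS lookup for br.archive.ubuntu.com failed on the proxy host itself, so the upstream entry mostly works around a local resolver problem. A quick sanity check on the proxy host:

# should print an address; if it doesn't, fix the host's DNS (/etc/resolv.conf)
getent hosts br.archive.ubuntu.com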
I have a single instance of MongoDB 2.4.8 running on Windows Server 2012 R2. MongoDB is installed as a Windows Service. I have journalling enabled.
The MongoDB documentation suggests that the MongoDB service should just be shut down via the Windows Service Control Manager:
net stop MongoDB
When I did this recently, the following was logged, and I ended up with a non-zero-byte mongod.lock file on disk. (I used the --repair option to fix this, but it turns out this probably wasn't necessary, as I had journalling enabled.)
Thu Nov 21 11:08:12.011 [serviceShutdown] got SERVICE_CONTROL_STOP request from Windows Service Control Manager, will terminate after current cmd ends
Thu Nov 21 11:08:12.043 [serviceShutdown] now exiting
Thu Nov 21 11:08:12.043 dbexit:
Thu Nov 21 11:08:12.043 [serviceShutdown] shutdown: going to close listening sockets...
Thu Nov 21 11:08:12.043 [serviceShutdown] closing listening socket: 1492
Thu Nov 21 11:08:12.043 [serviceShutdown] closing listening socket: 1500
Thu Nov 21 11:08:12.043 [serviceShutdown] shutdown: going to flush diaglog...
Thu Nov 21 11:08:12.043 [serviceShutdown] shutdown: going to close sockets...
Thu Nov 21 11:08:12.043 [serviceShutdown] shutdown: waiting for fs preallocator...
Thu Nov 21 11:08:12.043 [serviceShutdown] shutdown: lock for final commit...
Thu Nov 21 11:08:12.043 [serviceShutdown] shutdown: final commit...
Thu Nov 21 11:08:12.043 [conn1333] end connection 127.0.0.1:51612 (18 connections now open)
Thu Nov 21 11:08:12.043 [conn1331] end connection 127.0.0.1:51610 (18 connections now open)
...snip...
Thu Nov 21 11:08:12.043 [conn1322] end connection 10.1.2.212:53303 (17 connections now open)
Thu Nov 21 11:08:12.043 [conn1337] end connection 127.0.0.1:51620 (18 connections now open)
Thu Nov 21 11:08:12.839 [serviceShutdown] shutdown: closing all files...
Thu Nov 21 11:08:14.683 [serviceShutdown] Progress: 5/163 3% (File Closing Progress)
Thu Nov 21 11:08:16.012 [serviceShutdown] Progress: 6/163 3% (File Closing Progress)
...snip...
Thu Nov 21 11:08:52.030 [serviceShutdown] Progress: 143/163 87% (File Closing Progress)
Thu Nov 21 11:08:54.092 [serviceShutdown] Progress: 153/163 93% (File Closing Progress)
Thu Nov 21 11:08:55.405 [serviceShutdown] closeAllFiles() finished
Thu Nov 21 11:08:55.405 [serviceShutdown] journalCleanup...
Thu Nov 21 11:08:55.405 [serviceShutdown] removeJournalFiles
Thu Nov 21 11:09:05.578 [DataFileSync] ERROR: Client::shutdown not called: DataFileSync
The last line is my main concern.
I'm also interested in how MongoDB is able to take longer to shut down than Windows normally allows for service shutdown. At what point is it safe to shut down the machine without checking the log file?
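On the second question: a Windows service can keep extending its own stop window by repeatedly reporting SERVICE_STOP_PENDING with a wait hint, and the Service Control Manager keeps waiting as long as those checkpoints keep arriving; presumably that is what mongod's [serviceShutdown] thread does while it closes the data files. A minimal pywin32 sketch of the mechanism (the service here is a made-up demo, not mongod's actual code):

import win32event
import win32service
import win32serviceutil

class SlowStopDemo(win32serviceutil.ServiceFramework):
    _svc_name_ = "SlowStopDemo"              # hypothetical service name
    _svc_display_name_ = "Slow-stop demo service"

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        self.stop_event = win32event.CreateEvent(None, 0, 0, None)

    def SvcDoRun(self):
        # Run until a stop is requested.
        win32event.WaitForSingleObject(self.stop_event, win32event.INFINITE)

    def SvcStop(self):
        # Each SERVICE_STOP_PENDING report with a wait hint resets the
        # SCM's stop timeout, so a service that keeps checkpointing can
        # take far longer than the default stop window.
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING, waitHint=30000)
        # ... a real service would flush and close files here, reporting
        # SERVICE_STOP_PENDING periodically while work remains ...
        win32event.SetEvent(self.stop_event)

if __name__ == "__main__":
    win32serviceutil.HandleCommandLine(SlowStopDemo)

Since net stop only returns once the service reports SERVICE_STOPPED, the shutdown sequence shown in the log has already run to completion by the time that command finishes.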