Bolt-js in socket mode on Heroku fails with Error R10

I have a Slack bolt-js server app that runs with socketMode: true.
It works perfectly fine on my local PC, but when I move it to Heroku in a web dyno, it fails after one minute. It starts up just fine and is fully functional during that minute, but after one minute I get this:
2021-09-01T12:59:33.771745+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
My bolt-js app is started like this:
await app.start(process.env.PORT);
I think Heroku is failing to detect that there is a WebSocket open on this port and then times out.
I have already gone through a lot of documentation. I must say that the bolt-js documentation conflicts with Heroku's: it suggests using a worker dyno, but Heroku explicitly says that worker dynos cannot receive web HTTP traffic.
Totally at a loss here. Any ideas anybody?

I ran into this just a few days ago and found that changing the dyno type on Heroku from web to worker solved the issue. The idea came from this page, specifically the Dyno configurations section.
To change the dyno type, navigate to the Resources section of your Heroku app; you should see something like the following:
[Screenshot: Resources tab]
From there, edit the web process type and toggle it off, then do the same for the worker type and toggle it on. The dyno restarts automatically, so you should be able to view the logs immediately.
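If it helps, the Procfile change that goes along with this is small. A minimal sketch, assuming the bot's entry point is app.js (adjust to your actual start command):
worker: node app.js
After deploying, heroku ps:scale web=0 worker=1 does the same thing as the toggles above.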

This question was posted long ago, but maybe this will help someone.
You can add the port in the app configuration, like this:
const { App } = require('@slack/bolt');

const app = new App({
  token: process.env.bot_token,
  // for socket mode you would also pass socketMode: true and an app-level token (appToken)
  port: process.env.PORT || 3000,
});

(async () => {
  await app.start();
})();
Maybe this helps you.

Related

Heroku H31 errors on production, 'upstream prematurely closed connection while reading response header' - Is this an 'us' or 'them' problem?

I'm hosting a Rails web app on Heroku; there have been no recent code changes, but about two days ago the application started reporting constant H31 (Misdirected Request) errors. After looking into the documentation, and since the application was still accessible, I moved on.
The next day, users from some of the subdomains could not access the application while others could. The application uses a wildcard certificate and we are behind a WAF. The first thing I tried was checking the firewall logs, and nothing was showing as blocked. I turned off the firewall and connected the domain directly to the application and still received the stream of H31 errors; after 15 minutes of no change I re-enabled the firewall.
I reached out to the firewall vendor, and they said their logs only showed "upstream prematurely closed connection while reading response header".
They suggested the issue was most likely on the Heroku DNS servers, so I'm now waiting for a response.
Is there anything I can do in the Heroku environment that would change this, or am I stuck waiting for a response?

Concourse external Windows worker fails to register through the ATC

I have spun up a BOSH cluster on AWS running a Concourse deployment; for this I used a tool called concourse-up. I spun up a Windows worker outside the cluster's VPC and I am trying to register the worker through the ATC, but this step fails with an error. I have opened up all ports on both the web VM and the worker VM. I have tried several things, but there are two specific errors that I get:
When I connect the worker without a --peer-ip, the worker registers, so I can see it through the fly CLI, but I get this error in the log (snippet below), and jobs fail with this error:
Put /volumes/47c1c26c-274b-4f04-4dea-01d476ed949e/stream-in?path=.: read tcp 10.0.0.7:59478->10.0.0.7:39198: read: connection reset by peer
{"timestamp":"1513510128.917933226","source":"worker","message":"worker.setup.no-assets","log_level":1,"data":{"session":"1"}}
{"timestamp":"1513510128.920933962","source":"worker","message":"worker.garden.started","log_level":1,"data":{"session":"2"}}
{"timestamp":"1513510128.921934128","source":"baggageclaim","message":"baggageclaim.listening","log_level":1,"data":{"addr":"127.0.0.1:7788"}}
{"timestamp":"1513510130.645173311","source":"tsa","message":"tsa.connection.channel.forward-worker.register.start","log_level":1,"data":{"remote":"34.242.192.32:56803","session":"12.1.1.5","worker-address":"10.0.0.7:38380","worker-platform":"windows","worker-tags":""}}
{"timestamp":"1513510130.649989367","source":"tsa","message":"tsa.connection.channel.forward-worker.register.reached-worker","log_level":0,"data":{"baggageclaim-took":"2.251829ms","garden-took":"2.492218ms","remote":"34.242.192.32:56803","session":"12.1.1.5"}}
{"timestamp":"1513510128.960758924","source":"baggageclaim","message":"baggageclaim.repository.get-volume.volume-not-found","log_level":1,"data":{"session":"1.2","volume":"resource-certs"}}
{"timestamp":"1513510128.960758924","source":"baggageclaim","message":"baggageclaim.api.volume-server.get-volume.volume-not-found","log_level":1,"data":{"session":"2.1.2","volume":"resource-certs"}}
{"timestamp":"1513510128.963933945","source":"baggageclaim","message":"baggageclaim.repository.create-volume.failed-to-materialize-strategy","log_level":2,"data":{"error":"mkdir C:\\Users\\Administrator\\workspace\\concourse-workspace\\volumes\\init\\resource-certs: Cannot create a file when that file already exists.","handle":"resource-certs","session":"1.3"}}
Supposedly this is not the right way to do it if I follow the official docs: https://concourse-ci.org/clusters-with-bosh.html#configuring-bosh-tsa
The other thing I tried is using --peer-ip, which the official docs recommend if you are outside the cluster and have no connection to any of the resources other than the ATC (a rough sketch of such an invocation follows the log below). But this does not even register the worker; it fails with this error in the log:
{"timestamp":"1513510497.727977514","source":"tsa","message":"tsa.connection.channel.register-worker.register.start","log_level":1,"data":{"remote":"34.242.192.32:56828","session":"13.1.1.8","worker-address":"34.242.40.26:7777","worker-platform":"windows","worker-tags":""}}
{"timestamp":"1513510497.728802919","source":"tsa","message":"tsa.connection.channel.register-worker.register.failed-to-fetch-containers","log_level":2,"data":{"error":"Get http://api/containers: dial tcp 34.242.40.26:7777: getsockopt: connection refused","remote":"34.242.192.32:56828","session":"13.1.1.8"}}
{"timestamp":"1513510497.729249239","source":"tsa","message":"tsa.connection.channel.register-worker.register.failed-to-list-volumes","log_level":2,"data":{"error":"Get http://34.242.40.26:7788/volumes: dial tcp 34.242.40.26:7788: getsockopt: connection refused","remote":"34.242.192.32:56828","session":"13.1.1.8"}}
{"timestamp":"1513510497.729336023","source":"tsa","message":"tsa.connection.channel.register-worker.register.failed-to-reach-worker","log_level":1,"data":{"baggageclaim-took":"469.89µs","garden-took":"666.118µs","remote":"34.242.192.32:56828","session":"13.1.1.8"}}
{"timestamp":"1513510497.729401112","source":"tsa","message":"tsa.connection.channel.register-worker.register.done","log_level":1,"data":{"remote":"34.242.192.32:56828","session":"13.1.1.8","worker-address":"34.242.40.26:7777","worker-platform":"windows","worker-tags":""}}
I used this guide to configure the worker, along with the official docs: http://www.chrisumbel.com/article/windows_worker_to_bosh_deployed_concourse
This issue was never resolved, but I believe it is no longer relevant. Since this answer was posted, Concourse has added BOSH-orchestrated Windows workers:
https://github.com/pivotal-cf-experimental/concourse-windows-worker-release
They have also updated their official documentation: https://concourse-ci.org/worker-pools.html

Simple Ruby Sinatra application runs, does not receive requests, times out

Required Sinatra, got '/', did 'hello, world.', requested the application root in the browser, and received nothing.
Literally the barest-of-bones Sinatra web application will not run as intended on my Mac, after previously working just fine. Page requests do not show up in the console.
I feel this may be due to force-quitting an instance of a running application, but after searching the web for a solution I am at a loss and do not know how to rectify the issue.
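If a force-quit instance really were the culprit, one quick way to check for an orphaned process still bound to Sinatra's default port would be something like this (a sketch; assumes lsof is available, which it is on macOS by default):
lsof -i :4567    # list any process still listening on port 4567
kill <PID>       # stop the orphaned process if one shows up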
edit:
require 'sinatra'

get '/' do
  'Hello, World.'
end
The problem does persist after a restart.
It's definitely the correct URL (exactly the one I've been working with all day): localhost:4567, as stated in the output when the application is run:
== Sinatra (v1.4.7) has taken the stage on 4567 for development with backup from Thin
Thin web server (v1.6.4 codename Gob Bluth)
Maximum connections set to 1024
Listening on localhost:4567, CTRL+C to stop
edit 2: scorched-earth strategy; a full reinstall with RVM fixed this issue.

Meteor client can't connect to server

At least half of the time, when a client loads my Meteor app, it can't connect to the server.
If I run this in the console:
Deps.autorun(function() {
  console.log(Meteor.status());
});
I can see that the client is connecting, disconnecting, and reconnecting repeatedly.
If I open the Network tab in Chrome's dev tools, I can see that another WebSocket request is made about every second or two.
What could be causing this?
Did you try disabling WebSockets? Try running your Meteor app with the following environment variable:
export DISABLE_WEBSOCKETS=true
With WebSockets disabled, the client falls back to long polling, which can avoid problems with proxies or firewalls that interfere with WebSocket connections and cause exactly this kind of connect/disconnect loop.
This suggestion was posted by Arunoda on CodersClan.

Why is my auto scaling EC2 instance reported as 'out of service' by the load balancer?

I'm having an issue with an Amazon EC2 instance during auto scaling. Every command I typed worked and I found no errors. But when I tested whether auto scaling was working, I found that it only works until the instance starts. The newly spawned instance does not work afterwards: it's under my load balancer, but its status is out of service. One more issue: when I copy and paste the public DNS link into the browser, it does not respond, and an error like "Firefox can't find ..." is shown.
I doubt that there is a problem with the image or the Linux configuration.
Thanks in advance.
Although it's been a long time since you posted this, try adjusting the health check of the load balancer.
If your health check is like this:
Ping Target: HTTP:80/index.php
Timeout: 10 seconds
Interval: 30 seconds
Unhealthy Threshold: 4
Healthy Threshold: 2
that means an instance will be marked out of service if the ping target doesn't respond within 10 seconds on 4 consecutive checks, while the ELB tries to reach it every 30 seconds.
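For reference, the same health check settings can be adjusted from the AWS CLI on a classic ELB. A sketch, where my-load-balancer is a placeholder name and the target path and timings should be tuned to your app:
aws elb configure-health-check \
  --load-balancer-name my-load-balancer \
  --health-check Target=HTTP:80/index.php,Interval=30,Timeout=10,UnhealthyThreshold=4,HealthyThreshold=2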
Usually, the fact that you get "Firefox can't find ..." when you try to access the instance directly means that the service is down. Try to log in to the instance and check whether the service is alive; also check the firewall rules, which might block internet/ELB requests. Also check your ELB health check; it's a good place to start. If you still have issues, try to post some debug information like the instance's netstat output and the ELB description and parameters.
Rules on the security groups assigned to the instance and the load balancer were not allowing traffic to pass between the two. This caused the health check to fail, so the load balancer marked the instance as out of service.
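A sketch of what fixing that can look like with the AWS CLI, assuming the instance sits behind a security group sg-instance, the load balancer uses sg-elb, and the health check targets port 80 (all three are placeholders):
aws ec2 authorize-security-group-ingress \
  --group-id sg-instance \
  --protocol tcp \
  --port 80 \
  --source-group sg-elb
The same can of course be done from the EC2 console by adding an inbound rule on the instance's security group whose source is the load balancer's security group.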
If you don't have index.html in the document root of the instance, the default health check will fail. In my experience, you can set a custom protocol, port, and path for the health check when creating the load balancer.
