I recently upgraded Rocket.Chat from 0.62 to 0.65 on a CentOS server. While running node main.js, I get the following output:
Setting default file store to FileSystem
connect deprecated multipart: use parser (multiparty, busboy, formidable) npm module instead npm/node_modules/connect/lib/middleware/bodyParser.js:56:20
connect deprecated limit: Restrict request size at location of read npm/node_modules/connect/lib/middleware/multipart.js:86:15
{"line":"160","file":"rocketchat_migrations.js","message":"Migrations: Not migrating, already at version 121","time":{"$date":1528731630704},"level":"info"}
Updating process.env.MAIL_URL
Using GridFS for custom sounds storage
Using GridFS for custom emoji storage
Push: configuring...
Push.Configure { sendTimeout: 60000,
apn: undefined,
gcm:
{ apiKey: 'XXXX',
projectNumber: 'YYYY' },
production: true,
sendInterval: 1000,
sendBatchSize: 10 }
GCM configured
Push: Send worker started, using interval: 1000
Exception in callback of async function: Error: listen EADDRINUSE 0.0.0.0:3000
at Object._errnoException (util.js:1024:11)
at _exceptionWithHostPort (util.js:1046:20)
at Server.setupListenHandle [as _listen2] (net.js:1351:14)
at listenInCluster (net.js:1392:12)
at doListen (net.js:1501:7)
at _combinedTickCallback (internal/process/next_tick.js:141:11)
at process._tickDomainCallback (internal/process/next_tick.js:218:9)
➔ System ➔ startup
➔ +-------------------------------------------------------------+
➔ | SERVER RUNNING |
➔ +-------------------------------------------------------------+
➔ | |
➔ | Rocket.Chat Version: 0.65.1 |
➔ | NodeJS Version: 8.9.3 - x64 |
➔ | Platform: linux |
➔ | Process Port: 3000 |
➔ | Site URL: http://ZZZZ:3000 |
➔ | ReplicaSet OpLog: Disabled |
➔ | Commit Hash: 8349c36de0 |
➔ | Commit Branch: HEAD |
➔ | |
➔ +-------------------------------------------------------------+
Although the banner says the server is running, I am not able to access it from a browser. What could be causing this?
I listed all running node processes with ps -ax | grep node and killed them all. Then I reran node main.js and everything worked.
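If you want to kill only the process that is actually holding port 3000, rather than every node process, something like this usually works (a sketch; the exact tools available vary by distribution):
sudo ss -ltnp | grep ':3000'   # or: sudo lsof -i :3000, to see which PID owns the port
kill <PID>                     # replace <PID> with the process id reported above
node main.js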
I have a question. Maybe this is easy and I just can't figure it out.
I wrote a little Sinatra (Ruby) web app. It runs on a Puma server and is started with Foreman. I now started it on my Raspberry Pi (Raspbian Stretch), and that works.
14:28:45 web.1 | started with pid 10847
14:28:52 web.1 | Puma starting in single mode...
14:28:52 web.1 | * Version 3.12.0 (ruby 2.5.1-p57), codename: Llamas in Pajamas
14:28:52 web.1 | * Min threads: 0, max threads: 16
14:28:52 web.1 | * Environment: development
14:28:52 web.1 | * Listening on tcp://localhost:10001
14:28:52 web.1 | Use Ctrl-C to stop
I can access it locally on my Raspberry Pi:
curl localhost:10001
This works.
But I also want to reach it from my PC (same home network), and that is not working.
I can ping the Raspberry Pi successfully:
ping 192.XXX.XXX.XX
but when I try to reach the port it is running on, it does not work (I also tried that in my browser). I have a FRITZ!Box.
ping 192.XXX.XXX.XX:10001
Procfile:
web: bundle exec rackup -p 10001 -s puma
I am not sure what I am doing wrong. :-(
By default rackup binds to localhost. You have to tell it to listen on 0.0.0.0:
rackup -p 10001 -o 0.0.0.0
or
rackup -p 10001 --host 0.0.0.0
Related source here: https://github.com/rack/rack/blob/master/lib/rack/server.rb#L56
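Applied to the Procfile from the question, that is a one-flag change (a sketch; everything else stays as you have it):
web: bundle exec rackup -p 10001 -o 0.0.0.0 -s puma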
Could you tell me what's wrong with my Dockerfile or docker-compose?
# Dockerfile
FROM ruby:2.5.0
RUN apt-get update -qq && apt-get install -y build-essential
ENV APP_HOME /app
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
ADD Gemfile* $APP_HOME/
RUN bundle install
ADD . $APP_HOME
# docker-compose.yml
version: '3'
services:
  db:
    image: mongo
    volumes:
      - $HOME/data/mongodb:/data/db
    ports:
      - "27017:27017"
  web:
    build: .
    command: rackup config.ru --port 4567
    volumes:
      - .:/app
    ports:
      - "4567:4567"
    depends_on:
      - db
If I run the app directly with the rackup command it works well. The issue starts when I run the containers through docker-compose up: the app stops accepting requests on localhost:4567.
I couldn't spot the issue, which is why I'm asking for help. 🤓
Here are the logs from docker-compose up, just in case.
Starting tracker_db_1 ... done
Recreating tracker_api_1 ... done
Attaching to tracker_db_1, tracker_api_1
db_1 | 2018-07-30T09:43:17.580+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=fe12227a1143
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] db version v4.0.0
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] git version: 3b07af3d4f471ae89e8186d33bbb1d5259597d51
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] allocator: tcmalloc
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] modules: none
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] build environment:
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] distmod: ubuntu1604
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] distarch: x86_64
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] target_arch: x86_64
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] options: { net: { bindIpAll: true } }
db_1 | 2018-07-30T09:43:17.686+0000 I STORAGE [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
db_1 | 2018-07-30T09:43:17.700+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=487M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
web_1 | [2018-07-30 09:43:19] INFO WEBrick 1.4.2
web_1 | [2018-07-30 09:43:19] INFO ruby 2.5.0 (2017-12-25) [x86_64-linux]
web_1 | [2018-07-30 09:43:19] INFO WEBrick::HTTPServer#start: pid=1 port=4567
db_1 | 2018-07-30T09:43:29.033+0000 I STORAGE [initandlisten] WiredTiger message [1532943809:33683][1:0x7f429bbb8a00], txn-recover: Main recovery loop: starting at 7/8576
db_1 | 2018-07-30T09:43:29.787+0000 I STORAGE [initandlisten] WiredTiger message [1532943809:787811][1:0x7f429bbb8a00], txn-recover: Recovering log 7 through 8
db_1 | 2018-07-30T09:43:30.362+0000 I STORAGE [initandlisten] WiredTiger message [1532943810:362900][1:0x7f429bbb8a00], txn-recover: Recovering log 8 through 8
db_1 | 2018-07-30T09:43:30.433+0000 I STORAGE [initandlisten] WiredTiger message [1532943810:433423][1:0x7f429bbb8a00], txn-recover: Set global recovery timestamp: 0
db_1 | 2018-07-30T09:43:30.463+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
db_1 | 2018-07-30T09:43:30.487+0000 I CONTROL [initandlisten]
db_1 | 2018-07-30T09:43:30.488+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
db_1 | 2018-07-30T09:43:30.488+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
db_1 | 2018-07-30T09:43:30.488+0000 I CONTROL [initandlisten]
db_1 | 2018-07-30T09:43:30.635+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
db_1 | 2018-07-30T09:43:30.642+0000 I NETWORK [initandlisten] waiting for connections on port 27017
Your app is only available on localhost, and that localhost is the Docker container, not your system (you can even try to curl localhost from inside the container, and I bet it will work just fine). I can't give you more details, as I'm not strong on the topic, but all you need to do is bind Rack to 0.0.0.0.
For example, I run my Rails app this way:
bundle exec rails s -p 3000 -b 0.0.0.0
In your case it will be:
rackup --host 0.0.0.0 --port ...
Now it will be available even from your network (at least it should be)
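Concretely, in the docker-compose.yml from the question, that would be a one-line change to the web service's command (a sketch, keeping the existing port):
command: rackup config.ru --host 0.0.0.0 --port 4567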
I ran into this problem when trying to set up a simple Sinatra app in Docker using the built-in WEBrick server. In other words, I don't have a config.ru file. In case anyone else runs into the same issue, I thought I'd post my solution.
My app is a "Hello World" app I used just to get Sinatra running inside a Docker container.
Here's the app.rb before my changes.
require 'sinatra'
get '/' do
'Hello world!'
end
When I ran the Docker container I could see WEBrick was running and ready to accept requests. But when I tried to access it from my laptop the request never made it to the container.
To fix the issue I added set :bind, '0.0.0.0' to app.rb, like this:
require 'sinatra'
set :bind, '0.0.0.0'
get '/' do
'Hello world!'
end
For anything requiring more complex configuration, it would be better to add the bind setting to a configuration file.
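A rough sketch of that idea (the configure block is an assumption on my part, not part of the original app, and it could just as well live in a separate file that app.rb requires):
require 'sinatra'

configure do
  # Listen on all interfaces so requests from outside the container reach the app.
  set :bind, '0.0.0.0'
end

get '/' do
  'Hello world!'
end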
I'm working through the Spring tutorial here:
Messaging with RabbitMQ
I found this question, but it did not address my query regarding the docker-compose.yml file found in the tutorial:
Spring RabbitMQ tutorial results in Connection Refused error
I've completed all necessary steps up until the actual running of the application, at which point I'm getting ConnectException exceptions suggesting that the server is not running or not running correctly.
The docker-compose.yml file specified in the tutorial is as follows:
rabbitmq:
  image: rabbitmq:management
  ports:
    - "5672:5672"
    - "15672:15672"
Basically I am unsure what this docker-compose file actually does, because it doesn't seem to set up the RabbitMQ server the way the tutorial suggests (or at least not in the way the tutorial expects). I'm also quite new to Docker, so perhaps I am mistaken in thinking this file would run a new instance of the RabbitMQ server.
When I run docker-compose up I get the following console output:
rabbitmq_1 |
rabbitmq_1 | =INFO REPORT==== 28-Jun-2017::13:27:26 ===
rabbitmq_1 | Starting RabbitMQ 3.6.10 on Erlang 20.0-rc2
rabbitmq_1 | Copyright (C) 2007-2017 Pivotal Software, Inc.
rabbitmq_1 | Licensed under the MPL. See http://www.rabbitmq.com/
rabbitmq_1 |
rabbitmq_1 | RabbitMQ 3.6.10. Copyright (C) 2007-2017 Pivotal Software, Inc.
rabbitmq_1 | ## ## Licensed under the MPL. See http://www.rabbitmq.com/
rabbitmq_1 | ## ##
rabbitmq_1 | ########## Logs: tty
rabbitmq_1 | ###### ## tty
rabbitmq_1 | ##########
rabbitmq_1 | Starting broker...
rabbitmq_1 |
rabbitmq_1 | =INFO REPORT==== 28-Jun-2017::13:27:26 ===
rabbitmq_1 | node : rabbit#bd20dc3d3d2a
rabbitmq_1 | home dir : /var/lib/rabbitmq
rabbitmq_1 | config file(s) : /etc/rabbitmq/rabbitmq.config
rabbitmq_1 | cookie hash : DTVsmjdKvD5KtH0o/OLVJA==
rabbitmq_1 | log : tty
rabbitmq_1 | sasl log : tty
rabbitmq_1 | database dir : /var/lib/rabbitmq/mnesia/rabbit#bd20dc3d3d2a
...plus a load of INFO reports. This led me to believe that the RabbitMQ server was up and running, but apparently not, as I cannot connect.
The only way I have gotten this to work is by manually installing Erlang and RabbitMQ (on a Windows system here), which does let me complete the tutorial.
Why is Docker even mentioned in this tutorial, though? The docker-compose.yml does not appear to do what the tutorial suggests.
What is this file actually doing here, and how would one run RabbitMQ in a Docker container for the purposes of this tutorial? Is this an issue with port numbers?
It turns out the issue was with the Spring RabbitMQ template connection information.
The Spring tutorial assumes a normal, manual installation of RabbitMQ (plus Erlang), and the Spring RabbitMQ template uses default connection parameters that are not compatible with the image in the docker-compose file specified in the tutorial.
To solve this I needed to add a Spring application.properties file to the resources folder in my application's directory structure. Next I needed to find the IP address of my Docker machine using the following command:
docker-machine ip
which gives the IP address. I added the following parameters to the application.properties file:
spring.rabbitmq.host={docker-machine ip address}
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
The port, username and password here are all defaults and can be found in the RabbitMQ documentation.
Doing this I was able to have my application connect correctly to the RabbitMQ server running in the Docker container.
It appears the Spring tutorial is slightly incomplete, as it does not tell the reader that some extra steps are required when using the RabbitMQ docker-compose file instead of the manual RabbitMQ installation that the rest of the tutorial assumes.
From what I know, you cannot always rely on knowing the IP address; instead of the IP address, you should provide the DNS name, which is the name of the rabbitmq service defined in your docker-compose file.
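That only works if the Spring application itself runs as a service on the same Compose network. A minimal sketch in the v1-style syntax the tutorial uses (the app service name and image are assumptions, not part of the tutorial):
rabbitmq:
  image: rabbitmq:management
  ports:
    - "5672:5672"
    - "15672:15672"
app:
  image: my-spring-app   # hypothetical image containing the tutorial application
  links:
    - rabbitmq           # makes the hostname "rabbitmq" resolvable from the app container
With that, application.properties can use spring.rabbitmq.host=rabbitmq instead of an IP address.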
I'm currently trying to install a BOSH Director on BOSH Lite - it's clear to me that BOSH Lite already ships with a Director, but I would like to test a release containing a Director "on top of that". Here is my setup:
Everything works fine until I add the warden_cpi job. I would like to configure the Warden CPI to connect to the Warden server running on the virtual machine hosting BOSH Lite while still being reachable by the Director. So what I tried is this:
releases:
- name: bosh-warden-cpi
  url: https://bosh.io/d/github.com/cppforlife/bosh-warden-cpi-release?v=29
  sha1: 9cc293351744f3892d4a79479cccd3c3b2cf33c7
  version: latest

instance_groups:
- name: bosh-components
  jobs:
  - name: warden_cpi
    release: bosh-warden-cpi
    properties:
      warden_cpi:
        host_ip: 10.254.50.4 # host IP of BOSH Lite Vagrant Box
        warden:
          connect_network: tcp
          connect_address: 10.254.50.4:7777 # again host IP and port of garden-linux on the BOSH Lite Vagrant Box
        agent:
          mbus: nats://user:password@127.0.0.1:4222
          blobstore:
            provider: dav
            options:
              endpoint: http://127.0.0.1:25250
              user: user
              password: password
where 10.254.50.4 is the IP address of the Vagrant Box and 7777 is the port of garden-linux.
During the deployment, bosh vms shows this:
+----------------------------------------------------------+--------------------+-----+---------+--------------+
| VM | State | AZ | VM Type | IPs |
+----------------------------------------------------------+--------------------+-----+---------+--------------+
| bosh-components/0 (37a1938e-e1df-4650-bec6-460e4bc3916e) | unresponsive agent | n/a | small | |
| bosh-director/0 (2bb47ce1-0bba-49aa-b9a3-86e881e91ee9) | running | n/a | small | 10.244.102.2 |
| jumpbox/0 (51c895ae-3563-4561-ba3f-d0174e90c3f4) | running | n/a | small | 10.244.102.4 |
+----------------------------------------------------------+--------------------+-----+---------+--------------+
From bosh deploy, I get this error:
Error 450002: Timed out sending `get_state' to e1ed3839-ade4-4e12-8f33-6ee6000750d0 after 45 seconds
After the error occurs, I can see the VM with bosh vms:
+----------------------------------------------------------+---------+-----+---------+--------------+
| VM | State | AZ | VM Type | IPs |
+----------------------------------------------------------+---------+-----+---------+--------------+
| bosh-components/0 (37a1938e-e1df-4650-bec6-460e4bc3916e) | running | n/a | small | 10.244.102.3 |
| bosh-director/0 (2bb47ce1-0bba-49aa-b9a3-86e881e91ee9) | failing | n/a | small | 10.244.102.2 |
| jumpbox/0 (51c895ae-3563-4561-ba3f-d0174e90c3f4) | running | n/a | small | 10.244.102.4 |
+----------------------------------------------------------+---------+-----+---------+--------------+
But when I ssh into the bosh-components VM, there are no jobs in /var/vcap/jobs.
When I remove the warden_cpi block from the jobs list, everything runs as expected. The full jobs list for my BOSH components VM:
nats
postgres
registry
blobstore
The Director itself runs on another machine. Without the Warden CPI the two machines can communicate as expected.
Can anybody point out to me how I have to configure the Warden CPI so that it connects to the Vagrant Box as expected?
The question is very old: it uses the BOSH v1 CLI whereas the BOSH v2 CLI is now the established standard, and Garden Linux was deprecated a long time ago in favor of Garden runC. Still, having experimented a lot with BOSH-Lite, I'd like to answer this one.
First, a semantics remark: you shouldn't say “on top of that” but rather “as instructed by”, because a BOSH Director just instructs some underlying (API-based) infrastructure to do something that eventually makes it run some workloads.
Second, there are two hurdles you might hit here:
The main problem is that the Warden CPI talks to both the Garden backend and the local Linux kernel for setting up various things around those Garden containers. As a direct consequence, you cannot run a Warden CPI inside a BOSH-Lite container.
The filesystem used (here by the long-gone Garden Linux, but nowadays the issue would be similar with Garden runC) might not work inside a Garden container, as managed by the pre-existing Warden CPI.
All in all, the main thing to be aware of is that the Warden CPI does not just talk to the Garden backend through its REST API. More than that, the Warden CPI needs to be co-located with the Linux kernel that runs Garden, in order to make system calls and run local commands for mounting persistent storage and other things.
I am trying to deploy a Ruby Sinatra API on port 4567 of an EC2 micro instance.
I have created a Security Group with the following rules (and created the instance with said security group):
--------------------------------
| Ports | Protocol | Source |
--------------------------------
| 22 | tcp | 0.0.0.0/0 |
| 80 | tcp | 0.0.0.0/0 |
| 443 | tcp | 0.0.0.0/0 |
| 4567 | tcp | 0.0.0.0/0 |
--------------------------------
I bound myapp.rb to port 4567 (the default, but set explicitly for clarity):
set :port, 4567
and ran the service:
ruby myapp.rb
[2013-09-05 03:12:54] INFO WEBrick 1.3.1
[2013-09-05 03:12:54] INFO ruby 1.9.3 (2013-01-15) [x86_64-linux]
== Sinatra/1.4.3 has taken the stage on 4567 for development with backup from WEBrick
[2013-09-05 03:12:54] INFO WEBrick::HTTPServer#start: pid=1811 port=4567
I ran nmap against localhost while SSH'd into the EC2 instance:
Starting Nmap 6.00 ( http://nmap.org ) at 2013-09-05 03:13 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00019s latency).
PORT STATE SERVICE
4567/tcp open tram
Nmap done: 1 IP address (1 host up) scanned in 0.08 seconds
I ran nmap against the external IP while SSH'd into the EC2 instance:
Starting Nmap 6.00 ( http://nmap.org ) at 2013-09-05 03:15 UTC
Nmap scan report for <removed>
Host is up (0.0036s latency).
PORT STATE SERVICE
4567/tcp closed tram
Nmap done: 1 IP address (1 host up) scanned in 0.11 seconds
How do I change the state of the port from closed to open?
You’re starting Sinatra in the development environment. When running in development, Sinatra only listens for requests from the local machine.
There are a few ways to change this; the simplest is probably to run in the production environment, e.g.:
$ ruby myapp.rb -e production
You could also explicitly set the bind variable if you wanted to keep running in development:
set :bind, '0.0.0.0' # to listen on all interfaces
There are two possible causes for your problem:
Your service is only listening for connections on the loopback interface.
A software firewall is running and blocking connections from outside on that port.
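A quick way to check both from inside the instance (a sketch; on older systems ss may need to be replaced by netstat -tlnp):
sudo ss -tlnp | grep 4567   # 127.0.0.1:4567 means loopback only; 0.0.0.0:4567 means all interfaces
sudo iptables -L -n         # look for rules dropping or rejecting inbound traffic to port 4567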