I am deploying an embedded Jetty app on Heroku and it listens on port 80 for HTTP. However, on startup I read the System.getenv("PORT") value and initialize the Jetty server with that port value, like so:
if (isproduction) {
    LOGGER.log(Level.INFO, "System.getenv(\"PORT\"):{0}", System.getenv("PORT"));
    port = Integer.valueOf(System.getenv("PORT"));
    baseurl = "http://xxxx-xxxx.herokuapp.com";
}
LOGGER.log(Level.INFO, "Starting jetty on port: {0}", port);
final Server jettyServer = new Server(port);
jettyServer.setHandler(context);
The above prints
2017-05-25T17:34:02.943531+00:00 app[web.1]: INFO: System.getenv("PORT"):45203
2017-05-25T17:34:02.945234+00:00 app[web.1]: INFO: Starting jetty on port: 45203
However, heroku open from the command line opens the application at port 80. This is not really a problem (the application is fully operational), but it is unexpected behavior. Can anyone shed some light on what's going on?
Looking at its code, heroku open does not support custom ports. That is expected: the Heroku router listens on ports 80 and 443 and forwards requests to whatever port the dyno bound from $PORT, so the public URL never reflects the internal port.
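For reference, the usual pattern is simply to bind whatever port Heroku hands you; a minimal sketch of that startup code, assuming an 8080 fallback for local runs (the fallback is not part of the original code):

import org.eclipse.jetty.server.Server;

public class Main {
    public static void main(String[] args) throws Exception {
        // Heroku injects the port to bind via the PORT environment variable;
        // the 8080 fallback for local development is an assumption.
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        Server jettyServer = new Server(port);
        // jettyServer.setHandler(context); // attach the handler/context as in the question
        jettyServer.start();
        jettyServer.join();
    }
}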
I want to run a JMeter test that listens on a port at a given IP and prints the messages being sent to that port. I have tried using this:
SocketAddress inetSocketAddress = new InetSocketAddress(InetAddress.getByName("<client ipAddress>"), <port number>);
def server = new ServerSocket();
server.bind(inetSocketAddress);
while (true) {
    server.accept { socket ->
        log.info('Someone is connected')
        socket.withStreams { input, output ->
            def line = input.newReader().readLine()
            log.info('Received message: ' + line)
        }
        log.info("Connection processed")
    }
}
But this is giving me the error "Cannot assign requested address: JVM_Bind".
Is there any alternate way to approach this? Or what changes do I need to make for the current approach to work?
You copied and pasted this code from the right place and it should work just fine. Evidence:
as per the BindException documentation
Signals that an error occurred while attempting to bind a socket to a local address and port. Typically, the port is in use, or the requested local address could not be assigned.
So I can think of two options:
Your <client ipAddress> is not correct, cannot be resolved, or is not an address of the machine you run the script on, so it cannot be assigned (see the sketch after this list).
Something is already running on the <port number>; you cannot have two applications listening on the same port: the first one will bind successfully and the other will fail.
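For the first option, a quick way to confirm that the address is the problem is to bind to the wildcard address instead of the client IP. A minimal sketch (plain Java; the same calls work from the Groovy script, and port 9000 is an arbitrary example):

import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BindCheck {
    public static void main(String[] args) throws Exception {
        // Bind to the wildcard address (all local interfaces) instead of a remote client IP.
        // If this succeeds while binding to "<client ipAddress>" fails,
        // that IP is simply not an address of the machine running the script.
        ServerSocket server = new ServerSocket();
        server.bind(new InetSocketAddress("0.0.0.0", 9000)); // 9000: arbitrary free port for the test
        System.out.println("Listening on " + server.getLocalSocketAddress());
        server.close();
    }
}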
More information:
Fixing java.net.BindException: Cannot assign requested address: JVM_Bind in Tomcat, Jetty
Apache Groovy - Why and How You Should Use It
I have just set up Redis using Docker on Windows and I am trying to write a simple .NET console app to connect to it, to familiarize myself with things.
I am running docker in windows 10 pro using "docker run redis:windowsservercore"
The Redis instance looks like it boots up fine and displays:
[1904] 21 Mar 13:18:50.835 # Server started, Redis version 3.2.100
[1904] 21 Mar 13:18:50.847 - The server is now ready to accept connections on port 6379
But in my app when I try to connect to localhost:6379 it seems as if it can't find the redis instance at all:
var log = new StringWriter();
try
{
    StackExchange.Redis.ConnectionMultiplexer RedisConnection = StackExchange.Redis.ConnectionMultiplexer.Connect("localhost:6379", log); // This line errors
    var cache = RedisConnection.GetDatabase(1);
    cache.StringSet("Test", "Hello World");
    Console.WriteLine(cache.StringGet("Test"));
}
catch (Exception ex) { }
The connection above always fails with:
"It was not possible to connect to the redis server(s). UnableToConnect on localhost:6379/Interactive, Initializing/NotStarted, last: NONE, origin: BeginConnectAsync, outstanding: 0, last-read: 2s ago, last-write: 2s ago, keep-alive: 60s, state: Connecting, mgr: 10 of 10 available, last-heartbeat: never, global: 9s ago, v: 2.1.0.1"
I just can't figure out what I am doing wrong here. I have tried connecting using other naming methods for localhost. I disabled all firewalls for public and private networks to ensure this wasn't the issue.
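As a sanity check that is independent of the Redis client library, a raw TCP connect to the same host and port shows whether the container's port is reachable from the host at all. A minimal sketch (in Java rather than C#; host and port are taken from the connection string above):

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket()) {
            // Attempt a plain TCP connection; a failure here means the port is not
            // reachable from the host, independent of any Redis client settings.
            s.connect(new InetSocketAddress("localhost", 6379), 2000); // 2 second timeout
            System.out.println("TCP connect to localhost:6379 succeeded");
        }
    }
}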
I am not sure what I need to do here to get a connection established; hopefully someone has some advice for me.
Thanks everyone in advance
Okay, here is my setup. I'm on Heroku running a scrapyd daemon using the scrapy-heroku package https://github.com/dmclain/scrapy-heroku.
I'm having issues with running out of database connections, so I decided to try pooling the database connections using pgbouncer. I'm using this buildpack: https://github.com/heroku/heroku-buildpack-pgbouncer
My procfile was:
web: scrapyd
And I changed it to:
web: bin/start-pgbouncer-stunnel scrapyd
The buildpack is supposed to rewrite your DATABASE_URL when it initializes so that whatever child process is run can just use the DATABASE_URL as normal but will now be connecting to pgbouncer instead of directly to the database.
Within scrapy I'm using adbapi to create a pool for each spider as such:
def from_settings(cls, settings):
    dbargs = dict(
        host=settings['MYSQL_HOST'],
        database=settings['MYSQL_DBNAME'],
        user=settings['MYSQL_USER'],
        password=settings['MYSQL_PASSWD'],
        #charset='utf8',
        #use_unicode=True,
    )
    dbpool = adbapi.ConnectionPool('psycopg2', cp_max=2, cp_min=1, **dbargs)
    return cls(dbpool)
And in my settings this is how I'm getting the DATABASE_URL info:
import os
import urlparse

urlparse.uses_netloc.append("postgres")
url = urlparse.urlparse(os.environ["DATABASE_URL"])
MYSQL_HOST = url.hostname
MYSQL_DBNAME = url.path[1:]
MYSQL_USER = url.username
MYSQL_PASSWD = url.password
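# Note: url.port is not captured here, and no port is passed to the ConnectionPool
# above, so psycopg2 falls back to its default port rather than the one in DATABASE_URL.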
This was working fine before I added the pgbouncer buildpack. Now I get connection errors:
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python2.7/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
    result = f(*args, **kw)
  File "/app/.heroku/python/lib/python2.7/site-packages/scrapy/xlib/pydispatch/robustapply.py", line 57, in robustApply
    return receiver(*arguments, **named)
  File "/tmp/etc/etc/etc/middlewares.py", line 92, in spider_opened
  File "/app/.heroku/python/lib/python2.7/site-packages/psycopg2/__init__.py", line 164, in connect
    conn = _connect(dsn, connection_factory=connection_factory, async=async)
OperationalError: could not connect to server: Connection refused
    Is the server running on host "127.0.0.1" and accepting
    TCP/IP connections on port 5432?
Does anyone have an idea what the issue may be?
I'm trying to use WebLogic with the default HTTPS keystore for development, and I get the following error when I try to connect to the server via a web browser:
ExecuteThread: '0' for queue: 'weblogic.socket.Muxer', fatal: engine already closed. Rethrowing javax.net.ssl.SSLException: bad record MAC
<13-nov-2014 11H48' COT> <Debug> <SecuritySSL> <BEA-000000> <[Thread[ExecuteThread: '0' for queue: 'weblogic.socket.Muxer',5,Thread Group for Queue: 'weblogic.socket.Muxer']]weblogic.security.SSL.jsseadapter: SSLENGINE: Exception occurred during SSLEngine.unwrap(ByteBuffer,ByteBuffer[]).
javax.net.ssl.SSLException: bad record MAC
at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1605)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1573)
at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:971)
at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:876)
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:750)
at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:664)
at weblogic.security.SSL.jsseadapter.JaSSLEngine$5.run(JaSSLEngine.java:134)
at weblogic.security.SSL.jsseadapter.JaSSLEngine.doAction(JaSSLEngine.java:732)
at weblogic.security.SSL.jsseadapter.JaSSLEngine.unwrap(JaSSLEngine.java:132)
at weblogic.socket.JSSEFilterImpl.unwrap(JSSEFilterImpl.java:603)
at weblogic.socket.JSSEFilterImpl.unwrapAndHandleResults(JSSEFilterImpl.java:507)
at weblogic.socket.JSSEFilterImpl.unwrapAndHandleResults(JSSEFilterImpl.java:474)
at weblogic.socket.JSSEFilterImpl.isMessageComplete(JSSEFilterImpl.java:313)
at weblogic.socket.SocketMuxer.readReadySocketOnce(SocketMuxer.java:991)
at weblogic.socket.SocketMuxer.readReadySocket(SocketMuxer.java:928)
at weblogic.socket.NIOSocketMuxer.process(NIOSocketMuxer.java:507)
at weblogic.socket.NIOSocketMuxer.processSockets(NIOSocketMuxer.java:473)
at weblogic.socket.SocketReaderRequest.run(SocketReaderRequest.java:30)
at weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:43)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:147)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:119)
>
I found some links about this, but nothing that helped.
Is there any solution for this?
Remove the WebLogic domain data folder and set it up again. This time I restarted the WebLogic server domain after setting up the domain data folder and enabled SSL afterwards. Then I opened the browser with the https address and it worked.
I want to use SSL with MongoDB. It's not enabled by default so one has to compile from source with the necessary options. I followed the official documentation and got the v2.6.4 binary built and running nicely on a freshly deployed server running Ubuntu 14.04. All good so far.
Next I set up mongod as described in the official docs, following their example of using a self-signed certificate for testing purposes. The relevant part of the config looks like:
...
net:
  bindIp: 127.0.0.1
  port: 27017
  ssl:
    mode: requireSSL
    PEMKeyFile: /opt/mongo/security/mongodb.pem
...
If I then run the client and tell it to use SSL, I connect fine ($ mongo --ssl). FWIW, if I try without the --ssl argument it doesn't connect.
Ok, time to link up via Ruby. I'm on the same server and I try the following ruby script:
require 'rubygems'
require 'mongo'
client = Mongo::MongoClient.new('localhost', 27017, {:ssl => true})
Nope. It's not having it:
/home/test/.rvm/gems/ruby-1.9.3-p547/gems/mongo-1.11.1/lib/mongo/mongo_client.rb:422:in `connect': Failed to connect to a master node at localhost:27017 (Mongo::ConnectionFailure)
from /home/test/.rvm/gems/ruby-1.9.3-p547/gems/mongo-1.11.1/lib/mongo/mongo_client.rb:661:in `setup'
from /home/test/.rvm/gems/ruby-1.9.3-p547/gems/mongo-1.11.1/lib/mongo/mongo_client.rb:177:in `initialize'
from test_mongo_ssl.rb:8:in `new'
from test_mongo_ssl.rb:8:in `<main>'
So, best to make sure that there's nothing wrong with the default connection without SSL. I disable SSL on mongod and restart, then try the Ruby script again, this time without the ssl option:
...
client = Mongo::MongoClient.new('localhost', 27017)
And it's fine. Therefore I feel I've narrowed it down to the Ruby driver and SSL, but beyond that there's little else to go on.
EDIT I tried their Python driver on the same server and used their example program:
from pymongo import MongoClient
c = MongoClient(host="localhost", port=27017, ssl=True)
And that did connect OK. So at least I can feel fairly confident that the mongod is configured properly and the issue lies somewhere within the Mongo Ruby driver. Quite possibly a bug in their current driver (v1.11.1).
UPDATE I've also had success connecting via ssl using the node.js driver:
var mongo = require('mongodb');
var database = new mongo.Db("my_database", new mongo.Server("127.0.0.1", 27017, {ssl:true}), {w:0});
database.open(function(err, db) {
    if (err) throw err;
    db.authenticate('user', 'password', function(err, result) {
        var collection = db.collection('foo');
        collection.findOne(function(err, item) {
            if (err) throw err;
            console.log(item);
            db.close();
        });
    });
});
It therefore seems increasingly likely that there's either a bug in the Ruby driver, or the documentation is incomplete and doesn't accurately explain how to use SSL connections. I've opened a new issue on MongoDB's issue tracker to hopefully get to the bottom of this.
Rather embarrassingly, the solution to this issue was my /etc/hosts file had a typo for the localhost entry:
127.0.0.1 localhost.localdomain locahost
As you can see, it's missing the second letter L in "localhost". (I suspect it went missing during an accidental vim gesture.) To resolve it, I just had to reinstate the missing "l":
127.0.0.1 localhost.localdomain localhost
It's still a mystery why the Python sample worked correctly, and that's why I didn't twig earlier that the problem was with the hosts file.