I need to set up logging with Logstash in a Padrino project. I installed Logstash on a remote server and tried to integrate it with the Padrino project, but I only found one solution:
logger = LogStashLogger.new(type: :udp, host: host, port: 5044) if RACK_ENV == 'staging'
However, it only works when I log through it explicitly, e.g. logger.debug message: 'test', foo: 'bar'.
Can I make all logs be sent to the remote server automatically?
Try this:
Padrino::Logger.logger = LogStashLogger.new(type: :udp, host: host, port: 5044)
I use this:
Padrino::Logger.logger = LogStashLogger.new(type: :udp, host: '172.16.x.x', port: 9999).extend(Padrino::Logger::Extensions)
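For context, here is a minimal sketch of wiring this up in config/boot.rb so it only kicks in on staging. The require line is the logstash-logger gem's standard entry point, but the LOGSTASH_HOST variable is my assumption, not something from the original post:

# config/boot.rb -- sketch only; assumes the logstash-logger gem is in the Gemfile
require 'logstash-logger'

Padrino.before_load do
  if RACK_ENV == 'staging'
    # Replace Padrino's default logger with a UDP LogStashLogger and mix in
    # Padrino's logger extensions so the usual logging helpers keep working.
    Padrino::Logger.logger =
      LogStashLogger.new(type: :udp, host: ENV['LOGSTASH_HOST'], port: 5044)
                    .extend(Padrino::Logger::Extensions)
  end
end

With that in place, ordinary calls like logger.info 'something happened' anywhere in the app should be shipped to the remote Logstash endpoint without any per-call changes.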
I'm having some problems establishing a WebSocket connection to a running DDEV container.
Trying to establish the connection via JS, for example with wss://websocket.ddev.site:3000, always ends up with a failed connection.
WebSocket PHP library used: Ratchet (http://socketo.me/)
I tried setting the external container port in my own docker-compose.yaml, and web_extra_exposed_ports in config.yaml, but nothing has worked so far.
I have managed to get a WebSocket connection running.
To do this, I added an entry to DDEV's config.yaml with the following content:
web_extra_exposed_ports:
  - name: ratchet
    container_port: 3000
    http_port: 3000
    https_port: 3001
After a DDEV restart, it is now possible to establish a WebSocket connection with:
HTTP: 'ws://websocket.ddev.site:3000'
HTTPS: 'wss://websocket.ddev.site:3001'
My working example was built from the tutorial at http://socketo.me/docs/hello-world, calling the URLs above from the browser console.
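A quick way to verify the connection from the browser console, using nothing but the standard WebSocket API (the URL is the HTTPS one from above):

// Paste into the browser console after `ddev restart`
const ws = new WebSocket('wss://websocket.ddev.site:3001');
ws.onopen    = () => console.log('WebSocket connected');
ws.onmessage = (event) => console.log('received:', event.data);
ws.onerror   = (err) => console.error('connection failed', err);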
I'm confused about how the host: parameter in the Endpoint configuration in Phoenix works.
I'm deploying to different Heroku apps (prod and staging), each with its own URL. I want to configure the host to be dynamic, coming from an environment variable, like so:
config :testapp, TestApp.Endpoint,
  http: [port: {:system, "PORT"}],
  url: [scheme: "https", host: {:system, "HOST"}, port: 443],
  cache_static_manifest: "priv/static/manifest.json",
  force_ssl: [rewrite_on: [:x_forwarded_proto]],
  secret_key_base: System.get_env("SECRET_KEY_BASE")
However, after deploy, my asset URLs no longer have the unique hash set by phoenix.digest, which is a deal breaker.
Interestingly, when I hardcode the URL:
config :testapp, TestApp.Endpoint,
  http: [port: {:system, "PORT"}],
  url: [scheme: "https", host: "someapp-staging.herokuapp.com", port: 443],
  cache_static_manifest: "priv/static/manifest.json",
  force_ssl: [rewrite_on: [:x_forwarded_proto]],
  secret_key_base: System.get_env("SECRET_KEY_BASE")
Even if it doesn't match the Heroku app URL, everything still seems to work fine and the asset URLs are correct. E.g. I can deploy to an app with the URL foo.herokuapp.com and everything still works.
The configuration above is from prod.exs. I'm using the Elixir and Phoenix static custom buildpacks, with the following config:
# elixir_buildpack.config
# Elixir version
elixir_version=1.2.3
# Always rebuild from scratch on every deploy?
always_rebuild=true
# ENV variables
config_vars_to_export=(DATABASE_URL HOST)
and
# phoenix_static_buildpack.config
# We can set the version of Node to use for the app here
node_version=5.10.0
# We can set the version of NPM to use for the app here
npm_version=3.8.3
# ENV variables
config_vars_to_export=(DATABASE_URL HOST)
I could probably introduce a separate staging.exs config file and set MIX_ENV=staging, but I would like to understand:
1) Why using {:system, "HOST"} breaks digested asset URLs
2) Why any string works fine on different applications and URLs
Any help is appreciated!
Heroku does not have an environment variable named HOST by default (though PORT is available). I'd double check that you've added it as a config variable in your Heroku settings.
The command heroku run printenv is handy as it will output both base environment variables and config vars added manually or by add-ons.
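For example, setting and checking the variable with the standard Heroku CLI (the app and host names here are placeholders):

# Set the HOST config var on the staging app (placeholder values)
heroku config:set HOST=someapp-staging.herokuapp.com --app someapp-staging

# Confirm it is exposed to the dyno at runtime
heroku run printenv --app someapp-staging | grep -E '^(HOST|PORT)='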
I'm trying to migrate my Capistrano v2 script to the new v3.4 version.
All went well with the development stage: I have one EC2 instance, and the deploy completed without errors.
I'm having some trouble with my production script because I've got a proxy (an EC2 instance) in front of my production servers (also EC2 instances). Everything worked in my Capistrano v2 script; now I'm using cap-ec2 + Capistrano v3.4 to deploy my application only to tagged servers, but when I try it I get "Permission denied": my production servers refuse my key.
Maybe I've set the proxy parameters wrong in my script; can you please help me?
Thanks a lot!!
Here are the proxy parameters:
CAPISTRANO V2 (working)
set :gateway, "deploy#xxx.xxx.xxx.xxx"
set :ssh_options, { :forward_agent => true }
default_run_options[:pty] = true
ssh_options[:port] = "22"
ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "id_rsa_deploy_myapp")]
CAPISTRANO V3 (not working)
require 'net/ssh/proxy/command'
set :ssh_options, {
  user: "deploy",
  keys: %w("~/.ssh/id_rsa_deploy_myapp"),
  auth_methods: %w(publickey),
  forward_agent: true,
  port: 22,
  proxy: Net::SSH::Proxy::Command.new('ssh xxx.xxx.xxx.xxx -W %h:%p')
}
I am following the basic Ghost server installation on an EC2 instance. So far I can run the Ghost server via npm start, and I can see that it is up and running:
Ghost is running...
Listening on 127.0.0.1:2368
Url configured as: http://54.187.25.187/
Ctrl+C to shut down
Here is the Ghost config, config.js:
// ### Development **(default)**
development: {
    // The url to use when providing links to the site, E.g. in RSS and email.
    url: 'http://54.187.25.187/',

    database: {
        client: 'sqlite3',
        connection: {
            filename: path.join(__dirname, '/content/data/ghost-dev.db')
        },
        debug: false
    },

    server: {
        // Host to be passed to node's `net.Server#listen()`
        host: '127.0.0.1',
        // Port to be passed to node's `net.Server#listen()`, for iisnode set this to `process.env.PORT`
        port: '2368'
    }
In the end, I cannot access anything when I type http://54.187.25.187:2368 in the browser. I would really appreciate guidelines on how to set up Ghost properly.
EDIT: The problem is solved already; it was an EC2 security group issue where the ports remained closed after I had set them to open.
For Amazon EC2 we have found you need to change the host to 0.0.0.0.
http://www.howtoinstallghost.com/how-to-setup-an-amazon-ec2-instance-to-host-ghost-for-free-self-install/
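Concretely, that means binding Ghost to all interfaces instead of loopback in the server block of config.js shown in the question; a sketch of just the changed block:

server: {
    // Bind to all interfaces so the instance's public IP can reach Ghost,
    // instead of loopback only
    host: '0.0.0.0',
    // Port to be passed to node's `net.Server#listen()`
    port: '2368'
}

After changing this, restart Ghost and make sure the EC2 security group allows inbound traffic on the port you use (which is what ultimately solved it for the asker).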
require "juggernaut"
Juggernaut.publish("channel1", "Some data")
The code above works if Juggernaut is on the same server as the one running the code. What's the syntax to use a Juggernaut running on another server?
The syntax is the same. I have a juggernaut.yml config file in my config directory with settings for each environment. To run Juggernaut on port 8080 of localhost in development, I have:
development:
  host: 'localhost'
  port: 8080
To run it against a different host, just change that host setting (e.g. 'jugg.someserver.com').
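So a production entry pointing at a remote Juggernaut instance might look like this (the host name is a placeholder):

production:
  host: 'jugg.someserver.com'   # remote machine running the Juggernaut server
  port: 8080

The Juggernaut.publish call itself doesn't change; it just goes over the wire to whichever host the current environment's config points at.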