Rails multi_db sharding middleware not running in production

I have this in my multi_db.rb file:
Rails.application.configure do
  config.active_record.shard_selector = { lock: true }
  config.active_record.shard_resolver = ->(request) {
    puts "MULTI_DB: subdomain = #{request.subdomain}"
    request.subdomain == "fr" ? "french" : "default"
  }
end
Pretty straightforward: route to a different shard based on language. This works fine locally; every time I issue a request, the puts above prints the debug line. But in prod I don't see it at all, so this code is simply not running.
What could I be missing?

Well, this turned out to be a version mismatch. I was running Rails 7.0.4 locally, but prod was running 7.0.2.4. When I updated prod to 7.0.4, things worked fine. I'm not sure whether there was a problem with initializers in 7.0.2.4 or it was just version shenanigans.
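If version drift was indeed the culprit, pinning the exact Rails version in the Gemfile keeps local and production from diverging again. A minimal sketch, assuming a Bundler-managed app:
# Gemfile: pin the exact patch release so every environment resolves the same Rails
gem "rails", "7.0.4"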

Related

meanjs best practice to set up process env for database

In my attempt to get a 'hello world' app working with the meanjs.org product, I cloned 0.4.2 and set up a mongolab account.
I opened config > env > development.js to set up the db URL, where I have this:
db: {
uri: process.env.MONGOHQ_URL || process.env.MONGOLAB_URI || 'mongodb://' + (process.env.DB_1_PORT_27017_TCP_ADDR || 'localhost') + '/mean-dev',
For a trial, I simply replaced process.env.MONGOLAB_URI with my URL from mongolab and everything worked, but I doubt this is the way to go. I see a Procfile there; maybe I should specify process.env.MONGOLAB_URI there? Where should I specify it, so that if I upload the app to Heroku, say, it will set process.env.MONGOLAB_URI and no edit will be needed here?
p.s. I googled and searched SO
Well, just a bit of progress: I went to my gulpfile.js and set up a task as:
gulp.task('setmydb', function () {
  process.env.MONGOLAB_URI =
    'mongodb://mylogin:mypassword@ds157479.mlab.com:57479/meantst1';
});
Then at the end of the file, added into the task sequence:
// Run the project in development mode
gulp.task('default', function (done) {
  runSequence('env:dev', 'lint', ['setmydb', 'nodemon', 'watch'], done);
});
Well, it worked, but I'm still not sure this is how it should be done! Please help me make sure.
Just in case someone else needs it, this is how I solved my problem:
Setting configuration variables | Heroku
I first followed the Heroku getting started guide and edited their app there, adding this route:
app.get('/envtst', function (request, response) {
  var xterm = process.env.XVAR === 'yes' ? 'yes' : 'no';
  response.send(xterm);
});
Then I pushed the app to Heroku and also set up my test variable XVAR via the command line:
heroku config:set XVAR=yes
Finally, I opened the route in a browser and verified it.
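Applying the same approach back to the original problem removes the need for the gulp task entirely: set the variable once on Heroku (same URI as in the gulpfile above, with placeholder credentials) and the existing fallback chain in the config picks it up:
heroku config:set MONGOLAB_URI=mongodb://mylogin:mypassword@ds157479.mlab.com:57479/meantst1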

Chef aws driver tags don't work using Etc.getlogin

I am currently using Chef solo on a Windows machine. I used the fog driver before, which created tags for my instances on AWS. Recently I moved to the aws driver and noticed that it does not handle tagging, so I tried writing my own code to create the tags. One of the tags is "Owner", which tells me who created the instance. For this I am using the following code:
def get_admin_machine_options()
  case get_provisioner()
  when "cccis-environments-aws"
    general_machine_options = {
      ssh_username: "root",
      create_timeout: 7000,
      use_private_ip_for_ssh: true,
      aws_tags: { Owner: Etc.getlogin.to_s }
    }
    general_bootstrap_options = {
      key_name: KEY_NAME,
      image_id: "AMI",
      instance_type: "m3.large",
      subnet_id: "subnet",
      security_group_ids: ["sg-"],
    }
    bootstrap_options = Chef::Mixin::DeepMerge.hash_only_merge(general_bootstrap_options, {})
    return Chef::Mixin::DeepMerge.hash_only_merge(general_machine_options, { bootstrap_options: bootstrap_options })
  else
    raise "Unknown provisioner #{get_setting('CHEF_PROFILE')}"
  end
end
machine admin_name do
  recipe "random.rb"
  machine_options get_admin_machine_options()
  ohai_hints ohai_hints
  action $provisioningAction
end
Now, this works fine on my machine: the instance is created with the proper tags. But when I run the same code on someone else's machine, it doesn't create the tags at all. I find this very weird. Does anyone know what's happening? It's the same code!
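As an aside, given the title: Etc.getlogin can return nil when no login name can be determined, which would make the Owner tag blank rather than missing. A hypothetical hardening of that one hash entry, not the actual fix, which follows below:
# Fall back to environment variables if Etc.getlogin returns nil
aws_tags: { Owner: (Etc.getlogin || ENV["USERNAME"] || ENV["USER"] || "unknown").to_s }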
Okay, so I found the issue: I was using the gem chef-provisioning-aws 1.2.1, while everyone else was on 1.1.1. The 1.1.1 gem does not have support for tagging, so it just skipped right past it.
I uninstalled the old gem and installed the new one, and it worked like a charm!
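To keep the team's workstations from drifting apart again, the provisioner gem can be pinned to a release that supports tagging. A minimal sketch, assuming the cookbooks are driven from a Gemfile:
# Require a release with aws_tags support (1.2.1 per the answer above)
gem "chef-provisioning-aws", ">= 1.2.1"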

Rails 4 assets not being served in production on Windows

I'm currently having the exact same problem as described in this post: Rails not serving assets in production or staging environments.
I am running Rails 4.0.4 in the production environment on Windows 7 (so that could easily be the problem; I can't use Linux, unfortunately). I have run rake assets:clobber to make sure everything is cleaned up, and then RAILS_ENV=production rake assets:precompile, and it succeeds without errors or warnings. All the files appear in my public/assets folder, and using Windows Explorer I can view the text in application.js and application.css, and images display correctly. However, when I visit localhost:3001/assets/application.js it is blank, same with application.css, and image files come up with an error. I have restarted the server each time after changing settings and precompiling.
When I look at the logs it says the page renders successfully, there are no "No route matches" errors like I have seen in other posts. So the assets are being found, but for some reason they aren't being properly served.
Here is my production.rb:
ABC::Application.configure do
  config.cache_classes = true
  config.eager_load = true
  config.consider_all_requests_local = false
  config.action_controller.perform_caching = true
  config.action_dispatch.x_sendfile_header = "X-Sendfile"
  config.serve_static_assets = true
  config.action_mailer.default_url_options = { :host => '' }
  config.i18n.fallbacks = true
  config.active_support.deprecation = :notify
  config.assets.js_compressor = :uglifier
  config.assets.css_compressor = :sass
  config.assets.compile = false
  config.assets.digest = true
end
Any help would be much appreciated, I've been stuck on this for nearly two days!
Just as I finished typing up this question, I finally came across another post which said the headers were being set but there was no body, which sounded like my problem. They were using nginx and fixed the problem by changing the following:
# Specifies the header that your server uses for sending files
# config.action_dispatch.x_sendfile_header = "X-Sendfile" # for apache
config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect' # for nginx
And seeing as I'm not using either Apache or nginx, I commented out both lines and finally my assets were served. It seems obvious in hindsight, but I thought I would post this anyway in case it helps anyone else with the same problem.
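For clarity, this is what the sendfile section of production.rb ends up looking like when no front-end web server is involved; Rails then sends the file bodies itself:
# No Apache or nginx in front of this app, so don't delegate file sending to a web server
# config.action_dispatch.x_sendfile_header = "X-Sendfile"        # for Apache
# config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect'  # for nginx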

bluepill not detecting that processes have, in fact, started successfully, and so creates new ones

I have one (EC2) Ubuntu server where bluepill is working just fine to start and monitor resque processes (and it has done so on other nodes in the past).
I'm setting up a new node, and for some reason on this node bluepill does not recognize that the processes have started and are running, and so it keeps creating new ones. I'm a little baffled about what's causing this. The two nodes are almost identical; they're both EC2 servers provisioned by the same chef scripts. It is true that the one not working is 'production' and the other 'staging', but there's almost no difference due to that.
Any thoughts or suggestions before I fork the github project and start inserting more monitoring to try to figure out what's going on? There's been discussion on this list in the past about troubles with bluepill and resque, but as I said, this is working fine on my staging server and has worked fine on earlier production servers (although I will note that this new production server is on ruby 1.9.3 (vs 1.9.2) and rails 3.2 (vs 3.1)).
Here's my .pill file (or more specifically, my chef cookbook's template file):
ENV["RAILS_ENV"] = "<%= node.chef_environment %>"
ENV["QUEUE"] = "*"
Bluepill.application("zmx_app") do |app|
app.working_dir = "/srv/zmx/current"
app.uid = "root"
app.gid = "root"
2.times do |i|
app.process("resque-#{i}") do |process|
process.group = "resque"
process.start_command = "rake resque:work"
process.pid_file = "/srv/zmx/current/tmp/pids/resque_workers-#{i}.pid"
process.stop_command = "kill -QUIT {{PID}}"
process.daemonize = true
end
end
end
This turned out to be a bug in bluepill, which I have forked, fixed, and submitted a pull request for.
I'm not sure why I didn't realize earlier that there was, in fact, a difference between my two environments: staging/old production was on bluepill 0.0.55, while my new production environment was on 0.0.58.
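Until the fix lands in a release, pinning the known-good version from staging in the provisioning code is one way to sidestep the regression. A hypothetical cookbook snippet (version taken from the working environment above):
# Pin bluepill to the version that staging/old production ran without problems
gem_package "bluepill" do
  version "0.0.55"
  action :install
end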

Heroku logging not working

I've got a Rails 3.1 app deployed on Heroku Cedar, and I'm having a problem with logging. The default Rails logs are working just fine, but when I do something like:
logger.info "log this message"
in my controller, Heroku doesn't log anything. When I deploy my app I see the Heroku message "Injecting rails_log_stdout", so I think calling the logger should work just fine. puts statements do end up in my logs. I've also tried other log levels, like logger.error; nothing works. Has anyone else seen this?
MBHNYC's answer works great, but it makes it difficult to change the log level in different environments without changing the code. You can use this code in your environments/production.rb to honor an environment variable as well as have a reasonable default:
# https://github.com/ryanb/cancan/issues/511
config.logger = Logger.new(STDOUT)
config.logger.level = Logger.const_get((ENV["LOG_LEVEL"] || "INFO").upcase)
Use this command to set a different log level:
heroku config:set LOG_LEVEL=error
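With that in place, only messages at or above the configured level reach the Heroku log stream. For example, after heroku config:set LOG_LEVEL=error:
logger.error "this still shows up in heroku logs"
logger.info "this one is now filtered out"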
I was just having the same issue, and I solved it by using the technique here:
https://github.com/ryanb/cancan/issues/511
Basically, you need to make the logger output to STDOUT; some gems interfere with the logger and seem to hijack its functionality (cancan in my case).
For the click-lazy, just put this in environments/production.rb:
config.logger = Logger.new(STDOUT)
config.log_level = :info
As of the moment, it looks like heroku injects these two plugins when building the slug:
rails_log_stdout - https://github.com/ddollar/rails_log_stdout
rails3_server_static_assets - https://github.com/pedro/rails3_serve_static_assets
Anything sent to the pre-existing Rails logger will be discarded and will not be visible in logs. Just adding this for completeness for anyone else who ends up here.
The problem, as @MBHNYC correctly addressed, is that your logs are not going to STDOUT.
Instead of configuring all that stuff manually, Heroku provides a gem that does it all for you.
Just put
gem 'rails_12factor', group: :production
in your Gemfile, bundle, and that's it!
NOTE: This works both for Rails 3 and 4.
Source: Rails 4 on Heroku
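For reference, rails_12factor is a thin wrapper around two smaller gems, so if you only want one of the two behaviors (stdout logging or static asset serving) you can declare them directly:
gem 'rails_stdout_logging', group: :production
gem 'rails_serve_static_assets', group: :production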
