Writing to a file in production with Sinatra - Ruby

I cannot write to a file for the life of me using Sinatra in production.
In my development environment, I can use Logger without a problem and log STDOUT to a file.
It seems like in production the Logger class is overridden by the Rack middleware's logger, which makes things more complicated.
I simply want to write to a file like this:
post '/' do
  begin
    $log_file = File.open("/home/ec2-user/www/logs/app.log", "w")
    ...do..stuff...
    $log_file.write "INFO -- #{Time.now} --\n #{notification['Message']}"
    ...do..stuff...
  rescue
    $log_file.write "ERROR -- #{Time.now} --" + "\njob failed"
  ensure
    $log_file.close
  end
end
The file doesn't get created when I receive a POST request to '/'.
However, the file DOES get created when I load the app with pry:
pry -r ./app.rb
I am certain the code inside the POST block is actually running, because new jobs are getting added to the database upon receiving requests.
Any help would be greatly appreciated.

I was finally able to get to the bottom of this.
I changed the nginx user in /etc/nginx/nginx.conf from nginx to ec2-user. (Ideally I would just fix the write permissions for the nginx user but this solution suits me for now.)
Then I ran ps aux | grep unicorn and saw that the timestamp next to the process name unicorn master -c unicorn.rb -D was 3 days old!
All this time I had been pushing my code to the production server and restarting nginx, but never killing and restarting the unicorn process.
I removed all the code in my POST block and left only the file-writing part:
post '/' do
  $log_file = File.open("/home/ec2-user/www/logs/app.log", "a")
  $log_file.write("test log string")
  $log_file.close
end
And the file was successfully written to upon receiving a POST request.
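For reference, the same logging can be done with Ruby's Logger pointed at the file instead of raw File.open calls; a minimal sketch, assuming the same log path and that the user running the unicorn workers can write to it:

require 'sinatra'
require 'logger'

configure do
  # Open in append mode so each request adds to the log instead of truncating it.
  log_file = File.open("/home/ec2-user/www/logs/app.log", "a")
  log_file.sync = true                  # flush after every write
  set :app_logger, Logger.new(log_file)
end

post '/' do
  begin
    # ...do..stuff...
    settings.app_logger.info("job queued at #{Time.now}")
  rescue => e
    settings.app_logger.error("job failed: #{e.message}")
  end
  "ok"
end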

How to see print() results in Tarantool Docker container

I am using the tarantool/tarantool:2.6.0 Docker image (the latest at the moment) and writing Lua scripts for the project. I am trying to find out how to see the results of calling the print() function. It's quite difficult to debug my code without print() working.
In the Tarantool console, print() has no effect either.
Using simple print()
The docs say that print() writes to stdout, but I don't see any results when I watch the container's logs with docker logs -f <CONTAINER_NAME>.
I also tried setting the container's log driver to local. Then I got one print in the container's logs, but only once...
The container's /var/log directory is always empty.
Using box.session.push()
Using box.session.push() works fine in the console, but when I use it in a Lua script:
-- app.lua
function log(s)
  box.session.push(s)
end

-- No effect
log('hello')

function say_something(s)
  log(s)
end

box.schema.func.create('say_something')
box.schema.user.grant('guest', 'execute', 'function', 'say_something')
And then call say_something() from the Node.js connector like this:
const TarantoolConnection = require('tarantool-driver');
const conn = new TarantoolConnection(connectionData);
const res = await conn.call('update_links', 'hello');
I get an error:
Any suggestions?
Thanx!
I suppose you've missed io.flush() after the print command.
After I added io.flush() after each print call, my messages started showing up in the logs (docker logs -f <CONTAINER_NAME>).
Also, I'd recommend using the log module for this purpose. It writes to stderr without buffering.
Regarding the error in the connector, I think the Node.js connector simply doesn't support pushes.

Logging to STDOUT in a ruby program (not working in Docker)

I'm dockerizing one of my Ruby apps, but I've got this very strange logging behavior. The log output only seems to appear when the program ENDS, not while it's running. When I run the program (daemon) with docker-compose, all I see is this:
Starting custom_daemon_1
Attaching to custom_daemon_1
However, if I put an exit partway into the program, I see all my puts and logger outputs.
Starting custom_daemon_1
Attaching to custom_daemon_1
custom_daemon_1 | requires
custom_daemon_1 | starting logger
custom_daemon_1 | Starting loads
custom_daemon_1 | Hello base
custom_daemon_1 | Loaded track
custom_daemon_1 | Loaded geo
custom_daemon_1 | Loaded geo_all
custom_daemon_1 | Loaded unique
custom_daemon_1 | D, [2016-11-14T13:31:19.295785 #1] DEBUG -- : Starting custom_daemon...
custom_daemon_1 | D, [2016-11-14T13:31:19.295889 #1] DEBUG -- : Loading xx from disk...
custom_daemon_1 exited with code 0
The top lines without timestamps were just puts debugging, to see if anything would show; the bottom two are created by:
Logger.new(STDOUT)
LOG = Logger.new(STDOUT)
LOG.level = Logger::DEBUG
Then I would call LOG.debug "xxx" or LOG.error "xxx". Any idea why this strange behavior is happening? When I ctrl+c out of the first run, the logs still do not show up.
This was originally run by a .sh script; now I've made it run directly as the CMD of the Dockerfile.
There is a Python question I found asking something similar here. Someone speculates it may have to do with PID 1 processes having logging to STDOUT suppressed.
Test
Here is a test I ran:
puts "starting logger"
Logger.new(STDOUT)
LOG = Logger.new(STDOUT)
LOG.level = Logger::DEBUG
puts "this is 'puts'"
p "this is 'p'"
LOG.debug "this is 'log.debug'"
puts "Starting loads"
outputs:
custom_daemon_1 | starting logger
custom_daemon_1 | this is 'puts'
custom_daemon_1 | "this is 'p'"
Notice that the puts and p lines printed, but as soon as I tried to use LOG.debug it didn't work.
Test 2
I also decided to try the logger using a file, and as expected it logs to the file just fine, through docker.
All I did was change Logger.new(STDOUT) to Logger.new('mylog.log'), and I can tail -f mylog.log and all the LOG.debug messages show up.
As said in this thread: Log issue in Rails4 with Docker running rake task
Try disabling output buffering to STDOUT: $stdout.sync = true
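For example, a minimal sketch combining that setting with the logger setup from the question:

$stdout.sync = true        # flush STDOUT after every write instead of buffering
require 'logger'

LOG = Logger.new(STDOUT)
LOG.level = Logger::DEBUG
LOG.debug "this shows up immediately in docker logs -f"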
I've temporarily fixed this by adding a symlink, based on this docker thread. In the Dockerfile:
RUN ln -sf /proc/1/fd/1 /var/log/mylog.log and set my logger to LOG = Logger.new('/var/log/mylog.log'), but this has two undesired consequences. First, the log file will grow and take up space and will probably need to be managed; I don't want to deal with that. Second, it seems inelegant to have to add a symlink to get logging to work properly... Would love another solution.

Multiple Sidekiq queues for a Sinatra application

We have a Sinatra application written in Ruby. We use Sidekiq and Redis for queue processing.
We have already implemented Sidekiq to queue up jobs that insert into the database, and it has worked fine so far.
Now I want to add another job which will read bulk data from the database and export it to a CSV file.
I do not want both of these jobs to be in the same queue. Instead, is it possible to create a different queue for these jobs in the same application?
Please suggest a solution.
You probably need advanced queue options. Read about them here: https://github.com/mperham/sidekiq/wiki/Advanced-Options
Create a csv queue from the command line (it can be done in the config file as well):
sidekiq -q csv -q default
Then in your worker:
class CSVWorker
  include Sidekiq::Worker
  sidekiq_options :queue => :csv

  # perform method
end
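Jobs pushed through this worker then land on the csv queue as usual; for example (the argument is just a hypothetical job id):

CSVWorker.perform_async(42)   # enqueued onto the :csv queue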
Take a look at the Sidekiq wiki: https://github.com/mperham/sidekiq/wiki/Advanced-Options
By default everything goes into the 'default' queue, but you can specify a queue in your worker:
sidekiq_options :queue => :file_queue
And to tell Sidekiq to process your queue, you have to either declare it in the configuration file:
:queues:
  - file_queue
  - default
or pass it as an argument to the sidekiq process: sidekiq -q file_queue
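Putting the pieces together for the CSV export described in the question, a minimal sketch (the class name and perform argument are hypothetical):

class ExportWorker
  include Sidekiq::Worker
  sidekiq_options :queue => :file_queue

  def perform(table_name)
    # read the bulk data from the database and write it out to a CSV file here
  end
end

# Enqueue from the Sinatra app; the job lands on the file_queue queue
ExportWorker.perform_async('reports')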

How to ignore an error and continue with the rest of the script

Some background: I want to delete an AWS Redshift cluster, and the process takes more than 30 minutes. So this is what I want to do:
Start the deletion
Every 1 minute, check the cluster status (it should be “deleting”)
When the cluster is deleted, the command will fail (because it cannot find the cluster anymore). So log some message and continue with the rest of the script.
This is the command I run in a while loop to check the cluster status after I start the deletion:
resp = redshift.client.describe_clusters(:cluster_identifier=>"blahblahblah")
The above command gives me the cluster status as deleting while the deletion process continues. But once the cluster is completely deleted, the command itself fails because it cannot find the cluster blahblahblah.
Here is the error from command once the cluster is deleted:
/var/lib/gems/1.9.1/gems/aws-sdk-1.14.1/lib/aws/core/client.rb:366:in `return_or_raise': Cluster blahblahblah not found. (AWS::Redshift::Errors::ClusterNotFound)
I agree with this error. But this makes my script exit abruptly. So I want to log a message saying The cluster is deleted....continuing and continue with my script.
I tried the following:
resp = redshift.client.describe_clusters(:cluster_identifier=>"blahblahblah")
|| raise (“The cluster is deleted....continuing”)
I also tried a couple of suggestions mentioned at https://www.ruby-forum.com/topic/133876
But this is not working. My script exits once the above command fails to find the cluster.
Questions:
How can I ignore the error, print my own message saying “The cluster is deleted....continuing”, and continue with the script?
Thanks.
def delete_clusters(clusters = [])
  clusters.each do |target_cluster|
    puts "will delete #{target_cluster}"
    begin
      while (some_condition) do
        resp = redshift.client.describe_clusters(:cluster_identifier => target_cluster)
        # break condition
      end
    rescue AWS::Redshift::Errors::ClusterNotFound => cluster_exception
      raise("The cluster, #{target_cluster} (#{cluster_exception.message}), is deleted....continuing")
    end
    puts "doing other things now"
    # ....
  end
end
@NewAlexandria, I changed your code to look like this:
puts "Checking the cluster status"
begin
resp = redshift.client.describe_clusters(:cluster_identifier=>"blahblahblah")
rescue AWS::Redshift::Errors::ClusterNotFound => cluster_exception
puts "The cluster is deleted....continuing"
end
puts "seems like the cluster is deleted and does not exist"
OUTPUT:
Checking the cluster status
The cluster is deleted....continuing
seems like the cluster is deleted and does not exist
I changed the raise to puts in the line that immediately follows the rescue line in your response. This way I got rid of the RuntimeError that I mentioned in my comment above.
I do not know what the implications of this are. I do not even know whether this is the right way to do it. But it shows the message when the cluster is not found and then continues with the script.
Later I read a lot of articles on Ruby exception/rescue/raise/throw... but that was just too much for me to understand, as I do not come from a programming background at all. So, if you could explain what is going on here, it would really help me get more confidence in Ruby.
Thanks for your time.
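For what it's worth, here is a minimal sketch of the whole polling loop (assuming the same aws-sdk v1 client and cluster identifier as above): begin marks the code that might fail, rescue catches only ClusterNotFound so that this error no longer aborts the script, and break leaves the loop so execution continues below.

loop do
  begin
    redshift.client.describe_clusters(:cluster_identifier => "blahblahblah")
    puts "Cluster still exists (deleting)... checking again in a minute"
    sleep 60
  rescue AWS::Redshift::Errors::ClusterNotFound
    puts "The cluster is deleted....continuing"
    break    # leave the loop instead of letting the error end the script
  end
end
puts "continuing with the rest of the script"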

Thor Start Jekyll then Open Page in Browser

Hi, I want Thor to start a server (Jekyll / Python / PHP etc.) and then open the browser.
However, starting the server is a blocking task.
Is there a way to create a child process in Thor, or spawn a new terminal window? I couldn't see one, and Google gave me no reasonable answers.
My Code
##
# Project Thor File
#
# #use thor list
##
class IanWarner < Thor

  ##
  # Open Jekyll Server
  #
  # #use thor ian_warner:openServer
  ##
  desc "openServer", "Start the Jekyll Server"
  def openServer
    system("clear")
    say("\n\t")
    say("Start Server\n\t")
    system("jekyll --server 4000 --auto")
    say("Open Site\n\t")
    system("open http://localhost:4000")
    say("\n")
  end
end
It looks like you are mixing things up. Thor is, in general, a powerful CLI wrapper. A CLI itself is, in general, single-threaded.
You have two options: either create different Thor descendants and run them as different threads/processes, forcing the open thread/process to wait until the Jekyll server is running (preferred), or hack it with system("jekyll --server 4000 --auto &") (note the ampersand at the end).
The latter will work, but you still need to check that the server has started (it may take a significant amount of time). The second ugly hack to achieve this is to rely on sleep:
say("Start Server\n\t")
system("jekyll --server 4000 --auto &")
say("Wait for Server\n\t")
system("sleep 3")
say("Open Site\n\t")
system("open http://localhost:4000")
Update: it's hard to tell what result you want. If you want to leave your Jekyll server running after your script is finished:
desc "openServer", "Start the Jekyll Server"
def openServer
system "clear"
say "\n\t"
say "Starting Server…\n\t"
r, w = IO.pipe
# Jekyll will print it’s running status to STDERR
pid = Process.spawn("jekyll --server 4000 --auto", :err=>w)
w.close
say "Spawned with pid=#{pid}"
rr = ''
while (rr += r.sysread(1024)) do
break if rr.include?('WEBrick::HTTPServer#start')
end
Process.detach(pid) # !!! Leave the jekyll running
say "Open Site\n\t"
system "open http://localhost:4000"
end
If you want to shut down Jekyll after the page is opened, you need to spawn the call to open as well and Process.waitpid for it.
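A minimal sketch of that variant, assuming the same spawn-and-wait setup as above (the TERM signal used to stop Jekyll is an assumption):

# ...after Jekyll has been spawned and confirmed running (pid as above)...
open_pid = Process.spawn("open http://localhost:4000")
Process.waitpid(open_pid)    # wait for the open call to return
Process.kill("TERM", pid)    # then ask the Jekyll server to shut down (signal choice is an assumption)
Process.wait(pid)            # reap the server process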
