Logging to STDOUT in a ruby program (not working in Docker) - ruby

I'm dockerizing one of my ruby apps, but I've got some very strange logging behavior. Output only seems to appear when the program ENDS, not while it's running. When I run the program (a daemon) with docker-compose, all I see is this:
Starting custom_daemon_1
Attaching to custom_daemon_1
However, if I put an exit partway through the program, I see all my puts and logger output:
Starting custom_daemon_1
Attaching to custom_daemon_1
custom_daemon_1 | requires
custom_daemon_1 | starting logger
custom_daemon_1 | Starting loads
custom_daemon_1 | Hello base
custom_daemon_1 | Loaded track
custom_daemon_1 | Loaded geo
custom_daemon_1 | Loaded geo_all
custom_daemon_1 | Loaded unique
custom_daemon_1 | D, [2016-11-14T13:31:19.295785 #1] DEBUG -- : Starting custom_daemon...
custom_daemon_1 | D, [2016-11-14T13:31:19.295889 #1] DEBUG -- : Loading xx from disk...
custom_daemon_1 exited with code 0
The top lines without timestamps were just puts debugging, to see if anything would show; the bottom two are created by:
require 'logger'
Logger.new(STDOUT)
LOG = Logger.new(STDOUT)
LOG.level = Logger::DEBUG
Then I call LOG.debug "xxx" or LOG.error "xxx". Any idea why this strange behavior is happening? When I ctrl+c out of the first run, the logs still do not show up.
This was originally run by a .sh script; now I've changed it to run directly as the CMD of the Dockerfile.
There is a python question I found asking something similar here. Someone speculates it may have to do with PID 1 processes having logging to STDOUT suppressed.
Test
Here is a test I ran:
puts "starting logger"
Logger.new(STDOUT)
LOG = Logger.new(STDOUT)
LOG.level = Logger::DEBUG
puts "this is 'puts'"
p "this is 'p'"
LOG.debug "this is 'log.debug'"
puts "Starting loads"
outputs:
custom_daemon_1 | starting logger
custom_daemon_1 | this is 'puts'
custom_daemon_1 | "this is 'p'"
Notice that puts and p printed, but as soon as I tried to use LOG.debug, nothing showed up.
Test 2
I also decided to try the logger with a file, and as expected it logs to the file just fine through docker.
All I did was change Logger.new(STDOUT) to Logger.new('mylog.log'), and I can tail -f mylog.log and all the LOG.debug entries show up.

As said in this thread, Log issue in Rails4 with Docker running rake task:
Try disabling output buffering to STDOUT:
$stdout.sync = true
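Combining that with the logger setup from the question, a minimal sketch (same names as in the question) would be:
require 'logger'

# Flush STDOUT after every write instead of letting Ruby buffer it;
# buffered output is the usual reason logs only appear when the
# process exits.
$stdout.sync = true

LOG = Logger.new(STDOUT)
LOG.level = Logger::DEBUG
LOG.debug "this shows up immediately in the docker-compose output"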

I've temporarily fixed this by adding a symlink, based on this docker thread. In the Dockerfile:
RUN ln -sf /proc/1/fd/1 /var/log/mylog.log
and I set my logger to:
LOG = Logger.new('/var/log/mylog.log')
But this has two undesired consequences. First, the log file will grow and take up space and probably need to be managed - I don't want to deal with that. Second, it seems inelegant to have to add a symlink to get logging to work properly... Would love another solution.
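For reference, the moving parts of that workaround side by side, with comments (paths as in the answer above):
# In the Dockerfile:
#   RUN ln -sf /proc/1/fd/1 /var/log/mylog.log
# /proc/1/fd/1 is stdout of PID 1 (the container's main process), so the
# symlink makes the "log file" an alias for the container's stdout.
require 'logger'

LOG = Logger.new('/var/log/mylog.log')  # writes end up on the container's stdout
LOG.level = Logger::DEBUG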

Async::WebSocket unable to connect on Windows 10

I'm attempting to write a Ruby SDK for the Stream Deck, a product that is basically a fancy hardware AutoHotkey: the user programs buttons with customized icons to do whatever they please, including making directories to organize an effectively unlimited number of buttons. It has a language-agnostic API wherein it runs your script or compiled app with the arguments -port, -pluginUUID, -registerEvent, and -info. It runs a websocket on localhost at the port specified in the args, and on opening a connection you are to send a JSON string with your event and UUID as specified in the args.
I've gotten Ruby 3.0.5 running within a plugin with console output, but I'm having trouble getting it to talk to the websocket. I'm using SDPL to load my script (intended only for testing):
#!/usr/bin/env ruby
require "json"
require "async/websocket/client"
require "async/http/endpoint"
include Async
include Async::HTTP
# Parse arguments
_, port, _, UUID, _, REGISTER_EVENT, _, *info = ARGV
PORT = port.to_i
INFO = JSON.parse info.join(" ")
# Debug output prints properly
p PORT
p UUID
p REGISTER_EVENT
p INFO
Async do |task|
  WebSocket::Client.connect(Endpoint.parse "http://localhost:#{PORT}") do |ws|
    ws.write({ event: REGISTER_EVENT, uuid: UUID }.to_json)
    ws.flush
    puts "Opened!"
    while msg = ws.read
      puts "Message:"
      puts msg
    end
  end
end
The arguments output as expected, then it hangs. If this code is run in WSL (with modifications to hardcode the port and an open plugin UUID), it talks to the Stream Deck as expected. Possible issue with the module on Windows 10? On RubyInstaller 3.1.2 the situation is even worse: it crashes with the following error:
0.0s warn: Async::Task [oid=0x280] [ec=0x294] [pid=32436] [2022-12-13 00:36:46 -0500]
| Task may have ended with unhandled exception.
| Errno::EBADF: Bad file descriptor
| → <internal:io> 63
| C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/io-event-1.1.4/lib/io/event/selector/select.rb 206

mandatory-script-verify-flag-failed (Script evaluated without error but finished with a false/empty top stack element)

I'm trying to build a Bitcoin raw transaction for the Bitcoin Testnet in Golang, but when I try to send it I get an error:
mandatory-script-verify-flag-failed (Script evaluated without error but finished with a false/empty top stack element)
Here is raw transaction:
01000000014071216d4d93d0e3a4d88ca4cae97891bc786e50863cd0efb1f15006e2b0b1d6010000008a4730440220658f619cde3c5c5dc58e42f9625ef71e8279f923af6179a90a0474a286a8b9c60220310b4744fa7830e796bf3c3ed9c8fea9acd6aa2ddd3bc54c4cb176f6c20ec1be0141045128ccd27482b3791228c6c438d0635ebb2fd6e78aa2d51ea70e8be32c9e54daf29c5ee7a3752b5896e5ed3693daf19b57e243cf2dcf27dfe5081cfcf534496affffffff012e1300000000000017a914de05d1320add0221111cf163a9764587c5a171ba8700000000
Tried to debug with btcdeb:
./btcdeb --tx=01000000014071216d4d93d0e3a4d88ca4cae97891bc786e50863cd0efb1f15006e2b0b1d6010000008a4730440220658f619cde3c5c5dc58e42f9625ef71e8279f923af6179a90a0474a286a8b9c60220310b4744fa7830e796bf3c3ed9c8fea9acd6aa2ddd3bc54c4cb176f6c20ec1be0141045128ccd27482b3791228c6c438d0635ebb2fd6e78aa2d51ea70e8be32c9e54daf29c5ee7a3752b5896e5ed3693daf19b57e243cf2dcf27dfe5081cfcf534496affffffff012e1300000000000017a914de05d1320add0221111cf163a9764587c5a171ba8700000000 --txin=02000000000101394187cababd1c18dfc9d30d6325167aa654b1d35505ab77cd1b96562fda5d500000000017160014c0a4f9f451ea319f67c6d2535c1e41bd5d333214feffffff02f009aab80000000017a91455f5b5f3afa4751a54205941a45a14b27ad99be787ec8016000000000017a91435ac960b988964007c167c38ea724e034123e6b1870247304402205d6b22bcaf1a58bc41224eecc7437eef0db9b7e7fb709826314a8bd73adb330702204fbbbd49747d75331a89e2f7b486e0b7a786ecef3229b8e3fec0c4be491921c301210233eab1d60449c393c8f22d4b5d98ee103060d9644dc2af665e607a62e2151bbc30091e00
btcdeb 0.4.21 -- type `./btcdeb -h` for start up options
LOG: sign segwit taproot
notice: btcdeb has gotten quieter; use --verbose if necessary (this message is temporary)
input tx index = 0; tx input vout = 1; value = 1474796
got witness stack of size 0
14 op script loaded. type `help` for usage information
script | stack
-------------------------------------------------------------------+--------
30440220658f619cde3c5c5dc58e42f9625ef71e8279f923af6179a90a0474a... |
045128ccd27482b3791228c6c438d0635ebb2fd6e78aa2d51ea70e8be32c9e5... |
<<< scriptPubKey >>> |
OP_HASH160 |
35ac960b988964007c167c38ea724e034123e6b1 |
OP_EQUAL |
<<< P2SH script >>> |
5128ccd2 |
OP_DEPTH |
OP_SIZE |
OP_NOP4 |
OP_PICK |
28c6c438d0635ebb2fd6e78aa2d51ea70e8b |
OP_UNKNOWN |
#0000 30440220658f619cde3c5c5dc58e42f9625ef71e8279f923af6179a90a0474a286a8b9c60220310b4744fa7830e796bf3c3ed9c8fea9acd6aa2ddd3bc54c4cb176f6c20ec1be01
Can anybody give advice on where to look?
Judging from the examples in the btcdeb documentation, you should expect to see a valid script message when starting btcdeb if the script validates correctly.
btcdeb will still allow you to step through the script with the step command, but because the script is invalid in the first place, this may not tell you much, except that it halts after reaching <<< P2SH script >>>, thinking that is the end of the script.
The most obvious fix would be to remove OP_UNKNOWN, which represents an opcode that btcdeb did not understand, but there are probably other errors lurking that also prevent the script from validating. You could try removing the end of the script and building it back up incrementally, testing with the debugger, until it works.

User-defined (or omitted) username when using the logger(1) linux command

I am trying to log some custom messages. The problem is that if I use the logger command, the username running the command is also logged. I would like to omit that info so I can manually fill in anything I want. I have read the manual but could not find anything like that. I also tried implementing it in a script (Java) but did not quite succeed.
Example. Now I am seeing this:
Mar 2 10:31:28 $HOSTNAME $USERNAME: Hello world!
What I would like to see is this:
Mar 2 10:31:28 suhosin[666]: ALERT - canary mismatch on efree() - heap overflow detected (attacker '000.000.000.000', file 'xyz')
Use the -t option to set the tag.
$ logger -t 'nobody' 'hello'
Produces log:
Feb 28 10:25:37 myhostname nobody: hello
Relevant man page section:
-t, --tag tag
Mark every line to be logged with the specified tag. The default tag is the name of the user logged in on the terminal (or a user name based on effective user ID).
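Since the question also mentions doing this from a script, here is a minimal Ruby sketch of the same thing (tag and message borrowed from the desired output above):
# Call logger(1) with an explicit tag; brackets are allowed in the tag,
# so the fake PID from the desired output works too.
tag = "suhosin[666]"
message = "ALERT - canary mismatch on efree() - heap overflow detected"
system("logger", "-t", tag, message)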

Writing to a file in production with sinatra

I cannot write to a file for the life of me using Sinatra in production.
In my development environment, I can use Logger without a problem and log STDOUT to a file.
It seems like in production, the Logger class is overwritten by the Rack middleware's Logger, which makes things more complicated.
I simply want to write to a file like this:
post '/' do
  begin
    $log_file = File.open("/home/ec2-user/www/logs/app.log", "w")
    ...do..stuff...
    $log_file.write "INFO -- #{Time.now} --\n #{notification['Message']}"
    ...do..stuff...
  rescue
    $log_file.write "ERROR -- #{Time.now} --" + "\njob failed"
  ensure
    $log_file.close
  end
end
The file doesn't get created when I receive a POST request to '/'.
However the file DOES get created when I load the app running pry:
pry -r ./app.rb
I am certain the code inside the POST block is actually running, because new jobs are getting added to the database upon receiving requests.
Any help would be greatly appreciated.
I was finally able to get to the bottom of this.
I changed the nginx user in /etc/nginx/nginx.conf from nginx to ec2-user. (Ideally I would just fix the write permissions for the nginx user but this solution suits me for now.)
Then I ran ps aux | grep unicorn and saw that the timestamp next to the process name unicorn master -c unicorn.rb -D was 3 days old!!
All this time I was pushing my code to the production server and restarting nginx, but I never killed and restarted the unicorn process.
I removed all the code in my POST block and left only the file creation part:
post '/' do
  $log_file = File.open("/home/ec2-user/www/logs/app.log", "a")
  $log_file.write("test log string")
  $log_file.close
end
And the file was successfully written to upon receiving a POST request.
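As an aside, the stdlib Logger can do the same with the file opened once at boot (a sketch reusing the path from the question); Logger.new(path) opens the file in append mode, so there is no per-request open and close:
require 'sinatra'
require 'logger'

# Opened once at startup; Logger appends and handles flushing.
LOG = Logger.new("/home/ec2-user/www/logs/app.log")

post '/' do
  LOG.info "received POST"
  # ...do..stuff...
end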

How to print capistrano current thread hash?

An example output from capistrano:
INFO [94db8027] Running /usr/bin/env uptime on leehambley#example.com:22
DEBUG [94db8027] Command: /usr/bin/env uptime
DEBUG [94db8027] 17:11:17 up 50 days, 22:31, 1 user, load average: 0.02, 0.02, 0.05
INFO [94db8027] Finished in 0.435 seconds command successful.
As you can see, each line starts with "{type} {hash}". I assume the hash is some unique identifier for either the server or the running thread, as I've noticed that if I run capistrano over several servers, each one has its own distinct hash.
My question is, how do I get this value? I want to manually output some message during execution, and I want to be able to match my output, with the server that triggered it.
Something like: puts "DEBUG ["+????+"] Something happened!"
What do I put in the ???? there? Or is there another, built in way to output messages like this?
For reference, I am using Capistrano Version: 3.2.1 (Rake Version: 10.3.2)
This hash is a command uuid. It is tied not to the server but to the specific command currently being run.
If all you want is to distinguish between servers, you may try the following:
task :some_task do
  on roles(:app) do |host|
    debug "[#{host.hostname}:#{host.port}] something happened"
  end
end
