Elixir with Plug and Cowboy: Disable nice error messages on Heroku

I am using heroku-buildpack-elixir to deploy an application to Heroku. My application consists of a simple Plug/Cowboy setup. I noticed that when an unhandled exception occurs, a nice error page appears, showing the stack trace and the lines of code where the error was raised.
This is fine for development environments, but in production I do not want my code to be visible to visitors. How can I disable or override the default behaviour?
I tried setting the MIX_ENV environment variable to prod in Heroku, with no effect.

Wrap the use Plug.Debugger statement in an if clause. Because Mix.env is evaluated at compile time, compiling with MIX_ENV=prod leaves the debugger out entirely, and the app running in the prod environment no longer shows errors as HTML pages. source
if Mix.env == :dev do
  use Plug.Debugger, otp_app: :my_app
end

Related

Sinatra app stalls in production mode when deployed via Vagrant to a VM, provisioned by Varnish, and run as a daemon

I have a Sinatra app; when I run it in production it acts oddly.
The first request works: assets download and the page loads.
If you refresh the page, however, the request just stalls, and nothing gets logged to the log file.
I'm running with sinatra-asset-pack and I did precompile the assets before starting it.
I'd post code, but I'm not sure what would be needed to work out the issue.
EDIT: it works fine on my own box, but when I deploy it to a VM using Vagrant it just seizes up in production mode; it's fine in development mode though.
EDIT: I was able to get it to spit out this error:
Errno::EPIPE
Broken pipe # io_write -
And narrow it down to an action. Posting the code of the action is pointless, though, as it doesn't log anything, and the first line of the action is a logging call, so I'm not sure the action gets run at all; the logging was added after the problem hit, so whatever the cause is, I don't think it's that.
EDIT: the error actually occurs here (base.rb, line 1144, of Sinatra):
1142 def dump_errors!(boom)
1143 msg = ["#{Time.now.strftime("%Y-%m-%d %H:%M:%S")} - #{boom.class} - #{boom.message}:", *boom.backtrace].join("\n\t")
1144 @env['rack.errors'].puts(msg)
1145 end
EDIT: OK, so when I run the deployment command manually it works fine. Weirdly, the output from the server is still written to the terminal despite the process being forked; I wonder if that's the problem. The broken pipe is the terminal that no longer exists (when deployed via Chef), and as such it's breaking... maybe?
OK, it turns out that the broken pipe was the cause: for some reason, even after being forked, the process was still trying to write stdout and stderr to the terminal.
However, because the terminal no longer existed (the app is started by Chef), it could no longer write its output and thus locked up. Starting the app manually on the VM allowed it to work, and further evidence for this conclusion is that adding a redirect (>> app.log 2>&1) to the end of the start command also allowed the app to work.
Why Sinatra is still writing logs to the terminal instead of the file I don't know, and I need to work that out, but the main question of why it stalls is solved.
EDIT:
In the end I'm just doing this:
$stderr = @log.file
$stdout = @log.file
to redirect it to the same place as my logs go, so it should be fine now... I think?
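A slightly more robust variant is to reopen the standard stream file descriptors rather than only reassigning the globals, so that anything writing to STDOUT/STDERR directly also lands in the log. A minimal sketch, assuming the app.log path from the start command above (the log path is a placeholder):
# At app boot: send everything that would normally go to the terminal
# into the app log instead, so a vanished terminal (e.g. when started
# by Chef) can't cause Errno::EPIPE.
log_file = File.open('app.log', 'a')
log_file.sync = true              # flush each write immediately

STDOUT.reopen(log_file)           # rebinds file descriptor 1
STDERR.reopen(log_file)           # rebinds file descriptor 2
$stdout = STDOUT
$stderr = STDERR
Because reopen swaps the underlying file descriptor, even subprocesses and C extensions that inherit fds 1 and 2 end up writing to the log.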

Unable to run tests in Microsoft Test Manager: data and diagnostics error

I am trying to run a manual test case in Microsoft Test Manager (2013) using a test lab. When I run the test it shows the following error:
Data and diagnostics cannot be collected
An error occurred while initializing diagnostic data adapters. Abort your session and start again.
Timed out while initializing data and diagnostics adapters.
If the Windows Firewall does not have Microsoft Test Manager added to the exceptions list and set to be enabled, the initialization for the data and diagnostics adapters can time out. Verify that the exceptions list for the Windows Firewall includes Microsoft Test Manager (mtm.exe). For more information about this, see:
http://go.microsoft.com/fwlink/?LinkId=83134
For more information about issues that can cause initialization of data and diagnostics adapters to time out, see:
http://go.microsoft.com/fwlink/?LinkId=254562.
I have been searching for an answer and trying various things for a couple of days now and can't resolve the issue. Helpfully, of the two links given in the error, the first did not help and the second didn't link to a working page. People who have posted a similar error in forums have resolved their issues by correcting the firewall; however, the firewall on my local PC and the firewall on the virtual machine are both off.
This is what I have checked:
Firewalls are all off.
My test agent is set up on the virtual machine and shows under my test controllers correctly.
The lab has a ready status and I can see the agent is online.
My test settings are currently set up to collect no data (in the hope that would help but it has not).
The test environment for running the tests is set to the correct environment.
I have tried extending the time out period in the mtm.exe.config and the QTAgent configs on the remote machine for when I kick off the test runner.
I have checked the firewall logs on the virtual machine when test runner fails and there appears to be no issues there.
As you can probably tell I have been trying to fix this for a while!
Has anyone seen this error before and been able to resolve it? Any guidance on some things to try that I haven't already listed would be really appreciated.
Thank you for your help!
I have resolved this issue: for a manual test, the test agent needs to be installed on the local machine as well as on the virtual machine used for the automated tests. This was the step I had missed. Once the test agent was installed on my local machine, the error disappeared. It is a misleading error message, but if you have this problem, make sure the test agent is on all of the machines in the environment and configured correctly. Part of the issue was my misunderstanding: I thought I could use the VM for my manual tests too, which is not the case.

Ruby Stack failed to deploy on Google Developers Console

I tried to deploy the Ruby stack using Google Developers Console, but with no success. I tried several times on other projects; the error was always the same (below).
Do you have any idea why it keeps failing?
2014/10/23 15:59:44
rubyStackBox: PENDING
2014/10/23 15:59:55~2014/10/23 16:06:01
rubyStackBox: DEPLOYING
2014/10/23 16:06:11
rubyStackBox: DEPLOYMENT_FAILED
Replica rubystackbox-eaeo failed with status PERMANENTLY_FAILING: Replica State changed to PERMANENTLY_FAILING. Replica was unhealthy 2 consecutive times.
I replicated the issue you experienced several times and it also failed. What finally worked was playing with the zones/regions when deploying the Ruby stack:
Developers console > Click-to-deploy > Set MySQL password > Advanced Options, choose a different zone and click Deploy.
Another useful tool when investigating this is Console Output. Even if the deployment fails, you can go to the VM instance and check View Output towards the bottom of the page. It will list all the packages and any errors encountered. The following command will achieve the same thing:
$ gcloud compute instances get-serial-port-output <INSTANCE_NAME> --project <PROJECT_ID> --zone <ZONE_NAME>
Please advise if you are still seeing issues.

How to run/debug dashing dashboards on a client PC with Eclipse

I am trying to create a dashboard for work with Dashing. I have an openSUSE server set up (command-line only, no X server) and Dashing running on it successfully. I want to be able to use my Windows 7 work PC to configure the Ruby-based job scripts, etc. I have Eclipse set up with Ruby, installed Ruby on Windows, and have the debugger configured in Eclipse. Git is also set up on the server for the dashing folder. I have two questions about my methods:
Question 1:
Now, I can set breakpoints in the Ruby jobs and debug my variables, etc., but the debugger throws an "uninitialised constant" error when it reaches the SCHEDULER part (see the code pasted below). I'm guessing Eclipse doesn't understand how to run/debug the Dashing-specific code; apparently Dashing uses rufus-scheduler. How can I get Eclipse to run and/or debug my Dashing dashboards?
Example of a ruby job in dashing, with rufus-scheduler, from the dashing website:
# :first_in sets how long it takes before the job is first run. In this case, it is run immediately
SCHEDULER.every '1m', :first_in => 0 do |job|
  send_event('karma', { current: rand(1000) })
end
Question 2:
Currently the way I move code from my Windows PC to openSUSE is via git. This means that when I want to test any change (simple or complicated) I must commit on the client and then push to the git branch on the server, so my commit history is going to be filled with test changes. Is there a better way to do this? (I'm guessing the only way around this is to create a test web server on my client PC.)
Thanks for any help you can provide.
Try "dashing job JOB_NAME AUTH_TOKEN".
The AUTH_TOKEN is stored in config.ru.
Dennis
me@host:~/Projects/my-dashing$ dashing --help
Tasks:
dashing generate (widget/dashboard/job) NAME # Creates a new widget, dashboard, or job.
dashing help [TASK] # Describe available tasks or one specific task
dashing install GIST_ID # Installs a new widget from a gist.
dashing job JOB_NAME AUTH_TOKEN(optional) # Runs the specified job. Make sure to supply your auth token if you have one set.
dashing new PROJECT_NAME # Sets up ALL THE THINGS needed for your dashboard project.
dashing start # Starts the server in style!
me@host:~/Projects/my-dashing$
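If you want to step through a job in Eclipse without starting the Dashing server at all, another option is to supply the constants the job expects before loading it. A minimal sketch, assuming the rufus-scheduler gem (3.x) is installed; the jobs/karma.rb path and the stubbed send_event are placeholders:
# debug_job.rb - run a single Dashing job outside the server so it can
# be stepped through in a plain Ruby debugger. SCHEDULER and send_event
# are normally provided by Dashing; here we define minimal stand-ins.
require 'rufus-scheduler'

SCHEDULER = Rufus::Scheduler.new

# Stub: print the event instead of pushing it to the dashboard.
def send_event(id, data)
  puts "event #{id}: #{data.inspect}"
end

load 'jobs/karma.rb'   # path to the job under test

SCHEDULER.join         # keep the process alive so the job keeps firing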

Detect ELMAH exceptions during integration (WATIN + NUNIT) tests

Here is my scenario:
I have a suite of WATIN GUI tests that run against an ASP.NET MVC3 web app on every build in CI. The tests verify that the GUI behaves as expected when given certain inputs.
The web app uses ELMAH XML files for logging exceptions.
Question: How can I verify that no unexpected exceptions occurred during each test run? It would also be nice to provide a link to the ELMAH detail of the exception in the NUNIT output if an exception did occur.
Elmah runs in the web application which is a separate process from the test runner. There is no easy way to directly intercept unhandled errors.
The first thing that comes to mind is to watch the folder (App_Data?) that holds the Elmah XML error reports.
You could clear error reports from the folder when each test starts and check whether it is still empty at the end of the test. If the folder is not empty you can copy the error report(s) into the test output.
This approach is not bulletproof. It could happen that an error has occurred but the XML file has not yet been (completely) written when you check the folder, for example when your web application is timing out. You could also run into file-locking issues if you try to read an XML file that is still being written.
Instead of reading XML files from disk you could configure Elmah to log to a database and read the error reports from there. That would help you get around file locking issues if they occur.
Why not go to the ELMAH reporting view (probably elmah.axd) in the setup of your WatiN test and read the timestamp of the most recent error logged there? Then do the same after your test and assert that the timestamps are the same.
It would be easy to read the url of this most recent error from the same row and record that in the message of your failing assert which would then be in your nunit output.
This is what my tests do:
During test setup record the time
Run test
During test teardown, browse to elmah.axd/download and parse the ELMAH CSV
Filter out rows that don't match the user running the WATIN tests and rows where the UNIX time is before the time recorded in (1)
If any rows remain, report the exception message and the error URL to NUNIT (a sketch of these last steps follows below)
Seems to work well, we have already caught a couple of bugs that wouldn't have surfaced until a human tested the app.
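The teardown logic is straightforward to sketch. The tests themselves run under NUNIT, but here is the same idea in Ruby for illustration; the elmah.axd/download CSV endpoint comes from the answer above, while the host, test user, and column names ('User', 'UnixTime', 'Message', 'Url') are assumptions to check against the header row of your own download:
require 'csv'
require 'net/http'

# Fetch the ELMAH error log as CSV and keep only rows logged by the
# test user since the recorded start time.
def new_elmah_errors(base_url, test_user, started_at)
  body = Net::HTTP.get(URI("#{base_url}/elmah.axd/download"))
  CSV.parse(body, headers: true).select do |row|
    row['User'] == test_user && row['UnixTime'].to_i >= started_at.to_i
  end
end

started_at = Time.now    # recorded in test setup
# ... run the WATIN test ...
errors = new_elmah_errors('http://myapp.example', 'watin-test-user', started_at)
unless errors.empty?
  messages = errors.map { |e| "#{e['Message']} (#{e['Url']})" }
  raise "Unexpected ELMAH errors:\n#{messages.join("\n")}"
end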
