How can I send rescued exceptions to NewRelic?
I have a test file rpm.rb:
require 'newrelic_rpm'
NewRelic::Agent.manual_start
begin
  "2" + 3
rescue TypeError => e
  puts "whoa!"
  NewRelic::Agent.agent.error_collector.notice_error(e)
end
I start it with:
NEWRELIC_ENABLE=true ruby rpm.rb
The content of log/newrelic_agent.log:
[05/14/13 ... (87691)] INFO : Reading configuration from config/newrelic.yml
[05/14/13 ... (87691)] INFO : Environment: development
[05/14/13 ... (87691)] WARN : No dispatcher detected.
[05/14/13 ... (87691)] INFO : Application: xxx (Development)
[05/14/13 ... (87691)] INFO : Installing Net instrumentation
[05/14/13 ... (87691)] INFO : Audit log enabled at '.../log/newrelic_audit.log'
[05/14/13 ... (87691)] INFO : Finished instrumentation
[05/14/13 ... (87691)] INFO : Reading configuration from config/newrelic.yml
[05/14/13 ... (87691)] INFO : Starting Agent shutdown
The content of log/newrelic_audit.log:
[2013-05-14 ... (87691)] : REQUEST: collector.newrelic.com:443/agent_listener/12/74901a11b7ff1a69aba11d1797830c8c1af41d56/get_redirect_host?marshal_format=json
[2013-05-14 ... (87691)] : REQUEST BODY: []
Nothing is reported to NewRelic. Why?
I saw this already: Is there way to push NewRelic error manually?
I just spent an hour trying to test this from production console.
This is what finally got it working:
Make sure monitor_mode: true is set in newrelic.yml for the appropriate environment
Make sure to run the rails console with NEW_RELIC_AGENT_ENABLED=true NEWRELIC_ENABLE=true rails c
Make sure to use the public API method NewRelic::Agent.notice_error(exception)
Naturally, .notice_error will work as expected when called from a non-console process like the web server.
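Putting those points together for a standalone script like rpm.rb, a minimal sketch might look like this (it assumes monitor_mode is on for the environment and the license key is valid; the explicit shutdown call is my addition so a short-lived process flushes its data before exiting):
require 'newrelic_rpm'

# Start the agent manually since there is no web dispatcher.
NewRelic::Agent.manual_start

begin
  "2" + 3
rescue TypeError => e
  # Public API call instead of reaching into the agent internals.
  NewRelic::Agent.notice_error(e)
end

# Flush pending data before the process exits.
NewRelic::Agent.shutdown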
You need to set monitor_mode: true in config/newrelic.yml
development:
  <<: *default_settings
  # Turn off communication to New Relic service in development mode (also
  # 'enabled').
  # NOTE: for initial evaluation purposes, you may want to temporarily
  # turn the agent on in development mode.
  monitor_mode: true
  # Rails Only - when running in Developer Mode, the New Relic Agent will
  # present performance information on the last 100 transactions you have
  # executed since starting the mongrel.
  # NOTE: There is substantial overhead when running in developer mode.
  # Do not use for production or load testing.
  developer_mode: true
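A quick way to confirm the setting is actually picked up is to read the agent's resolved configuration from a console; this is just a sanity check and assumes an agent version that exposes NewRelic::Agent.config:
require 'newrelic_rpm'

# Prints true when the agent will report data for this environment,
# false when monitor_mode is still off.
puts NewRelic::Agent.config[:monitor_mode]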
Related
My Vaadin + Spring Boot app suddenly stops starting locally. I have already purged the IntelliJ caches, rebuilt the project and ran mvn clean. Can anyone help me find the issue? At the following point the log stops, the frontend compilation never finishes, and the app is not reachable in the browser:
------------------ Starting Frontend compilation. ------------------
2021-12-21 21:07:47.371 INFO 4376 --- [onPool-worker-3] dev-webpack : Running webpack to compile frontend resources. This may take a moment, please stand by...
2021-12-21 21:07:49.753 INFO 4376 --- [onPool-worker-3] dev-webpack : Started webpack-dev-server. Time: 2383ms
Hello all, I have a Spring scheduler job which has to run on Google Cloud Run with a scheduled time gap.
It works perfectly fine with a local docker-compose deployment; it gets triggered without any issue.
However, in the Google Cloud Run service, even with CPU throttling off (which keeps the CPU allocated at all times), it stops working after the first run.
I will paste the Dockerfile below for reference, but I am pretty sure it is fine.
FROM maven:3-jdk-11-slim AS build-env
# Set the working directory to /app
WORKDIR /app
COPY pom.xml ./
COPY src ./src
COPY css-common ./css-common
RUN echo $(ls -1 css-common/src/main/resources)
# Build and create the common jar
RUN cd css-common && mvn clean install
# Build the job
RUN mvn package -DskipTests
# It's important to use OpenJDK 8u191 or above that has container support enabled.
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM openjdk:11-jre-slim
# Copy the jar to the production image from the builder stage.
COPY --from=build-env /app/target/css-notification-job-*.jar /app.jar
# Run the web service on container startup
CMD ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
And below is the Terraform script used for the deployment:
resource "google_cloud_run_service" "job-staging" {
name = var.cloud_run_job_name
project = var.project
location = var.region
template {
spec {
containers {
image = "${var.docker_registry}/${var.project}/${var.cloud_run_job_name}:${var.docker_tag_notification_job}"
env {
name = "DB_HOST"
value = var.host
}
env {
name = "DB_PORT"
value = 3306
}
}
}
metadata {
annotations = {
"autoscaling.knative.dev/maxScale" = "4"
"run.googleapis.com/vpc-access-egress" = "all-traffic"
"run.googleapis.com/cpu-throttling" = false
}
}
}
timeouts {
update = "3m"
}
}
Something I noticed in the logs themselves:
2022-01-04 00:19:39.177 INFO 1 --- [ionShutdownHook] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2022-01-04 00:19:39.181 INFO 1 --- [ionShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
2022-01-04 00:19:39.193 INFO 1 --- [ionShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
It is shutting down the entity manager. I provided -Xmx1024m of heap memory to make sure it has enough memory.
Although the Google documentation says this should work, for some reason the scheduler is not getting triggered. Any help would be really appreciated.
TL;DR: Using Spring Scheduler on Cloud Run is a bad idea. Prefer Cloud Scheduler instead.
In fact, you have to understand the lifecycle of a Cloud Run instance. First of all, CPU is allocated to the process ONLY while a request is being processed.
The immediate effect of that is that a background process, like a scheduler, can't work, because no CPU is allocated outside of request processing.
Except if you set CPU throttling to off. You did that? Great, but there are other caveats!
An instance is created when a request comes in, and it lives for up to 15 minutes without processing any request. Then the instance is offloaded and you scale to 0.
Here again, the scheduler can't work if the instance is shut down. The solution is to set the minimum number of instances to 1 AND CPU throttling to off, to keep one instance up 100% of the time and let the scheduler do its job.
The final issue with Cloud Run is scalability. You set 4 in your Terraform, which means you can have up to 4 instances in parallel, and therefore 4 schedulers running in parallel, one per instance. Is that really what you want? If not, set the maximum number of instances to 1 to limit the parallelism to a single instance.
In the end, you have one instance, up full time, that can't scale up or down. Because it can't scale, I don't recommend doing the processing on that instance; instead, call another API running on another Cloud Run service, which will be able to scale up and down according to what the scheduler requires.
And so you will have only one scheduler, which makes API calls to other Cloud Run services to perform the tasks. That's the purpose of Cloud Scheduler.
I followed the exact same steps as per [1], and when I execute
./gradlew tomcatRunWar
the build stays at 92% and never finishes:
16:14:48.912 [localhost-startStop-1] INFO o.s.j.c.CachingConnectionFactory - Established shared JMS Connection: ActiveMQConnection {id=ID:Gayans-MacBook-Pro.local-58639-1559299488784-1:1,clientId=null,started=false}
16:14:48.971 [localhost-startStop-1] INFO o.a.f.i.c.b.WarWebApplicationInitializer - Started WarWebApplicationInitializer in 58.585 seconds (JVM running for 141.561)
The following warnings have been detected with resource and/or provider classes:
WARNING: A HTTP GET method, public java.lang.String org.apache.fineract.portfolio.self.savings.api.SelfSavingsApiResource.template(java.lang.Long,java.lang.Long,java.lang.String,javax.ws.rs.core.UriInfo), should not consume any entity.
Started Tomcat Server
The Server is running at http://localhost:8080/fineract-provider
Building 92% > :tomcatRunWar
[1] https://github.com/apache/fineract
./gradlew tomcatRunWar is actually the way to run the Fineract app in dev mode. The logs you pasted show that Fineract is doing the right thing by starting up in dev mode; the build staying at 92% is expected while the server keeps running. What you can do now is get the web app here: https://github.com/openMF/community-app, build it using the instructions in the README, and try to log in to the Fineract backend using username: mifos, password: password
I would like to disable NewRelic logs in my Ruby application, but only in the test environment.
Since the documentation only offers log levels to set, and I want to avoid any logging at all, is there an option I can use to disable it?
Here is my newrelic.yml:
#
# This file configures the New Relic Agent. New Relic monitors Ruby, Java,
# .NET, PHP, Python and Node applications with deep visibility and low
# overhead. For more information, visit www.newrelic.com.
#
# Generated April 27, 2016, for version 3.15.2.317
#
# For full documentation of agent configuration options, please refer to
# https://docs.newrelic.com/docs/agents/ruby-agent/installation-configuration/ruby-agent-configuration
common: &default_settings
  # Required license key associated with your New Relic account.
  license_key: <%= ENV["NEW_RELIC_LICENSE_KEY"] %>

  # Your application name. Renaming here affects where data displays in New
  # Relic. For more details, see https://docs.newrelic.com/docs/apm/new-relic-apm/maintenance/renaming-applications
  app_name: <%= ENV["NEW_RELIC_APP_NAME"] || 'Components' %>

  # To disable the agent regardless of other settings, uncomment the following:
  # agent_enabled: false

  # Enable or disable the transmission of data to the New Relic collector.
  monitor_mode: <%= ENV["NO_INSTRUMENTATION"] == '1' ? false : true %>

  # Log level for agent logging: error, warn, info or debug.
  log_level: <%= ENV["NEW_RELIC_DEBUG_LEVEL"] || 'info' %>

  # Enable or disable SSL for transmissions to the New Relic collector.
  # Defaults to true in versions 3.5.6 and higher.
  ssl: true

  # Enable or disable high security mode, a suite of security features.
  high_security: false

  # Enable or disable synchronous connection to the New Relic data collection
  # service during application startup.
  sync_startup: false

  # Enable or disable the exit handler that sends data to the New Relic
  # collector before shutting down.
  send_data_on_exit: true

  # Maximum number of seconds to attempt to contact the New Relic collector.
  timeout: 120

  # ============================ Transaction Tracer ============================
  #
  # The transaction traces feature collects detailed information from a
  # selection of transactions, including a summary of the calling sequence, a
  # breakdown of time spent, and a list of SQL queries and their query plans
  # (on mysql and postgresql). Available features depend on your New Relic
  # subscription level.
  transaction_tracer:
    # Enable or disable transaction traces.
    enabled: true

    # The agent will collect traces for transactions that exceed this time
    # threshold (in seconds). Specify a float value or apdex_f.
    #
    # apdex_f is the response time above which a transaction is considered
    # "frustrating." Defaults to four times apdex_t. Requests which complete
    # in less than apdex_t are rated satisfied. Requests which take more than
    # apdex_t, but less than four times apdex_t (apdex_f), are tolerated. Any
    # requests which take longer than apdex_f are rated frustrated.
    transaction_threshold: <%= ENV['NEW_RELIC_TRANSACTION_THRESHOLD'] || 'apdex_f' %>

    # Determines whether Redis command arguments should be recorded within
    # Transaction Traces.
    record_redis_arguments: false

    # Threshold (in seconds) above which the agent will collect explain plans.
    # Relevant only when explain_enabled is true.
    explain_threshold: 0.2

    # Enable or disable the collection of explain plans in transaction traces.
    # This setting will also apply to explain plans in Slow SQL traces if
    # slow_sql.explain_enabled is not set separately.
    explain_enabled: true

    # Stack traces will be included in transaction trace nodes when their
    # duration exceeds this threshold.
    stack_trace_threshold: 0.5

    # Maximum number of transaction trace nodes to record in a single
    # transaction trace.
    limit_segments: 4000

  # ============================= Error Collector ==============================
  #
  # The agent collects and reports all uncaught exceptions by default. These
  # configuration options allow you to customize the error collection.
  error_collector:
    # Enable or disable recording of traced errors and error count metrics.
    enabled: true

  # ================================== Heroku ==================================
  heroku:
    use_dyno_names: true

  # ============================= Thread Profiler ==============================
  thread_profiler:
    enabled: true

# Environment-specific settings are in this section.
# RAILS_ENV or RACK_ENV (as appropriate) is used to determine the environment.
# If your application has other named environments, configure them here.
development:
  <<: *default_settings
  app_name: Orchestrator (Development)

  # NOTE: There is substantial overhead when running in developer mode.
  # Do not use for production or load testing.
  developer_mode: true

test:
  <<: *default_settings
  # It doesn't make sense to report to New Relic from automated test runs.
  monitor_mode: false

staging:
  <<: *default_settings
  app_name: Components (Staging)

production:
  <<: *default_settings
After some reading, I've found this post: https://discuss.newrelic.com/t/stop-logging-in-newrelic-agent-log-file/39876/3
I've adapted the code for YAML and ended up with:
test:
  <<: *default_settings
  # It doesn't make sense to report to New Relic from automated test runs.
  monitor_mode: false
  logging:
    enabled: false
and the problem is now solved!
You have log_level: off
Also a "hacky way", if you don't have log folder it won't write log...
the current working directory should contain a log directory
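Another option, if you'd rather not touch the YAML, is to point the agent log somewhere harmless from the environment when running tests; this assumes an agent version that honors the NEW_RELIC_LOG variable:
NEW_RELIC_LOG=/dev/null bundle exec rspec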
As recommended, I updated the honeybadger gem to version 2.0.
I followed all the instructions while upgrading, as mentioned here.
Next, when I started my server after upgrading (i.e. shotgun -p3000) and loaded a page, the request timed out and threw:
ERROR: Got response code: 500
And Log look like this:
I, [2015-06-02T10:43:01.447813 #11587] INFO -- : Starting Honeybadger version 2.0.12 level=1 pid=11587
I, [2015-06-02T10:43:01.448585 #11586] INFO -- : Starting Honeybadger version 2.0.12 level=1 pid=11586
W, [2015-06-02T10:43:01.454212 #11587] WARN -- : Initializing development backend: data will not be reported. level=2 pid=11587
W, [2015-06-02T10:43:01.454692 #11586] WARN -- : Initializing development backend: data will not be reported. level=2 pid=11586
I, [2015-06-02T10:43:01.462911 #11588] INFO -- : Starting Honeybadger version 2.0.12 level=1 pid=11588
W, [2015-06-02T10:43:01.472935 #11588] WARN -- : Initializing development backend: data will not be reported. level=2 pid=11588
I, [2015-06-02T10:43:04.496411 #11601] INFO -- : Starting Honeybadger version 2.0.12 level=1 pid=11601
W, [2015-06-02T10:43:04.500226 #11601] WARN -- : Initializing development backend: data will not be reported. level=2 pid=11601
I, [2015-06-02T10:43:07.004766 #11614] INFO -- : Starting Honeybadger version 2.0.12 level=1 pid=11614
W, [2015-06-02T10:43:07.008677 #11614] WARN -- : Initializing development backend: data will not be reported. level=2 pid=1161
I am using the following tools:
Ruby 2.1.2
Sinatra 1.4.6
Grape 0.11.0
HoneyBadger 2.0.12
Please help me to resolve this issue.
In the previous honeybadger version, i.e. ~> 1.9, we had to specify Honeybadger::Rack::ErrorNotifier in the config.ru file. But after upgrading, we have to remove Honeybadger::Rack::ErrorNotifier from config.ru (the upgrading document does not mention this).
After removing Honeybadger::Rack::ErrorNotifier from config.ru, it works as expected.
Next, if you find your server is slow and spawns more pids after upgrading, then update your honeybadger gem:
gem 'honeybadger', '2.1.0.beta.1'
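For reference, a minimal config.ru sketch of that change (YourApp is a placeholder for the actual Sinatra/Grape application):
# config.ru -- minimal sketch; YourApp stands in for the real app class
require 'honeybadger'

# With honeybadger ~> 1.9 the middleware had to be added explicitly:
#   use Honeybadger::Rack::ErrorNotifier
# Per the answer above, that line must be removed after upgrading to 2.0;
# leaving it in place is what caused the timeouts and 500 responses.

run YourApp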