I am using "@sentry/react-native": "^2.1.0" and trying to enable performance monitoring, but so far nothing gets sent. According to the docs, adding tracesSampleRate: 1.0 to the initialization is all it takes. Is there anything else needed? Is any configuration required in the dashboard? (I saw the number 300 under Performance in the settings, but I don't know what it means exactly.)
So far, I am using the app normally but not seeing anything in the performance part of the dashboard. I have tried executing methods that take anywhere from 20 seconds to 1 minute... nothing changes. What kind of events does it pick up? Or how can I simply simulate some action that will show up in the performance dashboard?
Version 2.1.0 includes the Performance Monitoring API, but it doesn't start transactions automatically. You need to use the API to start and finish a transaction yourself.
There are docs on how to do that here.
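As a minimal sketch of what that looks like in @sentry/react-native 2.x: the DSN is a placeholder, and loadListings/fetchListingsFromApi are hypothetical names standing in for your own slow work.

```typescript
import * as Sentry from "@sentry/react-native";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  tracesSampleRate: 1.0,
});

// Hypothetical stand-in for your own slow method.
async function fetchListingsFromApi(): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 2000));
}

// Time one unit of work as a transaction.
async function loadListings() {
  const transaction = Sentry.startTransaction({
    name: "loadListings", // pick a name meaningful to your app
    op: "task",
  });

  try {
    await fetchListingsFromApi();
  } finally {
    // Finishing the transaction is what actually sends it to Sentry.
    transaction.finish();
  }
}
```

Once the transaction finishes, it should show up in the Performance section of the dashboard shortly after.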
The next version (due next week) will include auto-instrumentation if you're using React Navigation v4 or v5. It will also automatically create spans for fetch and XHR requests.
If you're not using either of those routers, you can continue to call startTransaction programmatically, and the fetch/xhr integrations will add the spans to the transaction.
You can follow the GitHub repository for updates (like releases) to keep up with the additions. More routers and integrations will follow, such as React Native Navigation from Wix.
There's also a dev community on Sentry's Discord that you can use to discuss developing integrations with Sentry and other SDK-related topics.
Related
I work in DevOps for a fairly large company that is in the process of transitioning to microservices. This is a new area for most people involved, and some of the governing requests seem like bad practice to me, but I don't have the expertise to argue otherwise.
The request is to generate a report before deploying that would list any new APIs/events (Kafka is our messaging service) in a microservice.
The recommended path is for devs to follow a style guide, and then to scrape the source code during the CI/CD pipeline to generate a report that can be compared against previous reports to identify any new APIs.
This seems backwards and unsustainable to me, but I've been unable to find another solution that would satisfy their request. I've recommended deploying to dev first and then using a tracing tool to identify any API changes or event subscriptions, but they insist on having the report before deploying.
I'm hoping for any advice on best practice to accomplish this.
Tracing and detecting version changes is definitely over-engineering. What's simpler, as @zenwraight mentioned, is to version your APIs. While tracing through services to explore the different versions and schemas could be a potential solution, it requires a lot more investment upfront, and if that's not the bread and butter of the company, I would rather use a vendor product that might support something like this.
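As a minimal sketch of what path-based versioning can look like (assuming a Node/Express stack purely for illustration), new endpoints land under a new version prefix, so "what's new" is visible in the route table itself rather than recovered by tracing:

```typescript
import express from "express";

const app = express();
const v1 = express.Router();
const v2 = express.Router();

// Existing endpoint stays untouched under /api/v1.
v1.get("/listings", (_req, res) => res.json({ schema: "v1" }));

// New or changed behavior ships under /api/v2, so a diff of the
// route definitions doubles as the "what's new" report.
v2.get("/listings", (_req, res) => res.json({ schema: "v2", photos: [] }));

app.use("/api/v1", v1);
app.use("/api/v2", v2);

app.listen(8080);
```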
If discovery is a mechanism that is needed, I would recommend publishing internal API docs using a tool like Swagger, so that you can search for an API you can consume.
And finally, to support moving between versions, I would recommend having an API onboarding process for the services, so that teams can notify the consumers of specific versions when those versions are reaching the end of their lifecycle and they will need to migrate to newer ones.
I plan to set up monitoring for Redmine, with the help of which I can see man-hours spent on tickets, time taken to complete a ticket, etc., to monitor the productivity of my team. I want to visualize all of this in Grafana. As of now I am thinking of using Prometheus and exposing the metrics somehow (I might have to create an exporter, I think, but I am not sure whether that would work). So basically, how can this be done?
A Prometheus exporter is simply an HTTP server that sits next to your target (Redmine in your case, although I have no experience with it) and, whenever it gets a /metrics request, makes one or more API calls to the target (assuming Redmine provides an API to query the numbers you need) and returns those numbers as Prometheus metrics, with names, labels, etc.
Here are the Prometheus clients (that help expose metrics in the format accepted by Prometheus) for Go and Java (look for simpleclient_http or simpleclient_servlet). There is support for many other languages.
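To make the shape of an exporter concrete, here is a minimal sketch in TypeScript using the prom-client and express packages. The Redmine URL, API key, metric name, and port are all placeholders, and the exact Redmine endpoint is an assumption to verify against its REST API docs:

```typescript
import express from "express";
import { Registry, Gauge } from "prom-client";

const registry = new Registry();
const openIssues = new Gauge({
  name: "redmine_open_issues",
  help: "Number of open issues in Redmine",
  registers: [registry],
});

const app = express();
app.get("/metrics", async (_req, res) => {
  // On each scrape, query the Redmine REST API and refresh the gauge.
  const resp = await fetch(
    "http://redmine.example.com/issues.json?status_id=open&key=API_KEY" // placeholders
  );
  const body = await resp.json();
  openIssues.set(body.total_count);

  // Render all registered metrics in the Prometheus text format.
  res.set("Content-Type", registry.contentType);
  res.send(await registry.metrics());
});

app.listen(9394); // arbitrary port; point Prometheus' scrape config at it
```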
Adding on to @Alin's answer on exposing Redmine metrics to Prometheus: you would need to install an exporter. Here is a Redmine plugin available for Prometheus:
https://github.com/mbeloshitsky/redmine_prometheus.git
You can get the hours and all the data you need through the Redmine REST APIs. Write a little program to fetch the data and push it into Graphite or Prometheus. You can perform this task with Sensu by creating a metric script in Python, Ruby, or Perl. Then all you have to do is plot the graphs. Well, that's another race :P
Redmine guide: http://www.redmine.org/projects/redmine/wiki/Rest_api_with_python
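As a sketch of that "little program" (push) approach, the snippet below pulls spent hours from Redmine's time_entries endpoint and writes them to Graphite's plaintext port. The hostnames, API key, and metric path are placeholders:

```typescript
import { Socket } from "net";

async function pushSpentHours(): Promise<void> {
  // Redmine REST API: time entries carry the logged hours per ticket.
  const res = await fetch(
    "http://redmine.example.com/time_entries.json?limit=100&key=API_KEY" // placeholders
  );
  const body = await res.json();
  const hours = body.time_entries.reduce(
    (sum: number, t: { hours: number }) => sum + t.hours,
    0
  );

  // Graphite plaintext protocol: "<metric.path> <value> <unix-timestamp>\n"
  const sock = new Socket();
  sock.connect(2003, "graphite.example.com", () => {
    sock.end(
      `redmine.time_entries.hours ${hours} ${Math.floor(Date.now() / 1000)}\n`
    );
  });
}

pushSpentHours();
```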
Since the NiFi GUI is really making API calls under the hood, is there any way to capture those requests or logs? I've been using Chrome dev tools. Just wondering if there is a way to capture this within NiFi for governance purposes.
Chrome DevTools is the best bet for seeing the actual API calls.
For auditing purposes there is something a little bit different: from the menu in the top right there is "Flow Configuration History", which shows every change that has been made to the flow and who made it (when running a secured instance).
The flow configuration history is also available through the ReportingTask API if you wanted to implement a custom reporting task to push these events somewhere.
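If a custom reporting task is more than you need, the same history can also be pulled over NiFi's REST API (the GUI itself calls /nifi-api/flow/history). This is only a hedged sketch: the query parameters and response shape are assumptions to verify against your NiFi version's REST API docs, and a secured instance will also need authentication:

```typescript
// Hypothetical helper: dump recent flow configuration history entries.
async function fetchFlowHistory(baseUrl: string): Promise<void> {
  const res = await fetch(`${baseUrl}/nifi-api/flow/history?offset=0&count=100`);
  const body = await res.json();
  // Each action describes one flow change (who, when, what operation).
  for (const action of body.history.actions) {
    console.log(JSON.stringify(action));
  }
}

fetchFlowHistory("http://localhost:8080");
```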
My organization is tracking multiple Scrum projects in VersionOne. Each week, we use the Release Forecasting report for each project to create a management dashboard that indicates the health and expected completion date of each project. I would like to automate this. Do any of the VersionOne APIs allow for the execution of this report and retrieving the image that is generated?
There is no endpoint specific to Release Forecasting, nor is there an endpoint to generate the image. However, you can get to the underlying data via the existing API endpoints; for reporting, I recommend query.v1. The closest example is the query for burndown data. You would need to take Scope as the focus of the query, not Timebox.
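As a hedged sketch of what a query.v1 call can look like over HTTP: the instance URL, token, attribute names, and project name below are all placeholders, so check the query.v1 docs for the attributes that actually drive the Release Forecasting data:

```typescript
async function queryScopes(): Promise<void> {
  // Hypothetical query: focus on Scope (the project), not Timebox.
  const query = {
    from: "Scope",
    select: ["Name", "BeginDate", "EndDate"], // placeholder attributes
    where: { Name: "My Project" },
  };

  const res = await fetch("https://www1.v1host.com/MyInstance/query.v1", {
    method: "POST",
    headers: {
      Authorization: "Bearer <access-token>", // placeholder token
      "Content-Type": "application/json",
    },
    body: JSON.stringify(query),
  });
  console.log(await res.json());
}

queryScopes();
```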
You might also take a look at VersionOne's Reporting and Analytics. While that is not a coding or API way to get the reports, it might still meet your automation needs.
I was able to automate the retrieval of this report, but not through the V1 API. Through careful use of Fiddler and a C# script using WebClient to execute POST requests, it was possible. The resulting code is pretty fragile, though, since it isn't using the API.
My agile team will be adding new features to an existing realty website. As we add the features, we want to have a better handle on the site's overall performance as well as the performance of particular pages.
I would like to automate the gathering of performance metrics on a request/response basis for each page (e.g. what sub-requests are sent out by the browser, how many there are, how much data is transferred, and how long each request took to fulfill).
Firebug currently captures this information in its net panel, however, I haven't found any way to programmatically pull this information out.
Does anyone know of a way to pull this information out after a page has loaded?
We are currently running our user acceptance tests with Selenium and I have considered adding this feature to the selenium interface so that our tests could run and collect the data without starting any other service.
All suggestions are welcome, including ones that leverage other tools/methods to gather the performance metrics.
Thank you.
Jan Odvarko has written a tutorial on how to use the new listener functionality within Firebug to log Net panel results:
"Since Firebug 1.4a13 the Net panel introduces, among other things, several new events that allow to easily collect all network requests and also related info gathered and computed by Firebug.
This functionality should be useful also in cases where Firebug extensions want to store network activity info into a local database or send it back to the server for further analysis (I am thinking about performance statistics here)."
Take a look at the NetExport extension for Firebug.
Steps:
Enable auto-export in the preferences (you can automate this one as well)
Choose the folder where the data should be saved
Read the exported file (see the HAR-parsing sketch after these steps)
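NetExport writes HAR (HTTP Archive) files, which are plain JSON, so pulling out the numbers you listed is straightforward. A minimal sketch; the file path is a placeholder:

```typescript
import { readFileSync } from "fs";

interface HarEntry {
  request: { url: string };
  response: { bodySize: number }; // -1 when the size is unknown
  time: number; // total elapsed milliseconds for this request
}

const har = JSON.parse(readFileSync("page.har", "utf8")); // placeholder path
const entries: HarEntry[] = har.log.entries;

let totalBytes = 0;
for (const e of entries) {
  totalBytes += Math.max(e.response.bodySize, 0);
  console.log(`${e.time.toFixed(0)} ms  ${e.request.url}`);
}
console.log(`${entries.length} requests, ${totalBytes} bytes transferred`);
```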
While it isn't directly a Firebug solution, perhaps something like Jiffy would help?
Jiffy pretty much works like a server-based version of Firebug's measurement tools. I haven't used it in anger yet, but it may do what you're looking for.
http://code.google.com/p/jiffy-web/
Jiffy allows developers to:
measure individual pieces of page rendering (script load, AJAX execution, page load, etc.) on every client
report those measurements and other metadata to a web server
aggregate web server logs into a database
generate reports
There is a way to use YSlow to beacon out performance data to a URL of your choice. It's not well documented; the only info I found was here:
http://tech.groups.yahoo.com/group/exceptional-performance/messages/490?threaded=1&m=e&var=1&tidx=1
Aside from that, I would look into writing a Firebug plugin; I think you can access most Firebug properties. Here's a tutorial: http://www.firephp.org/Reference/Developers/ExtendingFirebug.htm
Ben,
I've done this by extending Selenium RC's ProxyHandler to queue up the URLs seen, which you can then pull down via another API. It requires that you proxy everything, which isn't Selenium's default behavior. The nice thing is that Selenium ends up being both the place to drive the automation and the place to collect the results.
This is probably a feature we'll soon add to Selenium RC right after we get 1.0 out the door (we're very close!).
Okay, I admit this is not a direct answer, but how about going right to the source? Cut out Firebug and go to the web server. Can the server log events with sufficient granularity to allow calculation of the information you require? Parsing the log file into useful data should not be particularly difficult, and it has the advantage of being independent of the user's platform, with the potential to log a greater set of data than Firebug offers (awesome tool, btw).
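For instance, if the server's access log includes a response-time field (e.g. nginx with $request_time appended to the log format, which is an assumption here), a small parser can aggregate per-URL request counts and timings. The log path and line format below are placeholders:

```typescript
import { readFileSync } from "fs";

// Matches combined-log-style lines ending in a response time in seconds, e.g.
// ... "GET /search?q=condo HTTP/1.1" 200 512 "-" "UA" 0.412
const LINE = /"(?:GET|POST) (\S+) [^"]*" (\d{3}) .* ([\d.]+)$/;

const stats = new Map<string, { count: number; totalMs: number }>();

for (const line of readFileSync("access.log", "utf8").split("\n")) {
  const m = LINE.exec(line);
  if (!m) continue;
  const [, url, , seconds] = m;
  const s = stats.get(url) ?? { count: 0, totalMs: 0 };
  s.count += 1;
  s.totalMs += parseFloat(seconds) * 1000;
  stats.set(url, s);
}

for (const [url, s] of stats) {
  console.log(`${url}: ${s.count} hits, avg ${(s.totalMs / s.count).toFixed(1)} ms`);
}
```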