SonarQube updating manual metric via REST API sets date to January 17, 1970

We have an install of Sonar 5.1.2, and we're attempting to attach a performance metric to our Sonar scan. However, when we update the metric via the REST API, the metric's "updated_at" is always set to January 17, 1970. This, unsurprisingly, messes up the timeline view, leaving us with only the message "Current timeline is reduced to a shorter period because of limited historical data for one of the metric."
That is, we issue the call to
http://statica:9000/api/manual_measures?resource=<project name>&metric=<metric name>&val=<value>
(We supply the appropriate authorization for the call.)
We get the response
{"id":3,"metric":"<metric name>","resource":"<project name>","val":17.0,"created_at":"2015-10-09T11:41:04-0400","updated_at":"1970-01-17T12:19:04-0500","login":"<user name>","username":"<user name"}
When we go to the site itself, go into the project, and then choose "Settings > Manual Measures", we can see our metric there, and in the DATE column it shows "Jan 17 1970 12:19", which matches what was returned via the REST API.
Also, if we then go to the dashboard of the project, where we have the Timeline widget configured to show the metric (as well as LOC and Coverage), we just get the message "Current timeline is reduced to a shorter period because of limited historical data for one of the metric." at the bottom and a single flat line in the graph.
Is this expected? Is there any way to capture the date the metric was actually updated instead of this default date? Is there a parameter we need to supply when calling the API to update the metric value?

This web service is pretty old and does not comply with the current web service guidelines of SonarQube. It has been removed in 5.2 and replaced by the WS api/custom_measures/create, which does not have this issue AFAIK.
As of today, no 5.1.3 is planned, so this issue won't be fixed.
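For anyone on 5.2+, a minimal sketch of pushing the measure through the newer api/custom_measures/create web service might look like the Python below. The host, credentials, project key, and metric key are placeholders, and the parameter names (projectKey, metricKey, value) should be verified against the web service documentation exposed under /web_api on your own instance.

import requests

SONAR_URL = "http://statica:9000"        # host from the question; replace as needed
AUTH = ("admin", "admin")                # replace with real credentials or a token

# Assumed parameters for api/custom_measures/create on 5.2+; verify under /web_api.
resp = requests.post(
    f"{SONAR_URL}/api/custom_measures/create",
    params={
        "projectKey": "my:project",      # placeholder project key
        "metricKey": "performance",      # placeholder custom metric key
        "value": 17.0,
    },
    auth=AUTH,
)
resp.raise_for_status()
print(resp.json())                       # created measure, including its dates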

Related

Custom timestamp is not taken into account in blob path for Stream Analytics

Given a query that looks like this:
SELECT
EventDate,
system.Timestamp as test
INTO
[azuretableoutput]
FROM
[csvdata] TIMESTAMP BY EventDate
According to the documentation, EventDate should now be used as the timestamp.
However, when storing data into blob storage with this path:
sadata/Y={datetime:yyyy}/M={datetime:MM}/D={datetime:dd}
I seem to still get the ingestion time. In my case the ingestion time means nothing; I need to use EventDate for the path. Is this possible?
When checking the data in Visual Studio, test and EventDate should be equal; however, the results look like this:
EventDate ;Test
2020-04-03T11:13:07.3670000Z;2020-04-09T02:16:15.5390000Z
2020-04-03T11:13:07.0460000Z;2020-04-09T02:16:15.5390000Z
2020-04-03T11:13:07.0460000Z;2020-04-09T02:16:15.5390000Z
2020-04-03T11:13:07.3670000Z;2020-04-09T02:16:15.5390000Z
2020-04-03T11:13:08.1470000Z;2020-04-09T02:16:15.5390000Z
The late arrival tolerance window is set to 99:23:59:59.
The out-of-order tolerance is set to 00:00:00:00, with the out-of-order action set to adjust.
When running the same query in Stream Analytics on Azure, I get this result:
[{"eventdate":"2020-04-03T11:13:20.1060000Z","test":"2020-04-03T11:13:20.1060000Z"},
{"eventdate":"2020-04-03T11:13:20.1060000Z","test":"2020-04-03T11:13:20.1060000Z"},
{"eventdate":"2020-04-03T11:13:20.1060000Z","test":"2020-04-03T11:13:20.1060000Z"}]
So far so good. When running the query on Azure with live data, however, it produces this path:
Y=2020/M=04/D=09
It should have produced this path:
Y=2020/M=04/D=03
Interestingly enough, when checking the data that is actually stored in blob storage, I find this:
EventDate,test
2020-04-03T11:20:39.3100000Z,2020-04-09T19:33:35.3870000Z,
System.Timestamp seems to be taken from EventDate only when testing the query on sampled data; it is not actually altered when the query is running normally and receiving live data.
I have tested this with the late arrival setting at both 0 and 20 days. In reality I need to disable late arrival adjustment, as I might get events that are years old through the pipeline.
This issue has been brought up and closed on the MicrosoftDocs GitHub.
The Microsoft folks say:
The maximum number of days for late arrival is 20, so if the policy is set to 99:23:59:59 (99 days), the adjustment could be causing a discrepancy in System.Timestamp.
By definition of late arrival tolerance window, for each incoming event, Azure Stream Analytics compares the event time with the arrival time; if the event time is outside of the tolerance window, you can configure the system to either drop the event or adjust the event’s time to be within the tolerance.
Consider that after watermarks are generated, the service can potentially receive events with event time lower than the watermark. You can configure the service to either drop those events, or adjust the event’s time to the watermark value.
As a part of the adjustment, the event’s System.Timestamp is set to the new value, but the event time field itself is not changed. This adjustment is the only situation where an event’s System.Timestamp can be different from the value in the event time field, and may cause unexpected results to be generated.
For more information, please see Understand time handling in Azure Stream Analytics.
Unfortunately, testing with sample data in Azure portal doesn't take policies into account at this time.
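As a rough, hypothetical model of the documented "adjust" behaviour (not the service's actual implementation), the effective System.Timestamp of a late event can be thought of as the event time clamped to the late-arrival window behind the arrival time:

from datetime import datetime, timedelta

def adjusted_system_timestamp(event_time, arrival_time, late_tolerance):
    # Simplified model of the late-arrival policy with action "adjust":
    # if the event time is older than (arrival time - tolerance),
    # System.Timestamp is moved up to the edge of the tolerance window.
    earliest_allowed = arrival_time - late_tolerance
    return max(event_time, earliest_allowed)

event_time = datetime(2020, 4, 3, 11, 13, 7)    # EventDate from the payload
arrival_time = datetime(2020, 4, 9, 2, 16, 15)  # when the event reached the input
tolerance = timedelta(days=20)                  # effective cap, per the answer above

print(adjusted_system_timestamp(event_time, arrival_time, tolerance))
# With a 20-day window this event would not be adjusted at all, which is why a
# System.Timestamp equal to the arrival time in the stored data is surprising.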
Other potentially helpful resources:
System.Timestamp()
TIMESTAMP BY
Event ordering policies
Time handling
Job monitoring

Iterating through Google events that are in the past

I'm implementing a view for Google Calendar events in my application using the following endpoint:
https://developers.google.com/google-apps/calendar/v3/reference/events/list
The problem I have is implementing a feature that makes it possible to go to the previous page of events. For example: the user sees 20 events for the current date and, once they press the button, they see the 20 preceding events.
As far as I can see, Google provides only:
"nextPageToken": string
which fetches the results for the next page.
The ways I can see to solve the problem:
Fetch results in descending order and then traverse them the same way as we do with nextPageToken. The problem is that the doc states that only ascending order is available:
"startTime": Order by the start date/time (ascending). This is only
available when querying single events (i.e. the parameter singleEvents
is True)
Fetch all the events for a specific time period, traverse the pages until I get to the current date or to the end of the list, and memorize all the nextPageTokens (a rough sketch of this follows below). Use the memorized values to be able to go backwards. The clear drawback is that we need to go through an unpredictable number of pages to get to the current date, which can dramatically affect performance. But at least it is something that the Google APIs allow. Updated: I checked that approach with a 5-year time span, and sometimes it takes up to 20 seconds to get the current date's page token.
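A rough sketch of that second approach in Python, calling the REST endpoint directly with requests (the access token, calendar id, and time range are placeholders; the loop simply records every nextPageToken so any page can be revisited later):

import requests

ACCESS_TOKEN = "ya29.placeholder"   # OAuth2 access token obtained elsewhere
URL = "https://www.googleapis.com/calendar/v3/calendars/primary/events"

page_tokens = [None]   # token needed to fetch page i is page_tokens[i]
token = None
while True:
    params = {
        "timeMin": "2015-01-01T00:00:00Z",   # start of the period of interest
        "timeMax": "2020-01-01T00:00:00Z",   # e.g. "now"
        "singleEvents": "true",
        "orderBy": "startTime",
        "maxResults": 20,
    }
    if token:
        params["pageToken"] = token
    resp = requests.get(URL, params=params,
                        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
    resp.raise_for_status()
    token = resp.json().get("nextPageToken")
    if not token:
        break
    page_tokens.append(token)

# page_tokens now lets the UI jump to the page containing "today" and step
# backwards one page at a time by replaying earlier tokens.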
Is there a more convenient way to implement the ability to go to the previous pages?

What is the difference between startDate and a filter on "published" in the Okta Events API?

I've written a .NET app using the Okta.Core.Client 0.2.9 SDK to pull events from our organization's syslog for import into another system. We've got it running every 5 minutes, pulling events published since the last event received in the previous run.
We're seeing delays in some events showing up. If I do a manual run at the top of the hour for the previous hour's data, it'll include more rows than the 5-minute runs. While trying to figure out why, I remembered the startDate param, which is mutually exclusive with the filter one I've been using.
The docs don't mention much about it - just that it "Specifies the timestamp to list events after". Does it work the same as published gt "some-date"? We're capturing data for chunks of time, so I needed to include a "less than" filter and ignored startDate. But the delayed events have me looking for a workaround.
Are you facing delayed results using startDate or filter?
Yes, published gt "some-date" and startDate work the same way. The following two API calls
/api/v1/events?limit=100&startDate=2016-07-06T00:00:00.000Z
and
/api/v1/events?limit=100&filter=published gt "2016-07-06T00:00:00.000Z"
return the same result. Since they are mutually exclusive, filter might come in handy for creating more specific queries, combining published with other query parameters in the filter expression.
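For the "chunks of time" case mentioned in the question, a hedged sketch of a windowed query using filter might look like this in Python. The org URL and API token are placeholders, and it assumes the events filter accepts two published expressions combined with and (worth confirming against the Okta filter documentation):

import requests

OKTA_ORG = "https://your-org.okta.com"               # placeholder org URL
HEADERS = {"Authorization": "SSWS <api-token>"}      # placeholder API token

# Hypothetical windowed query: everything published in a 5-minute slice.
flt = ('published gt "2016-07-06T00:00:00.000Z" '
       'and published lt "2016-07-06T00:05:00.000Z"')
resp = requests.get(
    f"{OKTA_ORG}/api/v1/events",
    params={"limit": 100, "filter": flt},
    headers=HEADERS,
)
resp.raise_for_status()
for event in resp.json():
    print(event.get("published"), event.get("eventId"))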

Sonarqube report in graph/chart for time (weekly/daily) and number of issues

I want to display a graphical report based on time (weekly/daily) which shows the status of static code analysis over a period of time. E.g. a vertical bar will denote the number of issues and the horizontal axis will show the time (day/week/month). This will make it easy to keep a watch on code quality over time (something like a Scrum burn-down chart). Can someone help me with this?
The 5.1.2 issues search web service includes parameters which let you query for issues by creation date. Your best bet is to use AJAX requests to get the data you need and build your widget from there.
Note that you can query iteratively across a date range using &p=1&ps=1 (page=1 and page size=1) to limit the volume of data flying around, and just mine the total value in the top level of the response to get your answer.
Here's an example on Nemo
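A rough sketch of that per-day counting in Python (the server URL and project key are placeholders; the exact parameter names of api/issues/search, e.g. componentRoots vs. componentKeys, and whether the total sits at the top level or under paging, vary between SonarQube versions):

import requests
from datetime import date, timedelta

SONAR_URL = "http://localhost:9000"      # placeholder server
PROJECT = "my:project"                   # placeholder project key

day = date(2015, 10, 1)
end = date(2015, 10, 8)
while day < end:
    nxt = day + timedelta(days=1)
    resp = requests.get(
        f"{SONAR_URL}/api/issues/search",
        params={
            "componentRoots": PROJECT,   # project filter; name varies by version
            "createdAfter": day.isoformat(),
            "createdBefore": nxt.isoformat(),
            "p": 1,                      # page 1
            "ps": 1,                     # page size 1: we only want the total
        },
    )
    data = resp.json()
    total = data.get("total") or data.get("paging", {}).get("total")
    print(day, total)                    # one data point per day for the chart
    day = nxt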

How can I get a metric from a specific version in SonarQube using the web service API

For my release automation I'm creating a document generator that includes the current measurements from SonarQube. In this document I would like to report the differences between several versions of the code.
I managed to get the list of versions without any problem using
http://nemo.sonarqube.org/api/events?resource=org.codehaus.sonar:sonar&categories=Version
And I also managed to get a measurement of the current code state using
http://nemo.sonarqube.org/api/resources?resource=org.codehaus.sonar:sonar&metrics=ncloc
Can anybody help me with how to get the ncloc of an older version, say version '4.0'?
The web service does not allow querying this information.
Well, the solution is roundabout, but you can get the data you want for each version.
Proposed solution:
Get the version-specific details from the API.
http://nemo.sonarqube.org/api/events?resource=org.codehaus.sonar:sonar&categories=Version&format=json
The response would be something like:
[{"id":"23761","rk":"helloworld","n":"1.1","c":"Version","dt":"2017-07-19T20:28:54-0500"},
{"id":"23731","rk":"helloworld","n":"1.0","c":"Version","dt":"2017-07-18T14:51:20-0500"},
{"id":"5107","rk":"helloworld","n":"1","c":"Version","dt":"2015-12-07T11:37:44-0600"}]
The "dt" value specifies the point of time where the Version is released.
Parse the JSON and get the dt values. Find the minimum and maximum date values from the obtained dt values.
Use the time machine API to query out the metrics you need using the API
http://nemo.sonarqube.org/api/timemachine?resource=helloworld&metrics=coverage,ncloc&rfomDateTime=(min_dt_value)&toDateTime=(max_dt_value)
You will get all the metrics between the timestamps.
Compare the version-specific dt values against the dates in the above response to get the version-specific metric values (a rough sketch follows below).
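Putting the two calls together, a hedged sketch in Python might look like the following. It assumes the events response shape shown above and a timemachine response made of cols/cells pairs; the resource key and metric are just the ones from the examples:

import requests

BASE = "http://nemo.sonarqube.org"
RESOURCE = "org.codehaus.sonar:sonar"

# 1. Fetch the version events and index them by date.
events = requests.get(
    f"{BASE}/api/events",
    params={"resource": RESOURCE, "categories": "Version", "format": "json"},
).json()
version_by_date = {e["dt"]: e["n"] for e in events}
dates = sorted(version_by_date)

# 2. Ask the timemachine for the metric between the oldest and newest version.
tm = requests.get(
    f"{BASE}/api/timemachine",
    params={
        "resource": RESOURCE,
        "metrics": "ncloc",
        "fromDateTime": dates[0],
        "toDateTime": dates[-1],
    },
).json()

# 3. Match each returned cell back to a version by its date.
#    (Assumed response shape: [{"cols": [...], "cells": [{"d": ..., "v": [...]}]}];
#    snapshot dates may differ slightly from the event dates, so treat this
#    matching as approximate.)
for cell in tm[0]["cells"]:
    version = version_by_date.get(cell["d"], "snapshot between versions")
    print(version, cell["d"], cell["v"])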
