I need to generate custom reports using data from the Sonar SQL Server database. The structure of the database is quite confusing to me. How can I get the following details of my project?
LoC (lines of code)
Rule Compliance %
Comment %
Public Documented API %
Security Violations
Violations (excluding Info)
Duplicated Line %
Once I get these details, how can I structure my report, given that the root data has many children?
I think you probably really want to use the web services to extract data, rather than reading from the database. See http://docs.codehaus.org/display/SONAR/Web+Services for documentation.
I don't recommend querying the database directly, because it isn't considered an API; its schema changes significantly over time.
There are currently two reporting plugins that generate PDF reports:
http://docs.codehaus.org/display/SONAR/Sonar+PDF+Plugin (open-source + commercial edition)
http://www.sonarsource.com/products/plugins/reporting/report/ (commercial)
If you want to generate your own report, then you should implement a plugin or request web services from a dedicated application.
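If you go the web services route, something like the following sketch can pull the measures you listed from a dedicated application. This is a minimal example, not production code; the metric keys below are my best guesses for the values you asked about (verify them against the metric definitions in your own Sonar instance), and the project key is hypothetical.

    // Hedged sketch: fetch project measures over the classic Sonar web services
    // API instead of reading the database. Metric keys and project key are
    // assumptions -- check /api/metrics on your instance.
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class SonarReportFetcher
    {
        static async Task Main()
        {
            var baseUrl = "http://localhost:9000";           // your Sonar server
            var projectKey = "com.mycompany:myproject";      // hypothetical project key
            var metrics = "ncloc,violations_density,comment_lines_density," +
                          "public_documented_api_density,violations,duplicated_lines_density";

            using var client = new HttpClient();
            // /api/resources is the measures endpoint documented on the codehaus wiki
            // for older versions; newer versions expose /api/measures/component instead.
            var url = $"{baseUrl}/api/resources?resource={projectKey}&metrics={metrics}&format=json";
            var json = await client.GetStringAsync(url);

            Console.WriteLine(json);   // parse with your preferred JSON library and feed your report
        }
    }

From the returned measures you can then lay out the report however you like, without depending on the internal table structure.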
Related
I see that SonarQube provides a web service API to get all issues, and I will load this data into a database for analysis. I then want my reporting database to stay in sync with the changes happening in the system. Is there a web service API that captures change data?
Overall, I want the reporting DB to be in sync with the system.
The createdAfter parameter of the issues search web service will get you new issues, but there's no analogous updatedAfter parameter. Note that by only looking at added issues, you'll overlook issues that are closed in a new analysis.
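As a rough sketch of a sync job built on that web service: poll for issues created since the last run, and separately reconcile issues that a newer analysis has closed or resolved. The host, page size, and persistence of the lastSync timestamp are placeholders.

    // Hedged sketch: incremental pull from the issues search web service.
    // There is no updatedAfter parameter, so closed/resolved issues are
    // reconciled with a second query (or a periodic full re-sync).
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class IssueSync
    {
        static async Task Main()
        {
            var lastSync = "2015-01-01T00:00:00+0000";   // persisted from the previous run
            using var client = new HttpClient { BaseAddress = new Uri("http://localhost:9000") };

            // New issues since the last sync (paged; ps = page size, p = page number).
            var created = await client.GetStringAsync(
                $"/api/issues/search?createdAfter={Uri.EscapeDataString(lastSync)}&ps=500&p=1");

            // Separately reconcile issues that were closed by a newer analysis.
            var closed = await client.GetStringAsync(
                "/api/issues/search?statuses=CLOSED,RESOLVED&ps=500&p=1");

            Console.WriteLine(created.Length + " / " + closed.Length);  // parse and upsert into the reporting DB
        }
    }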
My organization is tracking multiple Scrum projects in VersionOne. Each week, we use the Release Forecasting report for each project to create a management dashboard that indicates the health and expected completion date of each project. I would like to automate this. Do any of the VersionOne APIs allow for the execution of this report and retrieving the image that is generated?
There is not an endpoint specific to Release Forecasting. Nor is there an endpoint to generate the image. However, you can get to the underlying data via the existing API endpoints. For reporting, I recommend query.v1. The closest example is the query for burndown data. You would need to take Scope as the focus of the query, not Timebox.
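To make that concrete, here is a hedged sketch of a query.v1 request with Scope as the focus, loosely modeled on the documented burndown example. The selected attribute names and the project name are illustrative only; check meta.v1 for the exact attribute definitions your instance exposes, and add whatever authentication (Basic or access token) your instance requires.

    // Illustrative query.v1 call; attribute names are assumptions.
    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class VersionOneQuery
    {
        static async Task Main()
        {
            var query = @"
    from: Scope
    select:
      - Name
      - Workitems:PrimaryWorkitem[AssetState='Active'].Estimate.@Sum
    where:
      Name: My Project";   // hypothetical project name

            using var client = new HttpClient { BaseAddress = new Uri("https://www1.v1host.com/MyInstance/") };
            var response = await client.PostAsync("query.v1",
                new StringContent(query, Encoding.UTF8, "text/yaml"));

            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }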
You might also take a look at VersionOne's Reporting and Analytics. While that is not a coding or API way to get the reports, it might still automate the needs you have.
I was able to automate the retrieval of this report, but not through the V1 API. Through careful use of Fiddler and a C# script using WebClient to execute POST requests, it was possible. The resulting code is pretty fragile, though, since it isn't using the API.
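For anyone curious what that fragile approach looks like, here is a sketch: replay the POST that the Release Forecasting page makes (captured with Fiddler) and save the chart image. Every URL and form field below is hypothetical; they come from inspecting your own instance's traffic and can break with any UI update.

    // Fragile scraping sketch -- endpoint and field names are placeholders
    // captured from Fiddler, not part of any supported API.
    using System;
    using System.Collections.Specialized;
    using System.Net;

    class ReportScraper
    {
        static void Main()
        {
            using var client = new WebClient();
            client.Credentials = new NetworkCredential("user", "password");

            var form = new NameValueCollection
            {
                { "scopeId", "Scope:1234" },        // hypothetical field names
                { "chartType", "releaseforecast" }
            };

            byte[] png = client.UploadValues(
                "https://www1.v1host.com/MyInstance/ReportImage.ashx",  // hypothetical endpoint
                form);

            System.IO.File.WriteAllBytes("release-forecast.png", png);
        }
    }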
The original post is at https://stackoverflow.com/questions/6007097/design-question-for-notification-system
Here is more clarification of the problem: the purpose of the notification system is to notify users (via email for now) when the content of a site has changed or been updated, or when a new posting is made. You can treat it as a notification system where people define a rule or keyword for a third-party site, the system crawls that site and creates inverted search indexes, and then new links or documents matching the user-defined keyword or rule show up (more explanation of the use case below).
As a clarified use case: suppose I am a Craigslist user looking for a used vehicle. I define a rule: "Honda Accord", year 1996, and a price range from $2000 to $3000.
For the above use case to work, what is the best approach, and how can I leverage open-source technologies such as Apache Lucene, Apache Solr, Apache Nutch, and Apache Hadoop to solve it?
You can think of it as building a search engine combined with a rule- and keyword-based notification system. I just need some pointers and help on how to integrate these open-source packages to solve the use case.
Any help and pointers will be appreciated. The three important components we need are:
1) Web Crawler
2) Index Creator
3) Rule or keyword matcher
Any help will be greatly appreciated. I was referring to this wiki, which integrates Nutch and Solr for the above purpose: http://wiki.apache.org/nutch/RunningNutchAndSolr
Your question is a big one, but I'll take a stab at it, as I've designed and implemented systems like this before.
Ignoring user account management, your system will need to provide the means to:
retrieve new prospect data (web spider)
identify and extract pertinent results from prospect data (filtering)
collect, maintain and organize results (storage)
select results based on various metadata (querying)
format results for delivery to users (templating)
deliver formatted results to users (delivery)
If the scope of your project is small (say less than 100 sites requiring spidering per day), you could probably get along with one of the many open-source web spiders including wget, Nutch, WebSphinx, etc. You might need to provide instrumentation (custom software) for scheduling, monitoring and control. If your project scope is larger than this, you may need to "roll your own" spidering solution (custom software). Typically this would be designed as a distributed, parallel architecture.
For simple filtering, regular expressions would suffice but for more complex tasks requiring knowledge of HTML layout (extract the textual component of the fifth list element (<LI/>) of the fourth table on the page) you'd need to use an XHTML parser. However you proceed, you'll need to provide custom software to conduct filtering based on your users' needs.
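As a small illustration of that filtering step, here is a sketch that applies the "Honda Accord, 1996, $2000-$3000" rule to text already extracted from a crawled page. It assumes the HTML-to-text extraction has already happened; real listings would go through an HTML/XHTML parser first, and the price-parsing heuristic here is deliberately naive.

    // Illustrative rule/keyword matcher over extracted listing text.
    using System;
    using System.Text.RegularExpressions;

    class ListingFilter
    {
        static bool Matches(string listingText)
        {
            bool keyword = Regex.IsMatch(listingText, @"\bhonda\s+accord\b", RegexOptions.IgnoreCase);
            bool year    = Regex.IsMatch(listingText, @"\b1996\b");

            var priceMatch = Regex.Match(listingText, @"\$(\d{3,6})");
            bool price = priceMatch.Success
                         && int.TryParse(priceMatch.Groups[1].Value, out int p)
                         && p >= 2000 && p <= 3000;

            return keyword && year && price;
        }

        static void Main()
        {
            Console.WriteLine(Matches("1996 Honda Accord LX, runs great, $2500 OBO"));  // True
        }
    }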
While any database technology can be used to store results extracted from retrieved documents, using an engine optimized for text like Apache SOLR will allow you to easily expand your search criteria as your needs dictate. Since SOLR supports the attachment of and search for metadata associated with each document, it would be a good choice. You'll also need to provide custom software here to automate this step.
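A sketch of that storage step: push an extracted result, with its metadata, into Solr over HTTP so it can later be selected by keyword and metadata. The core name and field names (title, url, price, source_site) are assumptions and must exist in your Solr schema.

    // Hedged sketch: index one extracted result via Solr's JSON update handler.
    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class SolrIndexer
    {
        static async Task Main()
        {
            using var client = new HttpClient();

            var doc = @"[{
                ""id"": ""craigslist-12345"",
                ""title"": ""1996 Honda Accord LX"",
                ""url"": ""http://example.org/listing/12345"",
                ""price"": 2500,
                ""source_site"": ""craigslist""
            }]";

            // Adjust the core name ('listings') and fields to match your schema.
            var response = await client.PostAsync(
                "http://localhost:8983/solr/listings/update?commit=true",
                new StringContent(doc, Encoding.UTF8, "application/json"));

            Console.WriteLine(response.StatusCode);
        }
    }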
Once you've selected a list of candidate results from SOLR, any scripting language could be used to template them into one or more emails and to inject them into your mail transport agent (MTA). This also requires custom software to automate the process (and, if required, to inject user-specific data into each message).
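A minimal sketch of that delivery step: template the matched results into an email body and hand it to an MTA over SMTP. The addresses, host, and message text are plain placeholders for whatever templating and transport you actually use.

    // Delivery sketch: send templated matches through a local MTA relay.
    using System.Net.Mail;

    class Notifier
    {
        static void Main()
        {
            var body = "New matches for your rule \"Honda Accord, 1996, $2000-$3000\":\n"
                     + "- 1996 Honda Accord LX, $2500: http://example.org/listing/12345\n";

            var message = new MailMessage("alerts@example.org", "user@example.org",
                                          "New listings matching your saved search", body);

            using var smtp = new SmtpClient("localhost");   // local MTA relay
            smtp.Send(message);
        }
    }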
You should probably also look at Google's Custom Search API before diving into crawling the web yourself. That way, Google can return keyword-based search results, which you can then filter in your application with your additional algorithms/rules, and make the whole thing work.
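As a sketch of that alternative: query the Custom Search JSON API for the user's keywords, then apply your own rule filtering to the returned items. The key and cx values are placeholders you obtain from the Google developer console.

    // Hedged sketch: keyword search via the Custom Search JSON API, with your
    // own rule filtering applied to the results afterwards.
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class CustomSearch
    {
        static async Task Main()
        {
            var apiKey = "YOUR_API_KEY";
            var engineId = "YOUR_SEARCH_ENGINE_ID";
            var query = Uri.EscapeDataString("1996 Honda Accord site:craigslist.org");

            using var client = new HttpClient();
            var json = await client.GetStringAsync(
                $"https://www.googleapis.com/customsearch/v1?key={apiKey}&cx={engineId}&q={query}");

            Console.WriteLine(json);   // filter items[] against the user's price/year rules
        }
    }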
My company needs to migrate data from a Taleo system to a new HR system.
A little research suggests that traditional ETL may not work against the Taleo cloud based system, but I don't know enough about the setup and am trying to learn.
Does anyone have experience migrating HR data from Taleo to another system, and, if so, how did you do it, and was traditional ETL an option?
Thanks
How you access Taleo depends as much on your platform as theirs.
For example, I'm on Windows and hit this issue (not sure if it was my mistake): VS2010 Add Service Reference fails.
Taleo has just released a new version that has temporarily broken things for a number of companies.*
Whether your ETL is one-time or continuous, Taleo provides a PDF version of their API documentation. It works as follows for employee records (I'm only grabbing employee records); other record types appear to use the same paradigm.
Employee records have two types of fields: fixed and user-defined. The fixed fields, which I work with in C#, are like simple properties of a class and can be accessed with standard .name notation, such as taleoItem.ManagerId. The user-defined fields come as a list of "beans": for each bean, you first look at its name (foreach (var taleoItem in taleoEmployeeBean.flexValues) ... if (taleoItem.fieldName == "Social Club Member") { ... }). *Currently I'm getting zero of the 50+ flex beans that I normally get, plus two flex beans that I've never seen before. As can be expected, until Taleo fixes this breakage, all I can do is twiddle my thumbs.
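Here is a self-contained sketch of that pattern. The FlexValue and EmployeeBean classes below are illustrative stand-ins for the proxy classes generated from Taleo's WSDL ("Add Service Reference"); the real property names may differ slightly in your generated code.

    // Sketch of reading fixed vs. user-defined (flex) fields on an employee
    // record; types are illustrative stand-ins for the generated SOAP proxies.
    using System;
    using System.Collections.Generic;

    class FlexValue { public string fieldName; public string fieldValue; }
    class EmployeeBean { public string ManagerId; public List<FlexValue> flexValues; }

    class Demo
    {
        static void DumpEmployee(EmployeeBean taleoEmployeeBean)
        {
            // Fixed fields are plain properties on the bean.
            Console.WriteLine("Manager: " + taleoEmployeeBean.ManagerId);

            // User-defined fields arrive as a list of "flex" beans keyed by name.
            foreach (var taleoItem in taleoEmployeeBean.flexValues)
            {
                if (taleoItem.fieldName == "Social Club Member")
                    Console.WriteLine("Social Club Member: " + taleoItem.fieldValue);
            }
        }

        static void Main()
        {
            DumpEmployee(new EmployeeBean
            {
                ManagerId = "M-001",
                flexValues = new List<FlexValue>
                {
                    new FlexValue { fieldName = "Social Club Member", fieldValue = "Yes" }
                }
            });
        }
    }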
When Taleo works properly, retrieving data generally works like this (a rough sketch follows the steps):
Access a fixed URL to get the URL for your company;
authenticate via the URL retrieved from step 1 to get a session token;
use the session token from step 2 to invoke the various Taleo API methods.
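A very rough sketch of those three steps, assuming proxy classes generated from Taleo's WSDLs. The service and operation names (DispatcherAPIClient.getURL, WebAPIClient.login/getEmployeeByNumber/logout) follow the pattern of my own integration and may differ in your Taleo edition; treat them as hypothetical and check the API PDF for your version.

    // Hypothetical fragment -- DispatcherAPIClient and WebAPIClient stand for
    // the proxies generated from Taleo's WSDLs via "Add Service Reference".
    static void FetchEmployee(string companyCode, string user, string password)
    {
        // 1. A fixed dispatcher URL returns the endpoint for your company.
        var dispatcher = new DispatcherAPIClient();
        string companyUrl = dispatcher.getURL(companyCode, "WebAPI");

        // 2. Authenticate against that endpoint to obtain a session token.
        var api = new WebAPIClient(companyUrl);
        string sessionToken = api.login(companyCode, user, password);

        // 3. Use the token on subsequent API calls.
        var employee = api.getEmployeeByNumber(sessionToken, "E12345");
        Console.WriteLine(employee.ManagerId);

        api.logout(sessionToken);
    }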
Warning: the Taleo API has documentation errors. Also, the test cases will not necessarily work.
I'm not familiar with Taleo, but according to their website they have features that allow integration via "XML, Web Services, reusable components, and standard APIs". There are many ETL tools on the market that can interface with web services as a source, or you could optionally write your own.
Taleo provides a PDF which describes all the calls that can be made. Basically, Taleo uses SOAP web services for accessing their data.
For a detailed description, visit Taleo Integration in Drupal.
I am working on a project where I need to generate a series of classes to represent/access data in the database. Third-party projects such as Hibernate or SubSonic are not an option. I am new to this subject domain, so I am looking for information on the topic. The project is in .NET and I am using MyGeneration.
What is your single best resource for topics on code generation of data access?
Please post only one link at a time, and look for your resource before posting. If you find your resource already posted, please vote it up instead of posting it again.
(I am not interested in rep, just information.)
Are you using .NET? Try MyGeneration
CodeSmith
ORAPig generates Python interfaces for Oracle packages. A PostgreSQL module is being worked on.
http://code.google.com/p/orapig