SonarQube issue count is different depending on which dashboard I use

I'm using SonarQube 5.3 and it seems that the issue count is different depending on the view I use.
Consider this pic:
If I look in Dashboards -> Issues, I see the numbers on the top left.
If I click the grand total (267,877), I end up in the Issues dashboard, where I see totally different numbers (bottom right).
Even on the main dashboard I see conflicting data (pic).
Why don't the numbers match? Am I missing something?

There is a difference between measures and queries run on issues: measures are collected at analysis time and stay fixed until the next analysis, whereas queries on issues are updated in real time according to the changes you make to issues.
From what I see, we can assume the 267K issue count is correct and that there is some trouble in your search server stack preventing it from being up to date.
Check sonar.log for Elasticsearch errors and make sure there is enough free disk space under SQ_HOME/data/es to store and update your issues.
To confirm this, you can also stop your SonarQube server, clean the data/es directory and restart it. The data should be consistent after that.
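For reference, here is a minimal sketch of how the two numbers can be compared through the web API. It assumes SonarQube 5.4+ where api/measures/component is available (on 5.3 the older api/resources endpoint plays the same role), and the project key, host and credentials below are placeholders:

    # Hedged sketch: compare the measure stored at analysis time with a live
    # query of the issue index. "my:project" and the credentials are placeholders.
    import requests

    BASE = "http://localhost:9000"
    PROJECT = "my:project"
    AUTH = ("admin", "admin")  # or a user token

    # Issue count computed during the last analysis (metric key "violations")
    m = requests.get(BASE + "/api/measures/component",
                     params={"componentKey": PROJECT, "metricKeys": "violations"},
                     auth=AUTH).json()
    measure_count = m["component"]["measures"][0]["value"]

    # Live count served by the Elasticsearch-backed issue index
    i = requests.get(BASE + "/api/issues/search",
                     params={"componentKeys": PROJECT, "resolved": "false", "ps": 1},
                     auth=AUTH).json()
    live_count = i["total"]

    print("measure (analysis time):", measure_count)
    print("issue index (real time):", live_count)

If the two values stay divergent even after a fresh analysis, that points at the issue index described above rather than at the analysis itself.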

Related

Archiving log files from Elasticsearch and bringing them back to minimize storage cost

Please, I need some answers from experienced people, since this is my first time using the Elastic stack (internship). Assume that I inject logs (coming from multiple servers: Apache, nginx, ...) into Elasticsearch. After a month or maybe less, Elasticsearch will be filled up with logs, and this will be very expensive in terms of storage and performance, so I need to set a limit (let's say when the amount of logs reaches 100 GB) and remove them from Elasticsearch to free some space for new incoming logs. However, I should preserve the old logs and not just delete them (I googled some solutions, but they were all about deleting old logs to free space, which is not helpful in my case), and bring those old logs back into Elasticsearch if needed. My question: is there a way (optimal in terms of cost and performance, like compressing the old logs or something) to get around this with minimal cost?
You can use the snapshot and restore feature with a custom repository to offload old data and retrieve it when needed. Try the following guide:
https://www.elastic.co/guide/en/kibana/7.5/snapshot-restore-tutorial.html
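As a rough illustration of what that looks like against the cluster's REST API (the repository name, backup path and index pattern below are placeholders, and the path must be listed under path.repo in elasticsearch.yml before the repository can be registered):

    # Hedged sketch: offload old log indices with snapshot/restore, then delete them
    # from the cluster. "old_logs", "/mnt/backups/es" and "logs-2021.*" are placeholders.
    import requests

    ES = "http://localhost:9200"

    # 1. Register a shared-filesystem snapshot repository
    requests.put(ES + "/_snapshot/old_logs",
                 json={"type": "fs", "settings": {"location": "/mnt/backups/es"}})

    # 2. Snapshot the old indices and wait for the snapshot to finish
    requests.put(ES + "/_snapshot/old_logs/logs-2021?wait_for_completion=true",
                 json={"indices": "logs-2021.*"})

    # 3. Free the cluster by deleting the indices that were just snapshotted
    requests.delete(ES + "/logs-2021.*")

    # Later, if those logs are needed again, restore them from the repository
    requests.post(ES + "/_snapshot/old_logs/logs-2021/_restore",
                  json={"indices": "logs-2021.*"})

Snapshots are incremental, so running something like this on a schedule only stores what changed since the previous snapshot, which keeps the storage cost down.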

Why can't I create an ORACLE 12c report with multiple images?

We have an 11g ORACLE Forms/Reports application. Some reports have multiple images which work fine in 11g, but when we move them to our new 12c environment, the report hangs.
Experimentation shows that when all images bar one are removed, the report runs OK. You can introduce multiple copies of the same image into the report and it will still run, but if you have a mix of images, it hangs. It does not matter whether the images are linked in or inserted, or in what order or where; it still fails.
By hanging, I mean that the report server says that the report is formatting page X (where X is the page containing the second image), and you cannot cancel the report. Trace logs show that the failure occurs when it is processing an image.
Since I have seen no complaints about 12c images on the web, I assume it is not an ORACLE bug, and I also assume that such a restriction cannot be a feature. I assume that some setting is restricting the number of images which can be processed. Does anyone know what that setting is and how to lift it?
I don't have a solution, but I do have a few suggestions:
Recompile the report using the "all" option (Ctrl + Shift + K). Sometimes it works magic.
I've noticed similar behavior when the image size is (too) large; try making the images smaller, for example by reducing their quality. What counts as "too large" depends: in my case a report with several thousand pages displayed relatively small images (around 20 KB each), but multiplied by the number of pages it just didn't work. Reducing the images to ~4 KB fixed it. I'm not saying the same will work for you, but if possible, try it.
I agree - the fact that the same report works OK on 11g drives you crazy, huh ... I sincerely hope that recompilation will help, as it is the simplest option I can think of.
I managed to find a near-identical report in a near-identical application, which worked. By creating a report with 2 images which could run in either application, and changing the applications so they used the same report server, I found that the test report worked in one app but hung in the other. The only difference then lay in how the report was submitted. For the hanging report, I rewrote the submission code from scratch and the report worked fine. I still don't know the critical difference, but now it doesn't matter.

Sonarqube 6 issues view shows limited results

The SonarQube issues view shows only violations against the top 15 rules. Is there a way I could see a list of all rules with their issue counts?
Both ProjectIssueFilter and IssueFilter show only the first 15 results in the view. The filter shows this message: "Only the first 15 results are displayed". Is there a way to see the complete list?
No, it is not possible to see more rules (as of SonarQube 6.1 at the time of writing). It is planned to be improved. You can vote for and follow the ticket https://jira.sonarsource.com/browse/SONAR-6400.
A trick I use to see more rules:
Filter on Directory and/or Files.
You can sometimes see more rules that were previously not visible.
Select the previously hidden rule(s).
Remove the filter on Directory and/or Files.
You will still see the hidden rules with all their occurrences.
It's a bit of a pain, but it works...
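If the UI limit gets in the way, another workaround is to pull the per-rule counts from the web API instead of the Issues page. A rough sketch (the project key, host and credentials are placeholders, and it sends one request per rule, so it is slow but complete):

    # Hedged sketch: list every rule with unresolved issues on a project, with counts,
    # by combining api/rules/search and api/issues/search. "my:project" is a placeholder.
    import requests

    BASE = "http://localhost:9000"
    PROJECT = "my:project"
    AUTH = ("admin", "admin")  # or a user token

    # Page through all rule definitions known to the server
    rules, page = [], 1
    while True:
        r = requests.get(BASE + "/api/rules/search",
                         params={"ps": 500, "p": page}, auth=AUTH).json()
        rules += [rule["key"] for rule in r["rules"]]
        if page * 500 >= r["total"]:
            break
        page += 1

    # Count unresolved issues per rule and print the rules that have any
    for key in rules:
        total = requests.get(BASE + "/api/issues/search",
                             params={"componentKeys": PROJECT, "rules": key,
                                     "resolved": "false", "ps": 1},
                             auth=AUTH).json()["total"]
        if total:
            print(key, total)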

XPages performance - 2 apps on same server, 1 runs and 1 doesn't

We have been having a bit of a nightmare this last week with a business-critical XPages application: all of a sudden it has started crawling really badly, to the point where I have to reboot the server daily, and even then some pages can take 30 seconds to open.
The server has 12 GB of RAM and 2 CPUs; I am waiting for another 2 to be added to see if this helps.
The database has around 100,000 documents in it, with no more than 50,000 displayed in any one view.
The same database set up as a training application with far fewer documents, on the same server, always responds even when the main copy is crawling.
There are a number of view panels in this application - I have read these are really slow. Should I get rid of them and replace them with a Repeat control?
There are also Readers fields containing roles on the documents, and Authors fields, as it's a workflow application.
I removed quite a few unnecessary views from the back end over the weekend to help speed it up, but that has done very little.
Any ideas where I can check to see what's causing this massive performance hit? It has only really become unworkable in the last week, but as far as I know nothing in the design has changed, apart from me deleting some old views.
Try to get more info about the state of your server and application.
Hardware troubleshooting is summarized here: http://www-10.lotus.com/ldd/dominowiki.nsf/dx/Domino_Server_performance_troubleshooting_best_practices
Since, in your experience, only one of the two applications is slowed down, it is more likely a code problem. The best thing is to profile your code: http://www.openntf.org/main.nsf/blog.xsp?permaLink=NHEF-84X8MU
To go deeper, you can start to look for semaphore locks: http://www-01.ibm.com/support/docview.wss?uid=swg21094630, look at javadumps: http://lazynotesguy.net/blog/2013/10/04/peeking-inside-jvms-heap-part-2-usage/ and NSDs: http://www-10.lotus.com/ldd/dominowiki.nsf/dx/Using_NSD_A_Practical_Guide/$file/HND202%20-%20LAB.pdf, and check the garbage collector (see "Best setting for HTTPJVMMaxHeapSize in Domino 8.5.3 64 Bit").
This presentation gives a good overview of Domino troubleshooting (among many others on the web).
OK, so we resolved the performance issues by doing a number of things. I'll list the changes we made in order of the improvement gained, starting with the simple tweaks that weren't really noticeable.
Defragged the Domino drive - it was showing as 32% fragmented and I thought I was on to a winner, but it was really no better after the defrag, even though IBM docs say even 1% fragmentation can cause performance issues.
Reviewed all the main code in the application and took out a number of needless lookups where they could be replaced with applicationScope variables. For instance, on the search page one of the drop-down choices got its values by doing an @Unique lookup on all documents in the database. Changed it to a keyword and put that in the applicationScope.
Removed multiple checks on database.queryAccessRole and put the user's roles in a sessionScope variable.
The DB had 103,000 documents - 70,000 of them were tiny little docs with about 5 fields on them. They don't need to be indexed by the FT index, so we moved them into a separate database and pointed the data source to that DB when those docs were needed. The FT index went from 500 MB to 200 MB, which meant faster indexing and searches, but the overall performance of the app was still rubbish.
The big one - I finally got around to checking the application properties, Advanced tab. I set the following options:
Optimize document table map (ran a copy-style compact)
Don't overwrite free space
Don't support specialized response hierarchy
Use LZ1 compression (ran a copy-style compact with the option to convert existing attachments, -ZU)
Don't allow headline monitoring
Limit entries in $UpdatedBy and $Revisions to 10 (as per the Domino documentation)
And also don't allow the use of stored forms.
Now, I don't know which one of these options gave the biggest gain, and not all of them will be applicable to your own apps, but after doing this the application flies! It's running like there are no documents in there at all, views load super fast, documents open like they should - quickly - and everyone is happy.
Until the HTTP threads get locked out - that's another question of mine that I am about to post, so please take a look if you have any idea what's going on :-)
Thanks to all who have suggested things to try.

Performance Issue with Doctrine, PostGIS and MapFish

I am developing a WebGIS application using Symfony with the MapFish plugin http://www.symfony-project.org/plugins/sfMapFishPlugin
I use the GeoJSON produced by MapFish to render layers through OpenLayers, in a vector layer of course.
When I show layers with up to 3k features, everything works fine. When I try layers with 10k features or more, the application crashes. I don't know the exact threshold, because I either have layers with 2-3k features or with 10-13k features.
I think the problem is related to Doctrine, because the last entry in the log is something like:
Sep 02 13:22:40 symfony [info] {Doctrine_Connection_Statement} execute :
and then the query to fetch the geographical records.
As I said, I think the problem is the number of features, so I used OpenLayers.Strategy.BBOX() to decrease the number of features to fetch and show. The result is the same: the app seems stuck while executing the query.
If I add a limit to the query string used to get the features' GeoJSON, the application works. So I don't think this is related to the MapFish plugin, but rather to Doctrine.
Anyone has some enlightenment?
Thanks!
Even if it's theoretically possible, it's a bad idea to try to show so many vector features on a map.
You'd better change the way features are displayed (e.g. raster for low zoom levels, get feature on click…).
Even if your service answers in a reasonable time, your browser will be stuck, or at least will have very bad performance…
I'm the author of sfMapFishPlugin and I never ever tried to query so many features, let alone tried to show them on an OL map.
Check out the OpenLayers FAQ on this subject: http://trac.osgeo.org/openlayers/wiki/FrequentlyAskedQuestions#WhatisthemaximumnumberofCoordinatesFeaturesIcandrawwithaVectorlayer - it is a bit outdated given recent browser improvements, but 10k vector features on a map is not reasonable.
HTH,
