I am using SonarQube 6.7.
Under Measures > Duplications there is a Duplicated Files metric. If I click on Duplicated Files, the detail is displayed as shown in the picture.
Can the number be more than 1 for each file? (the number I have circled in the picture)
Thanks a lot
(screenshot: duplicatedFiles)
I have a list of 96 times in 15-minute intervals that I'm trying to use for data validation. It turns out that it only works up to a certain point. When selecting times from 00:00 to 18:30 it works fine. When selecting times from 18:45 to 23:45 it throws an error:
The data that you entered in cell B1 violates the data validation
rules set on this cell.
I made a test file to see if the same thing would happen with a different type of data, but lists of numbers or text with the same number of items seem to work without issues.
https://docs.google.com/spreadsheets/d/1ClBhZk6fysOq0wqIrE5IxJgPMs_A05gJsN_kEiXoFGI/edit?usp=sharing
Does anyone know the reason why it doesn't work with the list of times? More importantly, does anyone know a way I could get it to work?
Edit:
The steps provided by player0 worked to fix it in my test file linked above, but didn't make a difference in the real file I'm working on. Here's a copy of the real file which exhibits the same problem.
https://docs.google.com/spreadsheets/d/1cOMB0BpzSBR7ZM7fJQQfhA56fniKTBJb-3TePUky6gM/edit?usp=sharing
Please try setting a time greater than 18:30 in a couple of cells. I keep getting the same errors.
I suppose a workaround would be to not reject input on invalid data, but I feel this should work with rejection enabled, and I would like to know where I went wrong.
It's a formatting issue. Select columns A & B and force the format to Time (or Plain text).
I have been running several independent multi-class NL models on an identical data set (to compare performance to a multi-label model) and had no problems importing the data or running the models. I've just been through the identical preparation process, uploaded the file to the bucket and now get this error on import:
Uri is not found in CSV row ",NotWarm".
Warm and NotWarm are my labels. A sample of the CSV is below so you can see the format:
"To ensure you get the best possible service, we stagger the cut-off time for next day delivery from 5pm right up until Midnight.",Warm
You’ll be able to see if Next Day Delivery is still available when you place your order.,NotWarm
"You can choose a home delivery option, which lets you have your order delivered to an address of your choice.",Warm
"Some eligible items also let you choose Click + Collect, where your order is delivered to a local store.",NotWarm
I've double checked all the advice about preparing datasets on the AutoML help pages. The file itself has been encoded in UTF-8 using Notepad++ so there should be nothing amiss with the CSV format. The file is identical to those I've used previously except for the labels.
Has something changed in the AutoML NL process? It has been over a month since my last model was created.
Thanks in advance for any guidance.
SOLVED
I tagged all my labels with unique numbers to determine which line of data the upload was failing on. It turned out some blank lines had crept into the file, so I was trying to assign a label to a null string. Removing the empty lines fixed it. Hopefully this helps someone else.
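In case it helps anyone else, this is roughly the kind of cleanup that fixed it for me (just a sketch, not my exact script; the dataset.csv file name and the two-column text,label layout are placeholders):

    # Drop rows whose text column is empty before uploading to AutoML.
    import csv

    with open("dataset.csv", newline="", encoding="utf-8") as src:
        rows = [row for row in csv.reader(src) if row and row[0].strip()]

    with open("dataset_clean.csv", "w", newline="", encoding="utf-8") as dst:
        csv.writer(dst).writerows(rows)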
The SonarQube metric graphs are not being displayed on my project dashboard behind the total numbers and Quality Gate ratings.
I am running current versions of SonarQube and its plugins with MySQL 5.7. I am creating a new SonarQube project through Administration -> Projects -> Management -> Create Project and then performing analyses as follows (anything capitalized is either a variable or anonymized): *
MSBuild.SonarQube.Runner.exe begin /k:KEY /v:VERSION /d:sonar.host.url=http://localhost:9000/ /d:sonar.login=TOKEN /d:sonar.projectDate=YYYY-MM-DDTHH:MM:SS+0000
MSBuild.exe /maxcpucount /nr:false /nologo /target:rebuild /verbosity:quiet PROJECT\PROJECT.sln
MSBuild.SonarQube.Runner.exe end /d:sonar.login=TOKEN
I have tried VERSION equal to a constant value "1.0" and VERSION equal to a string of the UNIX time (seconds since 1/1/1970) of each git commit I analyze. I've also tried configuring project leak periods of the last 90 days and also previous_analysis, though I think that would only affect the graphs in the right column. If someone could tell me what I am doing incorrectly, I would appreciate it.
* These are examples of the commands executed by a Python script that iterates over a list of git commit hashes and their associated timestamps, in increasing order, to populate the project history. The Python script in turn mimics a Jenkins job that will eventually take over calling SonarQube.
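For reference, the script is roughly the following sketch (the project key, token, solution path, and commit list are placeholders, and the git checkout step is omitted):

    # Replay analyses over a list of commits, one per analysis date.
    import subprocess
    from datetime import datetime, timezone

    SCANNER = r"MSBuild.SonarQube.Runner.exe"
    MSBUILD = r"MSBuild.exe"
    SOLUTION = r"PROJECT\PROJECT.sln"
    KEY = "KEY"
    TOKEN = "TOKEN"

    # (commit hash, unix timestamp) pairs in increasing timestamp order
    commits = [("abc1234", 1512000000), ("def5678", 1512086400)]

    for sha, ts in commits:
        # the git checkout of `sha` would happen here
        stamp = datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%S+0000")
        version = str(ts)  # one of the VERSION schemes described above
        subprocess.run([SCANNER, "begin", f"/k:{KEY}", f"/v:{version}",
                        "/d:sonar.host.url=http://localhost:9000/",
                        f"/d:sonar.login={TOKEN}",
                        f"/d:sonar.projectDate={stamp}"], check=True)
        subprocess.run([MSBUILD, "/maxcpucount", "/nr:false", "/nologo",
                        "/target:rebuild", "/verbosity:quiet", SOLUTION], check=True)
        subprocess.run([SCANNER, "end", f"/d:sonar.login={TOKEN}"], check=True)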
Background Tasks page (screenshot):
Your project homepage screenshot shows the graph in the leak period, but not extending left into the Overall section.
This comes down to your analysis date and your definition of "leak period". If your leak period is set to previous_version, then you need to take a look at the version values (/v:) in your analyses. So far, it looks like all your analyses are leak-period analyses, which is why nothing has filtered left into the overall view.
My project has over 150k lines of code according to Coverity Scan, while Cloc reports 30k (which is a lot more reasonable).
I am trying to figure out where those LOCs come from, but failing. How do I get Coverity Scan to report the actual lines of code, or at least report where the extra lines come from?
By default the LOC count includes the system headers pulled in by your application. You may be able to configure component maps to filter these out if it matters enough to you.
I am using JMeter and have 2 questions (I have read the FAQ + Wiki etc):
I use the Graph Results listener. It seems to have a fixed span, e.g. 2 hours (just guessing; this is not indicated anywhere AFAIK), after which it wraps around and starts drawing on the same canvas from the left again. Hence, after a long weekend run it only shows the results of the last 2 hours. Can I configure that span or other properties (beyond the check boxes I see on the Graph Results listener itself)?
Can I save the results of a run and open them later? I know I can save the test plan or parts of it, but I am unclear whether I can separately save just the test results data and later open them to perform comparisons etc. Furthermore, can I open them with listeners that weren't part of the original test? (I think of the test as accumulating data, and later on I want to view and interpret that data using different "viewers".)
Thanks,
-- Shaul
Don't know about 1. Regarding 2: listeners typically have a configuration field for "Write All Data to a File", which lets you specify the file name. You can use the Simple Data Writer to store results efficiently for later analysis.
You can load results from a previous test into a visualizer by choosing "Write All Data to a File" and browsing for the file you wish to load. Somewhat counterintuitively, selecting a file for writing also loads that file into the visualizer and displays the results. Just make sure you don't run the test again while that file is selected, otherwise you will lose your saved test data. :-)
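If you want to compare runs outside JMeter afterwards, something along these lines works on a saved results file (a sketch only; it assumes the results were written in CSV format with the default timeStamp/elapsed/label/success columns, and results.jtl is a placeholder name):

    # Summarize per-label response times from a saved JMeter results file.
    import csv
    from collections import defaultdict

    samples = defaultdict(list)
    with open("results.jtl", newline="") as f:
        for row in csv.DictReader(f):
            if row.get("success") == "true":  # keep only successful samples
                samples[row["label"]].append(int(row["elapsed"]))

    for label, times in sorted(samples.items()):
        avg = sum(times) / len(times)
        print(f"{label}: {len(times)} samples, avg {avg:.0f} ms, max {max(times)} ms")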
Well, I later found a JMeter group discussing the issue raised in my first question, and B.Ramann gave me an excellent suggestion to use a better graph, found here, instead.
-- Shaul