I am using the SonarQube web API to detect bugs in Spoon, but I'm not getting the full list of about 189 bugs, only about 100, even when I use the types=BUG parameter. The GET request I'm using is https://sonarqube.ow2.org/api/issues/search?componentKeys=fr.inria.gforge.spoon:spoon-core&types=BUG . Is there any way to get the full JSON response?
You get only 100 items because 100 is the default page size for web API pagination.
In your example, when using:
https://sonarqube.ow2.org/api/issues/search?componentKeys=fr.inria.gforge.spoon:spoon-core&types=BUG&ps=200
you'll get all 189 bugs. The maximum value for the page size (ps) is 500.
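If a project ever has more than 500 issues, ps alone won't be enough; you can combine it with the p (page) parameter to walk through the pages (the looping snippet further down does exactly this), e.g.:
https://sonarqube.ow2.org/api/issues/search?componentKeys=fr.inria.gforge.spoon:spoon-core&types=BUG&ps=500&p=2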
If you want to know the total issue count, you'll need to check the paging section of the response:
{
  "paging": {
    "pageIndex": 1,
    "pageSize": 100,
    "total": 189   <<--- total number of issues
  },
  "issues": [
    {
      ...
A Groovy snippet that uses total to loop through the pages and fetch all issues:
import groovy.json.*

// Minimal REST helper: sends a request with basic auth and returns the parsed JSON
def sonarRest(url, method) {
    def raw = '...:'  // user:password credentials (elided)
    def bauth = 'Basic ' + javax.xml.bind.DatatypeConverter.printBase64Binary(raw.getBytes())
    def conn = new URL(url).openConnection() as HttpURLConnection
    conn.setRequestMethod(method)
    conn.setRequestProperty('Authorization', bauth)
    conn.connect()
    new JsonSlurper().parse(conn.content)
}

// The first request only needs the paging metadata, so ps=1 keeps it cheap
def issues = sonarRest('https://sonarhost/api/issues/search?severities=INFO&ps=1', 'GET')
// Round up, otherwise a final partial page would be skipped
def pages = Math.ceil(issues.paging.total / 100d) as int
def counter = 1
while (counter <= pages) {
    issues = sonarRest("https://sonarhost/api/issues/search?severities=INFO&ps=100&p=$counter", 'GET')
    println issues
    counter++
}
Sorry, I don't even need it: I can add the rule to the parameter, since I'm only using one rule at a time.
I needed to construct an HTTP request body from a CSV file.
There are 3 columns (userID, SessionId, groupId) and 1000 userIDs in the CSV file.
The API I was testing has a bulk-loading requirement: each bulk request contains 200 userIDs.
Below is a sample of the payload:
{
  "data": [
    {
      "username": "<userID>",
      "remoteMeetingGroupName": "<groupID>"
    },
    {
      "username": "<userID>",
      "remoteMeetingGroupName": "<groupID>"
    },
    ...
  ]
}
So based on the requirement of 200 users per bulk, I will need to create 5 concurrent users, each of which sends 200 userIDs from the CSV file. Is the ForEach Controller able to do this? Could anyone give me some hints? Thanks.
The request body can be constructed from the CSV file using a JSR223 PreProcessor, something like:
// 0-based iteration index of the Thread Group loop
def start = vars.get('__jm__Thread Group__idx') as int
// each iteration takes the next chunk of 200 CSV rows
def firstRow = start * 200
def lastRow = firstRow + 199
def lines = new File('test.csv').readLines()  // read the CSV once, not per row
def data = []
firstRow.upto(lastRow) { index ->
    def columns = lines.get(index).split(',')  // columns: userID, SessionId, groupId
    // one object per user, holding both the username and the group
    data.add(['username': columns[0], 'remoteMeetingGroupName': columns[2]])
}
def payload = ['data': data]
vars.put('payload', new groovy.json.JsonBuilder(payload).toPrettyString())
Refer to the generated request body as ${payload} where required.
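For example, in the HTTP Request sampler that sends the bulk, the Body Data would then contain nothing but the variable reference:
${payload}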
More information:
Apache Groovy - Parsing and producing JSON
Apache Groovy - Why and How You Should Use It
I am building a chat app using Parse Server. Everything is great, but I'm trying to list just the last message for every remote peer, and I didn't find anything in the query documentation about how to get just one message per remote peer. How can I do this?
Query limitation with Parse SDK
To limit the number of objects that you get from a query, you use limit.
Here is a little example:
const Messages = Parse.Object.extend("Messages");
const query = new Parse.Query(Messages);
query.descending("createdAt");
query.limit(1); // Get only one result
Get the first object of a query with Parse SDK
In your case, as you really want only one result, you can use Query.first.
Like Query.find, the method Query.first runs a query but returns only the first result.
Here is an example:
const Messages = Parse.Object.extend("Messages");
const query = new Parse.Query(Messages);
query.descending("createdAt");
const message = await query.first();
I hope my answer helps you 😊
If you want to do this using a single query, you will have to use aggregate:
https://docs.parseplatform.org/js/guide/#aggregate
Try something like this:
var query = new Parse.Query("Messages");
var pipeline = [
    { match: { local: '_User$' + userID } },
    { sort: { createdAt: 1 } },
    // Parse uses objectId as the grouping key (it maps to MongoDB's _id)
    { group: { objectId: '$remote', lastMessage: { $last: '$body' } } },
];
query.aggregate(pipeline)
    .then(function(results) {
        // results contains the last message for each remote peer
    })
    .catch(function(error) {
        // There was an error.
    });
I am using Elasticsearch.NET (5.6) on an ASP.NET API (.NET 4.6) on Windows, and am trying to publish to Elasticsearch hosted on AWS (I have tried both 5.1.1 and 6; both show the same behaviour).
I have the following code, which bulk indexes the documents into Elasticsearch. Imagine calling the code block below many times:
var node = new System.Uri(restEndPoint);
var settings = new ConnectionSettings(node);
var lowlevelClient = new ElasticLowLevelClient(settings);
var index = indexStart + indexSuffix;
var items = new List<object>(list.Count() * 2);
foreach (var conn in list)
{
    items.Add(new { index = new { _index = index, _type = "doc", _id = getId(conn) } });
    items.Add(conn);
}
try
{
    var indexResponse = lowlevelClient.Bulk<Stream>(items);
    if (indexResponse.HttpStatusCode != 200)
    {
        throw new Exception(indexResponse.DebugInformation);
    }
    return indexResponse.HttpStatusCode;
}
catch (Exception ex)
{
    ExceptionManager.LogException(ex, "Cannot publish to ES");
    return null;
}
It runs fine and publishes documents to Elasticsearch, but it only works 80 times; after 80 calls, it always gets this exception:
# OriginalException: System.Net.WebException: The operation has timed out
at System.Net.HttpWebRequest.GetRequestStream(TransportContext& context)
at System.Net.HttpWebRequest.GetRequestStream()
at Elasticsearch.Net.HttpConnection.Request[TReturn](RequestData requestData) in C:\Users\russ\source\elasticsearch-net-5.x\src\Elasticsearch.Net\Connection\HttpConnection.cs:line 148
The most interesting part: I tried changing the bulk size to 200 or 30, and the totals turned out to be 16,000 and 2,400 documents respectively, meaning both end after 80 calls. (Each document is very similar in size.)
Any ideas? Thanks
There is a connection limit (also refer to the comments from @RussCam under the question). So the real issue is that the Stream responses are holding the connections open.
The fix is therefore either to dispose the response stream via indexResponse.Body.Dispose (I haven't tried this one), or to use VoidResponse: reportClient.BulkAsync<VoidResponse>(items);, which does not request the response stream at all. I've tried the second and it works.
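A minimal sketch of the second option, applied to the low-level client from the question (Bulk<VoidResponse> being the synchronous counterpart of the BulkAsync call mentioned above):
try
{
    // VoidResponse tells the client to discard the response body,
    // so no Stream is left open holding on to the connection
    var indexResponse = lowlevelClient.Bulk<VoidResponse>(items);
    if (indexResponse.HttpStatusCode != 200)
    {
        throw new Exception(indexResponse.DebugInformation);
    }
    return indexResponse.HttpStatusCode;
}
catch (Exception ex)
{
    ExceptionManager.LogException(ex, "Cannot publish to ES");
    return null;
}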
I have annotated my method like this:
@ApiOperation( value = "Get time spent on category", response = CategoryBean.class, responseContainer = "List", notes = "API to get the time spent on all tasks based on category" )
@ApiImplicitParams( {
    @ApiImplicitParam( name = "x-auth-token", value = "", dataType = "string", required = true, paramType = "header" ) } )
@ApiResponses( value = {
    @ApiResponse( code = 200, message = "Success", response = CategoryBean.class, responseContainer = "List" ) } )
@RequestMapping( value = "/getTimeSpentOnCategory", method = RequestMethod.POST )
public ResponseEntity<?> getTimeSpentOnCategory( @RequestBody DashboardTaskRequestBean bean )
{ /** some operation **/ }
But in my Swagger UI, I'm not able to see the status code 200 and its message. Please explain why.
The following picture is a snapshot of the UI:
This is a known issue and looks like it is fixed with version 3.0.
As I see it, you are able to see the response structure at the top, but it is not visible in the table at the bottom of the screenshot.
This is also raised here and is fixed with version 3.0 :
https://github.com/swagger-api/swagger-ui/issues/1505
https://github.com/swagger-api/swagger-ui/issues/1297
I'm trying to insert an alert into Elasticsearch from Bosun, but I don't know how to fill the variable $timestamp (have a look at my example) with the present time. Can I use functions in bosun.conf? I'd like something like now().
Can anybody help me, please?
This is an extract of an example configuration:
macro m1
{
    $timestamp = ???
}
notification http_crit
{
    macro = m1
    post = http://xxxxxxx:9200/alerts/http/
    body = {"@timestamp":$timestamp,"level":"critical","alert_name":"my_alert"}
    next = http_crit
    timeout = 1m
}
alert http
{
    template = elastic
    $testHTTP = lscount("logstash", "", "_type:stat_http,http_response:200", "1m", "5m", "")
    $testAvgHTTP = avg($testHTTP)
    crit = $testAvgHTTP < 100
    critNotification = http_crit
}
We use .State.Touched.Format, which was recently renamed to .Last.Time.Format in the master branch. The format string is a Go time format, and you would have to get it to print the format that Elasticsearch is expecting.
template elastic {
subject = `Time: {{.State.Touched.Format "15:04:05UTC"}}`
}
// Changed on 2016 Feb 01 to
template elastic {
subject = `Time: {{.Last.Time.Format "15:04:05UTC"}}`
}
Which when rendered would look like:
Time: 01:30:13UTC