Above is the data table in which I am pulling data from the database and displaying it in different columns, as shown in the screenshot.
I want to show a popup at every failed record which will show the reason the record failed, but I am confused about how to do that. Since all the data is dynamic, how do I select that data? For example, 121, 2 and 72 are failed counts, so I need to show a popup after clicking on 121, 2 or 72. As we can see, the failed column also contains 0; if the user clicks on a zero, it should not open the popup.
The list is dynamic and I can get more than 100 failed records, so I need to show the popup at every failed record.
The popup would have a table containing four columns, for example a, b, c, d.
1. How do I identify `121`, `2`, `72` or any other failed record?
2. How do I show a pop-up after clicking on failed records?
This is the code for the DataTable:
var oTable = $('#example').dataTable({
    "processing": true,
    "serverSide": true,
    "autoWidth": true,
    "order": [[0, "desc"]],
    "ajax": {
        "url": '//myurl',
        "dataSrc": ""
    },
    "columns": [
        { "data": "recordcount" },
        { "data": "record failed" }
    ]
});
Since we are getting the data in compact form, how do I know which records are failed?
I am a beginner, trying to learn new things, and I don't have any idea how to achieve this. Please help me.
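For what it's worth, here is a minimal sketch of one way this could work, assuming the failed count is rendered in the second column and that showFailureReasons is a hypothetical function you would write to fetch the failure reasons and render them in the four-column popup table:
// Delegated handler: also fires for rows DataTables redraws dynamically.
$('#example tbody').on('click', 'td:nth-child(2)', function() {
    var failedCount = parseInt($(this).text(), 10);
    // Only open the popup when there is at least one failed record.
    if (failedCount > 0) {
        var rowData = oTable.api().row($(this).closest('tr')).data();
        showFailureReasons(rowData); // hypothetical popup renderer
    }
});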
I have a JSON Array with the following structure:
{
"InvoiceNumber": "11111",
"AccountName": "Hospital",
"items": {
"item": [
{
"Quantity": "48.000000",
"Rate": "0.330667",
"Total": "15.87"
},
{
"Quantity": "1.000000",
"Rate": "25.000000",
"Total": "25.00"
}
]
}
}
I would like to use Data Operation "Select" to select invoice numbers with invoice details.
Select:
From body('Parse_Json')?['invoices']?['invoice']
Key: Invoice Number; Map: item()['InvoiceNumber'] - this line works.
Key: Rate; Map: item()['InvoiceNumber']?['items']?['item']?['Rate'] - this line doesn't work.
The error message says "Array elements can only be selected using an integer index". Is it possible to select the Invoice Number AND all the invoice details such as rate etc.? Thank you in advance! Also, I am trying not to use "Apply to each".
You have to use a loop in some form; the data resides in an array. The only way you can avoid looping is if you know that the number of items in the array will always be of a certain length. Without looping, you can't be sure that you've processed each item.
To answer your question though, if you want to select a specific item in an array, as the error describes, you need to provide the index.
This is the sort of expression you need, using your JSON. In this one, I am selecting the item at position 1 (arrays start at 0):
body('Parse_JSON')?['items']?['item'][1]['Rate']
You can always extract just the items object individually, but you'll still need to loop to process each item if the length is not always static (two items, for example). To extract the items, you select the object from the dynamic content.
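As an illustration only, here is a sketch of what the underlying Select action could look like in the workflow definition, selecting from the inner item array while pulling the invoice number from the outer object (the action shape and property names here are assumptions based on your JSON, not tested against your flow):
"Select": {
    "type": "Select",
    "inputs": {
        "from": "@body('Parse_JSON')?['items']?['item']",
        "select": {
            "InvoiceNumber": "@body('Parse_JSON')?['InvoiceNumber']",
            "Quantity": "@item()?['Quantity']",
            "Rate": "@item()?['Rate']",
            "Total": "@item()?['Total']"
        }
    }
}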
I have a big table (15000 x 2000 entries). In this table, I need to count rows with certain properties, like "all rows that have a 1 or 2 in column 5 and a 0 in column 6". I will call this type of operation a count operation. For my use case, the count operations need to be very fast, as I am executing several hundred of them.
I tried to do this with Elasticsearch, but the performance seems to be very bad (around 10 seconds for 180 count operations). I was wondering if I am building my queries the wrong way, or if Elasticsearch is maybe the wrong technology for this?
My queries are all of the same form. I create them in Java, so it's kind of hard to post here what they look like, but I'll do my best to explain.
I build each single count operation as a BoolQuery. For the example above it would be a query that looks similar to this (don't blame me if it's wrong; I cannot copy the exact query, as it is built in Java):
"query": {
"bool" : {
"must" : [
"should" : [
{ "column 5" : "1" },
{ "column 5" : "2" }
],
"should" : [
{ "column 6" : "0" }
],
"minimum_should_match" : 1
],
"boost" : 1.0
}
}
The many bool queries of this form are then grouped into a MultiSearchRequest. I use the option "fetchSource = false" to prevent Elasticsearch from loading the entities themselves.
Please tell me if you need any further information, or if it is unclear what I am trying to do!
I just fixed the problem myself. For everyone with a similar question, here is how:
I changed the SearchSourceBuilder so that it now uses a ValueCountAggregator. This counts the values and allows me to set SearchSourceBuilder.size() to 0. This way I get rid of the hits themselves and retrieve only the aggregation values.
Requests that took 4 seconds before now execute in less than 100 ms.
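For reference, a rough sketch of what such a count request could look like in plain query DSL; the aggregation name and the choice of "column 6" as the counted field are illustrative assumptions:
{
    "size": 0,
    "query": {
        "bool": {
            "must": [
                {
                    "bool": {
                        "should": [
                            { "term": { "column 5": "1" } },
                            { "term": { "column 5": "2" } }
                        ],
                        "minimum_should_match": 1
                    }
                },
                { "term": { "column 6": "0" } }
            ]
        }
    },
    "aggs": {
        "row_count": {
            "value_count": { "field": "column 6" }
        }
    }
}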
The feature I am trying to build is a metric in Kibana that displays the number of "unvalidated" users.
A log is sent when a user registers, then another log when the user is validated.
So the count I want is the difference between the number of registered users and the number of validated users.
In Kibana I cannot do such a math operation, so I found a workaround:
I added a "scripted field" named "unvalidated" which is equal to 1 when a user registers and -1 when a user validates his account.
The sum of the "unvalidated" field should then be the number of unvalidated users.
This is the script I defined in my scripted field:
doc['ctxt_code'].value == 1 ? 1 : doc['ctxt_code'].value == 2 ? -1 : 0
with:
ctxt_code 1 as the registration log
ctxt_code 2 as the validation log
This setup works well when all my logs have a "ctxt_code", but when a log without this field is pushed, Kibana throws the following error:
Field [ctxt_code] used in expression does not exist in mappings
I can't understand this error, because Kibana says:
If a field is sparse (only some documents contain a value), documents missing the field will have a value of 0
which is the case here.
Does anyone have a clue?
It's OK to have logs without the ctxt_code field... but you have to have a mapping for this field in your indices. I see you're querying multiple indices with logstash-*, so you are probably hitting one that does not have it.
You can include a mapping for your field in all indices. Just go into Sense and use this:
PUT logstash-*/_mappings/[your_mapping_name]
{
"properties": {
"ctxt_code": {
"type": "short", // or any other numeric type, including dates
"index": "not_analyzed" // Only works for non-analyzed fields.
}
}
}
If you prefer, you can do it from the command line: curl -XPUT 'http://[elastic_server]/logstash-*/_mappings/[your_mapping_name]' -d '{ ... same JSON ... }'
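Spelled out with the same mapping body as above (the bracketed parts remain placeholders; the comments are dropped because plain JSON does not allow them):
curl -XPUT 'http://[elastic_server]/logstash-*/_mappings/[your_mapping_name]' -d '{
    "properties": {
        "ctxt_code": {
            "type": "short",
            "index": "not_analyzed"
        }
    }
}'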
So I have a MongoDB instance where I am trying to update data in one collection with data from another collection. The two collections are participants with about 180k documents and questions with about 95k documents.
Documents in participants typically look something like this:
{
"_id" : ObjectId("52f90b8bbab16dd8594b82b4"),
"answers" : [
{
"_id" : ObjectId("52f90b8bbab16dd8594b82b9"),
"question_id" : 2081,
"sub_id" : null,
"values" : [
"Yes"
]
},
{
"_id" : ObjectId("52f90b8bbab16dd8594b82b8"),
"question_id" : 2082,
"sub_id" : 123,
"values" : [
"Would prefer to go alone"
]
},
{
"_id" : ObjectId("52f90b8bbab16dd8594b82b7"),
"question_id" : 2082,
"sub_id" : 456,
"values" : [
"Yes"
]
}
],
"created" : ISODate("2012-03-01T17:40:21Z"),
"email" : "anonymous",
"id" : 65,
"survey" : ObjectId("52f41d579af1ff4221399a7b"),
"survey_id" : 374
}
I am using the query below to perform the update:
db.participants.ensureIndex({"answers.question_id": 1, "answers.sub_id": 1});
print("created index for answer arrays!")
db.questions.find().forEach(function(doc){
db.participants.update(
{
"answers.question_id": doc.id,
"answers.sub_id": doc.sub_id
},
{
$set:
{
"answers.$.question": doc._id
}
},
false,
true
);
});
db.participants.dropIndex({"answers.question_id": 1, "answers.sub_id": 1});
But this takes about 20 minutes to run. I was hoping that adding the index would help with the performance, but it is still pretty slow. Is this index set up correctly, considering that I am indexing fields in an array of objects? Can anyone see anything that I am doing that would cause the slowness? Any suggestions on where to start looking to improve the performance of this query?
I think you need to consider what you are actually doing here in order to understand why the index is not helping, and indeed why this operation takes so long.
The first part of the answer is explained by what you are doing here:
db.questions.find()
That part alone basically says that you are asking to retrieve every document in your questions collection. And that is exactly what you want, since you are updating that content into your participants collection, particularly the document _id for the "question". But by definition of getting all documents, no index will be used.
So what you are doing is looping over every document in questions, then asking your update operation to match a participants record with data from the "question". That means you are pulling "over the wire" all of your 95K documents and sending back "over the wire" your update operation, 95K times. This is not happening on the server; there is network traffic between your application and your MongoDB server.
The index itself is not going to do much other than speed up the search for each participants record, which is better than a collection scan, and you should be getting the match. But that is not the part taking the time; it's fetching the questions that is the largest issue.
So if it's possible to run your update process on a machine that is as close as possible, in networking terms, to the MongoDB server, then that is going to be your best performance improvement. You could also wind back your Write Concern if you want to be a little daring and/or can live with checking the integrity in another operation; that will reduce your network traffic and the waiting for a response to each update (which is currently happening) if you put it in "fire and forget" mode.
Also see the guide if you are not sure of the concepts:
http://docs.mongodb.org/manual/core/write-concern/
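As a further, untested sketch: if your deployment supports the bulk operations API (MongoDB 2.6+), you could also batch the updates so they are sent in groups rather than one round trip per question; the batch size of 1000 here is an arbitrary choice:
var bulk = db.participants.initializeUnorderedBulkOp();
var count = 0;
db.questions.find({}, { _id: 1, id: 1, sub_id: 1 }).forEach(function(doc) {
    bulk.find({
        "answers.question_id": doc.id,
        "answers.sub_id": doc.sub_id
    }).update({ $set: { "answers.$.question": doc._id } });
    // Send the queued updates every 1000 operations.
    if (++count % 1000 === 0) {
        bulk.execute();
        bulk = db.participants.initializeUnorderedBulkOp();
    }
});
if (count % 1000 !== 0) bulk.execute(); // flush the remainder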
In case anyone is interested, I was able to take the run time of this update query from 20 minutes down to about a minute and a half by using projection when selecting the questions documents. Since I am only using the _id, id and sub_id fields, I was able to do the following:
db.questions.find({},{_id: 1, id: 1, sub_id: 1}).forEach(function(doc){
....
This drastically improved performance. Hope this helps someone!
I need to grab the top 3 results for each of 8 users. Currently I am looping through each user and making 8 calls to the db. Is there a way to structure the query to pull the same 8x3 dataset in a single db call?
selected_users = users.sample(8)
cur = 0
while cur <= selected_users.count - 1
  cursor = status_store.find({'user' => selected_users[cur]}, {:fields => params}).sort('score', -1).limit(3)
  # do something with cursor
  cur += 1
end
The collection I am pulling from looks like the one below. Each user can have an unbounded number of tweets, so I have not embedded them within a user document.
{
"_id" : ObjectId("51e92cc8e1ce7219e40003eb"),
"id_str" : "57915476419948544",
"score" : 904,
"text" : "Yesterday we had a bald eagle on the show. Oddly enough, he was in the country illegally.",
"timestamp" : "19/07/2013 08:10",
"user" : {
"id_str" : "115485051",
"name" : "Conan O'Brien",
"screen_name" : "ConanOBrien",
"description" : "The voice of the people. Sorry, people.",
}
}
Thanks in advance.
Yes, you can do this using the aggregation framework.
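As a sketch only (assuming MongoDB 3.2+ for $slice inside aggregation, and that selectedIds holds the id_str values of the 8 sampled users):
db.status_store.aggregate([
    // Keep only tweets from the sampled users.
    { $match: { "user.id_str": { $in: selectedIds } } },
    // Highest scores first.
    { $sort: { score: -1 } },
    // Collect each user's tweets in score order.
    { $group: {
        _id: "$user.id_str",
        tweets: { $push: { text: "$text", score: "$score" } }
    } },
    // Keep the top 3 per user.
    { $project: { top3: { $slice: ["$tweets", 3] } } }
])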
Another way would be to keep track of the top 3 scores in the user documents themselves. Whether this is faster depends on how often you write scores vs. read top scores by user.