I want to run a simple SQL GROUP BY query in the Kibana 4 "Discover" page.
Each record in my Elasticsearch index represents a log entry and has 3 fields: process_id (not a unique value), log_time, log_message.
example:
process_id   log_time           log_message
----------   ----------------   -----------
1            2014/12/11 01:00   msg1
1            2014/12/11 01:10   msg2
1            2014/12/11 01:20   msg3
2            2014/12/11 11:00   msg4
2            2014/12/11 11:10   msg5
I want to generate a table in Kibana that looks like:
process_id   first log_time     last log_time
----------   ----------------   ----------------
1            2014/12/11 01:00   2014/12/11 01:20
2            2014/12/11 11:00   2014/12/11 11:10
In SQL the query is simple:
select process_id, max(log_time), min(log_time)
from logs_table
group by process_id
How can I run this query in Kibana? Is it possible to run it in the "Discover" page, or should I create a panel (Visualize page)?
Thanks.
I'm on Kibana 4.3, but this is possible on any version of Kibana. You need to create a Visualization panel of type Data Table.
Before that, you need to make sure that you've created an index pattern for your index, with the log_time date field as the timestamp field.
Then you can create your Data Table visualization with a split-rows Terms aggregation on the process_id field and two metric aggregations (one Min and one Max) on the log_time date field.
Your results will then come out grouped by process_id with the first and last log_time, as expected.
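For reference, the request that such a Data Table visualization sends to Elasticsearch is roughly the following aggregation (a sketch; the aggregation names by_process, first_log_time and last_log_time are illustrative):
{
  "size": 0,
  "aggs": {
    "by_process": {
      "terms": { "field": "process_id" },
      "aggs": {
        "first_log_time": { "min": { "field": "log_time" } },
        "last_log_time": { "max": { "field": "log_time" } }
      }
    }
  }
}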
Related
I have a table in Oracle that I'm trying to write a query for, but I'm having trouble writing it correctly. The data in the table looks like this:
Name     ID   DATE
------   --   ---------
Shane    1    01JAN2023
Angie    2    02JAN2023
Shane    1    02JAN2023
Austin   3    03JAN2023
Shane    1    03JAN2023
Angie    2    03JAN2023
Tony     4    05JAN2023
What I'm trying to come up with is a way to iterate over each day, compare that day's records with all the records that came before them, and pull back only the first instance of each record based on the ID and DATE. The expected output would be:
Name     ID   DATE
------   --   ---------
Shane    1    01JAN2023
Angie    2    02JAN2023
Austin   3    03JAN2023
Tony     4    05JAN2023
Can anyone tell me what the query should be to accomplish this?
Thank you in advance.
You'll need to convert your date field to a real date so it orders correctly (note that DATE and TABLE are reserved words in Oracle, so the column and table are renamed here):
SELECT name, id, MIN(TO_DATE(date_, 'DDMONYYYY')) AS date_
FROM your_table
GROUP BY name, id;
Isn't that just
select name, id, min(date_column)
from your_table
group by name, id;
If you don't want to use aggregation, you can use FETCH NEXT ... ROWS WITH TIES: ROW_NUMBER() assigns 1 to the earliest row within each (Name, Id) group, and fetching one row WITH TIES then returns every row that shares that lowest ordering value:
SELECT tab.*
FROM tab
ORDER BY ROW_NUMBER() OVER(PARTITION BY Name, Id ORDER BY DATE_)
FETCH NEXT 1 ROWS WITH TIES
Output:
NAME     ID   DATE_
------   --   ---------
Angie    2    02-JAN-23
Austin   3    03-JAN-23
Shane    1    01-JAN-23
Tony     4    05-JAN-23
I have 2 tables in MySQL, like this:
Table DEPARTMENT
Id   Name
--   ------------
1    Department 1
2    Department 2
Table STAFF
Id   Department_Id   Name
--   -------------   -------
1    1               Staff 1
2    1               Staff 2
3    2               Staff 3
4    1               Staff 4
The STAFF table has about 10 million records.
All of the STAFF records have been pushed to Elasticsearch by Logstash. Each document in Elasticsearch now has only 3 fields: Staff_Id, Staff_Name and Department_Name. Something like this:
{
  "Staff_Id": 1,
  "Staff_Name": "Staff 1",
  "Department_Name": "Department 1"
}
Because of practical needs, I need to add one more field called Department_Id to each document. Note that this field (Department_Id) does not exist on existing documents.
I am a newbie to both Logstash and Elasticsearch. How can I do this with Logstash? Interpreted as SQL, it would be:
SELECT * FROM DEPARTMENT;
UPDATE STAFF SET Department_Id = XXX WHERE Department_Name = YYY
Note that the DEPARTMENT table has about 100,000 records and Elasticsearch has about 10 million documents.
Can you take a look?
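One way this is often handled (a sketch only; the dictionary file, its path, and the fallback value below are all assumptions, not something from your setup) is the Logstash translate filter, which enriches each event from a local dictionary mapping Department_Name to Department_Id:
filter {
  translate {
    # Assumption: department_map.yml holds entries like "Department 1": "1"
    field           => "Department_Name"
    destination     => "Department_Id"
    dictionary_path => "/etc/logstash/department_map.yml"
    fallback        => "0"   # used when a department name has no mapping
  }
}
Note that a filter like this only enriches events as they pass through Logstash; the 10 million documents already in Elasticsearch would have to be reindexed or updated separately (e.g. via the _update_by_query API) to pick up the new field.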
My table has one date partition column. For that partition column there are 2 folders in HDFS, e.g. 01-01-2018 and 02-01-2018. In 02-01-2018 there is no data at all.
So when I query like below:
select date, count(*) from table1 group by date
It will only return output for the partitions that have data, like below:
01-01-2018 168
I need the output to be like below:
01-01-2018 168
02-01-2018 0
How can I achieve that output? Thank you.
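One workaround is to join the partition values against the counts, so empty partitions survive with a zero. This is a sketch under an assumption: partition_list is a hypothetical helper table holding one row per partition value (e.g. populated from the output of SHOW PARTITIONS table1):
SELECT p.`date`, COALESCE(c.cnt, 0) AS cnt
FROM partition_list p
LEFT JOIN (
  SELECT `date`, COUNT(*) AS cnt
  FROM table1
  GROUP BY `date`
) c ON p.`date` = c.`date`;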
I have something like this:
id   day   description
--   ---   ---------------
1    1     hi
1    1     today
1    1     is a beautifull
1    1     day
1    2     exemplo
1    2     for
1    2     this case
I need a function that, for each day, concatenates the description column and returns the result like this:
id   day   description
--   ---   ----------------------------
1    1     hi today is a beautifull day
1    2     exemplo for this case
Any idea how I can do this using a loop in a function in Oracle?
You need a way of determining the order in which the values should be aggregated. The snippet below relies on the implicit order in which Oracle reads the rows from the data files; if you have row movement enabled, you may get inconsistent results, as the rows can be read in different orders when they are relocated in the underlying data files.
SELECT id, day,
       LISTAGG(description, ' ') WITHIN GROUP (ORDER BY ROWNUM) AS description
FROM your_table
GROUP BY id, day;
It would be better to have another column that stores the order within each day, as sketched below.
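For example (seq here is a hypothetical column recording each row's position within its day):
SELECT id, day,
       LISTAGG(description, ' ') WITHIN GROUP (ORDER BY seq) AS description
FROM your_table
GROUP BY id, day;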
I have the following data table.
ID   salary   occupation
--   ------   ----------
1    5000     Engineer
2    6000     Doctor
3    8000     Pilot
4    1000     Army
1    3000     Engineer
2    4000     Teacher
3    2000     Engineer
1    1000     Teacher
3    1000     Engineer
1    5000     Doctor
Now I want to add another column, flag, to this table so that it looks like this:
ID   salary   occupation   Flag
--   ------   ----------   ----
1    5000     Engineer     0
2    6000     Doctor       0
3    8000     Pilot        0
4    1000     Army         0
1    3000     Engineer     1
2    4000     Teacher      1
3    2000     Engineer     1
1    1000     Teacher      2
3    1000     Engineer     2
1    5000     Doctor       3
Now how can I update my original table to the above format using Hive?
Kindly help me.
Provided that you have data in your files for the additional column, you can use the ADD COLUMNS clause of ALTER TABLE.
In your example, do something like this:
ALTER TABLE test ADD COLUMNS (flag TINYINT);
Or you can try REPLACE COLUMNS as well:
ALTER TABLE test REPLACE COLUMNS (id INT, salary INT, occupation STRING, flag TINYINT);
You might need to load (overwrite) your dataset again, though (just a speculation!).
You can definitely add new columns to a Hive table using the ALTER command, as mentioned above:
hive> ALTER TABLE test ADD COLUMNS (flag TINYINT);
In Hive 0.13 and earlier releases the column will have NULL values, but in Hive 0.14.0 and later releases you can update the column values using the UPDATE command.
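A minimal sketch of such an update, assuming the table was created as a transactional table (Hive ACID requires it to be bucketed, stored as ORC, and declared transactional):
-- Assumption: test has TBLPROPERTIES ('transactional'='true')
UPDATE test SET flag = 0 WHERE flag IS NULL;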
Another way is, after adding the column using the ALTER command, to overwrite the existing data with new data that includes the Flag column:
hive> LOAD DATA LOCAL INPATH 'flagfile.txt' OVERWRITE INTO TABLE <tablename>;