How to convert a Date into Epoch time in iReport and JasperServer

I want to create a report for login history between two dates.
The login history table has the following fields:
login_date (data type is integer)
user_id (data type is integer)
Sample records:
| login_date | user_id |
+------------+---------+
| 1299954600 |     105 |
| 1299954600 |     105 |
| 1299954600 |     105 |
| 1299954600 |     105 |
| 1301164200 |     114 |
| 1301164200 |     106 |
| 1301769000 |     110 |
| 1301164200 |     106 |
| 1301769000 |     106 |
| 1301769000 |     106 |
| 1301769000 |     106 |
| 1302978600 |     102 |
| 1302373800 |     112 |
| 1302978600 |     111 |
| 1302978600 |     111 |
| 1302978601 |     111 |
Note: I have stored the login date in Epoch time.
I want to create the report for the following query.
SELECT
user_id,
count(user_id)
FROM
rep_time_tracking
WHERE
login_date between 1301596200 AND 1303496940
GROUP BY 1;
I have added two runtime parameters, for example $P{FROM} and $P{TO}, with a Date/Time value class.
Now, when the user selects a date and time in the input controls, I want to convert the value into Epoch time.
How can I achieve this in iReport and JasperServer?

It's pretty simple: call getTime() on the Date parameter. It returns the number of milliseconds since the Unix epoch (in UTC), so divide by 1000 to match the seconds stored in login_date, then use the result as a parameter in the query.
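To illustrate the conversion itself (in Java the parameter expression would use the getTime()-divided-by-1000 idea above), here is a minimal Python sketch; the function name is my own, and the two sample timestamps are the bounds from the question's WHERE clause:

```python
from datetime import datetime, timezone

def to_epoch_seconds(dt):
    """Whole epoch seconds for a date/time interpreted as UTC
    (the equivalent of Java's date.getTime() / 1000)."""
    return int(dt.replace(tzinfo=timezone.utc).timestamp())

# Lower and upper bounds from the question's query.
frm = to_epoch_seconds(datetime(2011, 3, 31, 18, 30))   # 1301596200
to = to_epoch_seconds(datetime(2011, 4, 22, 18, 29))    # 1303496940
```

With both bounds in epoch seconds, they slot directly into `login_date BETWEEN ? AND ?`.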

Related

Converting Column headings into Row data

I have a table in an Access database with columns that I would like to convert into row data.
I found some code here:
converting database columns into associated row data
I am new to VBA and don't know how to use this code.
I have attached some sample data
The table is currently set up with 14 columns.
+-----+--------+------------+------------+------------+-------------+
| ID  | Name   | 2019-10-31 | 2019-11-30 | 2019-12-31 | ... etc ... |
+-----+--------+------------+------------+------------+-------------+
| 555 | Fred   |          1 |          4 |         12 |             |
| 556 | Barney |          5 |         33 |         24 |             |
| 557 | Betty  |          4 |         11 |         76 |             |
+-----+--------+------------+------------+------------+-------------+
I would like the output to be
+-----+------------+------+
| ID  | Date       | HOLB |
+-----+------------+------+
| 555 | 2019-10-31 |    1 |
| 555 | 2019-11-30 |    4 |
| 555 | 2019-12-31 |   12 |
| 556 | 2019-10-31 |    5 |
| 556 | 2019-11-30 |   33 |
| 556 | 2019-12-31 |   24 |
+-----+------------+------+
How can I turn this code into a module and call the module from a query?
Or is there another approach you would suggest?
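The transform being asked for is a plain unpivot: every date column becomes its own row. A Python sketch of the logic (not the VBA the asker needs; the `Date`/`HOLB` field names come from the desired output above):

```python
def unpivot(records, key_fields=("ID", "Name")):
    """Turn each non-key column into its own (ID, Date, HOLB) row."""
    out = []
    for rec in records:
        for col, val in rec.items():
            if col not in key_fields:
                out.append({"ID": rec["ID"], "Date": col, "HOLB": val})
    return out

# Wide rows as in the sample table above.
wide = [
    {"ID": 555, "Name": "Fred",   "2019-10-31": 1, "2019-11-30": 4,  "2019-12-31": 12},
    {"ID": 556, "Name": "Barney", "2019-10-31": 5, "2019-11-30": 33, "2019-12-31": 24},
]
long_rows = unpivot(wide)
```

The same loop shape carries over to VBA: iterate the recordset, and inside it iterate the date fields, appending one output row per field.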

sqlite: wide v. long performance

I'm considering whether I should format a table in my sqlite database in "wide" or "long" format. Examples of these formats are included at the end of the question.
I anticipate that the majority of my requests will be of the form:
SELECT * FROM table
WHERE
series in (series1, series100);
or the analog for selecting by columns in wide format.
I also anticipate that there will be a large number of columns, even enough to need to increase the column limit.
Are there any general guidelines for selecting a table layout that will optimize query performance for this sort of case?
(Examples of each)
"Wide" format:
| date | series1 | series2 | ... | seriesN |
| ---------- | ------- | ------- | ---- | ------- |
| "1/1/1900" | 15 | 24 | 43 | 23 |
| "1/2/1900" | 15 | null | null | 23 |
| ... | 15 | null | null | 23 |
| "1/2/2019" | 12 | 12 | 4 | null |
"Long" format:
| date | series | value |
| ---------- | ------- | ----- |
| "1/1/1900" | series1 | 15 |
| "1/2/1900" | series1 | 15 |
| ... | series1 | 43 |
| "1/2/2019" | series1 | 12 |
| "1/1/1900" | series2 | 15 |
| "1/2/1900" | series2 | 15 |
| ... | series2 | 43 |
| "1/2/2019" | series2 | 12 |
| ... | ... | ... |
| "1/1/1900" | seriesN | 15 |
| "1/2/1900" | seriesN | 15 |
| ... | seriesN | 43 |
| "1/2/2019" | seriesN | 12 |
The "long" format is the preferred way to go here, for so many reasons. First, if you use the "wide" format and there is ever a need to add more series, then you would have to add new columns to the database table. While this is not too much of a hassle, in general once you put a schema into production, you want to avoid further schema changes.
Second, the "long" format makes reporting and querying much easier. For example, suppose you wanted to get a count of rows/data points for each series. Then you would only need something like:
SELECT series, COUNT(*) AS cnt
FROM yourTable
GROUP BY series;
To get this report with the "wide" format, you would need a lot more code, and it would be as verbose as your sample data above.
The thing to keep in mind here is that SQL databases are built to operate on sets of records (read: across rows). They can also process things column-wise, but they are not generally set up to do this.
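The count query above can be run as-is against the long layout; a minimal sqlite3 sketch (the table and column names follow the example, the rows are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE long_fmt (date TEXT, series TEXT, value REAL)")
con.executemany(
    "INSERT INTO long_fmt VALUES (?, ?, ?)",
    [
        ("1/1/1900", "series1", 15),
        ("1/2/1900", "series1", 15),
        ("1/1/1900", "series2", 24),
    ],
)
# One short GROUP BY replaces per-column counting in the wide layout.
counts = con.execute(
    "SELECT series, COUNT(*) AS cnt FROM long_fmt GROUP BY series ORDER BY series"
).fetchall()
```

In the wide layout the same report would need one `COUNT(seriesN)` expression per column, growing with every series added.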

Get duplicate rows based on one column using BIRT

I have one table in a BIRT report:
| Name | Amount |
| A | 200 |
| B | 100 |
| A | 150 |
| C | 80 |
| C | 100 |
I need to summarize this table into another table: if the name is the same, add the corresponding values.
The summarized table would be:
| A | 350 |
| B | 100 |
| C | 180 |
Here A = 200 + 150, B = 100, C = 80 + 100.
How can I produce this summarized table from another table in a BIRT report?
That is quite easy. Just add another table to your report and select the same data source as the first table (on the Binding tab).
Go to the Groups tab and add a group on your 'Name' column.
You'll see the table change: it adds a group header row and a group footer row, and the header also contains the element you grouped on (in this case Name).
Now right-click next to Name in the Amount column and select Insert -> Aggregation.
Select the SUM function; the expression should be Amount, and Aggregate On should be your newly created group.
Now you can see the results but it will be something like:
| A | 350 |
| A | 200 |
| A | 150 |
| B | 100 |
| B | 100 |
| C | 180 |
| C | 100 |
| C | 80 |
If you delete the detail row from the table, you'll have the result you're after.
For your information:
Have a play with this, it's a good exercise. Move the new aggregation to the group footer, add a top border to that cell, put a 'Total' label in front of it, and you'll have something like this:
| A | |
| A | 200 |
| A | 150 |
----------
| total | 350 |
| B | |
| B | 100 |
----------
| total | 100 |
| C | |
| C | 100 |
| C | 80 |
----------
| total | 180 |
Also, you don't have to bind to the data source; you can bind to your first table instead:
select the table, open the Binding tab, select 'Report item', and pick your first table from the dropdown.
This can create very complex situations, therefore I usually try to work from the original dataset.
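Outside of BIRT, the group-plus-SUM aggregation these steps set up amounts to the following (a plain Python sketch using the question's sample rows, not BIRT syntax):

```python
from collections import defaultdict

# (Name, Amount) rows from the question's first table.
rows = [("A", 200), ("B", 100), ("A", 150), ("C", 80), ("C", 100)]

# Group on Name and SUM the Amount column -- what the aggregation element does.
totals = defaultdict(int)
for name, amount in rows:
    totals[name] += amount
```

Each key of `totals` corresponds to one group header row, and each value to the SUM aggregation shown in that group.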

Compute value based on next row in BIRT

I am creating a BIRT report where each row is a receipt matched with a purchase order. There is usually more than one receipt per purchase order. My client wants the qty_remaining on the purchase order to show only on the last receipt for each purchase order. I am not able to alter the data before BIRT gets it. I see two possible solutions, but I am unable to find how to implement either; this question deals with the first.
If I can compare the purchase order number(po_number) with the next row, then I can set the current row's qty_remaining to 0 if the po_numbers match else show the actual qty_remaining. Is it possible to access the next row?
Edit
The desired look is similar to this:
| date | receipt_number | po_number | qty_remaining | qty_received |
|------|----------------|-----------|---------------|--------------|
| 4/9 | 723 | 6026 | 0 | 985 |
| 4/9 | 758 | 6026 | 2 | 1 |
| 4/20 | 790 | 7070 | 58 | 0 |
| 4/21 | 801 | 833 | 600 | 0 |
But I'm currently getting this:
| date | receipt_number | po_number | qty_remaining | qty_received |
|------|----------------|-----------|---------------|--------------|
| 4/9 | 723 | 6026 | 2 | 985 |
| 4/9 | 758 | 6026 | 2 | 1 |
| 4/20 | 790 | 7070 | 58 | 0 |
| 4/21 | 801 | 833 | 600 | 0 |
I think you're looking at this the wrong way. If you want behavior that resembles for loops, you should use grouping and aggregate functions. You can build quite complex things by using (or not using) the group headers and footers.
In your case I would group the receipts on po_number, order them by receipt_number, then add an aggregate function like MAX on receipt_number and name it 'last_receipt'. It should aggregate on the group, not the whole table; this 'total' is then available on every row within the group.
Then you can use the visibility setting to show qty_remaining only when row['receipt_number'] == row['last_receipt'].
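As a plain Python sketch of that approach (BIRT expressions are JavaScript, so this only illustrates the logic; the rows are from the question's tables):

```python
receipts = [
    {"receipt_number": 723, "po_number": 6026, "qty_remaining": 2},
    {"receipt_number": 758, "po_number": 6026, "qty_remaining": 2},
    {"receipt_number": 790, "po_number": 7070, "qty_remaining": 58},
    {"receipt_number": 801, "po_number": 833,  "qty_remaining": 600},
]

# Group-level aggregate: MAX(receipt_number) per po_number ("last_receipt").
last_receipt = {}
for r in receipts:
    po = r["po_number"]
    last_receipt[po] = max(last_receipt.get(po, 0), r["receipt_number"])

# Show qty_remaining only on the last receipt of each purchase order.
shown = [
    r["qty_remaining"] if r["receipt_number"] == last_receipt[r["po_number"]] else 0
    for r in receipts
]
```

This reproduces the desired output above: PO 6026 shows 0 on receipt 723 and 2 on its last receipt, 758.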

LINQ query to get result

I have a Student list with the data below:
/*-----------------------
|Student |
-------------------------
| ID | Name | Dept |
-------------------------
| 101 | Peter | IT |
| 102 | John | IT |
| 103 | Ronald | Mech |
| 104 | Sam | Comp |
-----------------------*/
And another list, say Extra, with the data below:
/*----------------------
| StudentId | Dept |
------------------------
| 101 | Civil |
| 103 | Chemical |
----------------------*/
Now I want the following result:
/*-------------------------
|Student |
---------------------------
| ID | Name | Dept |
---------------------------
| 101 | Peter | Civil |
| 102 | John | IT |
| 103 | Ronald | Chemical |
| 104 | Sam | Comp |
-------------------------*/
Currently I have written the logic below:
foreach (var item in Extra)
{
    // Search for item in the Student list
    // Update it
}
I need a more efficient way (I don't want explicit iteration), using LINQ.
Try something like this; a group join with DefaultIfEmpty keeps the students that have no Extra row. Didn't test it though.
var query = from s in student
            join e in extra on s.ID equals e.StudentId into matches
            from e in matches.DefaultIfEmpty()
            select new { s.ID, s.Name, Dept = e != null ? e.Dept : s.Dept };
LINQ uses iteration internally too, so there is no performance benefit to using LINQ. Because LINQ has very general implementations, it might even be slower, since it can't make the assumptions about your data that you can make in a custom loop.
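For comparison, the same left-join-with-override merge written with a dictionary lookup, which is O(n) over the student list (a Python sketch using the question's data, not C#):

```python
students = [
    {"ID": 101, "Name": "Peter",  "Dept": "IT"},
    {"ID": 102, "Name": "John",   "Dept": "IT"},
    {"ID": 103, "Name": "Ronald", "Dept": "Mech"},
    {"ID": 104, "Name": "Sam",    "Dept": "Comp"},
]
extra = {101: "Civil", 103: "Chemical"}  # StudentId -> overriding Dept

# Take the Extra dept when present, otherwise keep the student's own dept.
merged = [
    {**s, "Dept": extra.get(s["ID"], s["Dept"])}
    for s in students
]
```

A `Dictionary<int, string>` built from Extra gives the same O(1)-per-student lookup in C#, avoiding the nested search of the original foreach.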
