Sort ALV grid by inverted date

I needed to output the TCURR table into an ALV grid. Everything went fine, but when the user sorted the table by the "valid from" date (GDATU), strange things happened.
Sorting in ascending order behaved like sorting in descending order and vice versa. This happens because the field GDATU stores the date in inverted format and has the domain GDATU_INV with the conversion routine INVDT, which converts the date on the fly. The ALV grid displays the date correctly, but sorting is done on the inverted values.
I solved it like this:
I declared a table structure similar to TCURR, replacing the GDATU_INV domain with plain DATUM.
I converted the inverted dates into regular ones (see the sketch below).
I filled my table with the converted dates.
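For the conversion itself, the inverted format is the nine's complement of the date (inverted = 99999999 - YYYYMMDD), so it can be undone arithmetically. A minimal sketch, assuming ls_tcurr is a TCURR work area (variable names are made up):
DATA: lv_inv  TYPE n LENGTH 8,
      lv_date TYPE datum.

lv_inv  = 99999999 - ls_tcurr-gdatu.  " un-invert the date
lv_date = lv_inv.                     " NUMC8 -> DATS is a plain character move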
After generating the field catalog through FM LVC_FIELDCATALOG_MERGE against the TCURR structure, I overwrite the following fields of the GDATU line:
CONVEXIT = '',
REF_TABLE = '',
DATATYPE = 'DATS',
DOMNAME = 'DATUM'.
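A minimal sketch of that field catalog adjustment, assuming the catalog sits in a table lt_fcat of type LVC_T_FCAT (the variable name is made up):
FIELD-SYMBOLS: <ls_fcat> TYPE lvc_s_fcat.

READ TABLE lt_fcat ASSIGNING <ls_fcat> WITH KEY fieldname = 'GDATU'.
IF sy-subrc = 0.
  " Drop the conversion exit and reference so the grid sorts the plain date.
  CLEAR: <ls_fcat>-convexit, <ls_fcat>-ref_table.
  <ls_fcat>-datatype = 'DATS'.
  <ls_fcat>-domname  = 'DATUM'.
ENDIF.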
Is there a more efficient and simpler solution to this problem?

You should be able to pass your new structure to LVC_FIELDCATALOG_MERGE instead of TCURR, which would mean that you don't have to overwrite the settings in the field catalog after the fact. But that's a pretty minor thing.
I don't think you had any choice but to use a structure with a data element that behaves the way you need it to.
Do look into the Simple ALV classes, though (CL_SALV*). They are well documented and much easier to use than the now out-of-date ALV function modules. (Generating the field catalog, in particular, is a lot less hassle.)
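For reference, displaying an internal table with CL_SALV_TABLE takes little more than this (a sketch; lt_data stands for your internal table with the already-converted dates):
DATA: lo_alv TYPE REF TO cl_salv_table.

TRY.
    " The factory derives the field catalog from the table type itself,
    " so no manual LVC_FIELDCATALOG_MERGE step is needed.
    cl_salv_table=>factory(
      IMPORTING
        r_salv_table = lo_alv
      CHANGING
        t_table      = lt_data ).
    lo_alv->display( ).
  CATCH cx_salv_msg.
    " error handling omitted in this sketch
ENDTRY.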

Related

Gnumeric Sort function

Can someone please direct me to a detailed explanation (link) of the Gnumeric sort function? The Gnumeric manual is abbreviated and has no examples. I haven't been able to find any appropriate info through the search engines, and even Stack Overflow only has half a dozen questions on it, none of which suit.
My problem is:
I have a table with rows of dates, names, and columns of data. (pretty straightforward stuff).
I want to sort ALL columns by the NAME column.
That is: keep each row intact for data but move them in the table up or down so that the order is alphabetic by name.
I can do this easily with LibreOffice Calc but prefer the feel and simplicity of Gnumeric, yet I have never been able to work out from the drop-down sort menu how to get this done. I can sort any column fine by itself, but can't seem to lock the other data in the row so that it is taken along with it.
This is such a frequent operation that I'm surprised it's not made clearer in the drop-down menu. That is: order by column x.
The only way one can sort with Gnumeric, apparently, is to move the key column (in my case the NAME column) to be the left-most column (column A) in the table, then sort, and subsequently move the columns back into their required format (date and time in the first column). This seems very clumsy to me, and I wondered if there was an easier way of ordering a table in any format (e.g. just as it is imported from the CSV file) by simply selecting the column to sort wherever it is in the table, as can be done in LibreOffice Calc.
1) Select ALL the columns you want to sort:
menu > Data > Sort
2) In the sort specification, keep the column with the NAMEs to be sorted and remove the rest of the columns.

Is it possible to create the compound table using p:dataTable?

This is a rather conceptual question: I have a compound table to implement and I am not sure if I should use <p:dataTable> for it.
The structure is as follows: there will be values on a weekly basis, and cumulative values will have to be calculated from them, which will be part of the same table. Is this possible using <p:dataTable>, or will I have to build the structure using panel grids, rows and columns? Any suggestions?
Structure: (the values are arbitrary for now)
please see here

CouchDb filter and sort in one view

I'm new to CouchDB.
I have to filter records by date (the date must be between two values) and sort the data by name, date, etc. (depending on the user's selection in the table).
In MySQL it looks like
SELECT * FROM table WHERE date > "2015-01-01" AND date < "2015-08-01" ORDER BY name/date/email ASC/DESC
I can't figure out if I can use one view for all these issues.
Here is my map example:
function(doc) {
  emit(
    [doc.date, doc.name, doc.email],
    {
      email: doc.email,
      name: doc.name,
      date: doc.date
    }
  );
}
I tried filtering the data using startkey and endkey, but I'm not sure how to sort the data this way:
startkey=["2015-01-01"]&endkey=["2015-08-01"]
Can I use one view, or do I have to create several views whose key order depends on the current sort field: [doc.date, doc.name, doc.email], [doc.name, doc.date, doc.email], etc.?
Thanks for your help!
As Sebastian said, you need to use a list function to do this in Couch.
If you think about it, this is what MySQL is doing. Its query optimizer will pick an index into your table, scan a range from that index, load what it needs into memory, and execute the query logic.
In Couch the view is your B-tree index, and a list function can implement whatever logic you need. It can be used to spit out HTML instead of JSON, but it can also be used to filter/sort the output of your view and still spit out JSON in the end. It might not scale very well to millions of documents, but MySQL might not either.
So your options are the ones Sebastian highlighted:
the view sorts by date; the query selects the date range, and the list function loads everything into memory and sorts by email/etc.
the views sort by email/etc., and the list function filters out everything outside the date range.
Which one you choose depends on your data and architecture.
With option 1 you may skip the list function entirely: get all the necessary data from the view in one go (with include_docs) and sort client-side. This is how you'll typically use Couch.
If you need this done server-side, you'll need your list function to load every matching document into an array, then sort it and JSON-serialize it. This obviously falls apart if there are so many matching documents that they don't fit into memory or take too long to sort.
Option 2 scans through pre-ordered documents and only sends those matching the dates. Done right, this avoids loading everything into memory. On the other hand, it might scan way too many documents, thrashing your disk I/O.
If the date range is "very discriminating" (few documents pass the test), option 1 works best; otherwise (most documents pass), option 2 can be better. Remember that in the time it takes to load a useless document from disk (option 2), you can sort tens of documents in memory, as long as they fit (option 1). Also, the more indexes you have, the more disk space is used and the more writes are slowed down.
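A minimal sketch of option 1 as a server-side list function, using the map function above (the ?sort= query parameter and all design doc/view names below are made-up examples):
function(head, req) {
  // Collect every matching row in memory, then sort by the field
  // named in the ?sort= query parameter (defaults to date).
  var field = req.query.sort || 'date';
  var rows = [];
  var row;
  while ((row = getRow())) {
    rows.push(row.value);
  }
  rows.sort(function(a, b) {
    if (a[field] < b[field]) return -1;
    if (a[field] > b[field]) return 1;
    return 0;
  });
  send(JSON.stringify(rows));
}
It would then be queried with something like:
/db/_design/app/_list/sorted/by_date?startkey=["2015-01-01"]&endkey=["2015-08-01",{}]&sort=email
(the trailing {} in endkey catches all compound keys within the end date).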
You COULD use a list function for that, in two ways:
1.) The Couch view is ordered by date and you sort by e-mail => but please be aware that you'd have to have ALL items in memory to do this sort by e-mail (i.e. you can do this only when your result set is small).
2.) The Couch view is ordered by e-mail and a list function drops everything outside the date range (you can only do that when the overall list is small, so this one is most probably bad).
Possibly #1 can help you.

Crystal Reports - Sorting two different date fields chronologically

I've got two date fields from two tables, and I'm trying to show receipts of POs in line with work order consumption, sorted chronologically.
Is there any way to sort two date fields together?
For instance:
1/1/14 work order date
1/5/14 work order date
1/7/14 PO receipt date
1/9/14 work order date
1/20/14 work order date
The two fields are 'duedate' from table 'porel' and 'reqdate' from table 'jobmtl'.
Usually the simplest solution in such cases is to perform the ordering on the server side (e.g. using a SQL Server stored procedure, an Access query, etc.), and then use the stored procedure or query as the source for the data.
An alternative that I read about is to create global variables in the report, assign your date values to them using 'WhilePrintingRecords;' in formula fields, and use those variables in the formulas that then do the actual reporting for you.
Slightly complicated.
Another solution, which I am not sure applies to you, is:
Click on the main menu > Report > Record Sort Expert
Select your date field in the box on the left and add it to the box on the right
Check the Ascending checkbox and click Ok
Let us know how it goes.
You should create a formula along these lines, using your field names from above (note that Crystal formulas test for null with IsNull() rather than comparing with = null):
if IsNull({porel.duedate}) then
    {jobmtl.reqdate}
else
    {porel.duedate}
Then sort on this formula.

Cassandra DB: is it favorable, or frowned upon, to index multiple criteria per row?

I've been doing a lot of reading lately on Cassandra, specifically how to structure rows to take advantage of indexing/sorting, but there is one thing I am still unclear on: how many "index" items (or filters, if you will) should you include in a column family (CF) row?
Specifically: I am building an app and will be using Cassandra to archive log data, which I will use for analytics.
Example types of analytic searches will include (by date range):
total visits to specific site section
total visits by Country
traffic source
I plan to store the whole log object in JSON format, but to avoid having to go through each item to get basic data, or having to create multiple CFs just to get basic data, I am curious to know whether it's a good idea to include the above "filters" as columns (compound column segments).
Example:
Row Key          | timeUUID:data | timeUUID:country | timeUUID:source
======================================================================
timeUUID:section | JSON Object   | USA              | example.com
As you can see from the structure, the row key would be a compound key of a timeUUID (say, per day) plus the site section I want stats for. This lets me query a date range quite easily.
Next, my dilemma: the columns. A compound column name with a timeUUID lets me sort and do a time-based slice, but does the concept make sense?
Is this type of structure acceptable by current "best practice", or would it be frowned upon? Would it be advisable to create a separate "index" CF for each metric I want to query on (even when it's as simple as this)?
I would rather get this right the first time than have to restructure the data and refactor my application code later.
I think the idea behind this is OK. It's a pretty common way of doing timeslicing (assuming I've understood your schema anyway - a create table snippet would be great). Some minor tweaks ...
You don't need a timeUUID as your row key. Given that you suggest partitioning by individual days (which are inherently unique), you don't need the UUID aspect. A timestamp is probably fine, or, even simpler, a varchar in the format YYYYMMDD (or whatever arrangement you prefer).
You will probably also want to swap your row key composition around to section:time. The reason for this is that if you need to specify an IN clause (i.e. to grab multiple days), you can only do it on the last part of the key. This means you can do WHERE section = 'foo' AND time IN (...). I imagine that's the more common use case - but the decision is obviously yours.
If your common case is querying the most recent data, don't forget to cluster your timeUUID columns in descending order. This keeps the hot columns at the head.
Double-storing content is fine (i.e. once for the JSON payload, and denormalised again for the data you need to query). Storage is cheap.
I don't think you need indexes, but it depends on the queries you intend to run. If your queries are simple, you may want to store counters keyed by (date:parameter) instead of values and just increment them as data comes in.
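Pulling those tweaks together, a hedged CQL sketch of what the table might look like (the table and column names are invented for illustration):
-- Partition by (section, day); cluster events newest-first.
CREATE TABLE site_logs (
    section text,
    day     text,      -- 'YYYYMMDD', as suggested above
    event   timeuuid,
    payload text,      -- the raw JSON log object
    country text,      -- denormalised fields for quick reads
    source  text,
    PRIMARY KEY ((section, day), event)
) WITH CLUSTERING ORDER BY (event DESC);

-- Grabbing several days for one section then looks like:
SELECT * FROM site_logs
WHERE section = 'foo' AND day IN ('20140101', '20140102');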
