I'm a novice to ClearCase, and recently I merged a list of files from branch A to branch B.
At that time I checked the "preserve the time stamp" option for every file that I merged.
Now the client needs to review that merge, so they are asking for a list of every file with its "created time stamp" and its "modified time stamp".
I tried using the History option, but it did not show the created time stamp.
How can I get that timestamp information?
One way to review those files is to select them into a view:
You can try a view with a time-based selection rule in its config spec.
I would recommend a dynamic view (that way, you can quickly modify the config spec and make multiple attempts in order to select the right versions).
See "how to find out all the activities happened in a branch in the last month?" for an example of such a config spec.
If you are after a list of files, as selected by a view, but with the "modified date" as well as the "creation date", then a simple cleartool find, using fmt_ccase syntax, is enough:
This will give you all the creation dates:
ct find . -type f -exec "cleartool descr -fmt \"%n %e:%d\n\" \"%CLEARCASE_PN%@@\""
This will give you all the last modification dates:
ct find . -type f -exec "cleartool descr -fmt \"%n %e:%d\n\" \"%CLEARCASE_PN%\""
The only difference is the '@@', which in one case references the element itself (which has a creation date), and in the other (no '@@') references the version (which represents the last modification date).
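To make the distinction concrete, here is a small illustration with a hypothetical file and branch name, assuming the default '@@' extended-naming symbol:
foo.c@@                   the element object; its date is when the file was added to source control
foo.c                     the version selected by the view; its date is when that version was checked in (the last modification)
foo.c@@/main/branchB/3    the same kind of version object, addressed explicitly with an extended pathname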
I am working on a Talend transformation process (we are using Talend 6.4), and I don't know how to implement the current requirement.
I have an input consisting of:
Two columns that are my group keys (Account and Product), but are not unique (the same Account x Product couple can appear in multiple rows)
A criterion column (Contract end date), which will help me decide which row I want to keep for each group
Some "tail" data that need to be passed to the following step of the processing (the contract number)
The rule to implement is:
Keep only one record per group
The selected record must be one with no end date or, if all records have an end date, the one with the latest end date
The selected record can be random in case there is a tie
See the transformation applying those rules on some dummy data:
My first idea was to do the following:
sort by Account, Product, End_date (nulls first)
"select first" in each group
but I don't know Talend well enough to tell whether such a "select first in each group" step exists.
Regards,
Pierre
Very interesting Talend question.
You need to create something like this job.
Here is a link to the zip file to import into your Talend.
The answer from #MBDIA seems to be working; however, I would like to share what we did to fulfill our requirement.
See our Talend process here:
The first tMap (tMap_3) acts like a tReplicate and a tMap, and sends:
in the upper branch, only the Account and Product references, which are then deduplicated by tAggregateRow_1.
in the lower branch, all data plus computed fields that let us handle the case where the date is missing (instead of defaulting to 31/12/9999, we compute a flag (0 or 1) that we use in the sort step afterwards).
In the second part of the process, we first sort the whole data set on Account, Product, the empty-date flag (computed before) and End date (descending). A second tMap then joins both branches (on Account x Product), keeping only the First Match so that we retain the first record per group, as per our requirement.
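For readers who want to see the selection rule itself rather than the Talend job, here is a minimal plain-Java sketch of the logic (class, field and sample values are made up for illustration): keep one record per Account x Product, prefer a record with no end date, otherwise keep the latest end date, and break ties arbitrarily.

import java.time.LocalDate;
import java.util.*;

class ContractRow {
    String account, product, contractNumber;
    LocalDate endDate;  // null means "no end date"
    ContractRow(String a, String p, LocalDate d, String c) {
        account = a; product = p; endDate = d; contractNumber = c;
    }
}

public class KeepOnePerGroup {
    // r beats cur if r has no end date while cur has one,
    // or if both have end dates and r's is later.
    static boolean beats(ContractRow r, ContractRow cur) {
        if (r.endDate == null) return cur.endDate != null;
        if (cur.endDate == null) return false;
        return r.endDate.isAfter(cur.endDate);
    }

    public static void main(String[] args) {
        List<ContractRow> input = Arrays.asList(
            new ContractRow("A1", "P1", LocalDate.of(2017, 6, 30), "C-001"),
            new ContractRow("A1", "P1", null,                      "C-002"),   // kept: no end date
            new ContractRow("A2", "P1", LocalDate.of(2016, 1, 31), "C-003"),
            new ContractRow("A2", "P1", LocalDate.of(2017, 3, 31), "C-004"));  // kept: latest end date

        // One surviving record per Account x Product key.
        Map<String, ContractRow> best = new LinkedHashMap<>();
        for (ContractRow r : input) {
            String key = r.account + "|" + r.product;
            ContractRow cur = best.get(key);
            if (cur == null || beats(r, cur)) best.put(key, r);
        }
        best.values().forEach(r ->
            System.out.println(r.account + " " + r.product + " " + r.contractNumber));
    }
}

This is only a sketch of the rule; in the actual job the same effect comes from the sort order (empty-date flag, then End date descending) followed by the "First Match" join described above.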
Given a list of items which have a date as one field, how can I separate those whose date falls in the first few days of the month from those whose date falls in the last few days?
The items are gas bills, generally one per month, in a bank statement; they relate to two separate buildings and need to go into two separate accounts. They were imported from a CSV file.
In practice, the number of lines involved is small, so I've just done it by hand, but the question of how to do it by formula and sort occurred to me, and I neither have nor could find an answer.
I hope it is a slightly interesting question.
The function is simply called DAY. You can find it by clicking on the Function Wizard toolbar icon and looking under the Date&Time category.
For example, in cell B1 enter a formula like =DAY(A1) and fill down. Then go to Data -> Sort.
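If you would rather classify the rows directly instead of sorting, a helper column should also work; for example (assuming the date is in A1 and taking the 15th as an arbitrary cut-off), =IF(DAY(A1)<=15,"early","late") filled down gives you a column you can sort or filter on.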
I know about "HISTTIMEFORMAT="%m/%d - %H:%M:%S: " but this will give me history with timestamp from present session to onward and all commands from previous history will have same but incorrect timestamp.
Is there any way I can get let's say one week old history with proper timestamp?
No. You can't.
From here.
If you set HISTTIMEFORMAT in bash, your new entries get stored in the history file with a timestamp. Older commands that don't have a timestamp (those from before you ever set HISTTIMEFORMAT) will all display one and the same date-time stamp (I assume the one from the first entry found with a real timestamp).
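For reference, this is roughly what ~/.bash_history looks like once timestamping is enabled (the epoch numbers below are made up for illustration): bash writes a comment line holding the Unix time before each new command, while entries recorded before HISTTIMEFORMAT was set have no such line, so there is simply nothing left to recover for them:
cd /var/log
make build
#1563179520
ls -la
#1563179545
grep pattern file.txt
The first two commands predate the setting and carry no timestamp line; only the last two have real times.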
I have been struggling with this all day. I have an InfoPath form that is connected to two SharePoint lists.
SP List 1 (Project Charters):
Charter Title
Charter Opportunity
Charter Start Date
SP List 2 (Project Weekly):
Weekly Title
Weekly Opportunity.
I am attempting to combine these two lists into 1 repeating table on a new form. The Weekly Title field holds the same values as the Charter Title field, so I can match using Titles. I have found some resources, with this being the best:
http://www.infopathdev.com/forums/t/21262.aspx?PageIndex=2
There is a sample toward the end that does exactly what I want, but it will not work with my setup. Here is what I get:
As you can see in the picture, the "From my weeklys" column is always the same, and this is the problem. It should match the corresponding field highlighted in red to the left. This is the formula in the calculated field:
Weekly_x0020_Opportunity[Weekly_x0020_Title = current()/Charter_x0020_Title]
The logic is to return the Weekly Opportunity description, where the Weekly Title matches the Charter Title; however, it only ever returns the same description. The testing column was used to prove that current()/Charter_x0020_Title was producing the unique titles for that row.
I feel like I am really close. The 3rd charter is missing a description because my Weekly does not have a 3rd charter, so that part is working properly. I just need to figure out how to bring in the proper description.
Note: I am hoping for an Out-of-the-Box solution without coding.
Full XPath:
xdXDocument:GetDOM("Project Weekly")/dfs:myFields/dfs:dataFields/d:SharePointListItem_RW/d:Weekly_x0020_Opportunity[xdXDocument:GetDOM("Project Weekly")/dfs:myFields/dfs:dataFields/d:SharePointListItem_RW/d:Weekly_x0020_Title = current()/d:Charter_x0020_Title]
The problem is that the path inside your predicate (between the []s) is using an absolute path rather than a relative one. You need to use a relative path.
This path (what you have now):
xdXDocument:GetDOM("Project Weekly")
/dfs:myFields/dfs:dataFields/d:SharePointListItem_RW/d:Weekly_x0020_Opportunity
[xdXDocument:GetDOM("Project Weekly")/dfs:myFields/dfs:dataFields
/d:SharePointListItem_RW/d:Weekly_x0020_Title = current()/d:Charter_x0020_Title]
means "Get the first1 Weekly Opportunity field where any Weekly Title field in the Project Weekly data source has the value of the current Charter Title."
Or in other words, get the first Weekly Opportunity field in the Project Weekly data source any time the stuff between the square brackets is true.
This path:
xdXDocument:GetDOM("Project Weekly")
/dfs:myFields/dfs:dataFields/d:SharePointListItem_RW
[d:Weekly_x0020_Title = current()/d:Charter_x0020_Title]/d:Weekly_x0020_Opportunity
means "Find the first1 SharePointListItem_RW where its Weekly Title is equal to the current Charter Title, and then get its Weekly Opportunity field."
So that's what you should use.
¹ I am oversimplifying a bit here by saying "first". That path selects all of the nodes where that path is applicable, and then when InfoPath evaluates it, it takes the first result.
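To see the difference outside InfoPath, consider a made-up, namespace-free miniature of the Project Weekly data (field names shortened for readability):
<dataFields>
  <SharePointListItem_RW>
    <Weekly_Title>Alpha</Weekly_Title>
    <Weekly_Opportunity>Opp for Alpha</Weekly_Opportunity>
  </SharePointListItem_RW>
  <SharePointListItem_RW>
    <Weekly_Title>Beta</Weekly_Title>
    <Weekly_Opportunity>Opp for Beta</Weekly_Opportunity>
  </SharePointListItem_RW>
</dataFields>
If the current Charter Title is "Beta", the absolute-style query
/dataFields/SharePointListItem_RW/Weekly_Opportunity[/dataFields/SharePointListItem_RW/Weekly_Title = "Beta"]
still returns "Opp for Alpha": the predicate is true no matter which Opportunity node is being tested (some title somewhere equals "Beta"), so every Opportunity qualifies and the first one wins. The relative form
/dataFields/SharePointListItem_RW[Weekly_Title = "Beta"]/Weekly_Opportunity
returns "Opp for Beta", because the filter is applied to each list item before its Opportunity field is taken.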
I am doing a transformation in Pentaho Data Integration and I have a list of files in a directory on my SFTP server. These files are named in the FILE_YYYYMMDDHHIISS.txt format; my directory looks like this:
mydirectory
FILE_20130701090000.txt
FILE_20130701170000.txt
FILE_20130702090000.txt
FILE_20130702170000.txt
FILE_20130703090000.txt
FILE_20130703170000.txt
My problem is that I need to get the last file in this list according to its creation date, so I can pass it to another transformation step...
How can I do this in Pentaho Data Integration?
In fact this is quite simple, because your file names can be sorted textually, and the maximum in the sorted list will be your most recent file.
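Outside of PDI, the same idea can be sketched in a few lines of plain Java (file names taken from the question): because the timestamp in the name is zero-padded and ordered from year down to second, lexical order matches chronological order.

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class LatestFile {
    public static void main(String[] args) {
        List<String> files = Arrays.asList(
            "FILE_20130701090000.txt",
            "FILE_20130703170000.txt",
            "FILE_20130702090000.txt");
        // Plain string comparison is enough: the name embeds YYYYMMDDHHIISS.
        String latest = Collections.max(files);
        System.out.println(latest);  // prints FILE_20130703170000.txt
    }
}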
Since the list of files is likely short, you can use a Memory Group by step. A grouping step needs a separate column by which to aggregate. If you only have one column and you want to find the max over the entire set, you can add a grouping column with an Add Constants step, configuring it to add a column with, say, the integer 1 in every row.
Configure your Memory Group by to group on the column of 1s, and use the file name column as the subject. Then simply select the Maximum grouping type. This will produce a single row with your grouping column, the file name field removed and the aggregate column containing your max file name. It would look something like this: