In EmEditor: How to sum the values of two columns' cells (in the same row) and store the result in a third column?

I have a pipe separated file:
Col1| Col2| Col3
12 | 10 |
54 | 17 |
How can I get the sums (22 & 71) into Col3?
Is there a built-in function for this kind of operation?

After opening your CSV (or pipe-separated) file, select Column 3 by clicking its heading, press Ctrl+H to bring up the Replace dialog box, click the Advanced button, click the Reset button to make sure all options in the Advanced dialog box are at their defaults, and click OK.
In the Replace dialog box, enter:
Find: .*
Replace with: \J Number( cell( -1 ) ) + Number( cell( -2 ) )
Make sure the In the Selection Only and Regular Expressions options are both checked.
Click Replace All.
Note: If you need to deal with decimal numbers, use \J parseFloat( cell( -1 ) ) + parseFloat( cell( -2 ) ) as the Replace with expression instead.
References: http://www.emeditor.org/en/howto_search_replacement_expression_syntax.html
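For context, the arithmetic the replacement performs can be sketched in plain Python (an illustration only; the separator handling and integer formatting here are my assumptions, not EmEditor's internals):

```python
# Sum the first two pipe-separated columns of each data row into the third,
# mirroring the EmEditor replacement above. Plain Python, not \J syntax.
def fill_third_column(lines):
    out = [lines[0]]  # keep the header row unchanged
    for line in lines[1:]:
        cols = [c.strip() for c in line.split("|")]
        total = float(cols[0]) + float(cols[1])  # parseFloat-style parsing
        # Format as int when the sum is whole, to match the sample data
        cols[2] = str(int(total)) if total.is_integer() else str(total)
        out.append(" | ".join(cols))
    return out

rows = ["Col1| Col2| Col3", "12 | 10 |", "54 | 17 |"]
filled = fill_third_column(rows)
```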

Related

I have to make a slicer with two options, "Top 5" and "All Others", which should give me results as mentioned below

I have to make a slicer to pick rows from this data (i.e. if I choose the "Top 5" option in the slicer, then the top 5 rows for each value of column A should appear, and if I choose the "All Others" option, then all the values except the top 5 should appear). Let me explain with tables.
Here is the data table.
Our slicer will have two options, one "Top 5" and another "All Others".
If I choose "Top 5", I should get a table like the one below:
And if I choose "All Others", I should get the following table:
You can create a measure with the code below and use it as a filter on your visual.
Filter_Measure =
IF (
(
SELECTEDVALUE ( Slicer_Value[selection] ) = "Top 5"
&& MAX ( 'Table'[Rank] ) <= 5
)
|| (
SELECTEDVALUE ( Slicer_Value[selection] ) = "All Others"
&& MAX ( 'Table'[Rank] ) > 5
),
1,
0
)
PFA screenshot of visual with filter
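The branching in the measure can be paraphrased in Python to check the logic (a sketch only; Slicer_Value[selection] and 'Table'[Rank] come from the model, and the sample rows below are hypothetical):

```python
# Plain-Python paraphrase of the DAX Filter_Measure above (illustration only).
def filter_measure(selection, rank, top_n=5):
    # A row passes when the slicer choice agrees with the row's rank.
    if selection == "Top 5" and rank <= top_n:
        return 1
    if selection == "All Others" and rank > top_n:
        return 1
    return 0

rows = [("A", 1), ("A", 6), ("B", 3), ("B", 9)]  # (column A value, Rank)
top5 = [r for r in rows if filter_measure("Top 5", r[1]) == 1]
others = [r for r in rows if filter_measure("All Others", r[1]) == 1]
```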

Count how many sub-activities were created based on an activity

I have a dimension that stores workflows (cases, subcases). I would like to count how many subcases are created for each case.
Workflow Dimension
Workflow
------------------------------
Case Number WorkflowType
------------------------------
10 Case
20 Case
30 Case
20-1 Subcase
20-2 Subcase
20-3 Subcase
10-1 Subcase
The desired output I would like is: for every case, count how many subcases were created.
Workflow
------------------------------------------------
Case Number WorkflowType CountOfSubcases
------------------------------------------------
10 Case 1
20 Case 3
30 Case 0
------------------------------------------------
Total 4
I have a current DAX measure that works, but the total at the bottom does not show when looking at multiple rows; it only displays when one case is selected.
Total Subcases =
VAR CC = FIRSTNONBLANK ( Workflow[Case Number], 1 )
RETURN
COUNTX (
FILTER (
ALL( Workflow ),
SUBSTITUTE ( Workflow[Case Number], RIGHT ( Workflow[Case Number], 2 ), "" ) = CC
&& Workflow[WorkflowType] = "SubCase"
),
Workflow[WorkflowID]
)
If anybody could help me tweak my measure or suggest a new one, that would be great.
Note: I'm pointing my report to Analysis Services.
Thanks in advance.
You can fix your measure as follows:
Total Subcases = 0 +
COUNTX (
FILTER (
ALL( Workflow ),
SUBSTITUTE ( Workflow[Case Number], RIGHT ( Workflow[Case Number], 2 ), "" )
IN VALUES( Workflow[Case Number] )
&& Workflow[WorkflowType] = "SubCase"
),
Workflow[WorkflowID]
)
The VALUES function returns a list of all the values in the current filter context instead of just the one you were picking before.
Note: To make things easier to work with, I'd suggest splitting the Case Number column into two columns in the query editor stage. Then you don't have to work with all the string manipulation.
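To see why the SUBSTITUTE/RIGHT trick works, here is the same parent-case derivation in Python (a sketch; like the measure, it assumes subcase suffixes are exactly two characters):

```python
# Parent case = subcase number with its 2-character suffix ("-1", "-2", ...)
# removed, mirroring SUBSTITUTE( ..., RIGHT( ..., 2 ), "" ) in the measure.
def parent_case(case_number):
    return case_number.replace(case_number[-2:], "")

workflow = [
    ("10", "Case"), ("20", "Case"), ("30", "Case"),
    ("20-1", "Subcase"), ("20-2", "Subcase"), ("20-3", "Subcase"),
    ("10-1", "Subcase"),
]

def count_subcases(case):
    return sum(1 for num, kind in workflow
               if kind == "Subcase" and parent_case(num) == case)
```

As with the DAX version, this breaks if a suffix's two characters also appear elsewhere in the number, which is one more reason splitting the column in the query editor is the sturdier option.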
Edit: Note that x IN <Table[column]> is equivalent to the older CONTAINS syntax:
CONTAINS(Table, [column], x)
So if you can't use IN then try this formulation:
Total Subcases = 0 +
COUNTX (
FILTER (
ALL( Workflow ),
CONTAINS(
VALUES( Workflow[Case Number] ),
Workflow[Case Number],
SUBSTITUTE ( Workflow[Case Number],
RIGHT ( Workflow[Case Number], 2 ), "" )
)
&& Workflow[WorkflowType] = "SubCase"
),
Workflow[WorkflowID]
)

How to apply a regular expression to the string below

I have a string 'MCDONALD_YYYYMMDD.TXT'. I need to use regular expressions to append '**' after the letter 'D' in the string (i.e. at position 9 in the string I need to append '*' a number of times based on a column value 'star_len').
If star_len = 2, the output = 'MCDONALD??_YYYYMMDD.TXT'
If star_len = 1, the output = 'MCDONALD?_YYYYMMDD.TXT'
with
inputs ( filename, position, symbol, len ) as (
select 'MCDONALD_20170812.TXT', 9, '*', 2 from dual
)
-- End of simulated inputs (for testing purposes only, not part of the solution).
-- SQL query begins BELOW THIS LINE.
select substr(filename, 1, position - 1) || rpad(symbol, len, symbol)
|| substr(filename, position) as new_str
from inputs
;
NEW_STR
-----------------------
MCDONALD**_20170812.TXT
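The same splice, outside SQL, is just string slicing; a Python sketch of the substr/rpad logic (position is 1-based, as in Oracle):

```python
# Python equivalent of substr(f, 1, position-1) || rpad(symbol, len, symbol)
# || substr(f, position): splice `length` copies of `symbol` in before the
# 1-based `position`.
def insert_symbols(filename, position, symbol, length):
    return filename[:position - 1] + symbol * length + filename[position - 1:]
```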
select regexp_replace('MCDONALD_YYYYMMDD.TXT','MCDONALD','MCDONALD' ||
decode(star_len,1,'*',2,'**'))
from dual
This is how you could do it. I don't think you need a regular expression, though, if the prefix is always going to be "MCDONALD".
EDIT: If you need to be providing the position in the string as well, I think a regular old substring should work.
select substr('MCDONALD_YYYYMMDD.TXT',1,position-1) ||
decode(star_len,1,'*',2,'**') || substr('MCDONALD_YYYYMMDD.TXT',position)
from dual
Where position and star_len are both columns in some table you provide (instead of dual).
EDIT2: Just to be more clear, here is another example using a with clause so that it runs without adding a table in.
with testing as
(select 'MCDONALD_YYYYMMDD.TXT' filename,
9 positionnum,
2 star_len
from dual)
select substr(filename,1,positionnum-1) ||
decode(star_len,1,'*',2,'**') ||
substr(filename,positionnum)
from testing
For the fun of it, here's a regexp_replace solution. I went with a star since that's what your variable was called, even though your example used a question mark. The regex captures the filename string in 2 parts: the first from the start up to 1 character before the position value, the second the rest of the string. The replace puts the captured parts back together with the stars in between.
with tbl(filename, position, star_len ) as (
select 'MCDONALD_20170812.TXT', 9, 2 from dual
)
select regexp_replace(filename,
'^(.{'||(position-1)||'})(.*)$', '\1'||rpad('*', star_len, '*')||'\2') as fixed
from tbl;
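The two-capture-group idea translates directly to Python's re.sub, for comparison (an illustration only, not Oracle-specific):

```python
import re

# Capture the first position-1 characters and the rest, then rejoin with
# the stars in between -- the same shape as the Oracle regexp_replace above.
def star_insert(filename, position, star_len):
    pattern = r"^(.{%d})(.*)$" % (position - 1)
    return re.sub(pattern, r"\1" + "*" * star_len + r"\2", filename)
```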

Remove the last comma in Oracle

COUNTNUM is a column name in a table that has data like below
1,2,3,4,
I used
RTRIM((COUNTNUM),',') COUNTNUM
It didn't work
Desired output
1,2,3,4
Current output
1,2,3,4,
Any suggestions would greatly help..!
Thanks
REGEXP_REPLACE((countnum), ',$', '')
Perhaps there are non-digits after the comma which need to be removed.
Logic is added to account for possible non-digits between the comma and the end of countnum.
Explanation:
[^[:digit:]] is the negation of the digit character class
* is a quantifier meaning zero to many
$ is an anchor identifying the end of countnum
SCOTT@dev> WITH d AS (
  SELECT '1,2,3,4, ' countnum FROM dual
  UNION ALL
  SELECT '1,2,3,4,' FROM dual
)
SELECT
  countnum,
  regexp_replace(countnum, ',[^[:digit:]]*$') mod_count_num
FROM d;
COUNTNUM MOD_COUNT_NUM
1,2,3,4, 1,2,3,4
1,2,3,4, 1,2,3,4
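For comparison, the same cleanup in Python, where \D plays the role of the POSIX [^[:digit:]] class (a sketch, not Oracle-specific):

```python
import re

# Strip a trailing comma plus any non-digit tail, mirroring the Oracle
# pattern ',[^[:digit:]]*$' (Python's \D matches any non-digit).
def trim_trailing_comma(countnum):
    return re.sub(r",\D*$", "", countnum)
```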

Processing multi line logs with AWK to gather SQL statements

I have the following entries in a log file:
2016-01-25 21:12:41 UTC:172.31.21.125(56665):user@production:[21439]:ERROR: bind message supplies 1 parameters, but
prepared statement "" requires 0
2016-01-25 21:12:41 UTC:172.31.21.125(56665):user@production:[21439]:STATEMENT: SELECT count(*) AS total FROM (
SELECT 1 AS count
FROM leads_search_criteria_entities
INNER JOIN entities e on entity_id = e.viq_id
LEFT JOIN companies_user cu ON cu.entity_id = e.viq_id
WHERE criterium_id = 644 AND ((
( cu.udef_type IS NULL -- if not set by user, check calculated value
AND is_university >= 50
) OR (
cu.udef_type IS NOT NULL -- if set by user, use it
AND cu.udef_type = 'university'
)
))
GROUP BY e.viq_id
ORDER BY e.viq_id
) x
2016-01-25 21:14:11 UTC::@:[2782]:LOG: checkpoint starting: time
2016-01-25 21:14:16 UTC::@:[2782]:LOG: checkpoint complete: wrote 51 buffers (0.0%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=5.046 s, sync=0.038 s, total=5.091 s; sync files=18, longest=0.008 s, average=0.002 s
2016-01-25 21:19:11 UTC::@:[2782]:LOG: checkpoint starting: time
I would like to capture the SQL statements, but I am not sure how I can do that with AWK.
Update:
Expected outcome:
SELECT count(*) AS total FROM ( SELECT 1 AS count FROM leads_search_criteria_entities INNER JOIN entities e on entity_id = e.viq_id LEFT JOIN companies_user cu ON cu.entity_id = e.viq_id WHERE criterium_id = 644 AND (( ( cu.udef_type IS NULL -- if not set by user, check calculated value AND is_university >= 50 ) OR ( cu.udef_type IS NOT NULL -- if set by user, use it AND cu.udef_type = 'university' ) )) GROUP BY e.viq_id ORDER BY e.viq_id ) x
My current, almost-working solution uses sed, but this is where I got stuck: it just helps filter the lines that have a select (which itself spans multiple lines) and the next line after that. Any suggestion is appreciated.
sed -n "/:STATEMENT:/,/2016/p" out
I don't recommend using sed for this. First thought for an awk solution might look like this:
/^2016/&&line~/:STATEMENT:/ {
sub(/.*:STATEMENT:/,"",line)
print line
}
/^2016/ {
line=""
}
{
$1=$1
line=sprintf("%s %s",line,$0)
}
END {
if (line~/:STATEMENT:/) {
sub(/.*:STATEMENT:/,"",line)
print line
}
}
Obviously you could shrink this. I wrote and ran it (for testing) as a one-liner.
The idea here is that:
we'll append to a variable, resetting it every time our input line starts with the year. (You could replace this with a regexp matching the date if you want to run this next year without modification),
when we get to a new log line (or the end), we strip off the cruft before the SQL statement and print the result.
Note the $1=$1. The purpose of this is to change your line's whitespace, so that newlines and tabs and multiples spaces are collapsed into single spaces. Experiment with removing it to see the impact.
Update
How about a combination of sed and tr?
sed 's/^[0-9][^S]*//' INPUT.txt | sed '/^[0-9a-z]/d' | tr -s ' ' | tr -d '\n'
output:
STATEMENT: SELECT count(*) AS total FROM ( SELECT 1 AS count FROM leads_search_criteria_entities INNER JOIN entities e on entity_id = e.viq_id LEFT JOIN companies_user cu ON cu.entity_id = e.viq_id WHERE criterium_id = 644 AND (( ( cu.udef_type IS NULL -- if not set by user, check calculated value AND is_university >= 50 ) OR ( cu.udef_type IS NOT NULL -- if set by user, use it AND cu.udef_type = 'university' ) )) GROUP BY e.viq_id ORDER BY e.viq_id ) x
$ cat log.awk
f && /^[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]/ {f=0; print ""}
sub(/^.*:STATEMENT:[[:space:]]+/,"") {f=1}
f { $1=$1; printf "%s ", $0 }
$ awk -f log.awk log.txt
SELECT count(*) AS total FROM ( SELECT 1 AS count FROM leads_search_criteria_entities INNER JOIN entities e on entity_id = e.viq_id LEFT JOIN companies_user cu ON cu.entity_id = e.viq_id WHERE criterium_id = 644 AND (( ( cu.udef_type IS NULL -- if not set by user, check calculated value AND is_university >= 50 ) OR ( cu.udef_type IS NOT NULL -- if set by user, use it AND cu.udef_type = 'university' ) )) GROUP BY e.viq_id ORDER BY e.viq_id ) x
(2nd line) This turns on printing (f=1) when :STATEMENT: is found, and as a side-effect, removes everything up until the start of the SELECT statement.
(3rd line) Then it keeps printing until printing is turned off (see below), cleaning up by replacing sequences of multiple spaces with a single space. (EDIT: Thanks to @ghoti for suggesting the elegant $1=$1 for that.)
(1st line) Turn off printing at the start of the next log, identified by starting with a date. Print a courtesy newline to end the SELECT.
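If awk isn't a hard requirement, the same state machine is easy to express in Python (a sketch; the trimmed sample lines in the usage below are assumptions based on the log excerpt above):

```python
import re

TS = re.compile(r"^\d{4}-\d{2}-\d{2} ")  # a new log record starts with a date

def extract_statements(lines):
    # Collect text after ":STATEMENT:" until the next timestamped record,
    # collapsing all whitespace runs into single spaces (like awk's $1=$1).
    stmts, cur, capturing = [], [], False
    for line in lines:
        if capturing and TS.match(line):
            stmts.append(" ".join(cur))
            cur, capturing = [], False
        m = re.search(r":STATEMENT:\s*", line)
        if m:
            cur, capturing = line[m.end():].split(), True
            continue
        if capturing:
            cur += line.split()
    if capturing:  # statement ran to end of file
        stmts.append(" ".join(cur))
    return stmts
```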
