Regression: ANTLR C target follow set generates reference to undeclared identifier - antlr3

ANTLRWorks 1.5rc1 successfully generated the C target code for this grammar with no warnings or errors but the C target code would not compile.
ANTLRWorks 1.5 generated the same bad C target code but the console listed many template errors.
ANTLRWorks 1.4.3 generated valid C target code that compiles without error.
error C2065: 'FOLLOW_set_in_sqlCOMP_OP2185' : undeclared identifier
Rule sqlCOMP_OP is referenced from multiple boolean-expression rules.
All of the rules that generated references to undefined identifiers were of the form:
rule: (Tokena | Tokenb | Tokenc)? Tokend;
or there were multiple references to a common rule that was of the form:
rule: (Tokena | Tokenb | Tokenc);
In the first case, I was able to transform the rule into a logically equivalent form that did not generate a reference to an undefined identifier:
rule: (Tokena Tokend | Tokenb Tokend | Tokenc Tokend | Tokend);
In the second case, no such transformation is possible. Instead, the only fix is to back-substitute the body of the failing rule into every reference.
sqlCONDITION :
sqlLOGICAL_EXPRESSION
;
sqlLOGICAL_EXPRESSION :
sqlLOGICAL_TERM (STOK_OP_OR sqlLOGICAL_TERM)*
;
sqlLOGICAL_TERM :
sqlLOGICAL_FACTOR (STOK_OP_AND sqlLOGICAL_FACTOR)*
;
sqlLOGICAL_FACTOR :
(STOK_NOT) => STOK_NOT
( sqlUNTYPED_BOOLEAN_PRIMARY
| sqlNUMERIC_BOOLEAN_PRIMARY
| sqlSTRING_BOOLEAN_PRIMARY
| sqlCLOB_BOOLEAN_PRIMARY
| sqlDATE_BOOLEAN_PRIMARY
| sqlDATETIME_BOOLEAN_PRIMARY
| STOK_OPEN_PAREN
sqlCONDITION
STOK_CLOSE_PAREN
)
;
sqlUNTYPED_BOOLEAN_PRIMARY :
( STOK_EXISTS sqlSUB_QUERY
| STOK_NOT STOK_EXISTS sqlSUB_QUERY
| sqlIS_OR_ISNOT_CLAUSE
STOK_IN STOK_ROWSET STOK_IDENTIFIER
)
;
sqlCOMP_OP :
( STOK_OP_EQ
| STOK_OP_NE
| STOK_OP_GE
| STOK_OP_GT
| STOK_OP_LE
| STOK_OP_LT
)
;
sqlIS_OR_ISNOT_CLAUSE :
( STOK_IS STOK_NOT?
| STOK_NOT
)
;
sqlNUMERIC_BOOLEAN_PRIMARY :
( sqlNUMERIC_EXPRESSION
( sqlCOMP_OP
sqlNUMERIC_EXPRESSION
| sqlNUMERIC_BOOLEAN_PREDICATE
)
| sqlNUMERIC_COLUMN_LIST
sqlNUMERIC_BOOLEAN_PREDICATE
)
;
sqlNUMERIC_BOOLEAN_PREDICATE:
( sqlIS_OR_ISNOT_CLAUSE?
( STOK_IN sqlNUMERIC_SET
| STOK_BETWEEN sqlNUMERIC_EXPRESSION STOK_OP_AND sqlNUMERIC_EXPRESSION
)
| sqlIS_OR_ISNOT_CLAUSE
STOK_SQL_NULL
)
;
sqlSTRING_BOOLEAN_PRIMARY :
( sqlSTRING_EXPRESSION
( sqlCOMP_OP
sqlSTRING_EXPRESSION
| sqlSTRING_BOOLEAN_PREDICATE
)
| sqlSTRING_COLUMN_LIST
sqlSTRING_BOOLEAN_PREDICATE
)
;
sqlSTRING_BOOLEAN_PREDICATE :
( sqlIS_OR_ISNOT_CLAUSE?
( STOK_IN sqlSTRING_SET
| STOK_LIKE sqlSTRING
| STOK_BETWEEN sqlSTRING_EXPRESSION STOK_OP_AND sqlSTRING_EXPRESSION
)
| sqlIS_OR_ISNOT_CLAUSE
STOK_SQL_NULL
)
;
sqlCLOB_BOOLEAN_PRIMARY :
( STOK_NOT?
STOK_CONTAINS
STOK_OPEN_PAREN
sqlCLOB_COLUMN_VALUE
STOK_COMMA
sqlSTRING
STOK_CLOSE_PAREN
| sqlCLOB_COLUMN_VALUE
sqlIS_OR_ISNOT_CLAUSE
STOK_SQL_NULL
)
;
sqlDATE_BOOLEAN_PRIMARY :
( sqlDATE_EXPRESSION
( sqlCOMP_OP
sqlDATE_EXPRESSION
| sqlDATE_BOOLEAN_PREDICATE
)
| sqlDATE_COLUMN_LIST
sqlDATE_BOOLEAN_PREDICATE
)
;
sqlDATE_BOOLEAN_PREDICATE :
( sqlIS_OR_ISNOT_CLAUSE?
( STOK_IN sqlDATE_SET
| STOK_BETWEEN sqlDATE_EXPRESSION STOK_OP_AND sqlDATE_EXPRESSION
)
| sqlIS_OR_ISNOT_CLAUSE
STOK_SQL_NULL
)
;
sqlDATETIME_BOOLEAN_PRIMARY :
( sqlDATETIME_EXPRESSION
( sqlCOMP_OP
sqlDATETIME_EXPRESSION
| sqlDATETIME_BOOLEAN_PREDICATE
)
| sqlDATETIME_COLUMN_LIST
sqlDATETIME_BOOLEAN_PREDICATE
)
;
sqlDATETIME_BOOLEAN_PREDICATE :
( sqlIS_OR_ISNOT_CLAUSE?
( STOK_IN sqlDATETIME_SET
| STOK_BETWEEN sqlDATETIME_EXPRESSION STOK_OP_AND sqlDATETIME_EXPRESSION
)
| sqlIS_OR_ISNOT_CLAUSE
STOK_SQL_NULL
)
;

I had the same problem with rules of this type:
prio14Operator: '=' | '+=' | '-=' | '*=' | '/=' | '%=' | '<<=' | '>>=' | '&=' | '|=' | '^=' | 'is';
prio14Expression: prio13Expression (prio14Operator prio13Expression)*;
Rewriting the rules to this format fixed the issue:
prio14Expression: prio13Expression (('=' | '+=' | '-=' | '*=' | '/=' | '%=' | '<<=' | '>>=' | '&=' | '|=' | '^=' | 'is') prio13Expression)*;

Related

how to delete few rows of data from a text file using shell scripting based on some conditions

I have a text file with more than 100k rows. The data below is a sample of the text file I have. I want to apply some conditions to this data and delete some rows. The text file does not have headers (ID, NAME, Code-1, code-2, code-3); they are shown here only for reference. How can I achieve this with shell scripting?
Input test file:
| ID | NAME | Code-1 | code-2 | code-3 |
| $$ | 5HF | 1E | N | Y |
| $$ | 2MU | 3C | N | Y |
| $$ | 32E | 3C | N | N |
| AB | 3CH | 3C | N | N |
| MK | A1M | AS | P | N |
| $$ | Y01 | 01 | F | Y |
| $$ | BG0 | 0G | F | N |
Conditions:
if code-2 = 'N' and code-1 not in ('3C', '3B', '32', '31', '3D'), keep the row only if ID = '$$'
if code-2 = 'N' and code-1 in ('3C', '3B', '32', '31', '3D'), accept any other ID, and accept ID = '$$' only if code-3 = 'Y'
if code-2 != 'N', accept any other ID, and accept ID = '$$' only if code-3 = 'Y'
Output:
| ID | NAME | Code-1 | code-2 | code-3 |
| $$ | 5HF | 1E | N | Y |
| $$ | 2MU | 3C | N | Y |
| AB | 3CH | 3C | N | N |
| MK | A1M | AS | P | N |
| $$ | Y01 | 01 | F | Y |
You're encouraged to demonstrate your own efforts when asking questions, but I understand this question could be complicated if you are new to Bash. Here is my solution using awk. It processed 137k lines in 0.545s on my computer (moderate specs).
awk '{
  ID=$2; NAME=$4; CODE1=$6; CODE2=$8; CODE3=$10;
  if (CODE2 == "N") {
    if (CODE1 ~ /(3C|3B|32|31|3D)/) {
      if (ID == "$$") {
        if (CODE3 == "Y") {
          print;
        }
      }
      else {
        print;
      }
    }
    else {
      if (ID == "$$") {
        print;
      }
    }
  }
  else {
    if (ID == "$$") {
      if (CODE3 == "Y") {
        print;
      }
    }
    else {
      print;
    }
  }
}' file
Note it has certain restrictions:
a) It delimits values by spaces, not |. It will work with your exact input format, but it won't work with input rows that lack the padding spaces, e.g.
|$$|32E|3C|N|N|
|AB|3CH|3C|N|N|
b) For the same reason, the command will produce incorrect results if a column value contains extra spaces, e.g.
| $$ | 32E FOO | 3C | N | N |
| AB | 3CH BBT | 3C | N | N |
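For comparison, the same three conditions can be sketched in Python, splitting each row on | and stripping the padding. This removes both restrictions above: padding spaces no longer matter, and code-1 is tested by exact set membership rather than a regex. A sketch, not a drop-in replacement for the awk one-liner:

```python
def keep(row):
    """Apply the three conditions to one parsed row (ID, NAME, code-1, code-2, code-3)."""
    id_, name, code1, code2, code3 = row
    special = {"3C", "3B", "32", "31", "3D"}
    if code2 == "N":
        if code1 in special:
            # any other ID is accepted; ID '$$' only when code-3 is 'Y'
            return id_ != "$$" or code3 == "Y"
        # code-1 outside the list: keep only ID '$$'
        return id_ == "$$"
    # code-2 != 'N': any other ID is accepted; ID '$$' only when code-3 is 'Y'
    return id_ != "$$" or code3 == "Y"


def filter_lines(lines):
    """Parse '|'-delimited rows, tolerating any amount of padding whitespace."""
    kept = []
    for line in lines:
        fields = [f.strip() for f in line.strip().strip("|").split("|")]
        if keep(fields):
            kept.append(line)
    return kept
```

To run it over a file: `filter_lines(open("file"))` yields the surviving lines, whether the input is written `| $$ | 5HF | 1E | N | Y |` or `|$$|5HF|1E|N|Y|`.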

HIVE splitting string

HIVE:
I have a column changeContext ==> "A345|Fq*A|2017-05-01|2017-05-01" (string), from which I need to extract A345 as another column. Any suggestions? P.S. I have tried regexp_extract (it runs into a vertex failure), so any other solution would be perfect.
with t as (select "A345|Fq*A|2017-05-01|2017-05-01" as changeContext)
select substring_index(changeContext,'|',1) option_1
,split(changeContext,'\\|')[0] option_2
,substr(changeContext,1,instr(changeContext,'|')-1) option_3
,regexp_extract(changeContext,'[^|]*',0) option_4
,regexp_replace(changeContext,'\\|.*','') option_5
from t
+----------+----------+----------+----------+----------+
| option_1 | option_2 | option_3 | option_4 | option_5 |
+----------+----------+----------+----------+----------+
| A345 | A345 | A345 | A345 | A345 |
+----------+----------+----------+----------+----------+
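All five options extract the text before the first |. For reference, the same first-token logic outside Hive, sketched in Python against the sample value:

```python
import re

change_context = "A345|Fq*A|2017-05-01|2017-05-01"

# split on '|' and take the first element (the logic behind option_2)
first = change_context.split("|")[0]

# match the longest prefix of non-'|' characters (the logic behind option_4)
first_re = re.match(r"[^|]*", change_context).group(0)

print(first)     # A345
print(first_re)  # A345
```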

Update in oracle with joining two table

I have the two tables below. I need to update Table1.Active_flag to Y where the matching Table2 row has Reprocess_Flag = N.
Table1
+--------+--------------+--------------+--------------+-------------+
| Source | Subject_area | Source_table | Target_table | Active_flag |
+--------+--------------+--------------+--------------+-------------+
| a | CUSTOMER | ADS_SALES | ADS_SALES | N |
| b | CUSTOMER | ADS_PROD | ADS_PROD | N |
| CDW | SALES | CD_SALES | CD_SALES | N |
| c | PRODUCT | PD_PRODUCT | PD_PRODUCT | N |
| d | PRODUCT | PD_PD1 | PD_PD1 | N |
| e | ad | IR_PLNK | IR_PLNK | N |
+--------+--------------+--------------+--------------+-------------+
Table2
+--------+--------------+--------------+--------------+----------------+
| Source | Subject_area | Source_table | Target_table | Reprocess_Flag |
+--------+--------------+--------------+--------------+----------------+
| a | CUSTOMER | ADS_SALES | ADS_SALES | N |
| b | CUSTOMER | ADS_PROD | ADS_PROD | N |
| CDW | SALES | CD_SALES | CD_SALES | N |
| c | PRODUCT | PD_PRODUCT | PD_PRODUCT | Y |
| d | PRODUCT | PD_PD1 | PD_PD1 | Y |
| e | ad | IR_PLNK | IR_PLNK | N |
+--------+--------------+--------------+--------------+----------------+
Use all three columns in a single select statement.
UPDATE hdfs_cntrl SET active_flag = 'Y'
where (source, subject_area, source_table) in
  (select source, subject_area, source_table from proc_cntrl where Reprocess_Flag = 'N');
Updating one table based on data in another table is almost always best done with the MERGE statement.
Assuming source is a unique key in table2:
merge into table1 t1
using table2 t2
on (t1.source = t2.source)
when matched
then update set t1.active_flag = 'Y'
where t2.reprocess_flag = 'N'
;
If you are not familiar with the MERGE statement, read about it: it is just as easy to learn as UPDATE, INSERT, and DELETE, it can do all three types of operations in a single statement, and it is much more flexible and, in some cases, more efficient (faster).
merge into table1 t1
using table2 t2
on (t1.source = t2.source and t1.Subject_area = t2.Subject_area and t1.Source_table = t2.Source_table and t1.Target_table = t2.Target_table and t2.Reprocess_Flag = 'N')
when matched then update set
t1.Active_flag = 'Y';
UPDATE hdfs_cntrl SET active_flag = 'Y'
where source in (select source from proc_cntrl where Reprocess_Flag = 'N')
and subject_area in (select subject_area from proc_cntrl where Reprocess_Flag = 'N')
and source_table in (select target_table from proc_cntrl where Reprocess_Flag = 'N')
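Note that the tuple IN form and the three independent IN subqueries are not equivalent: the independent version can match a row whose columns each appear somewhere in proc_cntrl, but never together in one row. A toy illustration in Python, with hypothetical rows rather than the actual tables:

```python
# Rows of (source, subject_area, source_table) in the lookup table
table2 = [
    ("a", "CUSTOMER", "ADS_SALES"),
    ("b", "PRODUCT", "PD_PD1"),
]

# A row that mixes columns from two different table2 rows
row = ("a", "PRODUCT", "ADS_SALES")

# Tuple membership: (source, subject_area, source_table) IN (subquery)
tuple_match = row in table2

# Independent membership: each column checked against its own subquery
independent = (row[0] in {r[0] for r in table2}
               and row[1] in {r[1] for r in table2}
               and row[2] in {r[2] for r in table2})

print(tuple_match)  # False
print(independent)  # True, a false positive
```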

strings.Split acting weird

I am doing a simple strings.Split on a date.
The format is 2015-10-04
month := strings.Split(date, "-")
output is [2015 10 03].
If I do month[0] it returns 2015 but when I do month[1], it returns
panic: runtime error: index out of range
Though it clearly isn't. Am I using it wrong? Any idea what is going on?
Here's a complete working example:
package main
import "strings"
func main() {
date := "2015-01-02"
month := strings.Split(date, "-")
println(month[0])
println(month[1])
println(month[2])
}
Output:
2015
01
02
Playground
Perhaps you're not using the correct "dash" character? There are lots:
+-------+--------+----------+
| glyph | codes |
+-------+--------+----------+
| - | U+002D | - |
| ֊ | U+058A | ֊ |
| ־ | U+05BE | ־ |
| ᠆ | U+1806 | ᠆ |
| ‐ | U+2010 | ‐ |
| ‑ | U+2011 | ‑ |
| ‒ | U+2012 | ‒ |
| – | U+2013 | – |
| — | U+2014 | — |
| ― | U+2015 | ― |
| ⁻ | U+207B | ⁻ |
| ₋ | U+208B | ₋ |
| − | U+2212 | − |
| ﹘ | U+FE58 | ﹘ |
| ﹣ | U+FE63 | ﹣ |
| - | U+FF0D | - |
+-------+--------+----------+
Here is the code with a different input string, which also panics with an index out of range error:
package main
import "strings"
func main() {
date := "2015‐01‐02" // U+2010 dashes
month := strings.Split(date, "-")
println(month[0])
println(month[1])
println(month[2])
}
Playground.
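If the input may contain typographic dashes (pasted from a document, for example), one defensive option is to normalize them to ASCII before splitting. A sketch in Python; the list of code points mirrors the table above, and the same idea ports to Go with strings.NewReplacer:

```python
import re

# Common Unicode dash code points (from the table above)
DASHES = "\u058a\u05be\u1806\u2010\u2011\u2012\u2013\u2014\u2015\u207b\u208b\u2212\ufe58\ufe63\uff0d"

def split_date(date):
    """Replace any Unicode dash with ASCII '-' before splitting."""
    normalized = re.sub("[" + DASHES + "]", "-", date)
    return normalized.split("-")

print(split_date("2015\u201001\u201002"))  # ['2015', '01', '02']
print(split_date("2015-01-02"))            # ['2015', '01', '02']
```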

Using Closure_tree gem instead of Awesome nested set

Hi, I followed the link to set up the closure_tree gem.
When I tried to use the closure_tree syntax (newStructure.find_or_create_by_path(parent) instead of newStructure.move_to_child_of(parent)), I got the following error:
"Can't mass-assign protected attributes: ancestor, descendant, generations"
Is this the correct way of using newStructure.find_or_create_by_path(parent)?
def self.import(path)
newStructure = FileOrFolder.find(:first, :conditions=>["fullpath = ?", path])
if newStructure
return newStructure
end
newStructure = FileOrFolder.new
newStructure.fullpath = path
pathbits = path.split('/')
newStructure.name = pathbits.last
newStructure.save
parentpath = path.sub(/#{Regexp.escape(pathbits.last)}$/, '')
if parentpath.length > 1
parentpath.sub!(/\/$/,'')
parent = FileOrFolder.find(:first, :conditions=>["fullpath = ?", parentpath])
unless parent
parent = FileOrFolder.import(parentpath)
end
#newStructure.move_to_child_of(parent);
newStructure.find_or_create_by_path(parent);
end
newStructure.save
return newStructure
end
database table looks like :
mysql> select * from testdb7.file_or_folders limit 10;
+------+-----------+------+------+----------+------------------------+---------------------+---------------------+
| id | parent_id | lft | rgt | fullpath | name | created_at | updated_at |
+------+-----------+------+------+----------+------------------------+---------------------+---------------------+
| 6901 | NULL | NULL | NULL | NULL | | 2013-06-25 18:49:04 | 2013-06-25 18:49:04 |
| 6902 | 6901 | NULL | NULL | NULL | devel | 2013-06-25 18:49:04 | 2013-06-25 18:49:04 |
| 6903 | 6902 | NULL | NULL | NULL | Bcontrol | 2013-06-25 18:49:04 | 2013-06-25 18:49:04 |
| 6904 | 6903 | NULL | NULL | NULL | perfect | 2013-06-25 18:49:04 | 2013-06-25 18:49:04 |
| 6905 | 6904 | NULL | NULL | NULL | matlab | 2013-06-25 18:49:04 | 2013-06-25 18:49:04 |
| 6906 | 6905 | NULL | NULL | NULL | test | 2013-06-25 18:49:04 | 2013-06-25 18:49:04 |
| 6907 | 6906 | NULL | NULL | NULL | smoke | 2013-06-25 18:49:04 | 2013-06-25 18:49:04 |
| 6908 | 6907 | NULL | NULL | NULL | Control_System_Toolbox | 2013-06-25 18:49:04 | 2013-06-25 18:49:04 |
| 6909 | 6908 | NULL | NULL | NULL | tsmoke_are.m | 2013-06-25 18:49:04 | 2013-06-25 18:49:04 |
| 6910 | 6908 | NULL | NULL | NULL | tsmoke_bode.m | 2013-06-25 18:49:04 | 2013-06-25 18:49:04 |
+------+-----------+------+------+----------+------------------------+---------------------+---------------------+
FileOrFolder Load (14560.8ms) SELECT `file_or_folders`.* FROM `file_or_folders` INNER JOIN `file_or_folder_hierarchies` ON `file_or_folders`.`id` = `file_or_folder_hierarchies`.`descendant_id` INNER JOIN (
SELECT ancestor_id
FROM `file_or_folder_hierarchies`
GROUP BY 1
HAVING MAX(`file_or_folder_hierarchies`.generations) = 0
) AS leaves ON (`file_or_folders`.id = leaves.ancestor_id) WHERE `file_or_folder_hierarchies`.`ancestor_id` = 147 ORDER BY `file_or_folder_hierarchies`.generations asc
EXPLAIN (13343.7ms) EXPLAIN SELECT `file_or_folders`.* FROM `file_or_folders` INNER JOIN `file_or_folder_hierarchies` ON `file_or_folders`.`id` = `file_or_folder_hierarchies`.`descendant_id` INNER JOIN (
SELECT ancestor_id
FROM `file_or_folder_hierarchies`
GROUP BY 1
HAVING MAX(`file_or_folder_hierarchies`.generations) = 0
) AS leaves ON (`file_or_folders`.id = leaves.ancestor_id) WHERE `file_or_folder_hierarchies`.`ancestor_id` = 147 ORDER BY `file_or_folder_hierarchies`.generations asc
EXPLAIN for: SELECT `file_or_folders`.* FROM `file_or_folders` INNER JOIN `file_or_folder_hierarchies` ON `file_or_folders`.`id` = `file_or_folder_hierarchies`.`descendant_id` INNER JOIN (
SELECT ancestor_id
FROM `file_or_folder_hierarchies`
GROUP BY 1
HAVING MAX(`file_or_folder_hierarchies`.generations) = 0
) AS leaves ON (`file_or_folders`.id = leaves.ancestor_id) WHERE `file_or_folder_hierarchies`.`ancestor_id` = 147 ORDER BY `file_or_folder_hierarchies`.generations asc
+----+-------------+----------------------------+--------+------------------------------------------------------------------------------------+----------------------------------+---------+--------------------+---------+---------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+----------------------------+--------+------------------------------------------------------------------------------------+----------------------------------+---------+--------------------+---------+---------------------------------+
| 1 | PRIMARY | file_or_folder_hierarchies | ref | index_file_or_folders_on_ans_des,index_file_or_folder_hierarchies_on_descendant_id | index_file_or_folders_on_ans_des | 4 | const | 15 | Using temporary; Using filesort |
| 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 104704 | Using where; Using join buffer |
| 1 | PRIMARY | file_or_folders | eq_ref | PRIMARY | PRIMARY | 4 | leaves.ancestor_id | 1 | Using where |
| 2 | DERIVED | file_or_folder_hierarchies | index | NULL | index_file_or_folders_on_ans_des | 8 | NULL | 1340096 | |
+----+-------------+----------------------------+--------+------------------------------------------------------------------------------------+----------------------------------+---------+--------------------+---------+---------------------------------+
I'm the author of closure_tree. 4.2.3 is on its way with the fix for attr_accessible; I'm just waiting for Travis to finish testing it.
It looks like your whole import method could be replaced with this line:
# Assumes that path is a string that looks like this: "/usr/local/bin/ruby"
def import(path)
FileOrFolder.find_or_create_by_path(path.split("/"))
end
This assumes you have this FileOrFolder setup:
class FileOrFolder < ActiveRecord::Base
acts_as_tree
before_create :set_fullpath
def set_fullpath
if root?
self.fullpath = "/#{name}"
else
self.fullpath = "/#{parent.ancestry_path.join("/")}/#{name}"
end
end
end
Please take a look at the spec directory. You'll find tons of other examples.
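The find-or-create walk that the original recursive import method was doing by hand is simple to picture: consume the path segments one at a time, reusing an existing child when present and creating it otherwise. A language-neutral sketch with plain Python objects (not the gem's API):

```python
class Node:
    """Minimal stand-in for a tree model; not the gem's API."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = {}

    def find_or_create_by_path(self, segments):
        # Walk the path, reusing existing children and creating missing ones.
        node = self
        for name in segments:
            if name not in node.children:
                node.children[name] = Node(name, node)
            node = node.children[name]
        return node

root = Node("")
leaf = root.find_or_create_by_path(["usr", "local", "bin", "ruby"])
again = root.find_or_create_by_path(["usr", "local", "bin", "ruby"])
print(leaf is again)  # True, the second call reuses the same nodes
```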
