Double quotes in csv-table cell - python-sphinx

I am struggling to add a cell with double quotes in the csv-table.
.. csv-table::
   :header: f,d

   2,"ts*"
The above works fine.
But if I try to get the cell as ts"*" instead of ts*, it throws an error:
Error with CSV data in "csv-table" directive: ',' expected after '"'
I tried using escape characters (like \ ) but it didn't work.
I was trying it here: online editor

I think I found the solution: there is an option to specify an escape character, :escape: '.
.. csv-table::
   :escape: '
   :header: f,d

   2,"ts'"*'""
It is now showing the cell as ts"*".
Try it online

Related

trailing and leading blank issue in string

I am working on a project where I need to check whether an employee entered *done* in a text field, but employees enter '* done *', '*done *' or '* done*' in a similar fashion. As you can see, they add trailing and/or leading blanks. I have to check the column for all three or four possible entries in a LIKE statement; I tried TRIM and RTRIM, but nothing seems to work.
case when
     col like ('*done*')
  or col like ('* done*')
  or col like ('*done *')
  or col like ('* done *')
  then 'done'
end as work_status
doesn't seem a smart way to do it. What is the best way to check this? Any help will be appreciated. Thank you.
Remove spaces:
case when replace(col, ' ') = '*done*' then 'done'
else 'not'
end as work_status
You can look for the done substring with anything preceding and following using the LIKE operator and % wildcards:
CASE WHEN col LIKE '%done%' THEN 'done' END AS work_status
Or you can trim the leading and trailing space characters:
CASE WHEN TRIM(col) = 'done' THEN 'done' END AS work_status
Or you can replace all the leading/trailing white spaces (in case the users have entered new lines, tabs, etc. rather than space characters) using a regular expression:
CASE
WHEN REGEXP_REPLACE(col, '^[[:space:]]+|[[:space:]]+$') = 'done'
THEN 'done'
END AS work_status
fiddle
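As a quick sanity check, here is a self-contained sketch (the sample value '  done  ' is assumed; the expressions come from the answers above) showing how the three approaches behave:
-- LIKE with % wildcards, TRIM, and REGEXP_REPLACE against an assumed sample value '  done  '
SELECT
    CASE WHEN '  done  ' LIKE '%done%' THEN 'done' END AS like_check,
    CASE WHEN TRIM('  done  ') = 'done' THEN 'done' END AS trim_check,
    CASE WHEN REGEXP_REPLACE('  done  ', '^[[:space:]]+|[[:space:]]+$') = 'done'
         THEN 'done' END AS regexp_check
FROM dual;
-- All three columns return 'done' for this sample value.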

Buffer gets get reduced when escaping dot with backslash

I have the query below:
SELECT
categorymap.id,
categorytype.name,
categorytype.value
FROM
categorymap,
categorytype
WHERE
( categorymap.logfilename = '**hello\.log**' )
AND ( categorymap.categorytypeid = categorytype.id )
An index is available on the logfilename column of the categorymap table.
I noticed that buffer gets were higher when not adding "\" before "." in the WHERE clause. In both cases, before and after adding "\", an index range scan was used on the logfilename column according to the explain plan.
Could someone please explain what role '.' plays here in increasing buffer gets?
TIA
If you are talking about this:
= '**hello\.log**'
(maybe it is just = 'hello\.log'; double asterisks for bold?), then: you didn't escape anything. This query will search the logfilename column for exactly such a string: hello followed by a backslash \ followed by a dot . followed by log.
You'd escape a dot in, e.g., a regular expression, but there's none here, so ...
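To illustrate the point (a minimal sketch, not from the original answer): in a plain equality comparison the backslash is just another character, while a dot only needs escaping inside a regular expression.
-- The backslash is ordinary data in a literal comparison; nothing is escaped.
SELECT CASE WHEN 'hello.log' = 'hello\.log' THEN 'match' ELSE 'no match' END AS literal_compare
FROM dual;  -- returns 'no match'

-- A dot only has special meaning (and needs \) inside a regular expression.
SELECT CASE WHEN REGEXP_LIKE('hello.log', '^hello\.log$') THEN 'match' ELSE 'no match' END AS regexp_compare
FROM dual;  -- returns 'match'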

Oracle SQL: Using Replace function while Inserting

I have a query like this:
INSERT INTO TAB_AUTOCRCMTREQUESTS
(RequestOrigin, RequestKey, CommentText) VALUES ('Tracker', 'OPM03865_0', '[Orange.Security.OrangePrincipal]
em[u02650791]okok
it's friday!')
As expected, it throws a "missing comma" error because of it's friday!, which contains a single quote.
I want to remove this single quote while inserting using Replace function.
How can this be done?
The reason for the error is the single quote. To correct it, you should not remove the single quote; instead, you need to add one more, i.e. change it's friday to it''s friday while inserting.
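Applied to the statement from the question, that would look roughly like this (only the quote in it's is doubled):
INSERT INTO TAB_AUTOCRCMTREQUESTS
(RequestOrigin, RequestKey, CommentText) VALUES ('Tracker', 'OPM03865_0', '[Orange.Security.OrangePrincipal]
em[u02650791]okok
it''s friday!');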
If you really need to remove it, then try the code below:
insert into Blagh values(REPLACE('it''s friday', '''', ''),12);
I would suggest using Oracle q quote.
Example:
INSERT INTO TAB_AUTOCRCMTREQUESTS (RequestOrigin, RequestKey, CommentText)
VALUES ('Tracker', 'OPM03865_0',
q'{[Orange.Security.OrangePrincipal] em[u02650791]okok it's friday!}')
You can read about q quote here.
In short, you follow this format: q'{your string here}', where "{" represents the starting delimiter and "}" represents the ending delimiter. Oracle automatically recognizes "paired" delimiters, such as [], {}, (), and <>. If you want to use some other character as your start delimiter and it doesn't have a "natural" partner for termination, you must use the same character for the start and end delimiters.
Obviously you can't use [] delimiters because you have them in your string. I suggest using {} delimiters.
Of course you can also double the quote, as in it''s, together with REPLACE. You can omit the last parameter of REPLACE because it isn't mandatory; without it, the ' character is simply removed.
INSERT INTO TAB_AUTOCRCMTREQUESTS (CommentText) VALUES (REPLACE('...it''s friday!', ''''))
Single quotes are escaped by doubling them up:
INSERT INTO Blagh VALUES(REPLACE('it''s friday', '''', ''),12);
You can try this (sorry, but I don't know why q'[ ]' works):
INSERT INTO TAB_AUTOCRCMTREQUESTS
(RequestOrigin, RequestKey, CommentText) VALUES ('Tracker', 'OPM03865_0', q'[[Orange.Security.OrangePrincipal] em[u02650791]okok it's friday!]')
I just got the q'[] from this link: Oracle pl-sql escape character (for a " ' "). This question could be a possible duplicate.

Sqlldr- No terminator found after terminated and enclosed field

I use Oracle 11g.
My data file looks like below:
1|"\a\ab\"|"do not "clean" needles"|"#"
2|"\b\bg\"|"wall "69" side to end"|"#"
My control file is:
load data
infile 'short.txt'
CONTINUEIF LAST <> '"'
into table "PORTAL"."US_FULL"
fields terminated by "|" OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
u_hlevel,
u_fullname NULLIF u_fullname=BLANKS,
u_name char(2000) NULLIF u_name=BLANKS,
u_no NULLIF u_no=BLANKS
)
While loading data through sqlldr, a .bad file is created and the .log file contains an error message stating "No terminator found after terminated and enclosed field".
The starting and ending double quotes are not part of my data; however, I do need double quotes within the data, as in the example above surrounding clean and 69. For example, my data after loading should look like:
1, \a\ab\, do not "clean" needles, #
2, \b\bg\ , wall "69" side to end , #
How to accomplish this?
Asking your provider to correct the data file may not be an option, but I ultimately found a solution that requires you to update your control file slightly to specify your "enclosed by" character for each field instead of for all fields.
In my case, the [first_name] field would not load if it came in with double quotes wrapping a nickname (e.g. Jonathon "Jon"). In the data file the name appeared as "Jonathon "Jon"". So the "enclosed by" clause was throwing an error because there were double quotes around the value and double quotes around part of the value ("Jon"). Instead of specifying that the value should be enclosed by double quotes, I omitted that for the field and manually removed the outer quotes from the string.
Load Data
APPEND
INTO TABLE MyDataTable
fields terminated by ","   -- Note: "enclosed by" omitted at this level
TRAILING NULLCOLS
(
column1 enclosed by '"',   -- "enclosed by" specified here per column instead
column2 enclosed by '"',
FIRST_NAME "replace(substr(:FIRST_NAME,2, length(:FIRST_NAME)-2), chr(34) || chr(34), chr(34))",   -- no "enclosed by"; substr strips the outer double quotes, replace collapses the doubled double quotes, chr(34) is the character code for a double quote
column4 enclosed by '"',
column5 enclosed by '"'
)
I'm afraid that, since the fields are surrounded by double quotes, the double quotes you want to preserve need to be escaped by adding another double quote in front, like this:
1|"\a\ab\"|"do not ""clean"" needles"|"#"
Alternately if you can get the data without the fields being surrounded by double-quotes, this would work too:
1|\a\ab\|do not "clean" needles|#
If you can't get the data provider to format the data as needed (i.e. search for double-quotes and replace with 2 double-quotes before extracting to the file), you will have to pre-process the file to set up double quotes one of these ways so the data will load as you expect.

How to find string containing regex special chars with regex

I have the following piece of code:
details =~ /.#{action.name}.*/
If action.name contains a regular string such as "abcd", then everything goes OK,
but if action.name contains special chars such as . or /, I get an exception.
Is there a way to check the action.name string without having to put \ before every special char inside action.name?
You can escape all special characters using Regexp::escape.
Try:
details =~ /.#{Regexp.escape(action.name)}.*/
