postgresql - MAC Address Regex - greenplum

I am trying to retrieve all rows with a valid MAC address using the following query in Greenplum, but I also get some rows with junk data like ??:??:??:??:??:??. When I pass the column to another function I get an error:
ERROR: "?" is not a valid hexadecimal digit
Here is my select query
select * from table where mac_address like '%:%:%:%:%:%'
and (length(mac_address)) = 17
and mac_address like '^([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})$'
How can I filter out incorrect mac_addresses from a column in Greenplum?

I found it myself: use the regex match operator ~ instead of LIKE, i.e. mac_address ~ '^([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})$'
select * from table where mac_address like '%:%:%:%:%:%'
AND (length(mac_address)) = 17
AND mac_address ~ '^([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})$'
limit 100;
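For what it's worth, the anchored regex already enforces exactly six hex pairs separated by ':' or '-', so the LIKE and length() filters are redundant. A minimal sketch, assuming the same table and column names:

```sql
-- The anchored regex alone rejects junk like ??:??:??:??:??:??
SELECT *
FROM table
WHERE mac_address ~ '^([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})$'
LIMIT 100;
```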

Related

Get result from select statement and use it as column header

I have a problem: I have this select statement
Select name from org where id = 123
How can I use the value returned by the statement as the column header?
For example, if the result is "aread", then the column header should be "aread".
Thanks!
Possible duplicate:
Oracle - dynamic column name in select statement
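The gist of the duplicate is that a column alias must be known when the query is parsed, so this takes two steps: fetch the value first, then build a second query with that value as the alias. A hypothetical PL/SQL sketch, assuming the org table from the question:

```sql
-- Hypothetical sketch: fetch the value, then use it as a column alias
-- in dynamically built SQL opened through a ref cursor.
DECLARE
  v_name org.name%TYPE;
  v_cur  SYS_REFCURSOR;
BEGIN
  SELECT name INTO v_name FROM org WHERE id = 123;
  OPEN v_cur FOR
    'SELECT name AS "' || v_name || '" FROM org WHERE id = 123';
END;
```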

How to insert data from the same row by extracting text in Oracle PL/SQL?

I have the below table:
(AddressID,ShortAddress,FullAddress).
Now, the FullAddress column normally contains addresses like this:
Bellvue East,204-Park Avenue,Zip-203345.
I need to write a script which extracts the part before the first ',' in the full address and inserts it into the ShortAddress column.
So, the Table Data before executing the script:
AddressID|ShortAddress|FullAddress
1 | NULL |Bellvue East,204-Park Avenue,Zip-203345,United Kingdom
2 | NULL |Salt Lake,Sector-50/A,Noida,UP,India
And after executing the script, it should be:
AddressID|ShortAddress|FullAddress
1 |Bellvue East|Bellvue East,204-Park Avenue,Zip-203345,United Kingdom
2 |Salt Lake|Salt Lake,Sector-50/A,Noida,UP,India
I need to write it in Oracle PL/SQL.
Any help will be highly appreciated.
Thanks in advance.
Try this UPDATE:
UPDATE yourTable
SET ShortAddress = COALESCE(SUBSTR(FullAddress, 1, INSTR(FullAddress, ',') - 1),
FullAddress)
This update query assigns the first comma-separated term of the full address to the short address. If no comma is present, it assigns the entire full address.
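To see why the COALESCE is needed: INSTR returns 0 when no comma exists, SUBSTR(FullAddress, 1, -1) then yields NULL, and COALESCE falls back to the full address. A sketch to preview the result before running the UPDATE, using the same table and column names as above:

```sql
-- Preview the extraction before committing to the UPDATE
SELECT FullAddress,
       COALESCE(SUBSTR(FullAddress, 1, INSTR(FullAddress, ',') - 1),
                FullAddress) AS ShortAddress
FROM yourTable;
```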

Oracle Select where NCLOB is Like some string

I have an Oracle table, and in this table I have a column of type NCLOB. I would like to perform a SELECT LIKE on it like so:
SELECT
*
FROM
T_WEB_TASK_IT
WHERE DBMS_LOB.substr( T_WEB_TASK_IT.ISSUE_DESCRIPTION , 32000, 1)
LIKE '%Turning on the%'
But it isn't working; I get an error saying:
String buffer too small
I don't understand how that can be, because I know for a fact that there aren't that many characters in that column for that particular record!
The error occurs because DBMS_LOB.SUBSTR called from SQL returns a VARCHAR2, whose maximum size is far below the 32000 characters you requested, regardless of how much data the column actually holds. You can use the DBMS_LOB.INSTR function to search for strings inside the LOB instead, like this:
SELECT *
FROM T_WEB_TASK_IT
WHERE DBMS_LOB.INSTR( T_WEB_TASK_IT.ISSUE_DESCRIPTION , 'Turning on the') > 0
Apart from DBMS_LOB.INSTR, you could also use Regular Expressions:
SELECT *
FROM T_WEB_TASK_IT
WHERE regexp_like(issue_description, 'Turning on the')
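One detail worth noting: both approaches are case-sensitive by default. regexp_like accepts a match parameter, so a variant for case-insensitive search would be:

```sql
-- 'i' makes the match case-insensitive
SELECT *
FROM T_WEB_TASK_IT
WHERE regexp_like(issue_description, 'turning on the', 'i');
```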

Efficiently query table with conditions including array column in PostgreSQL

I need an efficient way to execute a query with array and integer columns in the WHERE clause, ordered by a timestamp column. I am using PostgreSQL 9.2.
The query we need to execute is:
SELECT id
from table
where integer = <int_value>
and <text_value> = any (array_col)
order by timestamp
limit 1;
int_value is an integer value, and text_value is a 1 - 3 letter text value.
The table structure is like this:
Column | Type | Modifiers
---------------+-----------------------------+------------------------
id | text | not null
timestamp | timestamp without time zone |
array_col | text[] |
integer | integer |
How should I design indexes / modify the query to make it as efficient as possible?
Thanks so much! Let me know if more information is needed and I'll update ASAP.
PG can use indexes on arrays, but you have to use array operators for that, so instead of <text_value> = any (array_col) use ARRAY[<text_value>] <@ array_col (https://stackoverflow.com/a/4059785/2115135). You can run SET enable_seqscan=false; to force PG to use indexes wherever possible, to check whether the ones you created are actually usable. Unfortunately a GIN index can't be created on an integer column by default, so you will have to create two different indexes for those two columns.
See the execution plans here: http://sqlfiddle.com/#!12/66a71/2
Unfortunately a GIN index can't be created on an integer column by default, so you will have to create two different indexes for those two columns.
That's not entirely true: you can use the btree_gin or btree_gist extension.
-- btree_gin adds GIN operator classes for scalar types,
-- so one GIN index can cover both columns
CREATE EXTENSION btree_gin;
CREATE INDEX ON table USING gin ("integer", array_col);
VACUUM ANALYZE table;
Now the containment test can be answered from the index itself:
SELECT *
FROM table
WHERE "integer" = ? AND array_col @> ARRAY[?]
ORDER BY timestamp;
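Note that GIN/GiST indexes cannot satisfy the ORDER BY ... LIMIT 1 part of the original query. An alternative sketch, assuming the column names from the question: a plain btree for the equality plus ordering, with a separate GIN index the planner can pick for the containment test.

```sql
-- btree serves "integer" = ? ORDER BY timestamp LIMIT 1 directly;
-- the GIN index remains available for array_col @> ARRAY[?]
CREATE INDEX ON table ("integer", timestamp);
CREATE INDEX ON table USING gin (array_col);
```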

Search Query with wildcards in Date

I'm trying to search the DB for records based on date, but the search is by month and year only, i.e. mm/yyyy, with dd acting as a wildcard.
My search query looks like this:
Select ucid, uc_name From UC_Table1
where UC_Date like To_Date('11/*/2011','mm/dd/yyyy')
this gives me the following error:
ORA-01858: a non-numeric character was found where a numeric was expected
So obviously it doesn't accept *, %, _, or ? as a wildcard for dd.
Wildcards do not work like that inside a function: To_Date() tries to parse the * as part of the date before any pattern matching can happen. Use a half-open date range instead:
SELECT ucid, uc_name
FROM UC_Table1
WHERE UC_Date >= To_Date('11/01/2011', 'mm/dd/yyyy')
AND UC_Date < To_Date('12/01/2011', 'mm/dd/yyyy')
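The upper bound can also be derived rather than hard-coded, which keeps the half-open range correct across year boundaries. A sketch using ADD_MONTHS:

```sql
-- ADD_MONTHS(date, 1) gives the first day of the following month,
-- so December rolls over to January correctly
SELECT ucid, uc_name
FROM UC_Table1
WHERE UC_Date >= To_Date('11/01/2011', 'mm/dd/yyyy')
  AND UC_Date <  ADD_MONTHS(To_Date('11/01/2011', 'mm/dd/yyyy'), 1);
```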
