How can I get table creation scripts on Teradata with JDBC?

I want to get the table creation script for a table on Teradata with JDBC.
I used this code, which I found on Stack Overflow:
StringBuilder sb = new StringBuilder( 1024 );
if ( columnCount > 0 ) {
    sb.append( "Create table " ).append( rsmd.getTableName( 1 ) ).append( " ( " );
}
for ( int i = 1; i <= columnCount; i++ ) {
    if ( i > 1 ) sb.append( ", " );
    String columnName = rsmd.getColumnLabel( i );
    String columnType = rsmd.getColumnTypeName( i );
    sb.append( columnName ).append( " " ).append( columnType );
    int precision = rsmd.getPrecision( i );
    if ( precision != 0 ) {
        sb.append( "( " ).append( precision ).append( " )" );
    }
} // for columns
sb.append( " ) " );
But the problem is: when the column type is VARCHAR, getPrecision() returns 0, yet in Teradata the column is VARCHAR(100). How can I find the 100?
Thanks.

getPrecision is for decimals; you should use getColumnDisplaySize for chars.
There are lots of samples in the Teradata JDBC reference:
http://developer.teradata.com/doc/connectivity/jdbc/reference/current/frameset.html
Sample T20100JD shows how to extract metadata.

getPrecision is for decimals; you should use getColumnDisplaySize for chars
Teradata's JDBC driver has a flaw/bug: it does not properly implement the contract of the interface.
The Java API documentation for interface java.sql.ResultSetMetaData explicitly defines the expected behavior for the getPrecision() method with different datatypes:
int getPrecision(int column)
throws SQLException
Get the designated column's specified column size.
For numeric data, this is the maximum precision.
For character data, this is the length in characters.
...
The Teradata JDBC driver incorrectly returns 0 when getPrecision() is called for a VARCHAR column. Therefore, when working with Teradata JDBC one must use getColumnDisplaySize().
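Putting both methods together, here is a minimal self-contained sketch of the corrected loop. The connection URL and the probe query are assumptions for illustration, and the type-name check is a plain string test, not a Teradata-specific API:

import java.sql.*;

public class TableDdlFromMetadata {
    public static void main( String[] args ) throws SQLException {
        // Probe query returning no rows; only the metadata is needed (table name assumed).
        try ( Connection conn = DriverManager.getConnection(
                  "jdbc:teradata://dbhost/DATABASE=mydb", "user", "pass" );
              Statement stmt = conn.createStatement();
              ResultSet rs = stmt.executeQuery( "SELECT * FROM mytable WHERE 1=0" ) ) {
            ResultSetMetaData rsmd = rs.getMetaData();
            StringBuilder sb = new StringBuilder( 1024 );
            sb.append( "CREATE TABLE " ).append( rsmd.getTableName( 1 ) ).append( " ( " );
            for ( int i = 1; i <= rsmd.getColumnCount(); i++ ) {
                if ( i > 1 ) sb.append( ", " );
                String columnType = rsmd.getColumnTypeName( i );
                sb.append( rsmd.getColumnLabel( i ) ).append( " " ).append( columnType );
                // The driver returns 0 from getPrecision() for character columns,
                // so fall back to getColumnDisplaySize() for CHAR/VARCHAR types.
                int size = columnType.contains( "CHAR" )
                         ? rsmd.getColumnDisplaySize( i )  // length in characters
                         : rsmd.getPrecision( i );         // numeric precision
                if ( size > 0 ) sb.append( "( " ).append( size ).append( " )" );
            }
            sb.append( " ) " );
            System.out.println( sb );
        }
    }
}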


JdbcPagingItemReader Spring batch skipping last element

I have a table with this structure:
CNMA_CO_PLATFORM_MESSAGE|AUDI_TI_CREATION|FIELD4|OTHER FIELDS
test-jj#2774#20210422112434957#00026129|22/04/21 11:24:34,957000000|11|..
test-jj2#2774#20210422112434957#00026129|22/04/21 11:24:34,957000000|12|..
test-jj3#2774#20210422112434957#00026129|22/04/21 11:24:34,957000000|13|..
This combination is the PRIMARY_KEY of the table:
CNMA_CO_PLATFORM_MESSAGE|AUDI_TI_CREATION
Well, I have a JdbcPagingItemReader defined like this (page size is 1):
@StepScope
@Bean
public JdbcPagingItemReader<PendingNotificationDTO> pendingNotificationReader(
        @Value("#{stepExecution}") StepExecution stepExecution) {
    final JdbcPagingItemReader<PendingNotificationDTO> reader = new JdbcPagingItemReader<>();
    reader.setDataSource(daoDataSource);
    reader.setName("pendingNotificationReader");
    // Build the query
    final OraclePagingQueryProvider oraclePagingQueryProvider = new OraclePagingQueryProvider();
    oraclePagingQueryProvider.setSelectClause("SELECT " +
            " cegct.AUDI_TI_CREATION, " +
            " CNMA_CO_PLATFORM_MESSAGE, " +
            " OTHERFIELDS... ");
    oraclePagingQueryProvider.setFromClause("FROM TABLE1 cegct " +
            " JOIN TABLE1 notip ON cegct.field1 = notip.field1 " +
            " AND notip.field2 = :frSur ");
    oraclePagingQueryProvider.setWhereClause("WHERE "
            + " cegct.field3 = 0 "
            + " AND cegct.field4 in (:notifStatusList) ");
    // Define a non-repeating combination of columns so the reader can page
    Map<String, Order> sortKeys = new HashMap<>();
    sortKeys.put("CNMA_CO_PLATFORM_MESSAGE", Order.DESCENDING);
    sortKeys.put("AUDI_TI_CREATION", Order.DESCENDING);
    oraclePagingQueryProvider.setSortKeys(sortKeys);
    reader.setQueryProvider(oraclePagingQueryProvider);
    String frSur = stepExecution.getJobExecution().getExecutionContext().getString(Constants.FM_ROLE_SUR_ZK);
    String notifStatus = stepExecution.getJobExecution().getExecutionContext().getString(Constants.STATUS_REPORTS);
    Map<String, Object> parameters = new HashMap<>();
    parameters.put("frSur", frSur);
    parameters.put("notifStatusList", Arrays.asList(StringUtils.split(notifStatus, ",")));
    reader.setParameterValues(parameters);
    Integer initLoaded = stepExecution.getJobExecution().getExecutionContext().getInt(Constants.RECOVER_PENDING_NOT_COMMIT);
    reader.setPageSize(initLoaded);
    reader.setRowMapper(new BeanPropertyRowMapper<PendingNotificationDTO>(PendingNotificationDTO.class));
    return reader;
}
(I've hidden some irrelevant fields and table names.)
Well, I ran a test and all 3 records match the select; with page size 1 they are read one at a time. The first chunk's reader selected my "test-jj3#..." record, the second chunk's reader selected "test-jj2#...", but the third chunk's reader did not recover any record (it should have recovered the last 'test-jj#...' element).
These are the generated SQL statements (I've hidden some sensitive, irrelevant fields):
First chunk: selects 1 row
SELECT * FROM (
    SELECT
        cegct.AUDI_TI_CREATION,
        CNMA_CO_PLATFORM_MESSAGE, [otherfields]
    FROM [FROM]
    WHERE [where]
    ORDER BY CNMA_CO_PLATFORM_MESSAGE DESC, AUDI_TI_CREATION DESC
) WHERE ROWNUM <= 1;
Second chunk: selects 1 row (here the ROWNUM filter is combined with the sort-key predicate)
SELECT * FROM (
    SELECT
        cegct.AUDI_TI_CREATION,
        CNMA_CO_PLATFORM_MESSAGE, [otherfields]
    FROM [FROM]
    WHERE [where]
    ORDER BY CNMA_CO_PLATFORM_MESSAGE DESC, AUDI_TI_CREATION DESC
) WHERE
    ROWNUM <= 1 AND (
        (CNMA_CO_PLATFORM_MESSAGE < 'test-jj3#2774#20210422112434957#00026129')
        OR
        (CNMA_CO_PLATFORM_MESSAGE = 'test-jj3#2774#20210422112434957#00026129' AND AUDI_TI_CREATION < TO_DATE('2021-04-22 11:24:34', 'YYYY-MM-DD HH24:MI:SS'))
    );
Third chunk: selects 0 rows
SELECT * FROM (
    SELECT
        cegct.AUDI_TI_CREATION,
        CNMA_CO_PLATFORM_MESSAGE, [otherfields]
    FROM [FROM]
    WHERE [where]
    ORDER BY CNMA_CO_PLATFORM_MESSAGE DESC, AUDI_TI_CREATION DESC
) WHERE
    ROWNUM <= 1 AND (
        (CNMA_CO_PLATFORM_MESSAGE < 'test-jj2#2774#20210422112434957#00026129')
        OR
        (CNMA_CO_PLATFORM_MESSAGE = 'test-jj2#2774#20210422112434957#00026129' AND AUDI_TI_CREATION < TO_DATE('2021-04-22 11:24:34', 'YYYY-MM-DD HH24:MI:SS'))
    );
Sorry for my English; I hope you can understand my problem.
Logs for the prepared SQL statement:
Executing prepared SQL statement [SELECT * FROM (
SELECT
cegct.AUDI_TI_CREATION,
CNMA_CO_PLATFORM_MESSAGE,
OTHERFIELDS...
FROM TABLE1 cegct
JOIN TABLE2 notip ON cegct.field1 = notip.field1
AND notip.field2 = ?
WHERE cegct.field3 = 0
AND cegct.field4 in (?, ?, ?)
ORDER BY CNMA_CO_PLATFORM_MESSAGE DESC, AUDI_TI_CREATION DESC) WHERE ROWNUM <= 1]
20221116 12:52:43.560 TRACE org.springframework.jdbc.core.StatementCreatorUtils [[ # ]] - Setting SQL statement parameter value: column index 1, parameter value [1], value class [java.lang.String], SQL type unknown
20221116 12:52:43.560 TRACE org.springframework.jdbc.core.StatementCreatorUtils [[ # ]] - Setting SQL statement parameter value: column index 2, parameter value [11], value class [java.lang.String], SQL type unknown
20221116 12:52:43.560 TRACE org.springframework.jdbc.core.StatementCreatorUtils [[ # ]] - Setting SQL statement parameter value: column index 3, parameter value [12], value class [java.lang.String], SQL type unknown
20221116 12:52:43.560 TRACE org.springframework.jdbc.core.StatementCreatorUtils [[ # ]] - Setting SQL statement parameter value: column index 4, parameter value [13], value class [java.lang.String], SQL type unknown
A bind variable is a single value; therefore, when you use:
AND cegct.field4 in (:notifStatusList)
then :notifStatusList is a single string, NOT a list of values, and you are effectively doing the same as:
AND cegct.field4 = :notifStatusList
If the bind variable :notifStatusList holds a single value this will work; however, when you try to pass in multiple values it will not match each of them but will instead try to match field4 against the entire delimited list (which fails and filters out all the rows).
If you want to pass a delimited string then use:
AND ',' || :notifStatusList || ',' LIKE '%,' || cegct.field4 || ',%'
Alternatively, pass the values as an array (rather than a delimited string) into an Oracle collection and then test to see if it is in that collection.
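For illustration, here is a minimal plain-JDBC sketch of the delimited-string workaround. The connection URL and the simplified table/column names are placeholders; the status values are taken from the question's log:

import java.sql.*;

public class DelimitedListBind {
    public static void main(String[] args) throws SQLException {
        // The whole delimited list is bound as ONE string, exactly as the reader does.
        String notifStatusList = "11,12,13";
        String sql = "SELECT cegct.CNMA_CO_PLATFORM_MESSAGE, cegct.AUDI_TI_CREATION "
                   + "FROM TABLE1 cegct "
                   + "WHERE cegct.field3 = 0 "
                   // Match field4 against the delimited list without needing one
                   // bind variable per value:
                   + "AND ',' || ? || ',' LIKE '%,' || cegct.field4 || ',%'";
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/SERVICE", "user", "pass");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, notifStatusList);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}

With JdbcPagingItemReader, the same predicate can go into setWhereClause(...) with :notifStatusList left as a single string parameter.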

how to import huge tsv file into h2 in memory database with spring boot

I have huge TSV files and I need to import them into my H2 in-memory database.
I can read them with Scanner and import them line by line, but it takes hours!
Is there any faster way to import a TSV file into an H2 in-memory database?
Use INSERT INTO ... SELECT CONVERT(...) FROM CSVREAD(...) to import directly from the file into your H2 table.
How to read a CSV file into an H2 database:
public static void main(String[] args) throws Exception {
    Connection conn = null;
    Statement stmt = null;
    Class.forName("org.h2.Driver");
    conn = DriverManager.getConnection("jdbc:h2:~/test", "", "");
    stmt = conn.createStatement();
    stmt.execute("drop table if exists csvdata");
    stmt.execute("create table csvdata (id int primary key, name varchar(100), age int)");
    // Bulk-load the whole file in a single statement
    stmt.execute("insert into csvdata ( id, name, age ) select convert( \"id\", int ), \"name\", convert( \"age\", int ) from CSVREAD( 'c:\\tmp\\sample.csv', 'id,name,age', null )");
    ResultSet rs = stmt.executeQuery("select * from csvdata");
    while (rs.next()) {
        System.out.println("id " + rs.getInt("id") + " name " + rs.getString("name") + " age " + rs.getInt("age"));
    }
    rs.close();
    stmt.close();
    conn.close();
}
Or
SELECT * FROM CSVREAD('test.csv');
-- Read a file containing the columns ID, NAME, with UTF-8 charset and | as field separator
SELECT * FROM CSVREAD('test2.csv', 'ID|NAME', 'charset=UTF-8 fieldSeparator=|');
SELECT * FROM CSVREAD('data/test.csv', null, 'rowSeparator=;');
-- Read a tab-separated file
SELECT * FROM CSVREAD('data/test.tsv', null, 'rowSeparator=' || CHAR(9));
SELECT "Last Name" FROM CSVREAD('address.csv');
SELECT "Last Name" FROM CSVREAD('classpath:/org/acme/data/address.csv');
See the H2 CSVREAD function documentation.
NOTE: You can specify the file's field separator for these commands.
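Combining the two ideas for the TSV case, here is a minimal sketch; the file path and column names are placeholders, and the file is assumed to have a header line (id, name, age):

import java.sql.*;

public class TsvImport {
    public static void main(String[] args) throws Exception {
        Class.forName("org.h2.Driver");
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test", "", "");
             Statement stmt = conn.createStatement()) {
            // One bulk statement instead of one INSERT per line; CHAR(9) is the tab character.
            stmt.execute("create table tsvdata as "
                    + "select convert(\"id\", int) id, \"name\" name, convert(\"age\", int) age "
                    + "from CSVREAD('/tmp/sample.tsv', null, 'fieldSeparator=' || CHAR(9))");
            try (ResultSet rs = stmt.executeQuery("select count(*) from tsvdata")) {
                rs.next();
                System.out.println("rows imported: " + rs.getLong(1));
            }
        }
    }
}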

How to insert into a snowflake variant field using a DAO?

I have the following code:
@RegisterMapper(MyEntity.ResultMapper.class)
@UseStringTemplate3StatementLocator
public interface MyDao {
    @Transaction(TransactionIsolationLevel.SERIALIZABLE)
    @SqlBatch("INSERT INTO mySchema.myTable (" +
            " id, entity_type, entity_id, flags " +
            " ) VALUES " +
            "(" +
            " :stepId , :entityType , :entityId, parse_json(:flags) " +
            ")")
    @BatchChunkSize(500)
    Object create( @BindBean List<MyEntity> entities );
}
As you can see, I am bulk inserting a list of entities into my Snowflake table using this DAO.
The issue is that I am unable to insert into the flags column, which is a VARIANT. I have tried to_variant(:flags) and currently parse_json(:flags), but JDBI keeps throwing the following error:
net.snowflake.client.jdbc.SnowflakeSQLException: SQL
compilation error:
Invalid expression [PARSE_JSON(?)] in VALUES clause
[statement:"INSERT INTO mySchema.myTable ( id, entity_type,
entity_id, flags ) VALUES ( :stepId , :entityType , :entityId,
parse_json(:flags) )", located:"null", rewritten:"null",
arguments:{ positional:{}, named:{timeStamp:'null',
entityType:MYENTITY,
flags:'{"client":"myClient","flow":"myFlow"}',stepId:null,
entityId:'189643357241513', class:class myOrg.MyEntity}, finder:[]}]
How should I pass the value for the flags column? Has anyone attempted this before? The flags field in MyEntity is under my control; I can keep it as a POJO or a String, whichever helps resolve this issue.
See the comment by Jiansheng Huang for the answer: Snowflake rejects expressions over bind variables in a VALUES clause (as the error message shows), but accepts them in a SELECT, so rewrite the insert as:
INSERT INTO T SELECT parse_json(:flag);
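Applied to the DAO above, only the SQL text changes. A sketch with the same bind names as the question:

@SqlBatch("INSERT INTO mySchema.myTable (" +
        " id, entity_type, entity_id, flags " +
        ") SELECT " +
        // parse_json over a bind variable is rejected in a VALUES clause
        // but accepted in a SELECT:
        " :stepId , :entityType , :entityId , parse_json(:flags)")
@BatchChunkSize(500)
Object create( @BindBean List<MyEntity> entities );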

oracle insert statement with blob does not work

Hi, I got a DB dump from a MySQL test database and I want to insert it into an Oracle database with the same table structure, but I am getting the following error:
[42000][972] ORA-00972: identifier is too long
This is the statement:
INSERT INTO aic.keystore (id, alias, keystore_bytes, password, type, description) VALUES (61, 'test', hextoraw(0x30820B5534002010330820A4A06092A864886F70D010701A0820A3B04820A3730820A333082058806092A864886F70D010701A082057904820575308205713082056D060B2A864886F70D010C0A0102A08204FA308204F63028060A2A864886F70D010C0103301A0414DE03192F0E0A847E7E03FA8F08E27ADE383E2A4902020400048204C81BD11FB045C9043D446FB50851C0792667310D1EFF7CF87AB122B01448E73F30873182A22DACDC2D630CA0C2EDB79D6ACEDC4E1F69BE37E535BE7838B149685A661AA829A457D5FC87EEE7BA3D8ADD6E4D8996A258B16EECC9706085414832A49F60060E55CEF434DF30D58BC77F275F4EA9B1B52DD8A9CC7EC6911390EE716C5C31C512DA5947FFB8DBDB48240921DABE0487F79DEDFCC739EFF011A4672FA7DA9626C053BB5075A94F51BCB322CFF9AD6CCE99EC5AF09E6628A4AA9724A05208AB90530B963890D8BEA0146B7CD6B53C09866F808878AC98824A8A1489216EE2951FBF024BD364BD7385D5FF7552E9A0110FE37E38C3219EAA8D7EFB6570D07203D6E3F17214FD665DF18B875161E85B15EB378A557A1EC8A6C56B97E2FE24C1350A6C937C082EBC00D029D0C5C0791E0A45C9BF785EC7D02AF40DFF5014B500E1FB4E6893934231F0BBFDD3DB0B0C3D6B8595E5BC58DD28C209ECE234908DCAFFADCA1E9FFDECEBC02A195F63964287CFE542CF03E132410AD77923F480C5CF9F4F0107BE672AA06184C86ABD743A6D06F20A73E97EB7EEA9761C72338077B4A3A07089028D32909308BA87D2166EBD8BF4EB5261D33784F7C4D0DAF2FE69CF3E60A38887932EF6B227BCF56950B9FE74FD29E35968E5FFF74889413C56E47C20973DC13E8700C2570930594F211B93898BC8CB3AB4F602DD9C457DC9FDA57A14C257EC3D1454CBD7545092AB5C11C8670E82410973DB2E6DECB7AEBFDD0585665E7C24E5227536B154326AF3C6545EA01403555DC84CCC9B4714FB23308247455C814F293B5FAA6A5C48B20B038006D0B80923E7F609890350A25DEDC333F9C636A76360F226C13A8664B6E2DA44EBA4AFC45C32102A674B461CADC86B14C80AF7924E2F7BCC2F7FBFF1F73AA4B928EF7D4250468B6754C4639FD819C6BD5D411C423C8C6FB752AC244C5BD2D3B4609E60E277B1A827F1F337C398D663C7349044C49EC021F6382BF7CA90C80DDB59BB52E63D00C6302C9D4F876DCD605E138D60EA306A6EF4DF2B2CDF3B6092EA8D5B1DDF6B6C5ABF3A7A5A626CD5BC41040E61D3D2C79F1CE1963F5DB384323C0C429B414E7C81D86D9B68586A110BF3A6B429F867A7122326CF106FBD8EEB88621AEB030ACDB71BCED42A44F2A0C1B73DE68F6C7C12E015A4717B6AC664F280A47859CE7F16934173363EE1374ABB8CFE6849E621563239A7195605E2E7EA571686808609057E1AA02466AA5EB9A9DFB18693C2606D7FF7C4F3CA66C267F13EEBFEBA7C33BF199056706963A2E499FF26940DFD0A17462506DE0ACFDAA043829A732C1EF2ED12412743CD557A1261FC15F8B4FE374929D060FD15D7E032E6E743B3EC838AF5B99F9A1609457E064BE62DE0513F86F1D1FB9A39008B5E6BB5A60225F1D5915FE3E9E79661F1D73A10195985FFAEDFFCDF68C77BE3E6E46DDBF0B204D4377813566B7D695A822054D5DB052065A7C23A622A73208402DFD9C98CB1D785E239EE8B7FB8272374ECCE946128B74959E8ACAE9366773E4F2FD422F87AC71A30B0EDA25D3865DD33D84C6BF6C5B7893848FCEEDA666FE2558E2CBEAE41BB0A235926CFD5FA292C6510661487D08A0A475C0776D0D6CBDB3E1275DD42EB4A8B7C702C8D102576815CEB80434606B5EF4557A055C8FC8928228BD4472AFC3F2CDC87B828F6281C134E636DBA488EDC4AC38D177D033160303B06092A864886F70D010914312E1E2C0073006500720076006900630065006100630063006F0075006E0074002D00730061006D006C002D00700077302106092A864886F70D0109153114041254696D652031343836393937323733363833308204A306092A864886F70D010706A0820494308204900201003082048906092A864886F70D0107013028060A2A864886F70D010C0106301A0414167A515711D35DEE864529821AABC3A221E4F887020204008082045022F58215DA63413260D3B4F87D4BABC2D4CBC6A12AF3086AA0F7FA18A722C022A18B3D9F8AA12397EC427233EE8CD90B2350FF8DBC05228F1507E14B2F15F8C7AFC2372FD3E89B2898F327DD4D06D66459B3A8064AB0644195F253E76B7A4F5F6CAFB8564BCC159CE24F8BCA72951EC45008BEA430AB48B3221D368D3F3F5F64AAD0B84E1527181C11040D9E1A8BE737B4CA8CAA0BB3DA607DAAABCFC73586CC589D8DC052F700698B2A10227EDABBD4AF1D4
F93261D5B5C763EAAA00AB2EA7C912AFF5B8268769FD5ECAD7A9B17445774765AE8C1A52BC7CE11A7E1BA3C4B419C8EDA911A14BDFD171FAD16B8F825BAC398A0823D942BC555769301E68AF614C8F21E34B0B9006E8329EF039E04373FBA01840E02DBB60822780E59F13BC4C0A538906711819C5AEDE4E14704CC5AE0C94E9F7787962C672F0FAD9307A3CAB4EF0E2DF9A963A2975B787128187163CCF6D37325788422544F89FE73941756FA1D3BA261158B415677421586109B03B886BFA17ED79515E376D4F3C12BF917D88671AE2F0961042F839C938B0692AB09AD89B106ADF544C3D9688E2C2F4BE60BB53636AC067B32C47D696527E38734CF263DDBFD365569E2F9E59E6EDD668CCEB7451CD121FF87B983C51AE3913F711BBCC3029B54A2A8CCDB1A933EC554263636DD798BD997BEF9FBD1F158619BB85C326A0AAD6B1B38E4449BBB4A5654985095B57F5AF2E1AD89D73807EAB63F3D50475803E81B887F0B7D00AA618810A2E4D19A5122E03030D485811DA64365D008864681EF8DCECD70BB09EF0ACFD9F2017701BE71F152BE00091CAF13EA2B34060F25038A2EEA1891656D88B7F93596070EAC35132E98CEDD359773B255E39AA2F36EE802076E7214ED9A6D83081C4F81581F68D776DA30F57CD4364BF2F415795A7E9828465FEA0CF1C39CF8D21E3AC05319B804274C30DFA70DE8BB0262C52D940EB964DD0805FF1DEE3DC00DFEFD19E5657E2516F4E39A70CBD9D1EFD24EF3E4FA87F2C568A9703B00495B630822541B01E5C8AFC86944F8E4FD5D5BA3072010EBED3927DE4D26043AD21539D95A8E2F0AC1BD89C18D288F64E0F29B87CFA918BF96373C3C3CD792F35457D0CD15C5290385FDA57F0A3A219707F72E37943485AD4F181BA5E6A1EFD4CB58812CE9F8768621AEE4FCFB3905B178479F19C5DF94A2DA202D79719023052595595AEC3DDE1501F6ECCE2B32E9A1CD56F659FA0CCFC87DDC4DA44D6815148A56272AB692C48A962B1007710C60F5D0063B46EA011DC8662A1B060CF8CA8204BF4EFD90BB9CA2B1308268B924E5E7CBB48FD4C561D1148861A5D806FBF36E27708DD461AD60867952A2F35D8D74CCDBB86D81915E8A4AC5560D5D191BF48B3536D8FD2A51A6C6F048E3C06F9E9CA4E96BC513A6C9472368F0B03D35BE18B958EE7743ABC55A6B82F25D196C4B42BF00267CB53970544ADA6C89E3B6D2C49541F0A3CA857AE3C9B56ABDBE32791108DA35E989127028025871B4F0A15B1B86D1E210DB8A20660D3B2A64FD9EF19100A78A49139330303D3021300906052B0E03021A0500041494A0003D236C18865528381FA607BBBEB2E377F20414D8A7FC776095D6A60878C86ABCAB8498AC02C6F802020400), '123456', 'PKCS12', null);
If I remove the keystore_bytes column from the insert, it completes successfully. So why am I getting "identifier is too long"? I don't have any identifiers that exceed 30 characters...
The error apparently comes from the MySQL-style hex literal: Oracle has no 0x... syntax, so it lexes 0x30820B55... as the number 0 followed by an identifier starting with x, which exceeds the identifier length limit, hence ORA-00972. HEXTORAW expects a quoted string, e.g. hextoraw('30820B55...'), and since a SQL string literal is limited to 4000 bytes, a value this large has to be assembled in PL/SQL with the dbms_lob package. You can generate that PL/SQL with logic like the Perl snippet below, which chunks the text into 32,000-byte pieces:
my $sqllen = length($sql);
if ($sqllen >= 32000) {
    for (my $i = 1; $i <= int(($sqllen / 32000) + 0.99); $i++) {
        my $temptext = substr($sql, 32000 * ($i - 1), 32000);
        $sqltext .= "dbms_lob.append(my_sqltext, '$temptext');" . "\n";
    }
}
else {
    $sqltext = "my_sqltext := '$sql';" . "\n";
}
Once you have the LOB you can insert it using dbms_lob.write.
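If the data is being loaded from code rather than from a SQL script, binding the bytes sidesteps both the 0x literal and the 4000-byte literal limit. A minimal JDBC sketch; java.util.HexFormat requires Java 17+, the URL is a placeholder, and the hex string is truncated for illustration:

import java.sql.*;
import java.util.HexFormat;

public class KeystoreInsert {
    public static void main(String[] args) throws SQLException {
        // Truncated for illustration: use the full hex payload, 0x prefix removed.
        String hexDump = "30820B55";
        byte[] keystoreBytes = HexFormat.of().parseHex(hexDump);
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/SERVICE", "user", "pass");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO aic.keystore (id, alias, keystore_bytes, password, type, description) "
                 + "VALUES (?, ?, ?, ?, ?, ?)")) {
            ps.setInt(1, 61);
            ps.setString(2, "test");
            ps.setBytes(3, keystoreBytes);  // the driver handles the BLOB binding
            ps.setString(4, "123456");
            ps.setString(5, "PKCS12");
            ps.setNull(6, Types.VARCHAR);
            ps.executeUpdate();
        }
    }
}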

How to return all rows if IN clause has no value?

Following is a sample query.
CREATE PROCEDURE GetModel
(
    @brandids varchar(100),   -- brandids = '1,2,3'
    @bodystyleid varchar(100) -- bodystyleid = '1,2,3'
)
AS
select * from model
where brandid in (@brandids) -- use a UDF to return a table for the comma-delimited string
and bodystyleid in (@bodystyleid)
My requirement is that if @brandids or @bodystyleid is blank, the query should return all rows for that condition.
Please guide me on how to do this. Also, suggest how to write this query for the best performance.
You'll need dynamic SQL or a split function for this anyway, since IN ('1,2,3') is not the same as IN (1,2,3).
Split function:
CREATE FUNCTION dbo.SplitInts
(
    @List VARCHAR(MAX),
    @Delimiter CHAR(1)
)
RETURNS TABLE
AS
    RETURN ( SELECT Item = CONVERT(INT, Item) FROM (
        SELECT Item = x.i.value('(./text())[1]', 'int') FROM (
            SELECT [XML] = CONVERT(XML, '<i>' + REPLACE(@List, @Delimiter, '</i><i>')
                + '</i>').query('.') ) AS a CROSS APPLY [XML].nodes('i') AS x(i)) AS y
        WHERE Item IS NOT NULL
    );
Code becomes something like:
SELECT m.col1, m.col2 FROM dbo.model AS m
LEFT OUTER JOIN dbo.SplitInts(NULLIF(@brandids, ''), ',') AS br
    ON m.brandid = COALESCE(br.Item, m.brandid)
LEFT OUTER JOIN dbo.SplitInts(NULLIF(@bodystyleid, ''), ',') AS bs
    ON m.bodystyleid = COALESCE(bs.Item, m.bodystyleid)
WHERE (NULLIF(@brandids, '') IS NULL OR br.Item IS NOT NULL)
  AND (NULLIF(@bodystyleid, '') IS NULL OR bs.Item IS NOT NULL);
(Note that I added a lot of NULLIF handling here... if these parameters don't have a value, you should be passing NULL, not "blank".)
Dynamic SQL, which will have much less chance of leading to bad plans due to parameter sniffing, would be:
DECLARE @sql NVARCHAR(MAX);
SET @sql = N'SELECT columns FROM dbo.model
    WHERE 1 = 1 '
    + COALESCE(' AND brandid IN (' + @brandids + ')', '')
    + COALESCE(' AND bodystyleid IN (' + @bodystyleid + ')', '');
EXEC sp_executesql @sql;
Of course, as @JamieCee points out, dynamic SQL could be vulnerable to injection, as you'll discover if you search for dynamic SQL anywhere. So if you don't trust your input, you'll want to guard against potential injection attacks, just like you would if you were assembling ad hoc SQL inside your application code.
When you move to SQL Server 2008 or better, you should look at table-valued parameters (example here).
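For reference, here is a sketch of the table-valued-parameter route from Java with the Microsoft JDBC driver. It assumes a user-defined table type (CREATE TYPE dbo.IntList AS TABLE (Item INT)) and illustrative connection/type names throughout, so treat it as a starting point rather than a drop-in:

import java.sql.*;
import com.microsoft.sqlserver.jdbc.SQLServerDataTable;
import com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement;

public class TvpExample {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://dbhost;databaseName=cars;user=sa;password=secret")) {
            // Build the rows for the table type dbo.IntList (assumed to exist).
            SQLServerDataTable brandIds = new SQLServerDataTable();
            brandIds.addColumnMetadata("Item", Types.INTEGER);
            brandIds.addRow(1);
            brandIds.addRow(2);
            brandIds.addRow(3);
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT m.* FROM dbo.model AS m WHERE m.brandid IN (SELECT Item FROM ?)")) {
                // setStructured is specific to the Microsoft driver.
                ps.unwrap(SQLServerPreparedStatement.class)
                  .setStructured(1, "dbo.IntList", brandIds);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("brandid"));
                    }
                }
            }
        }
    }
}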
if (@brandids = '' or @brandids is null)
Begin
    Set @brandids = 'brandid'
End
if (@bodystyleid = '' or @bodystyleid is null)
Begin
    Set @bodystyleid = 'bodystyleid'
End
Exec('select * from model where brandid in (' + @brandids + ')
    and bodystyleid in (' + @bodystyleid + ')')
