What is the practical difference between a .pls and a .sql file in Oracle?
What are the restrictions on different types of statements in both?
I have been given a project in Unix (Korn shell scripts) which uses both .sql and .pls files in different places, and I am trying to figure out which should be used where.
File suffixes are to a large extent just a matter of convention. We can use them to provide useful metadata to other developers.
So .pls indicates (we hope) that the file is a PL/SQL script, creating a PL/SQL package or stored procedure. In other shops we might see .pks and .pkb to indicate a package spec script and a package body script respectively. A file with a .tab or a .tbl extension indicates DDL to create a table. Because this is just a convention it requires discipline (or code reviews) to make sure we remain consistent.
The one exception is .sql. Although the convention is that it represents some SQL (a query, or perhaps DML or DDL), it has a special property in SQL*Plus. If we have a script called whatever.sql, we can call it like this in SQL*Plus...
SQL> @whatever
... whereas if a script has any other extension we must include the extension in the call...
SQL> @whatever.pls
Other IDEs or other clients (e.g. build scripts) may use file extensions as a filtering mechanism or for applying syntax highlighting, but their rules are usually configurable through preferences.
" What are the restrictions on different types of statements in both?"
To sum up, there are no restrictions. Some places I have worked used nothing but .sql files; others had a complicated menagerie of scripts: .tbl, .idx, .vw, etc. Sociopaths can use just .txt for all their files: the database won't care. Provided it's valid Oracle syntax, the code will run.
As the title suggests, I am just trying to do a simple export of a DataStage job. The issue occurs when we export the XML and begin examining it. For some reason, the wrong information is being pulled from the job and placed in the XML.
As an example the SQL in a transform of the job may be:
SELECT V1,V2,V3 FROM TABLE_1;
Whereas the XML for the same transform may produce:
SELECT V1,Y6,Y9 FROM TABLE_1,TABLE_2;
It makes no sense to me how the export of a job could differ from the job's actual design.
The parameters I am using to export are:
Exclude Read Only Items: No
Include Dependent Items: Yes
Include Source Code with Routines: Yes
Include Source Code with Job Executable: Yes
Include Source Content with Data Quality Specifications: No
What tool are you using to view the XML? Try using something less smart, such as Notepad or WordPad. This will establish whether or not the problem lies with your XML viewer.
You might also try exporting in DSX format and examining that output, to see whether the same symptoms are visible there.
Thank you all for the feedback. I realized that the issue wasn't necessarily with the XML. It had to do with numerous factors within our DataStage environment. As mentioned above, the data connections were old and unreliable. For some reason this does not impact our current production refresh, so it's a non-issue.
The other issue was the way that the generated-SQL and custom-SQL options work when creating the XML. In my case, there were times when old code was kept in the system, but the option was switched from custom SQL to SQL generated from the column definitions. This led to inconsistent output from my script, so the mini-project was scrapped.
I recently discovered that GET is a reserved word in SQL Developer, but I can't figure out what it's for. I tried the Oracle Help Center's list of reserved words, but there's no mention of it.
In short: what is the use of GET in PL/SQL?
It doesn't mean anything in PL/SQL, unless you have an object with that name. Or in SQL.
It's a SQL*Plus command:
GET [FILE] file_name[.ext] [LIST | NOLIST]
Loads an operating system file into the SQL buffer.
You can get a file into the buffer and edit it there before executing it, rather than just running it directly with start or @.
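A rough illustration of a session (myscript.sql is a hypothetical one-line script in the current directory; CHANGE and RUN are standard SQL*Plus buffer-editing commands):
SQL> get myscript
  1* select sysdate from dual
SQL> change /sysdate/user/
  1* select user from dual
SQL> run
Note that GET defaults to the .sql extension, just as @ and start do.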
SQL Developer implements, or at least recognises or allows, most SQL*Plus statements, presumably for compatibility reasons (though some things don't work, such as set embed on).
It seems to silently ignore get.
It's in the documentation's keyword list rather than the reserved words list. You can use it as an object name etc.; the documentation recommends you don't, but as this is a client keyword rather than a SQL one, doing so wouldn't be particularly noticeable. At least, it wouldn't be if SQL Developer didn't highlight it as a keyword...
In one of the PL/SQL packages in our Oracle database, there's a global variable, g_file_path, that points to a location on the system where certain files will be stored:
CREATE OR REPLACE PACKAGE xyz_package
AS
...
g_file_path VARCHAR2(80) := '/usr/tmp';
...
This variable is used in various UTL_FILE operations throughout the package.
Unfortunately, the path chosen is inadequate, and I need to figure out how to set the path dynamically depending on the environment where the database is running, e.g. so the path becomes /opt/ENVDB/xyz, where ENVDB changes depending on the environment.
One idea is to emulate the behavior of the shell script:
>echo $XYZ_DB_TOP
That points to a suitable folder where the files can be stored. I can't think of a suitable PL/SQL function that emulates this behavior though. Any smart/simple solution to this problem? Any help is appreciated!
If you're using Oracle 9i or higher you should use a directory object instead. This is safer, because it only permits complete paths (no wildcards). It also doesn't require a database restart, unlike using UTL_FILE_DIR in the init.ora file. And it is far more secure because we can grant privileges on each directory to specific individual users.
But the aspect that will interest you the most right now is that the abstraction of the directory object makes it a cinch to change the actual OS path, so it can be different in each environment. Just like this:
alter directory temp_data as '/home/oracle/tmp';
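If you need the whole lifecycle, here is a minimal sketch (the directory name xyz_files, the OS path, and the grantee xyz_user are all illustrative):
-- one-time setup, typically done by a DBA in each environment
create directory xyz_files as '/opt/DEV/xyz';
grant read, write on directory xyz_files to xyz_user;

-- UTL_FILE then takes the directory object's name, not a raw OS path
declare
  f utl_file.file_type;
begin
  f := utl_file.fopen('XYZ_FILES', 'output.txt', 'w');
  utl_file.put_line(f, 'some text');
  utl_file.fclose(f);
end;
/
Because the package only ever refers to the directory by name, repointing it per environment is just the single ALTER DIRECTORY statement shown above.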
The CREATE DIRECTORY entry in the Oracle documentation has more detail.
I have compiled a list of db object names, one name per line, in a text file. I want to know, for each name, where it is being used. The target of the search is a group of folders containing sub-folders of source code.
Before I give up looking for a tool to do this and start creating my own, perhaps you can help point me to an existing one.
Ideally, it should be a Windows desktop application. I have not used grep before.
Use grep (there are plenty of ports of this command to Windows; search the web).
Alternatively, use Agent Ransack.
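Since you already have one name per line in a text file, grep can use that file directly as its pattern list. A sketch (dbnames.txt and the source path are illustrative):
# -f reads one pattern per line from the file; -r recurses into sub-folders;
# -w matches whole words only; -n prints line numbers
grep -rnw -f dbnames.txt /path/to/source
The -w flag cuts down false positives from object names that happen to be substrings of longer identifiers.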
See our Source Code Search Engine. It indexes a large code base according to the atoms (tokens) of the language(s) of interest, and then uses that index to quickly execute structured queries stated in terms of language elements. It is a kind of super-grep, but it isn't fooled by comments or string literals, and it automatically ignores whitespace. This means you get a lot fewer false-positive hits than you get with grep.
If you had an identifier "foo", the following query would find all mentions:
I=foo
For C and Java, you can constrain the types of identifier access to Use, Read, Write or Define. For example,
D=bar*
would find only declarations of identifiers which started with the letters "bar".
You can write more complex queries using sequences of language tokens:
'int' I=*baz* '['
for C, would find declarations of any variable name that contained the letters "baz" and apparently declared an array.
You can see the hits in a GUI, and one-click navigate to a source code view of any hit.
It is a Windows application. It handles a wide variety of languages: C#, C++, Java, ... and many more.
I created an SSIS package to load my 500+ source code files, which are distributed at various depths within folders belonging to several projects, into a table, with one row per line from the files (10K+ lines in total).
I then wrote a SELECT statement against it, cross-applying the table that keeps the list of 5K+ db object names, with the help of the CLR RegEx functions for MS SQL described at http://www.simple-talk.com/sql/t-sql-programming/clr-assembly-regex-functions-for-sql-server-by-example/. The query took almost 1.5 hours to complete.
I know it's long-winded, but this is exactly what I needed. Thank you for your efforts in guiding me. I would be happy to explain the details further, should anyone be interested in using my method.
INSERT
    dbo.DbObjectUsage
SELECT
    do.Id AS DbObjectId,
    fl.Id AS FileLineId
FROM
    dbo.FileLine AS fl -- 10K+ rows, one per source line
CROSS APPLY
    dbo.DbObject AS do -- 5K+ db object names
WHERE
    -- whole-word match of the object name anywhere in the line
    dbo.RegExIsMatch('\b' + do.name + '\b', fl.Line, 0) != 0
What are the best ways (or at least the most common ways) to handle input in ASP (VBScript)? My main concerns are HTML/JavaScript injection and SQL injection. Is there some equivalent to PHP's htmlspecialchars or addslashes, et cetera? Or do I have to do it manually with something like string replace functions?
The bottom line is this:
Always HTML-encode user input before you write it to your page. Server.HTMLEncode() does that for you.
Always use parameterized queries to interface with a database. The ADODB.Command and ADODB.Parameter objects are the right choice here (see the sketch after this list).
Always use the URLScan utility and IIS Lockdown on the IIS server that renders the page, unless it is IIS 6 or later, which does not require these tools anymore.
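A minimal classic-ASP sketch of points 1 and 2 (the connection string, SQL, and field names are illustrative, and a numeric id query-string parameter is assumed):
<%
' ADO constants (normally supplied by including adovbs.inc)
Const adCmdText = 1, adInteger = 3, adParamInput = 1

Dim cmd, rs
Set cmd = Server.CreateObject("ADODB.Command")
cmd.ActiveConnection = "DSN=MyDb"   ' illustrative connection string
cmd.CommandType = adCmdText
cmd.CommandText = "SELECT name FROM users WHERE id = ?"

' User input travels as a typed parameter, never as part of the SQL string
cmd.Parameters.Append cmd.CreateParameter("id", adInteger, adParamInput, , CLng(Request.QueryString("id")))
Set rs = cmd.Execute

' HTML-encode anything user-influenced before writing it to the page
If Not rs.EOF Then Response.Write Server.HTMLEncode(rs("name"))
%>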
If you stick to points 1 and 2 slavishly, I can't think of much that can go wrong.
Most vulnerabilities come from not properly encoding user input or building SQL strings from it. If you for some reason come to the point where HTML-encoding user input stands in your way, you have found a design flaw in your application.
I would add one other point to Tomalak's list.
Avoid concatenating field values into SQL code. That is, in some cases a stored procedure may build some SQL in a string to execute subsequently. This is fine unless a textual field value is used as part of its construction.
A command parameter can protect SQL code designed to input a value from being hijacked into executing unwanted SQL, but it allows such unwanted SQL to become data in the database. This is a first-level vulnerability. A second-level injection vulnerability exists if the field's value is then used in SQL string concatenation inside a stored procedure.
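A sketch of the second-level case, in SQL Server syntax for illustration (the orders table and variable names are hypothetical). Here @name holds a value that was stored safely via a parameter but originally came from a user:
-- imagine this value was stored earlier, safely, via a parameter
DECLARE @name nvarchar(100) = N'x'' OR ''1''=''1';

-- Unsafe: concatenating the stored value back into executable SQL
DECLARE @sql nvarchar(max);
SET @sql = N'SELECT * FROM orders WHERE customer = ''' + @name + '''';
EXEC (@sql);   -- the OR clause now executes, returning every row

-- Safer: keep the value as data by parameterizing the dynamic SQL too
EXEC sp_executesql
    N'SELECT * FROM orders WHERE customer = @c',
    N'@c nvarchar(100)',
    @c = @name;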
Another consideration is that this is just minimal protection. All it's doing is rendering attack attempts harmless. In many cases, though, it may be better to add a system which prevents such data entry altogether and/or alerts admins to a potential injection attack.
This is where input validation becomes important. I don't know of any tools that do this for you, but a few simple regular expressions might help. For example, "<\w+" would detect an attempt to include an HTML tag in the field.
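A sketch of that check in VBScript (the form field name "comment" is illustrative):
<%
Dim re
Set re = New RegExp
re.Pattern = "<\w+"   ' the pattern suggested above
re.IgnoreCase = True

If re.Test(Request.Form("comment")) Then
    ' reject the input here, log it, and/or alert an administrator
    Response.Write Server.HTMLEncode("HTML tags are not allowed.")
End If
%>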