I am looking for a method (or even better a DW table) that contains report properties such as Name, Description, Type, Location, etc.
I have looked through many tables but cannot find this information. I am working to build out a web portal that includes hyperlinks for all reports on the server.
Here is an example of the report properties I am looking for:
Unfortunately, the definitions you're looking for are not stored at the database level, which is super lame, but that's the way it is. They're stored in the RPD file and the web catalog at the OS level.
The web catalog is located:
- on 10g: OracleBIData/web/catalog/
- on 11g: $ORACLE_INSTANCE/bifoundation/OracleBIPresentationServicesComponent/catalog/
- on 12c: $ORACLE_HOME\user_projects\domains\bi\bidata\service_instances\ssi\metadata\content\catalog (where ssi is a service instance)
If you descend into one of those directory structures you'll see files that are named with a bunch of punctuation symbols, plus the name of the report they represent.
Just to clarify the "lame" storage: What the OP is asking for is in the presentation catalog; the RPD has nothing to do with it.
And to clarify even further: every object stored in the presentation catalog is physically represented by two files on disk: one file without a file extension, which holds the object's XML definition, and one file with an .atr extension, which contains the object's properties (what the OP is looking for) as well as the object's access permissions.
Ranting's fine, but please be precise ;-)
For what it's worth, in E-Business Suite, tables start with XDO_
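If you want to see what is actually there, a plain data-dictionary query is enough to list them. This is only a hedged sketch; it assumes the XDO_ tables are visible to your session:

    -- List the XDO_ tables visible to the current session.
    SELECT owner, table_name
    FROM   all_tables
    WHERE  table_name LIKE 'XDO\_%' ESCAPE '\'
    ORDER  BY owner, table_name;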
Is there a way to customize the folder structure for objects which get created in the project when importing a database or doing a schema comparison?
Is there a customizable template file used by Visual Studio to control how generated objects are included in the solution?
Example:
By default, a table and all its indexes get created in the "Tables" folder, in a single file.
I would like to split these into separate files. Same goes for Statistics.
Here is an image of Folder Structures:
- Server Object Explorer
- Solution Explorer (what I would like it to look like)
Note:
I know that when doing a comparison I can prepare a folder in my solution and then drag in the desired objects, but that is not an acceptable solution.
As of today, I don't think there is a way to specify folder structures apart from the default import options (Schema, Object Type, Schema\Object Type).
Well, you can split them up manually and create them in separate folders, but it's a lot of manual work depending on how many objects you have. Publish, script generation, and compare are not affected by this move, though.
Background:
I am doing a self-study on an Oracle product (Argus Safety Insight), and I need to understand the database schema of this product. I have installed the database and loaded the schema successfully. I have also generated a data model using SQL Developer Data Modeler.
Issue:
This schema has 500 tables and 700 views, which together contain around 20K columns. I can't navigate through the data model due to its huge size; SQL Developer hangs.
Question:
Can you please suggest a tool or technique for reading and understanding the logical relationships between tables in such a huge database?
You have two issues.
1: Technical - 'SQL Dev hangs' - you're asking it to open something so big that it overwhelms the memory available to the Java Virtual Machine (JVM). For really LARGE models, we recommend you bump the JVM heap to 2 or even 3 GB.
To increase the memory for the JVM, you need to find the product.conf file for SQL Developer. On Windows, it's under your user's AppData (including roaming profiles). On Mac/*nix, it's in your $HOME directory, in a hidden SQL Developer subdirectory.
The file is documented quite well, but you need to do something like -
AddVMOption -Xmx2048m
Save, then re-open SQLDev and your design.
2: Human - how do you make sense of hundreds or thousands of objects in a diagram? You just can't. So you need to find application-driving MAIN tables, and generate SubViews (a subset of the diagram) for easier digestion.
I talk about how to do this here.
Once your objects are grouped into SubViews, you can view, print, report, and search them by SubView as well.
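One hedged way to shortlist those MAIN tables, assuming you can query the data dictionary for the schema you loaded, is to see which tables are referenced by the most foreign keys; the owner name ARGUS_APP below is a placeholder, not the product's actual schema name:

    -- Tables with the most inbound foreign keys are usually the "main"
    -- tables worth placing at the centre of a SubView.
    SELECT r.owner,
           r.table_name AS referenced_table,
           COUNT(*)     AS inbound_fk_count
    FROM   all_constraints c
    JOIN   all_constraints r
           ON  r.owner           = c.r_owner
           AND r.constraint_name = c.r_constraint_name
    WHERE  c.constraint_type = 'R'
    AND    c.owner = 'ARGUS_APP'   -- placeholder: use the schema you loaded
    GROUP  BY r.owner, r.table_name
    ORDER  BY inbound_fk_count DESC;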
I realize this is possible with the FileNet P8 API; however, I'm looking for a way to find the physical document path within the database. Specifically, there are two levels of subfolders in the file store, like FN01\FN13\DocumentID, but I can't find a reference to FN01 or FN13 anywhere.
You will not find the names of the folders anywhere in the FileNet databases. The folder structure is determined by a hashing function. Here is an excerpt from this page on file stores:
Documents are stored among the directories at the leaf level using a hashing algorithm to evenly distribute files among these leaf directories.
The IBM answer is correct only from a technical standpoint of intended functionality.
If you really, really need to find the document file name and folder location, disable your actual file store(s) by making the file store folder unavailable to Content Engine. I did that for each file store by simply renaming the root FN#'s to FN#a; for instance, FN3 became FN3a. Once done, I changed the top tree folder back. I used that method so the log files would not exceed the tool's maximum output. Any method that leaves the storage location (e.g. drive, share, etc.) accessible and searchable but renders the individual files unavailable should produce the same results.
Then, run the Content Engine Consistency Checker. It will provide you with a full list of all files, IDs and locations.
After that, you can match the entries to the OBJECT_ID fields in the database tables. In non-MSSQL databases, the byte ordering is reversed for the first few octets of the UUID. You need to account for that and fix the byte ordering to match the CCC output.
...needs to be byte reversed so that it can be queried upon in Oracle.
When querying on GUIDs, GUIDs are stored in byte-reversed form in Oracle and DB2 (not MS SQL), whereby the first three sections are pair reversed and the last two are left alone.
Thus, the same applies in reverse: to match the Content Consistency Checker output against the database, you must apply the same byte-ordering reversal.
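As a minimal sketch of that pair reversal in SQL (the GUID literal is made up, and comparing the result against an OBJECT_ID column is only the intended use, not part of the checker output):

    -- Rebuild the byte-reversed hex string for a GUID so it can be
    -- compared against the binary OBJECT_ID stored in Oracle/DB2.
    WITH g AS (
      SELECT REPLACE('D5BEF5A0-1E24-4C1F-A85B-0123456789AB', '-', '') AS h
      FROM   dual
    )
    SELECT SUBSTR(h,  7, 2) || SUBSTR(h,  5, 2) ||
           SUBSTR(h,  3, 2) || SUBSTR(h,  1, 2) ||   -- first section: byte pairs reversed
           SUBSTR(h, 11, 2) || SUBSTR(h,  9, 2) ||   -- second section: byte pairs reversed
           SUBSTR(h, 15, 2) || SUBSTR(h, 13, 2) ||   -- third section: byte pairs reversed
           SUBSTR(h, 17)                             -- last two sections left alone
           AS reversed_hex
    FROM   g;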
See this IBM Tech Doc and the answer linked below for details:
IBM Technote: https://www.ibm.com/support/pages/node/469173
Stack Answer: https://stackoverflow.com/a/53319983/1854328
More detailed information on the storage mechanisms is located here:
IBM Technote: "How to translate the unique identifier as displayed within FileNet Enterprise Manager so that it matches what is stored in the Oracle and DB2 databases"
I do not suggest using this for anything but catastrophic need, such as rebuilding and rewriting an entire file store that got horrendously corrupted by your predecessor when they destroyed an NTFS (or some similarly nasty situation).
It is a workaround to bypass FileNet's hashing, which is used to obfuscate content information from those looking at the file system.
I'm looking for advice on how to best organize a new Oracle schema and its dependent files in my project directory - the sequences, triggers, DDL, etc. I've been using one monolithic file called schema.sql for some time, but I'm wondering if there's a best practice? Something like...
database/
tables/
person.sql
group.sql
sequences/
person.sequence
group.sequence
triggers/
new_person.trigger
Penny for your thoughts or a URL that I may have missed!
Thank you!
Storing DDL by object type is a reasonable approach-- anything is likely to be easier to navigate than a monolithic SQL script. Personally, though, I'd much rather have DDL organized by function. If you're building an accounting system, for example, you probably have a series of objects to manage accounts payable and a separate set of objects to manage accounts receivable along with some core objects for managing the general ledger accounts. That would lead to something along the lines of
database/
general_ledger/
tables/
packages/
sequences/
accounts_receivable/
tables/
packages/
sequences/
accounts_payable/
tables/
packages/
sequences/
As the system gets more complex, that hierarchy would naturally get deeper over time. This sort of approach would more naturally mirror the way non-database code is stored in source control. You wouldn't have a single directory of Java classes in a directory structure like
middle_tier/
java/
Foo.java
Bar.java
You would organize the classes that implement the same sorts of business logic together and separate from the classes that implement different bits of business logic.
One item to consider is those scripts which can act as 'latest only' scripts. These include CREATE OR REPLACE PROCEDURE/FUNCTION/TRIGGER and so on: you run the latest version and you are not worried about what may have previously existed in the database.
On the other hand, you have tables where you may start off with a CREATE TABLE followed by several ALTER TABLEs as the schema evolves. If you are doing an upgrade, you may want to apply several of the ALTER TABLE scripts, preferably in order.
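A minimal sketch of the two kinds of scripts (the object names are made up for illustration):

    -- "Latest only" script: safe to rerun, always deploys the current version.
    CREATE OR REPLACE PROCEDURE touch_person (p_id IN NUMBER) AS
    BEGIN
      UPDATE person SET updated_at = SYSDATE WHERE person_id = p_id;
    END;
    /

    -- Incremental scripts: the original CREATE plus the ALTERs,
    -- applied in order when upgrading an existing database.
    CREATE TABLE person (person_id NUMBER PRIMARY KEY);
    ALTER TABLE person ADD (updated_at DATE);
    ALTER TABLE person ADD (email VARCHAR2(320));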
I'd argue against a 'functional grouping' unless it is really obvious where the lines are drawn. You probably don't want to be in a position where you have a USERS table in one group, a USER_AUTHORITIES table in another, and an AUTHORITY table in a third.
If you do have decent separation, then they are probably in separate schemas and you do want to keep schemas distinct (since you can have the same object names in different schemas).
The division-by-object-type arrangement, with the addition of a "schema" directory below the database directory, works well for me.
I've worked with source control systems that have the additional division-by-function layer - if there are many objects, it adds extra searching when you're trying to cross-reference the source control file with the object you see in a database GUI navigator, which generally groups objects by type. It's also not always clear how an object should be classified this way.
Consider adding a "grants" directory for the grants made by that schema to other schemas or roles, with one file per grantee. If you have "rule-based" grants such as "the APPLICATION_USER role always gets SELECT on all of schema X's tables", then write a PL/SQL anonymous block to perform this action. (You might be tempted to reverse-engineer the grants after they get put in place by some ad-hoc method, but it's easy to miss something when new tables or views are added to the application).
Standardize on a delimiter for all scripts and you'll make your life easier if you start deploying through a build utility such as Ant. Using "/" (vs. ";") works for both SQL statements as well as PL/SQL anonymous blocks.
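As a sketch of both points, here is a rule-based grant written as a PL/SQL anonymous block and terminated with the "/" delimiter; schema X and the APPLICATION_USER role come from the example above, everything else is illustrative:

    -- Grant SELECT on all of schema X's tables to APPLICATION_USER.
    -- Rerunnable, so it picks up tables added after the initial deployment.
    BEGIN
      FOR t IN (SELECT table_name FROM all_tables WHERE owner = 'X') LOOP
        EXECUTE IMMEDIATE 'GRANT SELECT ON X."' || t.table_name || '" TO APPLICATION_USER';
      END LOOP;
    END;
    /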
In our projects we use a somewhat combined approach: we have the core of our program as the root and other functional areas in subfolders:
root/
plugins/
auth/
mail/
report/
etc.
In all these folders we have both DDL and DML scripts, and almost all of them can be run more than once: all packages are defined with CREATE OR REPLACE, all data-insertion scripts check whether the data already exists, and so on. This lets us run almost all scripts without worrying that we might break something.
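For example, a rerunnable data-insertion script might use MERGE along these lines (the table and column names here are illustrative only):

    -- Seed-data script: inserts the row only if it isn't there yet,
    -- so the script can be run any number of times.
    MERGE INTO lookup_status t
    USING (SELECT 1 AS status_id, 'OPEN' AS status_code FROM dual) s
    ON (t.status_id = s.status_id)
    WHEN NOT MATCHED THEN
      INSERT (status_id, status_code)
      VALUES (s.status_id, s.status_code);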
Obviously this approach can't be applied to CREATE TABLE and similar statements. For those scripts we wrote a small bash script that extracts the specified files and runs them without failing on particular ORA errors, such as: ORA-00955: name is already used by an existing object.
Also, all files are mixed together in the directories but are distinguished by extension: .seq for a sequence, .tbl for a table, .pkg for a package interface, .bdy for a package body, .trg for a trigger, and so on.
We also have a prefix-based naming convention for all of our files: a cl_oper.tbl table comes with a cl_oper.seq sequence and a cl_oper.trg trigger, and cl_oper_processing.pkg together with cl_oper_processing.bdy hold the logic for those objects. With this naming convention it's very easy, in a file manager, to see all the files connected with a given unit of logic in the project (which grouping directories by object type does not give you).
Hope this information helps you somehow. Please leave comments if you have any questions.
This question is addressed to a degree in this question on LINQ to SQL .dbml best practices, but I am not sure how to add to a question.
One of our applications uses LINQ to SQL and we currently have one .dbml file for the entire database, which is becoming difficult to manage. We are looking at refactoring it into separate files that are more module/functionality specific, but one problem is that many of the high-level classes would have to be duplicated in several .dbml files, since associations can't be used across .dbml files (as far as I know), along with the additional partial class code as well.
Has anyone grappled with this problem and what recommendations would you make?
Take advantage of the namespace settings. You can get to them in Properties by clicking in the white space of the O/R designer.
This allows me to have a Users table and a User class for one set of business rules, and a second Users table and User class (over the same data store) for another set of business rules.
Or, break up the library, which should also have the effect of changing the namespacing, depending on your company's naming conventions. I've never worked on an enterprise app where I needed access to every single table.
Past a certain size it probably becomes easier to work with the XML directly instead of the .dbml designer.
I have written a tool too! Mine is for scripting changes to .dbml files using C# so you can rerun them and not lose changes. See my blog http://www.adverseconditionals.com for more details.
The approach that we've used is to keep 2 .dbml files. One of them holds the stored procs, and all production DB access is done through it. The other is in a unit test folder, holds the tables and their relationships, and is used for DB data manipulation and querying in unit tests.
I have written a utility to address exactly that problem; I needed a quick app to let you select only the database objects you need. In my case I often needed a complex view, but no tables.
http://www.codeplex.com/SqlMetalInclude/