How to give table information in the SqlMetal command-line tool - sqlmetal

How do we specify table information in the command when we want to generate the external mapping file directly from the database? We have the /Database option, but how do we specify which tables to include?

You can't use SqlMetal to specify which tables in a database to generate mapping for - it's all or nothing, unfortunately.
You could use SqlMetal to generate a DBML file for the database first, then filter out <Table> elements in the DBML file as needed, using a custom process you write yourself. A DBML file is just an XML file that matches the DBML schema, so it's easy to manipulate using LINQ to XML, for example.
Once the DBML file is ready, you can pass it to SqlMetal again to generate code and an external mapping file.
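A minimal sketch of that two-step run, where the server, database, and file names are all placeholders:

sqlmetal /server:myserver /database:Northwind /dbml:Northwind.dbml
(edit Northwind.dbml here, removing the <Table> elements you don't want)
sqlmetal /code:Northwind.cs /map:Northwind.map.xml Northwind.dbml

The second invocation generates the entity classes (/code) and the external mapping file (/map) from the filtered DBML file rather than from the live database.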

Try using SqlMetal Include.

Related

How to extract SQL query from Oracle reports

I have an Oracle report file (an RDF file). I need to find which SQL queries run to extract data for this report.
As an example, how do I find which query runs once we click on the Save button related to this RDF file?
Obviously, the simplest way is to open that report in Reports Builder (one which supports the RDF version you have; Reports 2.5 probably won't open an RDF created in version 10g) and have a look at the data model editor.
Otherwise, do as I do: open the RDF (or, if you have a JSP, even better) with a text editor (such as Notepad) and search through the file for the SELECT keyword. You'll find some useless entries, but you'll certainly see useful ones as well. Copy them out of the RDF file and ... well, do whatever you planned to do with them.
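If you have a folder full of reports to scan, a command-line search does the same job in bulk. A minimal sketch (report.rdf is a placeholder name):

findstr /i /n "SELECT" report.rdf     (Windows: case-insensitive, with line numbers)
grep -ain "select" report.rdf         (Linux/macOS: -a treats the partly binary RDF as text)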

Script schema of the entire database with Datagrip

Is it possible to script the schema of an entire database (SQL Server or Postgres) using DataGrip?
I know I can get the DDL for each table and view, and the source for each stored procedure / function on its own.
Can I get one script for all objects in the database at once?
Alternatively, is there a way to search through the code of all routines at once, say if I need to find which ones use the #table temp table?
Since version 2018.2 there is a feature called SQL Generator. It generates the whole DDL for the database or schema, with several available options.
But if you just want to understand where a table is used, use the dedicated Find Usages feature instead (Alt+F7, or the context menu on a table name).
I was looking for this today and just found it. If you right-click the schema you want to copy and choose "Copy DDL", the CREATE script is copied to the clipboard.
To answer the second part of your question: a quick and easy way to search for #table in all of your procedures is the following query:
SELECT *
FROM information_schema.routines
WHERE routine_definition LIKE '%#table%'
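One caveat: on SQL Server, information_schema.routines truncates routine_definition to the first 4000 characters, so a match deep inside a long procedure can be missed. A SQL Server-specific sketch that searches the full module text instead:

SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id) AS routine_name
FROM sys.sql_modules AS m
WHERE m.definition LIKE '%#table%'  -- definition holds the complete source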
For now, only dumping tables works. In the 2016.3 EAP, which will be available at the end of August, there will be integration with mysqldump and pg_dump.

loading multiple files with the same xml schema

I've defined an XML schema in Talend, using an XML file from one provider. I have multiple providers that I need to handle separately, but they use the same XML format.
I only want to define the XML schema once, but use it in multiple jobs, each with a different file name. The XML schema seems to be tied to a filename, however, and changing the filename turns it into a built-in type. I don't want a built-in type, because I want changes to the XML schema to be made in one place.
Can somebody point me in the right direction? Should this be done using context?
It is possible to define a schema for a given file (using the wizards provided or building it yourself) and then reuse just that schema by choosing it from the repository.
So, as an example, you might wish to loop through a folder full of XML files and read them using the same schema for all of them and then load this into a database:
To do this you would start with a tFileList which points to the folder full of XML files. Set this up as usual (you probably want a file mask of "*.xml") and then link it via an Iterate flow to a tFileInputXML component, specifying the file name as ((String)globalMap.get("tFileList_1_CURRENT_FILEPATH")).
Now select Repository from the drop-down box next to Schema (it defaults to Built-In). From here, simply select the XML schema previously defined for a single file. You now use just the schema you defined but can change everything else (you probably only want control over the file name, so leave the rest as is).
Now you can simply connect it to a database component of your choice, such as a tMySQLOutput, and have the database component insert rows as usual.
This is very common, but unfortunately there's no elegant solution.
Context variables are limited to (almost) just primitive types, so the only way to do this is to define the XML schema as repository metadata and then switch it to built-in to change just the filename. This is very ugly, but AFAIK it is the only solution possible at the moment.

dbsaint - Retrieve form EXCEL

How can I retrieve data (using SQL) from Excel into a table in an Oracle database? I am using dbsaint.
Instead of DBSAINT, which developer tool should I use for this purpose?
The easiest way to do this is to export the data from Excel into a CSV file. Then use an external table to insert the data into your database table.
Exporting the CSV file can be as simple as "Save as ...". But watch out if your data contains commas. In that case you will need to ensure that the fields are delimited safely and/or that the separator is some other character which doesn't appear in your data: a set of characters like |~| (pipe tilde pipe) would work.
External tables were introduced in Oracle 9i. They are just like normal heap tables except that their data is held in external OS files rather than inside the database. They are created using DDL statements and we can run SELECTs against them (they are read-only).
Some additional DB infrastructure is required: the CSV files need to reside in an OS directory which is defined as an Oracle directory object. However, if this is a task you're going to be doing on a regular basis, then the effort is very worthwhile.
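A minimal sketch of the DDL involved, assuming the CSV has been saved to /data/csv and that the table, column, and file names are all placeholders:

CREATE DIRECTORY csv_dir AS '/data/csv';

CREATE TABLE emp_ext (
    emp_id   NUMBER,
    emp_name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY csv_dir
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
    )
    LOCATION ('emp.csv')
);

-- Query the external table like any other, then copy into the real table:
INSERT INTO emp SELECT emp_id, emp_name FROM emp_ext;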
I don't know much about DbSaint; it's some kind of database IDE like TOAD or SQL Developer, but aimed at the cheap'n'cheerful end of the market. It probably doesn't support this exact activity, especially exporting to CSV from Excel.

CSV viewer on a Windows environment for a 10MM-line file

We need a CSV viewer which can look at 10MM-15MM rows in a Windows environment, and some filtering capability on each column (regex or text searching) would be fine.
I strongly suggest using a database instead, and running queries (e.g. with Access). With proper SQL queries you should be able to filter on the columns you need to see, without handling such huge files all at once. You may need someone to write a script to load each row of the CSV file (and future CSV files) into the database.
I don't want to be the end user of that app. Store the data in SQL. Surely you can define criteria to query on before generating a .csv file. Give the user an online interface with the column headers and filters to apply. Then generate a query based on the selected filters, providing the user only with the lines they need.
This will save many people time, headaches and eye sores.
We had this same issue and used a 'report builder' to build the criteria for the reports prior to actually generating the downloadable csv/Excel file.
As others have suggested, I would also choose a SQL database. It's already optimized to perform queries over large data sets. There are a couple of embedded databases, like SQLite or FirebirdSQL (embedded):
http://www.sqlite.org/
http://www.firebirdsql.org/manual/ufb-cs-embedded.html
You can easily import a CSV into a SQL database with just a few lines of code and then build SQL queries, instead of writing your own solution to filter large tabular data.
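As an illustration with SQLite's command-line shell (data.db, data.csv, mytable, and some_column are placeholders; when mytable does not yet exist, .import takes its column names from the CSV header row):

sqlite3 data.db
.mode csv
.import data.csv mytable
SELECT * FROM mytable WHERE some_column LIKE '%needle%';

From there, each "filter" is just a WHERE clause, and an index on a frequently filtered column keeps queries fast even at 10-15 million rows.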
