Talend Open Studio embedded DB (H2 or Derby) - h2

I need to use an embedded DB (H2, Apache Derby) with Talend Open Studio. I saw that it's possible with Talend MDM, but couldn't find any tutorial on how to embed this in Talend Open Studio.
I have a large amount of data from different tables that is processed in a first pass and stored locally before a second step of transformations, but I can't use in-memory caching or files (CSV) as intermediate storage.
Any ideas? Help, please.

Talend MDM uses an internal database for the purposes of the product (its data repository). Talend Open Studio (DI) does not use one (no internal data store is needed by Talend Open Studio for Data Integration).
If you want to connect to and extract data from your DB2 database, you can use the DB2 components inside your Talend jobs.
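For the embedded-database part of the question, H2 can be reached from a Talend job through the generic JDBC components (tJDBCConnection, tJDBCInput, tJDBCOutput) once the H2 driver jar is registered. The plain-Java sketch below shows the kind of connection those components use under the hood; the file path, table name, and columns are invented for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class H2StagingDemo {
    public static void main(String[] args) throws Exception {
        // Embedded, file-based H2 database: ./staging.mv.db is created on first use.
        // The same URL can be used in a tJDBCConnection component together with the H2 driver jar.
        String url = "jdbc:h2:file:./staging;AUTO_SERVER=TRUE";

        try (Connection conn = DriverManager.getConnection(url, "sa", "");
             Statement stmt = conn.createStatement()) {

            // Staging table that holds the output of the first processing step.
            stmt.execute("CREATE TABLE IF NOT EXISTS stage_orders ("
                    + "order_id INT PRIMARY KEY, "
                    + "customer_id INT, "
                    + "amount DECIMAL(12,2))");

            // The first pass writes rows here; the second step of transformations reads
            // them back with a SELECT instead of keeping everything in memory or CSV files.
            stmt.execute("MERGE INTO stage_orders KEY(order_id) VALUES (1, 42, 99.90)");
        }
    }
}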

Related

Microsoft SSAS-Tabular Model (TM) connection to Power BI via Import mode - 'not enough memory available for the application'

I have a rudimentary question about connecting an SSAS-TM (SQL Server Analysis Services - Tabular Model) database (on-premises) on my local machine to Power BI Desktop (also on my local machine) via Import mode.
I am not at all familiar with the memory allocation parameters.
The relational database I have is a very simple AdventureWorksDW. I developed a SQL Server Analysis Services - Tabular Model project using Visual Studio 2015 and deployed the project as a new database in the Analysis Services engine. I am able to query tables in this SSAS-TM database in SSMS (SQL Server Management Studio) using the DAX language, in the following format:
EVALUATE 'tablename'
However, when I try to connect this SSAS-TM database to my Power BI Desktop via an Import connection, I get the following error.
AnalysisServices: The operation has been cancelled because there is not enough memory available for the application. If using a 32-bit version of the product, consider upgrading to the 64-bit version or increasing the amount of memory available on the machine.
I have the properties from the SSMS in this image file.
I tried following some links on learn.microsoft.com about setting the VertiPaq memory parameters, but it has not been useful to me.
My simple question is this:
What properties do I need to change in the image above to make this connection successful? This is only for training, so I am using AdventureWorksDW here and size is not an issue. My laptop has plenty of memory and disk space and is 64-bit. Power BI Desktop is 64-bit.
Can someone help me?
The Power BI connector for Analysis Services that has the table picker generates an MDX query instead of a DAX query, and if you try to extract more than a handful of rows, it will fail. It's a known issue, but a low-priority one.
Don't import from SSAS. Use Live Connect. You've already got a Cube/Data Set, you can just connect to it and write reports.
If you absolutely must import from SSAS, use a DAX query, e.g.
In M:
AnalysisServices.Database("MySSAS", "AdventureWorksDW", [Query="evaluate FactResellerSales", Implementation="2.0"])
or in the UI
Use Live Connect if you are only getting data from the cube. If you are also getting data from Excel files, etc., then you are forced to use Import. I have used Import to get many tables from a cube with no memory errors. What you can do is import 3 tables at a time, then in the Power BI advanced editor select the option to add more tables from the cube, add another 3 tables, and see how that goes. With Live Connect, even if the relationship columns are hidden you still get them. With Import, if they are hidden you can't select them, so you can't create the relationships.

integration of multiple databases via talend open studio for big data

I have 11 MySQL databases, each containing 81 tables; each table has the same schema across the 11 databases. My goal is to integrate those databases into a single MongoDB database.
I'm working with Talend Open Studio for Big Data
Can you please suggest a way that can help me?
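There is no single answer, but a common Talend pattern is to loop over the 11 source databases (for example with a context variable and tLoop) and push each table through a MySQL input component into tMongoDBOutput, tagging every document with its source database. The plain-Java sketch below mirrors that flow, assuming the MySQL JDBC driver and the MongoDB Java driver are on the classpath; the hosts, credentials, and table names are placeholders.

import java.sql.*;
import org.bson.Document;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;

public class MySqlToMongo {
    public static void main(String[] args) throws Exception {
        String[] sourceDbs = {"db01", "db02", /* ... */ "db11"};          // the 11 MySQL databases
        String[] tables    = {"customers", "orders" /* ... 81 tables */}; // identical schema in each database

        try (MongoClient mongo = MongoClients.create("mongodb://localhost:27017")) {
            for (String dbName : sourceDbs) {
                try (Connection mysql = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/" + dbName, "user", "password")) {
                    for (String table : tables) {
                        MongoCollection<Document> target =
                                mongo.getDatabase("integrated").getCollection(table);
                        try (Statement st = mysql.createStatement();
                             ResultSet rs = st.executeQuery("SELECT * FROM " + table)) {
                            ResultSetMetaData md = rs.getMetaData();
                            while (rs.next()) {
                                // Copy every column into a BSON document and record
                                // which source database the row came from.
                                Document doc = new Document("source_db", dbName);
                                for (int i = 1; i <= md.getColumnCount(); i++) {
                                    doc.append(md.getColumnLabel(i), rs.getObject(i));
                                }
                                target.insertOne(doc);
                            }
                        }
                    }
                }
            }
        }
    }
}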

Applying SCD on Talend MDM Server

I am using Talend Open Studio for MDM and I have a requirement to do version control on customer records.
When using an Oracle database, I can use tOracleSCD to capture the changes. Likewise, for MySQL, I can use tMysqlSCD.
But in Talend Open Studio for MDM, the only supported database is H2, so I am storing all master records in an H2 database.
In this case, how can I achieve version control, given that there is no Talend component available for the H2 database?
The SCD components just set up triggers on the watched tables and provide an easy interface for reading the trigger output tables.
You could set the triggers up manually on the H2 database: recreate the database in MySQL, use the MySQL SCD components to work out what they do and how the data is read back in, and then recreate those steps with H2 components as part of a data integration job.
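As an illustration of the manual approach, H2 lets you implement a trigger as a Java class and register it with CREATE TRIGGER ... CALL. The sketch below copies the previous version of a row into a history table that a Talend job could read back later; the customer table and the customer_history layout are invented for the example.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import org.h2.api.Trigger;

// Registered in H2 with, for example:
//   CREATE TRIGGER customer_scd AFTER UPDATE ON customer
//   FOR EACH ROW CALL "com.example.CustomerHistoryTrigger";
public class CustomerHistoryTrigger implements Trigger {

    @Override
    public void init(Connection conn, String schemaName, String triggerName,
                     String tableName, boolean before, int type) {
        // Nothing to prepare in this simple example.
    }

    @Override
    public void fire(Connection conn, Object[] oldRow, Object[] newRow) throws SQLException {
        // Copy the old version of the row into a history table so a downstream
        // job can read the changes, much like the tMysqlSCD trigger output tables.
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO customer_history (customer_id, name, email, changed_at) "
              + "VALUES (?, ?, ?, CURRENT_TIMESTAMP)")) {
            ps.setObject(1, oldRow[0]);
            ps.setObject(2, oldRow[1]);
            ps.setObject(3, oldRow[2]);
            ps.executeUpdate();
        }
    }

    @Override
    public void close() { }

    @Override
    public void remove() { }
}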
That said, Talend MDM has the concept of a journal which stores all of the changes made to a data record. The Talend Open Studio for MDM documentation has some more detailed information about how to view the journal. All changes made through the MDM interface should make an entry in the journal automatically.

How to Replace ODBC Data Source with OLE DB Data Source in SSIS Package?

I use Integration Services 2012 in project deployment mode. I want to replace an existing ODBC data source with an OLE DB data source in an existing package without breaking all the links that cascade down the package into the data destination.
I have tried deleting the ODBC source and adding an OLE DB data source, but then I lost all my output aliases after the first MERGE JOIN data flow. What can I do about it?
First fix all of the metadata in your source components by opening them for editing. Then edit each component in order down the data flow. This will often fix downstream components as you go, but if data types changed (e.g. Unicode to non-Unicode) then you may have conversions to do.

Deploy Entity Framework Code First Database

I have an ASP.NET MVC 3 application using an Entity Framework (4.3.1) Code First database. Now I would like to create a comprehensive zip file containing the database, the application package generated by Visual Studio 2010 and a script to deploy everything to a Windows 2008 server with IIS7 and SQL Server 2008 with a prepared (but empty) database.
I don't foresee any problems with the deployment of the application package, but I'm unsure of what approach to use in deploying the database. The target environment already has an empty database that's been assigned to me, but I've been told that dropping and creating the database is fine.
From what I've read, I can do a straightforward copy of the .mdf and .ldf files to the server and then set up my connection string to point to that specific file, but this approach sort of ignores the database that has already been created (or at least named) for me. The other approach would be to use the existing .mdf to create the database on the server with a script. My only issue here is that I would like to keep the database name assigned to me.
I usually connect to my development database locally using SQL Server Management Studio, right-click the database, and choose Tasks -> Generate Scripts. Then I select the entire database or just the tables I'd like to keep, click Next, then click the Advanced button and make sure that I am scripting out "Schema and Data", and generate a SQL script that I can run on the production database, thereby keeping the table structure and the data that were in the dev database. Obviously, if you don't want to keep the data, just script out the schema only. Then point your application's connection string to the new production environment database and you're good to go.
