Memgraph connection to Power BI / Tableau - memgraphdb

Can Memgraph connect to any of the standard visualization products such as Power BI or Tableau? Neo4j seems to have recently launched an enterprise version of a Power BI connector.

As of now, Memgraph does not have an ODBC connector for such tools. However, making visualization products one of the destinations for Memgraph analytics is in the works.
Note that Power BI can connect to a streaming dataset and accept incoming data while building ad-hoc dynamic visualizations. This is not the same as querying static data in the database and visualizing the results, though, since streaming datasets have fairly limited capabilities on the Power BI side.
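For context, pushing rows into a Power BI streaming (push) dataset is just an HTTP POST to the push URL that Power BI generates when the dataset is created. A minimal sketch, assuming a hypothetical push URL and column names:

```python
# Sketch: push rows into a Power BI streaming (push) dataset.
# The push URL (including its key) comes from the dataset's "API info"
# page in the Power BI service; the URL and columns below are placeholders.
import requests

PUSH_URL = ("https://api.powerbi.com/beta/<workspace-id>/datasets/"
            "<dataset-id>/rows?key=<key>")

rows = [
    {"ts": "2024-01-01T12:00:00Z", "metric": 42.0},
]

resp = requests.post(PUSH_URL, json=rows)  # body is a JSON array of rows
resp.raise_for_status()                    # 200 OK means the rows were accepted
```

This only appends rows for live tiles to consume; it does not give you the ad-hoc query capabilities of a regular, non-streaming dataset.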
Haven't yet checked Tableau, so no info on that side.
For concrete use cases and technical help, try checking their Discord page at https://discord.com/invite/memgraph

Related

Best way to transfer data IN BATCHES/BULK from Oracle 19c to SQL Server 2016 using SSIS

We have a legacy process that runs on SSIS 2016 on Windows Server 2016, executes custom queries against databases on remote servers, pulls the results (thousands or sometimes millions of records) and stores them in a local SQL Server database. These other databases are on DB2 and Oracle 19c.
This process has always connected using an OLE DB driver and a data flow with OLE DB source and destination components. It also has always been slow.
We recently read an article claiming that OLE DB transfers only one record at a time, while with ADO.NET the network transfer can be done in batches (is this even true?). Based on that, we decided to try an ADO.NET driver to connect to DB2 and to replace the OLE DB source and destination components with ADO.NET components.
The transfer we were using as test case, which involved 46 million records, basically flew and we could see it bring down around 10K records at a time. Something that used to run in 13 hours ran in 24 minutes with no other changes. Some small tweaks in the query allowed us to bring that time even lower to 11 minutes.
This is obviously major and we want to be able to replicate it with our Oracle data sources. Network bandwidth seems to have been the main issue, so we want to be able to transfer data from Oracle 19c to our SQL Server 2016 databases using SSIS in batches, but want to ask the experts what the best/fastest way to do this is.
Is Microsoft Connector for Oracle the way to go as far as driver? Since we're not on SQL Server 2019, this article says we also need to install the Oracle Client and Microsoft Connector Version 4.0 for Oracle by Attunity. What exactly is the Oracle Client? Is it one of these? If so, which one, based on our setup?
Also, should we use ADO.NET components in the data flow just like we did with DB2? In other words, is the single-record vs. record-batches difference driven by the driver used to connect, by the type of components in the data flow, or do both need to go hand in hand for this to work?
Thanks in advance for your responses!
OLE DB connections are not slow by themselves - it's a matter of what features the driver has available to it. It sounds like the ADO.NET driver for DB2 allows bulk insert and the OLE DB one does not.
Regarding Oracle, the Attunity driver is the way to go. You'll need to install the Oracle client as well. The links that you have look correct to me, but I don't have access to test.
Also, please note that data flows will batch data by default in increments of the buffer size - 10K rows, for example.
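For intuition about the row-by-row vs. batched difference outside of SSIS, here is a minimal sketch of a batched Oracle-to-SQL-Server transfer in Python using cx_Oracle and pyodbc. The connection strings, credentials, and table names are assumptions, not part of the original setup:

```python
# Batched transfer: fetch rows from Oracle in chunks and bulk-insert them
# into SQL Server, instead of moving one row at a time.
import cx_Oracle
import pyodbc

BATCH_SIZE = 10_000  # mirrors the ~10K-row buffers mentioned above

src = cx_Oracle.connect("scott", "tiger", "orahost/orclpdb1")
dst = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlhost;DATABASE=staging;Trusted_Connection=yes;"
)
dst.autocommit = False

src_cur = src.cursor()
src_cur.arraysize = BATCH_SIZE      # fetch from Oracle in batches
src_cur.execute("SELECT id, name, amount FROM sales")

dst_cur = dst.cursor()
dst_cur.fast_executemany = True     # send parameter batches, not single rows

while True:
    rows = src_cur.fetchmany(BATCH_SIZE)
    if not rows:
        break
    dst_cur.executemany(
        "INSERT INTO dbo.sales_stage (id, name, amount) VALUES (?, ?, ?)",
        rows,
    )
    dst.commit()
```

The same principle applies inside SSIS: whether rows move in batches depends both on what the driver supports and on how the components use it, which is why the driver and the data flow components need to go hand in hand.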

PowerBI Web: Connect to Oracle Cloud database

I would like to know if it is possible in Power BI Web to retrieve data from an Oracle database hosted in Oracle Cloud, or from any other database/cloud combination other than Azure SQL Server.
Thanks!
Yes, you can definitely achieve this.
There are connectors available for connecting Power BI Desktop to an Oracle database.
Link
Link 2

How to load Oracle table data into a Kafka topic?

How can I load Oracle table data into a Kafka topic? I did some research and learned that I should use a CDC tool, but all the CDC tools are paid. Can anyone suggest how to achieve this?
You'll find this article useful: No More Silos: How to Integrate your Databases with Apache Kafka and CDC
It details all of your options and currently-available tools. In short, you can do bulk (or query-based CDC) with the Kafka Connect JDBC Connector, or you can use a log-based CDC approach with one of several CDC tools that support Oracle as a source, including Attunity, GoldenGate, SQ Data, and IBM's IIDR.
You'll generally find that if you've paid for your database (e.g. Oracle, DB2, etc) you're going to have to pay for a log-based CDC tool. Open source CDC tools are available for open source databases. For example, Debezium is open source and works great with MongoDB, MySQL, and PostgreSQL.
You might be interested in the Debezium project, which provides open-source CDC connectors for a variety of databases. Amongst others, we provide one for Oracle DB. Note that this connector currently is based on the XStream API of Oracle, which itself requires a separate license, but we hope to add a fully free alternative soon.
Disclaimer: I'm the lead of Debezium
Please refer to the Kafka JDBC source connector. Below is the link:
https://docs.confluent.io/current/connect/connect-jdbc/docs/index.html
You don't need a Change Data Capture (CDC) tool in order to load data from an Oracle table into a Kafka topic.
You can use Confluent's Kafka JDBC Source Connector to load the data.
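As an illustration, connectors like this are typically registered through the Kafka Connect REST API. A hedged sketch, where the Connect host, connection details, and table/topic names are all assumptions:

```python
# Register a JDBC source connector instance with a Kafka Connect worker.
# Property names follow the Confluent JDBC source connector docs; the
# host names, credentials, and table/topic names are placeholders.
import requests

connector = {
    "name": "oracle-jdbc-source",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:oracle:thin:@//orahost:1521/orclpdb1",
        "connection.user": "scott",
        "connection.password": "tiger",
        "mode": "timestamp+incrementing",  # query-based change capture
        "timestamp.column.name": "UPDATED_AT",
        "incrementing.column.name": "ID",
        "table.whitelist": "SALES",
        "topic.prefix": "oracle-",         # rows land in topic "oracle-SALES"
        "poll.interval.ms": "10000",
    },
}

resp = requests.post("http://connect-host:8083/connectors", json=connector)
resp.raise_for_status()
```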
However, if you need to capture deletes and updates, you must use a CDC tool, for which you need to pay a licence. Confluent has certified the following CDC tools (source connectors):
Attunity
Dbvisit
Striim
Oracle GoldenGate
As others have mentioned, CDC requires paid products. If you'd just like to try something out, Striim is available for free for the first 30 days.
https://www.striim.com/instant-download/
There are 'free' options, including JDBC, but you would be introducing a significant load on your database if you actually wanted to use triggers to capture changes.
Disclaimer: I work at Striim.
There's a custom Kafka source connector for Oracle databases, based on LogMiner, here:
https://github.com/erdemcer/kafka-connect-oracle
This project is in development.
You might be interested in OpenLogReplicator. It is an open-source, GPL-licensed tool written completely in C++. It reads the binary format of Oracle redo logs and sends the changes to Kafka.
It is very fast - you can achieve low latency without much effort, since it operates fully in memory. It supports all Oracle database versions since 11.2.0.1 and requires no additional licensing.
It can work on the database host, but you can also configure it to read the redo logs over sshfs from another host - with minimal load on the database.
Disclaimer: I am the author of this solution.

Connecting to Oracle DB using Bottle in Python

Dear all,
I am making a simple web page using the Bottle library in Python, and I started struggling when I wanted to connect to an Oracle DB and extract data from a table for visualization. I see that Bottle only ships with SQLite connections. Is it possible to connect to an Oracle DB with the integrated Bottle library functionality? Or should I call cx_Oracle each time a web page link is pressed to get data from the database?
This might work for you: https://github.com/bormotov/bottle-oracle. I haven't used it myself, but the code in it is simple enough that you can develop your own if need be! Hope that helps, anyway.
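If you go the cx_Oracle route yourself, here is a minimal sketch of a Bottle app that serves table data through a connection pool created once at startup. The credentials, DSN, and query are placeholders:

```python
# A Bottle route that queries Oracle through a cx_Oracle session pool,
# so each request borrows a pooled connection instead of opening a new one.
import cx_Oracle
from bottle import Bottle, run

app = Bottle()

# Created once at startup; acquire()/release() are cheap per request.
pool = cx_Oracle.SessionPool(
    user="scott", password="tiger", dsn="orahost/orclpdb1",
    min=1, max=4, increment=1,
)

@app.route("/sales")
def sales():
    conn = pool.acquire()
    try:
        cur = conn.cursor()
        cur.execute("SELECT id, name, amount FROM sales")
        cols = [d[0].lower() for d in cur.description]
        # Bottle serializes a returned dict to JSON automatically.
        return {"rows": [dict(zip(cols, row)) for row in cur.fetchall()]}
    finally:
        pool.release(conn)

if __name__ == "__main__":
    run(app, host="localhost", port=8080)
```

The pool avoids the cost of reconnecting on every page load while staying safe if several requests arrive at once.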

OCDM combined with ODI

ODI = ELT tool
OCDM = Data warehouse.
Is my understanding of the above correct? More information/explanation is welcome.
Now my question is:
Is it possible to load into OCDM's pre-existing tables via ODI when the sources for ODI are in flat-file/XML format? If possible, how?
Any links related to above are also welcome.
It is indeed possible. OCDM is a solution using an Oracle 11g database to store the data, so ODI can definitely load it.
Actually OCDM comes out-of-the-box with adapters to load the data from NCC (Oracle Communications Network Charging and Control) and BRM (Oracle Communications Billing and Revenue Management), and these adapters are using ODI 11g – and optionally Golden Gate.
Each of these adapters is composed of some models and one ODI project holding interfaces and packages.
If you want to build your own integration process, it is just a standard load from flat file to Oracle or from XML to Oracle. Both of these are covered by the tutorials in the ODI 11g series in the Oracle Learning Library: https://apexapps.oracle.com/pls/apex/f?p=44785:24:0::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5185,29
It's possible, using OCDM add-ons to load the data:
NCC (Oracle Communications Network Charging and Control),
BRM (Oracle Communications Billing and Revenue Management).
These adapters use ODI 11g and ODI 12c, and optionally Golden Gate as well.
ODI is mostly used to load historical data. For more real-time data, Oracle Golden Gate is used. However, OGG only loads the data to staging; for the sync from staging to presentation, ODI is still used.
Yes, it's possible. You need standard interface development techniques to implement this.
