CockroachDB multi-tenancy IMPORT error for SQL dump import

I am trying to import a SQL dump into CockroachDB and am getting the following error:
IMPORT is unsupported in multi-tenancy mode
I have created an instance of CockroachCloud Free hosted on Google Cloud Platform and have successfully connected to it from my terminal. I am trying to follow the tutorial here to import spatial data (GeoJSON) into a database I created called SpatialData.
After using ogr2ogr to convert the GeoJSON file to a SQL dump, I have hosted the SQL dump in a Google Cloud Storage bucket and made it public. Here is the URL for the file:
https://storage.googleapis.com/poi_roi/tanks.sql
However, when I try to IMPORT the data as a table in my database, I get the aforementioned error when I execute either of:
IMPORT TABLE tanks FROM PGDUMP 'https://storage.googleapis.com/poi_roi/tanks.sql';
OR
IMPORT PGDUMP('https://storage.googleapis.com/poi_roi/tanks.sql');

CockroachCloud Free is currently in beta and is missing some features (that is the stated reason it is still in beta).
You can find more details in the FAQ, which mentions these restrictions in two answers. Both answers spell out the lack of support for IMPORT (emphasis mine):
Limitations of CockroachCloud Free:
CockroachCloud Free is currently in beta and there are capabilities we are still working on enabling, such as the ability to enable backups, to import data, and no-downtime upgrades to a paid tier. If you want to use any of these capabilities, try a 30-day trial of CockroachCloud.
Why is CockroachCloud Free in beta:
CockroachCloud Free is in beta while we work on adding core features like import and backups.
You will need to find an alternate way of loading your data, such as creating the tables and inserting the data yourself.
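If you go that route for this spatial data, a minimal sketch might look like the following. The table and column names (tanks, ogc_fid, name, wkb_geometry) are only illustrative and would need to match whatever your ogr2ogr dump actually defines; CockroachDB's spatial support includes ST_GeomFromGeoJSON for building geometry values from GeoJSON fragments:
-- create the table by hand instead of relying on IMPORT
CREATE TABLE tanks (
    ogc_fid INT PRIMARY KEY,
    name STRING,
    wkb_geometry GEOMETRY
);
-- insert each feature, converting its GeoJSON geometry into a GEOMETRY value
INSERT INTO tanks (ogc_fid, name, wkb_geometry)
VALUES (1, 'example tank', ST_GeomFromGeoJSON('{"type": "Point", "coordinates": [30.5, 50.4]}'));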

Related

Microsoft SSAS-Tabular Model (TM) connection to Power BI via Import mode - 'not enough memory available for the application'

I have a rudimentary question about connecting an SSAS-TM (SQL Server Analysis Services - Tabular Model) database (on-premises) on my local machine to Power BI Desktop (also on my local machine) via Import mode.
I am not at all familiar with the memory allocation parameters.
The relational database I have is a very simple AdventureWorksDW. I developed a SQL Server Analysis Services - Tabular Model project using Visual Studio 2015 and deployed the project as a new database in the Analysis Services engine. I am able to query tables in this SSAS-TM database in the following format in SSMS (SQL Server Management Studio), using DAX:
EVALUATE 'tablename'
However, when I try to connect this SSAS-TM database to my Power BI Desktop via an Import connection, I get the following error.
AnalysisServices: The operation has been cancelled because there is not enough memory available for the application. If using a 32-bit version of the product, consider upgrading to the 64-bit version or increasing the amount of memory available on the machine.
I have the properties from SSMS in this image file.
I tried referring to some links on learn.microsoft.com related to setting the VertiPaq memory parameters, but it has not been useful to me.
My simple question is this:
What properties do I need to change in the image file above to make this connection successful? This is only for training, so I am using AdventureWorksDW here and size is not an issue. My laptop has plenty of memory and disk space and is 64-bit, and Power BI Desktop is 64-bit.
Can someone help me?
The Power BI connector for Analysis Services that has the table picker generates an MDX query instead of a DAX query, and if you try to extract more than a handful of rows, it will fail. It's a known issue, but a low-priority one.
Don't import from SSAS. Use Live Connect. You've already got a Cube/Data Set, you can just connect to it and write reports.
If you absolutely must import from SSAS, use a DAX query, e.g.
In M:
AnalysisServices.Database("MySSAS", "AdventureWorksDW", [Query="evaluate FactResellerSales", Implementation="2.0"])
or in the UI
Use Live Connect if you are only getting data from the cube. If you are getting data from Excel files, etc., then you are forced to use Import. I have used Import to get many tables from the cube with no memory errors. What you can do is import 3 tables at a time, then in the Power BI Advanced Editor select the option to add more tables from the cube, add another 3 tables, and see how that goes. With Live Connect, even if the relationship columns are hidden, you still get them. With Import, if they are hidden, you can't select them, so you can't create the relationships.

Power BI Oracle on prem gateway issue - Unable to find the requested .Net Framework Data Provider

I have a Power BI dashboard with direct queries to an Oracle database, where I import data using SQL queries. In my local .pbix file everything is fine. When I publish it to my enterprise powerbi.com site and try to refresh the data, I get the following error:
{"error":{"code":"DM_GWPipeline_Gateway_ProviderDataAccessArgumentError","pbi.error":
{"code":"DM_GWPipeline_Gateway_ProviderDataAccessArgumentError","parameters":{},"details":
[{"code":"DM_ErrorDetailNameCode_UnderlyingErrorMessage","detail":{"type":1,"value":"Unable to
find the requested .Net Framework Data Provider. It may not be installed."}},
{"code":"DM_ErrorDetailNameCode_UnderlyingHResult","detail":
{"type":1,"value":"-2147024809"}}],"exceptionCulprit":1}}}
Does anyone have any idea what could be causing the issue?
I have trawled through the Power BI forums and there does not seem to be a definitive remedy.
I don't have this issue using TIBCO Spotfire; however, we are being pushed to use Power BI.
I found a workaround. It seems that after you create the views in Oracle, in Power BI you should import (as selected below) rather than make a direct query. Just click the Import option and find the table with the view you created in the options that follow.
I still see the same error if I try to manually refresh the data in the Power BI app, but I can at least now see my dashboard working, with the tables and graphics fully visible.

Transferring the project to another server, with all the data, in Oracle APEX

I want to migrate the project to another server. I exported the project and generated scripts for all the tables, but I can't migrate these tables. Can someone help me with this?
Based on your description, I'd say your best bet to migrate any custom schemas is to use Data Pump. Data Pump is made up of three distinct components: the command-line clients expdp and impdp, the DBMS_DATAPUMP PL/SQL package (also known as the Data Pump API), and the DBMS_METADATA PL/SQL package (also known as the Metadata API).
An example export would look like:
expdp hr TABLES=employees DUMPFILE=employees.dmp
That would generate a dump file you could move to the destination database server (to a location that a database directory object can map to).
Then an example import would look like:
impdp hr DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp TABLES=employees
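Note that the DIRECTORY referenced by impdp (dpump_dir1 above) must exist as a directory object on the destination database and point at the folder holding the dump file. A minimal sketch, assuming a hypothetical Linux path and that hr is the importing user:
-- run as a privileged user on the destination database
CREATE DIRECTORY dpump_dir1 AS '/u01/app/oracle/dpump';
GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;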
Of course, there are many more options than that. Here's the official doc:
https://docs.oracle.com/en/database/oracle/oracle-database/18/sutil/index.html
Also, if you want to move to Oracle's Always Free tier, Adrian Png provides a nice overview that also touches on some APEX-related topics here:
https://fuzziebrain.com/content/id/1920/

Migrate Vignette CM (Content Manager) content to Oracle UCM

We're at the start of a project where we need to migrate Vignette CM (version 7.6) content to Oracle UCM (11g).
Some pointers on which tools and approaches to use would be extremely helpful.
It seems to me that you should be able to write an application to export content using the Vignette Content Management SDK (as described in this doc: http://docs.bizbeta.com/docs/vignette/sdk_content_management.pdf). However, I can't quite figure out how to connect to a Vignette CM instance using the above-mentioned SDK.
As for the import side of things, I managed to find this link to a Batch Loader product for Oracle UCM.
http://www.fishbowlsolutions.com/StellentSolutions/StellentComponents/EnterpriseBatchloader/fs_entbatchload_webcopy
What other tools are available for importing content into Oracle UCM/WCM? Does Oracle provide a tool?
Proventeq has tools for automated migration into Oracle UCM (most versions, including 11g). It's called the Proventeq Migration Accelerator, and it has connectors to migrate Vignette to Oracle UCM ( http://www.proventeq.com/solutions/oracle-ucm-stellent-migration ). It is a very versatile, rules-based migration engine which can be configured to meet any requirements.

How can I replicate an Oracle 11g database(data+structure) on my local machine for development?

I am working on a test server with Oracle 11g installed. I was wondering if there is any way I can replicate the database (environment + data) on my local Linux machine. I am running CentOS 5.3 on Windows XP with Sun VirtualBox. On Windows I am using the SQL Developer client to connect to the 11g database.
There are a number of ways to move the data over:
Restore an RMAN backup on your test server
Export and import the data using exp/expdp/imp/impdp
Export and import using a transportable tablespace (Further Info)
Use database links to duplicate the data using SQL (see the sketch after this list)
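As a rough sketch of the database link approach (the link name, credentials, TNS alias, and table below are placeholders, and each table is copied individually this way):
-- on the local database, point a link at the remote 11g instance
CREATE DATABASE LINK prod_link
  CONNECT TO scott IDENTIFIED BY tiger
  USING 'PROD_TNS_ALIAS';
-- copy one table's structure and data across the link
CREATE TABLE employees AS
  SELECT * FROM employees@prod_link;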
You can use the Database Configuration Assistant (DBCA) to generate a template from your production database. This will give you all the parameters and tablespaces, among other things. You will need to tweak the configuration somewhat; for instance, the file paths may be wrong, and some parameters may need downsizing. You can then feed that template into DBCA to clone the database on your Linux machine.
To get the schemas and data you should use Data Pump (rather than the older Import / Export utilities). This can be run off the command line or from PL/SQL.
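As an illustration of the PL/SQL route, a minimal schema-level export through the DBMS_DATAPUMP API might look like the sketch below (the schema name, dump file name, and directory object are placeholders):
DECLARE
  h NUMBER;
BEGIN
  -- open a schema-mode export job
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
  -- write the dump file to an existing directory object
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'hr_schema.dmp', directory => 'DPUMP_DIR1');
  -- limit the job to the HR schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''HR'')');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/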
Bear in mind that using production data in a development or test environment can cause you to run foul of data protection laws and other compliance issues. It depends on what your application does and what jurisdiction you operate under. But if your production system contains citizens' personal data you need to be very careful. There are products out there which will apply masking as part of a data import process (Oracle sells one) but they tend to be expensive. Rolling your own masking product can be tricky: if this applies to your situation be sure to get your compliance staff (legal team) involved early.
I would suggest you install Oracle XE, which is free to use, on your local machine if your development is not related to core database features. You can then use the methods given above to pump data into Oracle XE and compile your code against it, though for development I don't think you would need as much data as there is in production.

Resources