How to install pgAgent in Greenplum DB? - greenplum

I am trying to install pgAgent. I have installed Greenplum but haven't installed PostgreSQL separately, and I am using pgAdmin3. I have downloaded pgAgent 3.4.0.
When I run the pgagent.sql file on Windows, it redirects to pgAdmin3 to install the pgAgent schema, but I get the error below:
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "pga_jobagent_pkey" for table "pga_jobagent"
NOTICE: CREATE TABLE will create implicit sequence "pga_jobclass_jclid_seq" for serial column "pga_jobclass.jclid"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "pga_jobclass_pkey" for table "pga_jobclass"
ERROR: UNIQUE index must contain all columns in the distribution key of relation "pga_jobclass"
********** Error **********
ERROR: UNIQUE index must contain all columns in the distribution key of relation "pga_jobclass"
SQL state: 42P16
I also don't have the pgAgent extension on the Unix server where Greenplum is installed.
How should I proceed in this kind of environment setup?
One small follow-up question: is a separate PostgreSQL installation required, even with Greenplum DB installed, to run pgAgent?

Greenplum doesn't support pgAgent. Most of pgAgent's logic is handled by triggers, which Greenplum does not support. You should use PostgreSQL if you want to use pgAgent.
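The "UNIQUE index" error itself comes from Greenplum's distributed storage model rather than from pgAgent as such. A minimal sketch of the failure, assuming DDL roughly like what pgagent.sql creates (not the literal script):
-- Greenplum spreads rows across segments; a UNIQUE index can only be
-- enforced if it contains the distribution key. With no DISTRIBUTED BY
-- clause, the primary key (jclid) becomes the distribution key, so a
-- separate unique index on jclname cannot be created:
CREATE TABLE pga_jobclass (
    jclid   serial PRIMARY KEY,
    jclname text   NOT NULL
);
CREATE UNIQUE INDEX pga_jobclass_name ON pga_jobclass (jclname);
-- ERROR: UNIQUE index must contain all columns in the distribution key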

Related

DropIndex omits Schema while generating migration script for Oracle via EntityFramework CodeFirst

I have a problem generating a migration script from existing migration files via the command line, using either Script-Migration or dotnet ef migrations script.
The migration file contains the following:
migrationBuilder.DropIndex(
    name: "IX_CAR_CAR_ID",
    schema: "AUTO",
    table: "CARS",
    column: "CAR_ID");
The generated SQL script contains SQL that fails during database update.
Expected result:
SQL script should contain schema in the drop index command: DROP INDEX "AUTO"."IX_CAR_CAR_ID"
Actual result:
SQL script does not contain schema in the drop index command: DROP INDEX "IX_CAR_CAR_ID"
I have fixed that by manually modifying the migration file, so instead of migrationBuilder.DropIndex, I use
migrationBuilder.Sql("DROP INDEX \"AUTO\".\"IX_CAR_CAR_ID\"");
Well, that solves the problem for me, but it does not remove the cause, I suppose.
I have tried the same with MS SQL Server and the result is valid:
DROP INDEX [IX_CAR_CAR_ID] ON [AUTO].[CARS];
so I assume it might be an Oracle-only issue.
Using:
Microsoft.EntityFrameworkCore version 5.0.17
Oracle.EntityFrameworkCore version 5.21.61
My question: does anybody know whether this could be a bug in one of the Oracle or EntityFramework packages?

How to migrate (convert) a database (or just tables) from PostgreSQL to Oracle using only Oracle tools?

Data was sent to our company as a PostgreSQL dump, but we are prohibited from using PostgreSQL tools; only Oracle tools are permitted.
How can we migrate data from PostgreSQL to Oracle without using a third-party application (those are also prohibited)? Only Oracle tools may be used.
I found this article https://support.oracle.com/knowledge/Oracle%20Database%20Products/2220826_1.html but we don't have a Support Identifier.
We have one .sql file. It weighs 8 gigabytes.
It looks like you have many impediments in your company. Regarding Oracle's SQL Developer Migration Workbench: unfortunately, it does not support the migration of PostgreSQL databases. The following third-party software tools are available and may assist in migration, but I'm afraid you cannot use them, as you said such products are forbidden:
http://www.easyfrom.net/download/?gclid=CNPntY36vsQCFUoqjgodHnsA0w#.VRBHKGPVuRQ
http://www.sqlines.com/postgresql-to-oracle
Other options will only move the data from your PostgreSQL database to the Oracle database, meaning you must have the DDLs of the tables before running the import:
To move data only, you can generate a flat file of the PostgreSQL data and use Oracle SQL*Loader.
Another option to migrate data only is to use Oracle Database Gateway for ODBC, which requires an ODBC driver to connect to the PostgreSQL database, and copy each table over the network using the SQL*Plus "COPY" or "CREATE TABLE AS SELECT" commands via an Oracle database link, as sketched below.
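A minimal sketch of the gateway route; the link name, TNS entry, credentials, and table name are all illustrative assumptions:
-- Create a database link through Oracle Database Gateway for ODBC
-- (the 'dg4odbc_pg' entry would point at the gateway in tnsnames.ora):
CREATE DATABASE LINK pglink
    CONNECT TO "pguser" IDENTIFIED BY "pgpass"
    USING 'dg4odbc_pg';

-- Copy one table over the network into Oracle:
CREATE TABLE customers AS
    SELECT * FROM "customers"@pglink;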
Also, Oracle has a discussion forum for migrating non-Oracle databases to Oracle:
http://www.oracle.com/technetwork/database/migration/third-party-093040.html
But if you have only a .sql file, you should look at it to see whether it contains both the DDLs (create tables and indexes, etc.) and the data itself as insert statements. If so, you need to split it and edit the DDLs to convert the original data types to Oracle data types.
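A hedged illustration of that conversion step, assuming Oracle 12c or later and an invented table; PostgreSQL types such as serial, text, and boolean have no direct Oracle equivalent:
-- PostgreSQL DDL as it might appear in the dump:
--   CREATE TABLE customers (id serial PRIMARY KEY, name text, active boolean);
-- Hand-converted Oracle version:
CREATE TABLE customers (
    id     NUMBER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,  -- replaces serial
    name   VARCHAR2(4000),  -- or CLOB for unbounded text
    active NUMBER(1)        -- Oracle 12c has no BOOLEAN column type
);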

Separate privileges for External Tables

I need to create and drop external tables, and I'm able to do that in a non-production environment (it's used to import massive CSV files, so those tables persist only for the duration of the import). However, there is no way I can have table create and drop rights on production, so I'm not sure how to do that there. How are you solving this kind of problem?
I can see from the documentation that privileges for external tables are a subset of the privileges for "normal" tables, while I was hoping there could be a set of privileges for external tables only, which would solve my problem. Is there something like that in newer versions (I'm using 12c)?
From the documentation:
System and object privileges for external tables are a subset of those for regular tables. Only the following system privileges are applicable to external tables:
CREATE ANY TABLE
ALTER ANY TABLE
DROP ANY TABLE
SELECT ANY TABLE
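For context, a minimal sketch of the narrower, schema-local alternative to those ANY privileges; the user, directory, and table names are illustrative assumptions:
-- A user can create and drop external tables in its own schema with
-- plain CREATE TABLE plus read access to the directory object:
GRANT CREATE TABLE TO import_user;
GRANT READ ON DIRECTORY csv_import_dir TO import_user;

-- import_user can then create a throwaway external table for the import:
CREATE TABLE staging_csv (
    id   NUMBER,
    name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY csv_import_dir
    LOCATION ('data.csv')
);
-- ...and DROP TABLE staging_csv; once the import finishes.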

Cloudera/Hive - Can't access tables after hostname change

I created a Cloudera cluster and imported some sample test files from an Oracle DB. But after a while I had to change the hostnames of the nodes. I followed the guide on the Cloudera site and everything worked fine. But when I try to access the tables I created earlier (using both Hive and Impala), I get the following error:
Fetching results ran into the following error(s):
java.io.IOException: java.lang.IllegalArgumentException: java.net.UnknownHostException: [Old Host Name]
Then I created another table under the same DB (using Hue > Metastore Tables), and I can access these new tables created under the new hostname with no issue.
Can someone explain how I can access my old tables without reverting my hostnames? Can I access the metastore DB and change the table pointers to the new hostname?
Never mind, I found the answer.
You can confirm that Hive/Impala is looking for the wrong location by executing
describe formatted [tablename];
Output:
14 Location: hdfs://[oldhostname]:8020/user/hive/warehouse/sample_07 NULL
Then you can change the "Location" property using:
ALTER TABLE sample_07 SET LOCATION "hdfs://[newhostname]:8020/user/hive/warehouse/sample_07";
P.S. sample_07 is the table in question.
Sometimes this doesn't work!
The workaround above works for the sample table, which is available by default, but I had another table that I had sqooped from an external DB into a custom metastore DB, and it gave me an error similar to the one above.
Solution:
Go to the host where you've installed Hive.
Temporarily add the old hostname of the Hive server to /etc/hosts (if you don't have external DNS, both the new and old hostnames should exist in the same hosts file).
Execute the 'ALTER TABLE ...' at the Hive shell (or web interface).
Remove the old hostname entry from /etc/hosts.
Try this:
hive --service metatool -updateLocation <new_fsDefaultFSValue> <old_fsDefaultFSValue>
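A hedged usage example, reusing the host names from the error above and the default NameNode port:
hive --service metatool -updateLocation hdfs://newhostname:8020 hdfs://oldhostname:8020
This rewrites every metastore location that still points at the old URI, so it can also cover tables that the per-table ALTER TABLE approach misses.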
You can refer to https://www-01.ibm.com/support/knowledgecenter/SSPT3X_3.0.0/com.ibm.swg.im.infosphere.biginsights.trb.doc/doc/trb_inst_hive_hostnames.html

View MonetDB table content

How can I view the content of MonetDB tables?
In WAMP, for example, I just type localhost in the browser and there I can see all the databases and tables with their content.
But I'm unable to do so in MonetDB, or, to be more accurate, I don't know how.
Plus, the documentation doesn't provide info on the matter.
Assuming you've installed it on a machine running Linux, you first need to set up a farm using the server daemon (monetdbd), then create a database on that farm using monetdb. You can then set up tables and run SQL queries using mclient, which you can fire up with
mclient -u monetdb -d databasename
There is a step-by-step guide to setting up a basic database here: http://www.monetdb.org/Documentation/UserGuide/Tutorial
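A minimal sketch of those steps on Linux; the farm path and database name are illustrative:
monetdbd create /var/monetdb5/myfarm   # create a farm directory
monetdbd start /var/monetdb5/myfarm    # start the server daemon
monetdb create mydb                    # new databases start in maintenance mode
monetdb release mydb                   # make the database available
mclient -u monetdb -d mydb             # connect; the default password is monetdb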
For Windows, you'd have to start the MonetDB server and then use the MonetDB SQL client. This is also detailed in the Windows tutorial:
http://www.monetdb.org/Documentation/UserGuide/Tutorial/Windows
You can view the MonetDB tables by using the following command:
\d
The above command lists all the tables in the schema you are logged into.
If you want to check the definition of a particular table, use the command below:
\d TABLE_NAME
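To see the actual rows rather than the definitions, run plain SQL from the same mclient session; the table name is illustrative:
SELECT * FROM my_table LIMIT 10;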
