I am working on migrating data from Worklight (WL) 5.0.6.2 to 6.2. While doing that, I encountered problems running the data migration tool.
Background:
Approach:
We export the WRKLGHT table from the 5.0.6.2 DB to an intermediate DB for the data migration, so the running DB is not affected.
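For reference, one way to do that copy is Oracle Data Pump. A minimal sketch, assuming a DATA_PUMP_DIR directory object on both instances and placeholder schema/connection names (adjust everything in braces to your environment):
# On the live 5.0.6.2 database: export the table
expdp system/{password}@{source_db} tables={src_schema}.WRKLGHT directory=DATA_PUMP_DIR dumpfile=wrklght.dmp
# Copy wrklght.dmp to the intermediate server's DATA_PUMP_DIR, then import,
# remapping into the proj schema
impdp system/{password}@{intermediate_db} directory=DATA_PUMP_DIR dumpfile=wrklght.dmp remap_schema={src_schema}:proj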
DBs:
1. schema: proj
It stores the 5.0.6.2 runtime data; the upgrade scripts upgrade-worklight-506-60-oracle.sql and upgrade-worklight-60-61-oracle.sql have been executed against it.
2. schema: proj6
It is a blank DB at first; "create-worklightadmin-oracle.sql" is executed to prepare the WLADMIN tables.
Existing WL projects:
1. wlProj
It is the app running on the 5.0.6.2 WL server.
2. wlProj6
It is the target project running on WL 6.2; the data will be migrated to be used by this project.
Steps:
1. recreate proj6 schema
1.1 SQLPlus -> connect as SYSTEM -> "drop user proj6 cascade"
1.2 -> "create user proj6 identified by {password}" -> "grant all privileges to proj6"
2. execute "create-worklightadmin-oracle.sql" on proj6 schema
2.1 SQLDeveloper -> connect as user proj6
2.2 run @"{WL server install dir}/databases/create-worklightadmin-oracle.sql"
2.3 commit
3. Open cmd and run the following command:
java -cp ojdbc6.jar;worklight-ant-deployer.jar com.ibm.worklight.config.dbmigration62.MigrationTool -p /wlProj -sourceurl jdbc:oracle:thin:@//192.168.0.**:1521/xe -sourcedriver oracle.jdbc.driver.OracleDriver -sourceuser proj -sourcepassword *** -targeturl jdbc:oracle:thin:@//192.168.0.***:1521/xe -targetdriver oracle.jdbc.driver.OracleDriver -targetuser proj6 -targetpassword *** 2>out.txt
Messages such as 'migrating wlProj-{ios,android}' and 'migrating access data record for wlProj-.....' are shown in cmd.
In out.txt, the following error is shown:
15 WorklightManagementPU-oracle INFO [main] openjpa.Runtime - Starting OpenJPA 1.2.2
15 WorklightManagementPU-oracle INFO [main] openjpa.jdbc.JDBC - Using dictionary class "org.apache.openjpa.jdbc.sql.OracleDictionary".
com.ibm.worklight.config.dbmigration62.exceptions.MigrationException: FWLSE3406E: The applications migration failed with error The field "description" of instance "ApplicationEntity[id=51, name=wlProj, displayName=, description=null, thumbnail=null, platformVersion=null]" contained a null value; the metadata for this field specifies that nulls are illegal..
at com.ibm.worklight.config.dbmigration62.MigrationTool.run(MigrationTool.java:195)
at com.ibm.worklight.config.dbmigration62.MigrationTool.main(MigrationTool.java:128)
Caused by: <openjpa-1.2.2-r422266:898935 fatal user error> org.apache.openjpa.persistence.InvalidStateException: The field "description" of instance "ApplicationEntity[id=51, name=wlProj, displayName=, description=null, thumbnail=null, platformVersion=null]" contained a null value; the metadata for this field specifies that nulls are illegal.
at org.apache.openjpa.kernel.SingleFieldManager.preFlush(SingleFieldManager.java:540)
at org.apache.openjpa.kernel.SingleFieldManager.preFlush(SingleFieldManager.java:478)
at org.apache.openjpa.kernel.StateManagerImpl.preFlush(StateManagerImpl.java:2829)
at org.apache.openjpa.kernel.PNewState.beforeFlush(PNewState.java:39)
at org.apache.openjpa.kernel.StateManagerImpl.beforeFlush(StateManagerImpl.java:960)
at org.apache.openjpa.kernel.BrokerImpl.flush(BrokerImpl.java:1967)
at org.apache.openjpa.kernel.BrokerImpl.flushSafe(BrokerImpl.java:1927)
at org.apache.openjpa.kernel.BrokerImpl.beforeCompletion(BrokerImpl.java:1845)
at org.apache.openjpa.kernel.LocalManagedRuntime.commit(LocalManagedRuntime.java:81)
at org.apache.openjpa.kernel.BrokerImpl.commit(BrokerImpl.java:1369)
at org.apache.openjpa.kernel.DelegatingBroker.commit(DelegatingBroker.java:877)
at org.apache.openjpa.persistence.EntityManagerImpl.commit(EntityManagerImpl.java:512)
at com.ibm.worklight.config.dbmigration62.ApplicationMigration.migrateApplication(ApplicationMigration.java:138)
at com.ibm.worklight.config.dbmigration62.ApplicationMigration.migrate(ApplicationMigration.java:63)
at com.ibm.worklight.config.dbmigration62.AbstractMigration.run(AbstractMigration.java:66)
at com.ibm.worklight.config.dbmigration62.MigrationTool.run(MigrationTool.java:183)
... 1 more
ERROR 20 : FWLSE3406E: The applications migration failed with error The field "description" of instance "ApplicationEntity[id=51, name=wlProj, displayName=, description=null, thumbnail=null, platformVersion=null]" contained a null value; the metadata for this field specifies that nulls are illegal..
This is recorded as IBM PMR #77138,999,738, and was identified as a defect that is currently being worked on.
There is no local fix other than to wait for an official fix via the support ticket.
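In the meantime you can at least identify the offending rows. A diagnostic sketch; the table and column names below are guesses derived from the ApplicationEntity fields in the error, so verify them against the actual source schema before running:
-- Hypothetical names: connected as proj, list applications with a NULL description
SELECT id, name FROM applications WHERE description IS NULL;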
Related
My OBIEE 12c configuration failed after proceeding to 12%.
OBIEE version: 12.2.1.4
Oracle Database version: 19c
Stack Trace:
Variable in stdconfigactionhandler : BI Configuration
progress in calculate progress6
progress in calculate progress6
java.lang.IllegalStateException: Action:BI_Configuration failed with error:Configure BI Failed with Execution of [/u01/app/middleware/bi_home/oracle_common/common/bin/wlst.sh, /u01/app/middleware/bi_home/bi/modules/oracle.bi.configassistant/essbase.py, /u01/app/middleware/bi_home, /u01/app/middleware/bi_home/user_projects/domains/bi12c, weblogic, Expanded, EDWIPRDAPP1, 9502, 9503, ORACLE, oracle.jdbc.OracleDriver, jdbc:oracle:thin:@//EDWIPRDDB-scan:1521/edwprddb, DEVBI, jdbc:oracle:thin:@//EDWIPRDDB-scan:1521/edwprddb, ] failed with exit value 1
at oracle.as.install.engine.modules.configuration.client.ConfigAction.fail(ConfigAction.java:281)
at oracle.bi.install.config.actions.BIConfigAction.doExecute(BIConfigAction.java:137)
at oracle.as.install.engine.modules.configuration.client.ConfigAction.execute(ConfigAction.java:405)
at oracle.as.install.engine.modules.configuration.action.TaskPerformer.run(TaskPerformer.java:88)
at oracle.as.install.engine.modules.configuration.action.TaskPerformer.startConfigAction(TaskPerformer.java:108)
at oracle.as.install.engine.modules.configuration.action.ActionRequest.perform(ActionRequest.java:15)
at oracle.as.install.engine.modules.configuration.action.RequestQueue.performSequentialExecution(RequestQueue.java:284)
at oracle.as.install.engine.modules.configuration.action.RequestQueue.perform(RequestQueue.java:260)
at oracle.as.install.engine.modules.configuration.standard.StandardConfigActionManager.start(StandardConfigActionManager.java:185)
at oracle.as.install.engine.modules.configuration.boot.ConfigurationExtension.kickstart(ConfigurationExtension.java:82)
at oracle.as.install.engine.modules.configuration.ConfigurationModule.run(ConfigurationModule.java:87)
at java.lang.Thread.run(Thread.java:820)
In Config Module Finish Event...
The following workaround can fix the issue.
Step 1: Run the following commands in the current session:
export JAVA_OPTIONS="-Doracle.jdbc.fanEnabled=false ${JAVA_OPTIONS} ${JAVA_PROPERTIES}"
export JAVA_OPTIONS
export CONFIG_JVM_ARGS=-Doracle.jdbc.fanEnabled=false
Step 2: Remove or rename the following file:
$Oracle_home/oracle_common/lib/ons.jar
Step 3: Re-run the configuration assistant:
$Oracle_Home/bi/bin/config.sh (or config.cmd on Windows)
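Both the fanEnabled flag and ons.jar relate to the JDBC driver's Fast Application Notification support, which appears to be what trips up the configuration assistant against the RAC scan listener. Putting the three steps together as one session (a sketch; $ORACLE_HOME is assumed to be the bi_home path from the trace, and renaming ons.jar is safer than deleting it):
export JAVA_OPTIONS="-Doracle.jdbc.fanEnabled=false ${JAVA_OPTIONS} ${JAVA_PROPERTIES}"
export CONFIG_JVM_ARGS=-Doracle.jdbc.fanEnabled=false
mv $ORACLE_HOME/oracle_common/lib/ons.jar $ORACLE_HOME/oracle_common/lib/ons.jar.bak
$ORACLE_HOME/bi/bin/config.sh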
I want to create a Hive table on top of a Phoenix table in EMR.
I am facing a NoClassDefFoundError: org.apache.hadoop.hbase.security.SecurityInfo
What I have done so far:
I followed the instructions from https://phoenix.apache.org/hive_storage_handler.html and added phoenix-hive-5.0.0-HBase-2.0.jar to hive-env.sh as well as to hive-site.xml.
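Concretely, the wiring looked roughly like this (a sketch; the jar path is an assumption, adjust it to wherever the connector actually sits on your master node). In hive-site.xml:
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///usr/lib/phoenix/phoenix-hive-5.0.0-HBase-2.0.jar</value>
</property>
and the equivalent line in hive-env.sh:
export HIVE_AUX_JARS_PATH=/usr/lib/phoenix/phoenix-hive-5.0.0-HBase-2.0.jar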
Restarted the Hive service: systemctl restart hive-server2.service
Restarted the metastore: systemctl restart hive-hcatalog-server.service
Executed the create table command from Hue:
create external table ext_table (
i1 int,
s1 string,
f1 float,
d1 decimal
)
STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
TBLPROPERTIES (
"phoenix.table.name" = "ext_table",
"phoenix.zookeeper.quorum" = "localhost",
"phoenix.zookeeper.znode.parent" = "/hbase",
"phoenix.zookeeper.client.port" = "2181",
"phoenix.rowkeys" = "i1",
"phoenix.column.mapping" = "i1:i1, s1:s1, f1:f1, d1:d1"
);
Got an exception: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.security.SecurityInfo)
I am using emr-6.1.0
HBase 2.2.5
Phoenix 5.0.0
Hive 3.1.2
Does anybody have an idea what the issue could be?
Update
I followed the advice from @leftjoin and used ADD JAR from Hue to add the phoenix-hive jar to the classpath. Then I hit a jar compatibility issue caused by the phoenix-hive connector that I use:
phoenix-hive-5.0.0-HBase-2.0.jar.
The newer versions of the Phoenix connectors are no longer archived into a single bundle that can be downloaded from the Phoenix website. Instead, the connectors now live in a separate GitHub repo.
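For anyone searching: a build sketch, assuming the repo in question is the Apache phoenix-connectors one (check out the branch or tag matching your component versions; the mvn flags are illustrative):
git clone https://github.com/apache/phoenix-connectors
cd phoenix-connectors
mvn clean package -DskipTests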
I built the new phoenix-hive connector (versions: Phoenix 5.1.0, Hive 3.1.2, HBase 2.2) and used it to create the Hive table.
As a result I got another exception, which I am not able to fix:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. org/apache/phoenix/compat/hbase/CompatSteppingSplitPolicy
I think it is still somehow connected to dependency issues, but I have no clue what exactly.
As a workaround, put the jar into HDFS and execute the ADD JAR command before the create table and the queries:
ADD JAR hdfs://path/to/your/jar/phoenix-hive-5.0.0-HBase-2.0.jar;
I'd like to know if it's possible to have an external table pointing to a DynamoDB table on AWS using Hive.
I'm not using AWS EMR; what I'm using is a Hadoop stack configured through Apache Ambari.
Hive version: Hive 3.1.0.3.1.4.0-315
What I did was:
Downloaded the EMR Dynamo-Hive connector JARs directly from the Maven repository: https://mvnrepository.com/artifact/com.amazon.emr
I loaded all the JARs via hive.aux.jars.path:
emr-dynamodb-hadoop-4.12.0.jar
emr-dynamodb-hive-4.12.0.jar
emr-dynamodb-tools-4.12.0.jar
hive1.2-shims-4.12.0.jar
hive1-shims-4.12.0.jar
hive2-shims-4.12.0.jar
hive2-shims-4.15.0.jar
shims-common-4.12.0.jar
shims-loader-4.12.0.jar
But when I try to create the table with:
CREATE EXTERNAL TABLE dynamo_LabDynamoHive
(id double, nome string)
STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
TBLPROPERTIES (
"dynamodb.table.name" = "LabDynamoHive",
"dynamodb.column.mapping" = "id:id,nome:nome"
);
I get the following error:
INFO : Starting task [Stage-0:DDL] in serial mode
ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: Shim class for Hive version 3.1.1000 does not exist
INFO : Completed executing command(queryId=hive_20200422142624_6ebabdc8-8942-4025-84a8-411505d20895); Time taken: 0.203 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: Shim class for Hive version 3.1.1000 does not exist (state=08S01,code=1)
I know I'm not loading a Shims JAR for Hive 3, but I'd like to know if any of you have tried and succeeded in using an external table with DynamoDB using Hive 3 outside of EMR.
Any help or directions would be greatly appreciated!
The problem is apparently that the source code of this EMR connector is somewhat outdated and lacks the Hive 3.x support recently introduced by AWS for EMR 6.0.
However, you can find a working 3.1 implementation here, forked from the official EMR connector: https://github.com/ramsesrm/emr-dynamodb-connector
Installation steps are as follows:
1- Compile the mentioned code (mvn clean package).
2- Install the 3 JARs in your hive.aux.jars.path, along with the aws-java-sdk-core and aws-java-sdk-dynamodb JARs from AWS (shim JARs are not required), 5 in total.
That's it. Don't forget to specify the region as a TBLPROPERTIES entry if you're not using the default US one, as shown in the sketch below.
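A sketch of the resulting DDL with the region set (the dynamodb.region property name is taken from the connector source, so verify it against the fork; the region value is just an example):
CREATE EXTERNAL TABLE dynamo_LabDynamoHive (id double, nome string)
STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
TBLPROPERTIES (
  "dynamodb.table.name" = "LabDynamoHive",
  "dynamodb.column.mapping" = "id:id,nome:nome",
  "dynamodb.region" = "sa-east-1"
);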
I am trying to migrate data from Amazon RDS MySQL to Azure Database for MySQL using 'Attunity Replicate for Microsoft Migrations (Replicate MSM)'.
For this I set up the Replicate MSM tool locally on a Windows 10 machine, defined and tested the source and target database endpoints (RDS as source, Azure as target), installed the required MySQL and ODBC drivers, and enabled the binary logging and local-infile parameters on both databases. But when I run the migration task, it only creates the schemas of the migrated tables on the target DB and fails at the 'load data local infile' command.
Here's the stack trace:
00014468: 2019-06-20T11:17:41 [SOURCE_UNLOAD ]I: Unload finished for table 'TestDb'.'Employee' (Id = 1). 2000 rows sent. (streamcomponent.c:2892)
00014968: 2019-06-20T11:17:41 [TARGET_LOAD ]I: Loading table 'migrationtesting'.'Employee' with parallel threads (odbc_endpoint_imp.c:5256)
00014968: 2019-06-20T11:17:41 [TARGET_LOAD ]I: Use parallel load thread pool with '3' threads (csv_target.c:280)
00014968: 2019-06-20T11:17:42 [TARGET_LOAD ]I: Load finished for table 'TestDb'.'Employee' (Id = 1). 2000 rows received. 0 rows skipped. Volume transfered 904960 (streamcomponent.c:3116)
00014968: 2019-06-20T11:17:43 [TARGET_LOAD ]E: Failed to execute statement: 'load data local infile "C:\\Program Files\\Attunity\\ReplicateMSM\\data\\tasks\\Aws2Azure\\data_files\\1\\LOAD00000001.csv" into table `migrationtesting`.`Employee` CHARACTER SET UTF8 fields terminated by ',' enclosed by '"' lines terminated by '\n'( `id`,`name`,`gender`,`mobile`,`city` ) ;' [1022502] (ar_odbc_stmt.c:4349)
00014968: 2019-06-20T11:17:43 [TARGET_LOAD ]E: RetCode: SQL_ERROR SqlState: HY000 NativeError: 1148 Message: [MySQL][ODBC 5.3(w) Driver][mysqld-5.6.39.0]The used command is not allowed with this MySQL version [1022502] (ar_odbc_stmt.c:4355)
00007376: 2019-06-20T11:17:43 [TASK_MANAGER ]W: Table 'TestDb'.'Employee' (subtask 1 thread 1) is suspended (replicationtask.c:2050)
00014968: 2019-06-20T11:17:43 [TARGET_LOAD ]E: Failed to start load process for file '1' [1022502] (csv_target.c:1350)
00007376: 2019-06-20T11:17:43 [TASK_MANAGER ]I: All tables are loaded. Full load only task is stopped (replicationtask.c:2992)
00014968: 2019-06-20T11:17:43 [TARGET_LOAD ]E: Failed to load file '1' [1022502] (csv_target.c:1418)
00014968: 2019-06-20T11:17:43 [TARGET_LOAD ]E: Failed to load data from csv file. [1022502] (odbc_endpoint_imp.c:5331)
According to the Azure docs:
LOAD DATA INFILE is supported, but the [LOCAL] parameter must be specified and directed to a UNC path (Azure storage mounted through SMB).
If this is the solution, then kindly explain how to implement it.
Note: the MySQL Server version on both RDS and Azure is 5.6.
The error log suggests you're using v5.3 of the MySQL ODBC driver, in which the LOAD DATA LOCAL INFILE functionality is disabled by default. To enable it, you need to explicitly set ENABLE_LOCAL_INFILE to 1.
In Attunity Replicate for Microsoft Migrations you have to enable this flag for your target database endpoint. You can enable it by following these steps:
Open the settings of the target endpoint.
Go to the Advanced tab > Internal Parameters.
Add the search key additionalConnectionProperties and hit Enter (it's case-sensitive, so just copy/paste it as-is).
A new key is created under Internal Parameters; set the value of this newly created key to: ENABLE_LOCAL_INFILE=1;
Save and then Reload your task.
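It can also be worth confirming that the server side allows local infile at all. A quick check against the Azure target (local_infile is a configurable server parameter on Azure Database for MySQL; if SET GLOBAL is not permitted for your login, flip the parameter in the Azure portal instead):
SHOW VARIABLES LIKE 'local_infile';
-- if it reports OFF:
SET GLOBAL local_infile = 1;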
Credits: Official Attunity Community/Support team for Microsoft Migrations
I have been upgrading to Visual Studio 2013 (Update 3) on another dev machine.
I then tried to create a test project in an existing collection. It crashed. I tried it three times, then deleted the corrupted projects.
After that I thought I should upgrade to TFS 2013 (Update 3) too. So I tried to upgrade my existing collections. The upgrade failed for the collection with the corrupted project.
So I thought it would be easy to just restore the database. But that's not so easy: it tells me that I need to restore the configuration DB too. In order to do so, it says I need to rename the configuration DB. But then I cannot start the management tool to restore?! It freezes.
What would you suggest? I have a backup, but so far I cannot restore it. And I do not understand why it tells me that I need to restore the configuration backup too. I always thought collections were independent.
Here are some additional screenshots:
Upgrade progress problem:
Complete screenshot:
[2014-08-07 23:30:13Z][Error] TF400744: An error occurred while executing the following script: SetRecoveryModelToSimple.sql. Failed batch starts on the line 1. Statement line: 1. Script line: 1. Error: 5069 ALTER DATABASE statement failed.
As suggested, I have run the Best Practice Analyzer.
The upgrade log is quite large; I am posting just the last lines:
"[Info #23:29:51.189]
[Info #23:29:51.189] +-+-+-+-+-| ResultsSqmData |+-+-+-+-+-
[Info #23:29:51.189] Feature: ApplicationTier (1)
[Info #23:29:51.190] Feature: ApplicationTier; previousFailure: False
[Info #23:29:51.192] Error count: 0
[Info #23:29:51.192] Warning count: 0
[Info #23:29:51.192] Overall Result: TotalSuccess (1)
[Info #23:29:51.192] WebSiteData: 9
[Info #23:29:51.192] SqlData: 8
[Info #23:29:51.193] RSData: 0
[Info #23:29:51.193] WSSData: 0
[Info #23:29:51.193] Wizard: UpgradeWizard (4)
[Info #23:29:51.193] TfsConfigData: 8194
[Info #23:29:51.197] serviceLevel: Dev12.M68
[Info #23:29:51.197] Fatal Error Location: 0
[Info #23:29:51.197] Activity = ApplicationTierUpgrade (4)
[Info #23:29:53.053] ResultSqmData.UpdateIssues
[Info #23:29:53.068] no issues
[Error #06:53:08.370] TF400744: An error occurred while executing the following script: SetRecoveryModelToSimple.sql. Failed batch starts on the line 1. Statement line: 1. Script line: 1. Error: 5069 ALTER DATABASE statement failed.
[Info #06:53:08.385] To configure the new features for a team project, follow the steps in http://go.microsoft.com/fwlink/?LinkID=229859
"
When I try to detach it, this occurs:
TF401219: The team project collection 'XXX' cannot be detached because its version ID is different than the ID for the configuration database. The collection has the following version: Dev12.M62. The Team Foundation Server is at the following version: Dev12.M68.
When I try to restore a backup, this occurs:
TF400990: Database Tfs_Configuration exists on SQL instance NUBO-XXX\SqlExpress. Please drop or rename the existing database before the restore operation
First of all, keep calm.
I would try to complete the upgrade before trying other options. From what you show, it seems you have an issue at the SQL level; it could be a permission problem, so check both the TFS service account and your own user (a quick sketch to test this follows).
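A quick way to test that theory is to run by hand what SetRecoveryModelToSimple.sql evidently attempts, connected as the TFS service account (the database name below is an example; use the failing collection's database):
USE master;
ALTER DATABASE [Tfs_YourCollection] SET RECOVERY SIMPLE;
-- an error 5069 here again would point to a SQL Server permission problem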
If you want to roll back and you used the integrated backup, in practice you have to restore all the databases.