UCanAccess Exception UCAExc:::3.0.0 needs column or cannot drop sole column of table

I am attempting to use UCanAccess to build a program to help me script some Access database queries; ultimately I want daily reports generated from a cron job. My couple of test cases that connect to the database give the following exception: UCAExc:::3.0.0 needs column or cannot drop sole column of table.
I'm a dabbler in databases, and this database is maintained by some other part of the company's organization.
The exception is thrown on the getConnection call.
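For reference, here is a minimal sketch of the failing call (the file path below is a placeholder for the real one); the exception fires on getConnection, before any query runs:

import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectTest {
    public static void main(String[] args) throws Exception {
        // Load the driver explicitly (safe on older JDBC versions).
        Class.forName("net.ucanaccess.jdbc.UcanaccessDriver");
        // Placeholder path to the .accdb file maintained by the other team.
        String url = "jdbc:ucanaccess://C:/path/to/database.accdb";
        try (Connection conn = DriverManager.getConnection(url)) { // throws here
            System.out.println("Connected: " + conn.getMetaData().getDatabaseProductName());
        }
    }
}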
Output of console.sh after compact and repair:
Cannot execute: CREATE CACHED TABLE EMPL() needs column or cannot drop sole column of table
Loaded Tables:
Loaded Queries:
Loaded Indexes:
net.ucanaccess.jdbc.UcanaccessSQLException: UCAExc:::3.0.0 needs column or cannot drop sole column of table
at net.ucanaccess.jdbc.UcanaccessDriver.connect(UcanaccessDriver.java:258)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:187)
at net.ucanaccess.console.Main.main(Main.java:145)
Caused by: java.sql.SQLSyntaxErrorException: needs column or cannot drop sole column of table
at org.hsqldb.jdbc.JDBCUtil.sqlException(Unknown Source)
at org.hsqldb.jdbc.JDBCUtil.sqlException(Unknown Source)
at org.hsqldb.jdbc.JDBCStatement.fetchResult(Unknown Source)
at org.hsqldb.jdbc.JDBCStatement.executeUpdate(Unknown Source)
at net.ucanaccess.converters.LoadJet.exec(LoadJet.java:1416)
at net.ucanaccess.converters.LoadJet.access$000(LoadJet.java:71)
at net.ucanaccess.converters.LoadJet$TablesLoader.createSyncrTable(LoadJet.java:481)
at net.ucanaccess.converters.LoadJet$TablesLoader.createSyncrTable(LoadJet.java:411)
at net.ucanaccess.converters.LoadJet$TablesLoader.createTable(LoadJet.java:807)
at net.ucanaccess.converters.LoadJet$TablesLoader.createTable(LoadJet.java:761)
at net.ucanaccess.converters.LoadJet$TablesLoader.createTables(LoadJet.java:942)
at net.ucanaccess.converters.LoadJet$TablesLoader.loadTables(LoadJet.java:1036)
at net.ucanaccess.converters.LoadJet$TablesLoader.access$2900(LoadJet.java:273)
at net.ucanaccess.converters.LoadJet.loadDB(LoadJet.java:1479)
at net.ucanaccess.jdbc.UcanaccessDriver.connect(UcanaccessDriver.java:243)
... 3 more
Caused by: org.hsqldb.HsqlException: needs column or cannot drop sole column of table
at org.hsqldb.error.Error.error(Unknown Source)
at org.hsqldb.error.Error.error(Unknown Source)
at org.hsqldb.ParserDDL.compileCreateTableBody(Unknown Source)
at org.hsqldb.ParserDDL.compileCreateTable(Unknown Source)
at org.hsqldb.ParserDDL.compileCreate(Unknown Source)
at org.hsqldb.ParserCommand.compilePart(Unknown Source)
at org.hsqldb.ParserCommand.compileStatements(Unknown Source)
at org.hsqldb.Session.executeDirectStatement(Unknown Source)
at org.hsqldb.Session.execute(Unknown Source)
... 16 more
UCAExc:::3.0.0 needs column or cannot drop sole column of table

UCanAccess will throw this exception when opening a database that contains a table with no columns defined.
A table with no columns is not a valid table, and Access itself will not allow us to save such a table.
However, it is possible to make one using DDL, as in the following VBScript:
Option Explicit
Dim con
Set con = CreateObject("ADODB.Connection")
con.Open "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\__tmp\31886535.accdb"
On Error Resume Next
con.Execute "DROP TABLE BadTable"
On Error Goto 0
con.Execute "CREATE TABLE BadTable (foo INT)"
con.Execute "ALTER TABLE BadTable DROP COLUMN foo"
con.Close
To avoid the error when opening the database with UCanAccess, simply open the database in Access and delete the table that has no columns defined.
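If opening the file in Access is inconvenient, one way to locate the offending table is with Jackcess, the library UCanAccess uses internally to read the file. A minimal sketch, with a placeholder path:

import java.io.File;

import com.healthmarketscience.jackcess.Database;
import com.healthmarketscience.jackcess.DatabaseBuilder;

public class FindEmptyTables {
    public static void main(String[] args) throws Exception {
        // Placeholder path; Jackcess reads the file directly, no JDBC involved.
        try (Database db = DatabaseBuilder.open(new File("C:/path/to/database.accdb"))) {
            for (String name : db.getTableNames()) {
                // A table with zero columns is what trips up UCanAccess.
                if (db.getTable(name).getColumns().isEmpty()) {
                    System.out.println("Table with no columns: " + name);
                }
            }
        }
    }
}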

Related

H2 - Table not found after ALTER table - leftover "_COPY_" table

I have an application that uses H2 v1.4.199, with versioning provided by Flyway.
For roughly 25% of the users, one migration ends up in an unusual state.
The migration performs a number of ALTER TABLE commands (typically dropping columns) on one table (let's call it "fruits").
2021-01-25 14:57:04,226 ERROR main h2database - mydatabase:database opening mydatabase
org.h2.message.DbException: Table "FRUITS" not found [42102-199]
at org.h2.message.DbException.get(DbException.java:205)
at org.h2.message.DbException.get(DbException.java:181)
at org.h2.command.ddl.AlterTableAddConstraint.tryUpdate(AlterTableAddConstraint.java:108)
at org.h2.command.ddl.AlterTableAddConstraint.update(AlterTableAddConstraint.java:78)
at org.h2.engine.MetaRecord.execute(MetaRecord.java:60)
at org.h2.engine.Database.open(Database.java:842)
at org.h2.engine.Database.openDatabase(Database.java:319)
at org.h2.engine.Database.<init>(Database.java:313)
at org.h2.engine.Engine.openSession(Engine.java:69)
at org.h2.engine.Engine.openSession(Engine.java:201)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:178)
at org.h2.engine.Engine.createSession(Engine.java:161)
at org.h2.engine.Engine.createSession(Engine.java:31)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:336)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:169)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:148)
at org.h2.Driver.connect(Driver.java:69)
at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
... more
Caused by: org.h2.jdbc.JdbcSQLSyntaxErrorException: Table "FRUITS" not found; SQL statement:
ALTER TABLE PUBLIC.FRUITS ADD CONSTRAINT PUBLIC.CONSTRAINT_23 PRIMARY KEY(ID) INDEX PUBLIC.PRIMARY_KEY_23 [42102-199]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:451)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
... 80 more
The "fruits" table has a constraint, but they end up without the "fruits" table that acts on and ends up stuck - and you can't even connect to the database.
Looking through the debug logs, I can see the following:
2021-02-02 10:11:18 lock: 1 exclusive write lock requesting for FRUITS_COPY_4_3
2021-02-02 10:11:18 lock: 1 exclusive write lock added for FRUITS_COPY_4_3
Looking through the H2 source code, I can see that a temporary table is created with a "_COPY_" suffix, so I'm assuming the existing table is dropped but the temporary table is somehow persisted under its temporary name, which means the original table name no longer exists.
Unfortunately I'm not able to reproduce this so it's difficult to provide any further information.
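Since the database refuses to open normally, one way to inspect it is H2's Recover tool, which dumps the database contents to a SQL script; a leftover "_COPY_" table should show up there by name. A minimal sketch (the directory and database name are placeholders):

import org.h2.tools.Recover;

public class DumpDatabase {
    public static void main(String[] args) throws Exception {
        // Writes mydatabase.h2.sql next to the database file;
        // search that script for "_COPY_" table definitions.
        Recover.execute("/path/to/db/dir", "mydatabase");
    }
}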

Hive: Getting an error when executing SELECT and DROP PARTITION queries at the same time

I am getting an error when running two queries at the same time.
Here is the scenario.
I am using AWS EMR, and below is my Hive table schema:
CREATE TABLE India (OFFICE_NAME STRING,
OFFICE_STATUS STRING,
PINCODE INT,
TELEPHONE BIGINT,
TALUK STRING,
DISTRICT STRING,
POSTAL_DIVISION STRING,
POSTAL_REGION STRING,
POSTAL_CIRCLE STRING
)
PARTITIONED BY (STATE STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 's3a://mybucket/'
TBLPROPERTIES ( 'parquet.compression'='SNAPPY', 'transient_lastDdlTime'='1537781726');
First query:
SELECT count( distinct STATE ) FROM India;
Second query:
ALTER TABLE India DROP PARTITION (STATE='Delhi');
While the first query was running, I executed the second query at the same time, and got this error in the first query:
Error: java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:271)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:144)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:200)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:186)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:455)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:344)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedConstructorAccessor42.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:257)
... 11 more
Caused by: com.amazon.ws.emr.hadoop.fs.consistency.exception.FileDeletedInMetadataNotFoundException: File 'mybucket/India/state=Delhi/000000_0' is marked as deleted in the metadata
at com.amazon.ws.emr.hadoop.fs.consistency.ConsistencyCheckerS3FileSystem.getFileStatus(ConsistencyCheckerS3FileSystem.java:440)
at com.amazon.ws.emr.hadoop.fs.consistency.ConsistencyCheckerS3FileSystem.getFileStatus(ConsistencyCheckerS3FileSystem.java:416)
at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy34.getFileStatus(Unknown Source)
at com.amazon.ws.emr.hadoop.fs.s3n2.S3NativeFileSystem2.getFileStatus(S3NativeFileSystem2.java:227)
at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.getFileStatus(EmrFileSystem.java:509)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:386)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:372)
at org.apache.hadoop.hive.ql.io.parquet.ParquetRecordReaderBase.getSplit(ParquetRecordReaderBase.java:79)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:75)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:60)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:75)
at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:99)
... 15 more
After googling, I found this link:
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emrfs-files-tracked.html
Is there any way to sync the metadata at runtime, or to make the second query wait until the first one has completed?
Please help me fix this issue, or suggest any parameter that would fix it.
Partition paths and splits are calculated at the very beginning of the query. Your mappers had started reading files in the partition location when you dropped the partition; because your table is managed, dropping the partition also deleted the files, which causes the FileDeletedInMetadataNotFoundException at runtime.
If you still want to drop a partition while it is being read, try this:
If you make your table EXTERNAL, then DROP PARTITION will not drop the files; they will remain, the exception should not occur, and you can remove the partition location from the filesystem later. Or use an S3 lifecycle policy to drop old files, as described here.
Unfortunately, an already-started job cannot detect that a Hive partition and its files were dropped and gracefully skip reading them, because the Hive metadata has already been read, the query plan has been built, and the splits may already be calculated.
So the solution is to drop the Hive partitions and postpone dropping the files (see the sketch below).
BTW, adding partitions while querying the table works fine.
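A minimal JDBC sketch of that approach; the HiveServer2 endpoint and credentials are placeholders. Marking the table EXTERNAL first means DROP PARTITION touches only the metastore, not the files in S3:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DropPartitionMetadataOnly {
    public static void main(String[] args) throws Exception {
        // Placeholder HiveServer2 endpoint; adjust host, port, and credentials.
        try (Connection con = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "hive", "");
             Statement st = con.createStatement()) {
            // Make the table external so dropping a partition keeps the data files.
            st.execute("ALTER TABLE India SET TBLPROPERTIES ('EXTERNAL'='TRUE')");
            st.execute("ALTER TABLE India DROP PARTITION (STATE='Delhi')");
            // The files in the partition location can be deleted later,
            // once no running query is still reading them.
        }
    }
}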

Not able to recover partitions through alter table in Hive 1.2

I am not able to run ALTER TABLE MY_EXTERNAL_TABLE RECOVER PARTITIONS; on Hive 1.2. When I run the alternative, MSCK REPAIR TABLE MY_EXTERNAL_TABLE, it just lists the partitions that are missing from the Hive metastore without adding them. Based on the source code from hive-exec, I can see under org/apache/hadoop/hive/ql/parse/HiveParser.g:1001:1 that there is no token in the grammar matching RECOVER PARTITIONS.
Kindly let me know if there is a way to recover all the partitions after creating an external table on Hive 1.2.
Stack trace for the ALTER TABLE MY_EXTERNAL_TABLE RECOVER PARTITIONS; statement:
NoViableAltException(26#[])
at org.apache.hadoop.hive.ql.parse.HiveParser.alterTableStatementSuffix(HiveParser.java:7946)
at org.apache.hadoop.hive.ql.parse.HiveParser.alterStatement(HiveParser.java:7409)
at org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:2693)
at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1658)
at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1117)
at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:202)
at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:431)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:316)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1189)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1237)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1126)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1116)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:216)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:168)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:379)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:739)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:624)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
FAILED: ParseException line 1:45 cannot recognize input near 'recover' 'partitions' '<EOF>' in alter table statement
Note: I am using S3 as the storage, HDP 2.4 for Hadoop, and Hive 1.2.
Hi, after spending some time debugging I found the fix. The reason MSCK was not adding the partitions is that my partition names were in camel case: the filesystem is case sensitive, but Hive treats all partition column names as lowercase. Once I made my partition paths lowercase, it worked like a charm.
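To illustrate, a sketch of the repair-and-verify cycle over Hive JDBC (the endpoint and credentials are placeholders); per the fix above, MSCK only picks up directories whose names use the lowercase partition column:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RepairAndVerify {
    public static void main(String[] args) throws Exception {
        // Placeholder HiveServer2 endpoint; adjust host, port, and credentials.
        try (Connection con = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "hive", "");
             Statement st = con.createStatement()) {
            // Works when directories look like .../state=delhi/,
            // not .../STATE=Delhi/.
            st.execute("MSCK REPAIR TABLE my_external_table");
            try (ResultSet rs = st.executeQuery("SHOW PARTITIONS my_external_table")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}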
To run it from Spark, use:
spark.sql("ALTER TABLE MY_EXTERNAL_TABLE RECOVER PARTITIONS")
This will work.

Unable to insert values in hive table using beeline

I have connected to Hive using the Thrift server, and I am using beeline for querying the tables.
I am able to see the existing tables and to perform select/aggregate queries on them.
I am also able to create tables and databases, but when I try to INSERT VALUES into a table, I get the following error:
15/08/10 13:02:32 WARN ThriftCLIService: Error executing statement:
org.apache.hive.service.cli.HiveSQLException: org.apache.spark.sql.hive.HiveQl$ParseException: Failed to parse: insert into table test values("kundan")
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.run(Shim13.scala:192)
at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:231)
at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:218)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:79)
at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:37)
at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:64)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:493)
at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:60)
at com.sun.proxy.$Proxy12.executeStatementAsync(Unknown Source)
at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:233)
at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:344)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313)
In Hive it is not possible to insert values directly into a table; instead, you can do it in one of three ways:
1. You can insert values from one table into another table using the INSERT command.
For ex:
INSERT INTO TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)]
select_statement1 FROM from_statement;
2. Loading data into a table using an input file from HDFS or from a local location.
For ex:
LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)];
3. Loading data into a table while creating the table itself, using the input file.
For ex:
CREATE TABLE tablename (fields) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE LOCATION '<hdfs_location>';
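For completeness, a sketch of option 2 executed over the same Thrift/JDBC connection that beeline uses; the endpoint, file path, and table name are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LoadIntoHive {
    public static void main(String[] args) throws Exception {
        // Placeholder HiveServer2 endpoint, the same one beeline connects to.
        try (Connection con = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "user", "");
             Statement st = con.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS test (name STRING)");
            // Stage the rows in a local file instead of using INSERT ... VALUES.
            st.execute("LOAD DATA LOCAL INPATH '/tmp/test_data.txt' INTO TABLE test");
        }
    }
}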

Error while executing SELECT query in Hive

I'm using Hadoop 1.1.2, HBase 0.94.8, and Hive 0.14.
I'm trying to create a table in HBase using Hive, and to load data into it later with INSERT OVERWRITE.
For the moment, I was able to create the table:
CREATE TABLE hbase_table_emp(id int, name string, role string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:name,cf1:role")
TBLPROPERTIES ("hbase.table.name" = "emp");
and to load data into another table that I will later overwrite into the HBase table:
hive> create table testemp(id int, name string, role string) row format delimited fields terminated by '\t';
hive> load data local inpath '/home/user/sample.txt' into table testemp;
But when I run select * from testemp; to verify that the data was loaded successfully, I get this error:
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.mapred.JobConf.unset(Ljava/lang/String;)V
at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushFilters(HiveInputFormat.java:432)
at org.apache.hadoop.hive.ql.exec.FetchTask.initialize(FetchTask.java:76)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:443)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:303)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1067)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1129)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:994)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:247)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:199)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:410)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:783)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:616)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:622)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Could someone please help me? Thank you.
Unfortunately, I believe you will have to upgrade Hadoop to at least 1.2.0.
It appears that Hive is trying to call the unset method of the org.apache.hadoop.mapred.JobConf class. Looking at the API documentation for that class in Hadoop 1.1.2, you can see that the method does not exist.
The first release from the 1.x series in which that method gets implemented is 1.2.0 (see the API documentation for the same class). Note that the method actually gets inherited from the org.apache.hadoop.conf.Configuration class.
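A quick way to confirm which side of that boundary your cluster's jars are on is a reflection probe for the exact signature from the stack trace, unset(Ljava/lang/String;)V. A minimal sketch, run with the same Hadoop jars on the classpath:

import org.apache.hadoop.mapred.JobConf;

public class CheckUnset {
    public static void main(String[] args) {
        try {
            // The method Hive's HiveInputFormat.pushFilters tries to call.
            JobConf.class.getMethod("unset", String.class);
            System.out.println("JobConf.unset(String) is present (Hadoop >= 1.2.0)");
        } catch (NoSuchMethodException e) {
            System.out.println("JobConf.unset(String) is missing; upgrade Hadoop");
        }
    }
}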
