ClickHouse MaterializedView: getting an error after running a query on it

I created a MaterializedView with a target table.
After data has been inserted into the view, I try to query it, but the query stops clickhouse-server after about 1 second with this error:
Exception on client:
Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from localhost:9000, 127.0.0.1
The aggregations causing the problem are:
first_url SimpleAggregateFunction(any, String),
last_url SimpleAggregateFunction(anyLast, String),
Why is that?

Obviously it's a bug in CH.
CH version?
You should create a bug report and provide the table DDL, the full error message from the CH logs, a data sample / reproducible example, and the CH version.
https://github.com/ClickHouse/ClickHouse/issues

I managed to fix it. I used:
AggregateFunction(any, String),
and anyState in the MV.
What is the difference between any and anyState?
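For context: SimpleAggregateFunction(any, String) stores the value itself, while AggregateFunction(any, String) stores an intermediate aggregation state, which the MV must fill with the -State combinator (anyState) and reads must finish with the -Merge combinator (anyMerge). A minimal sketch of that workaround, assuming a hypothetical source table visits(user_id, url); all names are illustrative:

CREATE TABLE visits_agg
(
    user_id   UInt64,
    first_url AggregateFunction(any, String),
    last_url  AggregateFunction(anyLast, String)
)
ENGINE = AggregatingMergeTree
ORDER BY user_id;

CREATE MATERIALIZED VIEW visits_mv TO visits_agg AS
SELECT
    user_id,
    anyState(url)     AS first_url,
    anyLastState(url) AS last_url
FROM visits
GROUP BY user_id;

-- reading: -State columns must be finished with the matching -Merge combinator
SELECT
    user_id,
    anyMerge(first_url)    AS first_url,
    anyLastMerge(last_url) AS last_url
FROM visits_agg
GROUP BY user_id;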

Related

Why am I getting an error when modifying a table column in ClickHouse?

I have a table with a column action LowCardinality(String),
but I want to change this column to action Nullable(String), and I am getting this error:
Code: 473, e.displayText() = DB::Exception: READ locking attempt on
"glassbox.beacon_event" has timed out! (120000ms) Possible deadlock
avoided. Client should retry.: While executing Columns (version
20.4.2.9 (official build))
The client (Tabix) also gets stuck.
If I run it as two commands like this, it works:
alter table test modify column action String
alter table test modify column action Nullable(String)
Why can't I run it as one command?
alter table test modify column action Nullable(String)
Probably it's a bug. Try CH version 20.6.
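For reference, a minimal reproduction sketch of the two-step workaround described above, assuming an illustrative MergeTree table named test:

CREATE TABLE test (action LowCardinality(String)) ENGINE = MergeTree ORDER BY tuple();
-- dropping LowCardinality first and then making the column Nullable works:
ALTER TABLE test MODIFY COLUMN action String;
ALTER TABLE test MODIFY COLUMN action Nullable(String);
-- going straight from LowCardinality(String) to Nullable(String) in one step hits the locking error above.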

"NzSQLException: The update count exceeded Integer.MAX_VALUE" ERROR only on JDBC connection

When constructing a rather large table in Netezza, I get the following ERROR when using a JDBC connection:
org.netezza.error.NzSQLException: The update count exceeded Integer.MAX_VALUE.
The table does get created properly, but the code throws an exception. When I try running the same SQL using nzsql I get:
INSERT 0 2395423258
i.e. no exception is thrown. It seems the variable storing the record count in JDBC is not large enough?
Has anyone else encountered this error? How did you deal with it?
Modify your connection string to include ignoreUpdateCount=on as a parameter and try again.
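For example, a hedged connection-string sketch (host, port, and database name are placeholders; check your driver version's documentation for the exact property syntax):

jdbc:netezza://nzhost:5480/mydb;ignoreUpdateCount=on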

When I use multi-table Greenplum bulk load in Kettle, I get an error

When I use multi-table Greenplum bulk load in Kettle, I get the following error:
ERROR: Segment reject limit reached. Aborting operation. Last error was: missing data for column "deviceid"
There are formatting errors in the incoming data that do not match the DDL/format of the target table t_e_app_monitor_log.
Check the gpload log file, which defaults to ~/gpAdminLogs (https://gpdb.docs.pivotal.io/530/utility_guide/admin_utilities/gpload.html).
Also, I am not familiar with Kettle, but add a log file on your last screen under the GP configuration and review that.
Finally, the default for gpload is to fail on the first formatting error, but you can have it fail only after N failed rows and log the reasons into a table for easier troubleshooting. Check out the doc link above and the sections on ERROR_LIMIT and LOG_ERRORS.
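As a rough illustration, a gpload control-file fragment using those options (file path, delimiter, and target table name are assumptions based on the error message; GPDB 5.x syntax is assumed, see the linked doc for the full layout):

GPLOAD:
   INPUT:
    - SOURCE:
         FILE:
          - /data/app_monitor_log.dat
    - FORMAT: text
    - DELIMITER: '|'
    - ERROR_LIMIT: 50
    - LOG_ERRORS: true
   OUTPUT:
    - TABLE: t_e_app_monitor_log
    - MODE: insert

With LOG_ERRORS enabled, badly formatted rows are captured in the error log instead of aborting the load on the first reject, as long as the reject count stays under ERROR_LIMIT.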

Oracle destination in SSIS data flow is failing with error ORA-01405: fetched column value is NULL

I have an SSIS package with one DFT (Data Flow Task). In the DFT, I have one Oracle source and one Oracle destination.
In the Oracle destination I am using the data access mode 'Table Name - Fast Load (Using Direct Path)'.
There is a strange issue with that: it fails with the following error
[Dest 1 [251]] Error: Fast Load error encountered during
PreLoad or Setup phase. Class: OCI_ERROR Status: -1 Code: 0 Note:
At: ORAOPRdrpthEngine.c:735 Text: ORA-00604: error occurred at
recursive SQL level 1 ORA-01405: fetched column value is NULL
I thought it was due to NULL values in the source, but there is no NOT NULL constraint on the destination table, so that should not be an issue. And to add to this, the package works fine with 'Normal Load' but not with 'Fast Load'.
I have tried using NVL for NULL values from the source, but still no luck.
I have also recreated the DFT with these connections, but that too was in vain.
Can someone please help me with this?
It worked fine after recreating the Oracle table with the same script.

Weird exception in Hive: Error in semantic analysis

Maybe when you see "Error in semantic analysis" in the title, you take it for a syntax error?
Of course it is not; I will show you what happened.
hive> use android;
OK
Time taken: 0.223 seconds
hive> desc tb_user_basics;
OK
col_datetime string
col_is_day_new string
col_is_hour_new string
col_ch string
...
p_date string
p_hourmin string
Time taken: 0.189 seconds
hive> select count(distinct col_udid) from android.tb_user_basics where p_date>='20121001' and p_date<='20121231';
FAILED: Error in semantic analysis: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
hive>
>
> select count(distinct col_udid) from android.tb_user_basics where p_date>='20121001' and p_date<='20121231';
FAILED: Error in semantic analysis: Unable to fetch table tb_user_basics
I'm very sure the table exists in the database android. After the first statement failed, it appears as if the table were missing (even when I add the db prefix to the table name).
I wonder whether it's because the volume of data is very big; maybe you noticed that the time range is [20121001, 20121231].
I have run the command many times before, and it always raises this error. But if I change the condition to "p_date='20121001'", the statement runs normally. (Because the volume is smaller?)
I'm looking forward to your answers. Thanks.
Probably you are in strict mode. One of the strict-mode restrictions is that partitions have to be specified, which is why queries with "p_date='20121001'" in the WHERE clause work.
Please try non-strict mode:
set hive.mapred.mode=nonstrict;
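A short session sketch, assuming the strict-mode check was indeed what blocked the original query:

set hive.mapred.mode=nonstrict;
select count(distinct col_udid) from android.tb_user_basics where p_date>='20121001' and p_date<='20121231';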
