How to properly create a ksqlDB stream based on a Kafka topic? - apache-kafka-streams

I'm attempting to create a KsqlDB stream (using KSQL CLI) in the following way:
CREATE STREAM orders_stream (
OrderId BIGINT,
Description VARCHAR
) WITH (
KAFKA_TOPIC = ‘orders’,
VALUE_FORMAT = 'JSON'
);
I get this error:
line 5:19: mismatched input '‘' expecting {'NULL', 'TRUE', 'FALSE', '-', STRING, INTEGER_VALUE, DECIMAL_VALUE, FLOATING_POINT_VALUE, VARIABLE}
Statement: CREATE STREAM orders_stream (
Ordered BIGINT,
Description VARCHAR
) WITH (
KAFKA_TOPIC = ‘orders’,
VALUE_FORMAT = 'JSON'
);
Caused by: line 5:19: mismatched input '‘' expecting {'NULL', 'TRUE', 'FALSE',
'-', STRING, INTEGER_VALUE, DECIMAL_VALUE, FLOATING_POINT_VALUE, VARIABLE}
Caused by: org.antlr.v4.runtime.InputMismatchException
I can't seem to find the issue here. Any help is appreciated.

Just change ‘orders’ to 'orders' and it works. The typographic quote ‘ is not a valid string delimiter in KSQL; it usually sneaks in when a statement is pasted from a word processor or a formatted web page.
ksql> CREATE STREAM orders_stream (
OrderId BIGINT, Description VARCHAR )
WITH ( KAFKA_TOPIC = 'orders', PARTITIONS=1, REPLICAS=1,
VALUE_FORMAT = 'JSON' );
Message
Stream created
ksql>
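If you want to confirm the stream was registered against the right topic and format (for example after cleaning up pasted quotes), the CLI's inspection commands are useful. A minimal check, assuming the stream name above:

```sql
-- Show the stream's columns, key format, and value format
DESCRIBE orders_stream;

-- List all streams together with their backing Kafka topics
SHOW STREAMS;
```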

Related

How to create an array of struct in AWS Athena - Hive on Parquet data

I tried creating a table on AWS Athena with Hive on Parquet data with the following:
CREATE TABLE IF NOT EXISTS db.test (
country STRING ,
day_part STRING ,
dma STRING ,
first_seen STRING,
geohash STRING ,
last_seen STRING,
location_backfill ARRAY <
element STRUCT <
backfill_type: BIGINT,
brq: BIGINT ,
first_seen: STRING,
last_seen: STRING ,
num_days: BIGINT >>
)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
's3://<location>'
TBLPROPERTIES (
'parquet.compress'='SNAPPY',
'transient_lastDdlTime'='<sometime>')
I repeatedly get the error:
line 9:12: mismatched input 'struct' expecting {'(', 'array', '>'} (service: amazonathena; status code: 400; error code: invalidrequestexception; request id: )
The syntax seems fine to me, and I'm not sure what's wrong. The data is stored in the S3 path.
Any idea what may be causing this problem?
Array elements are not named; specify only the type (STRUCT):
location_backfill ARRAY <
STRUCT <
backfill_type: BIGINT,
brq: BIGINT ,
first_seen: STRING,
last_seen: STRING ,
num_days: BIGINT >>
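Applying that fix, the full DDL from the question would read as follows (the same SerDe, location, and table properties; the placeholders are kept as-is):

```sql
CREATE TABLE IF NOT EXISTS db.test (
  country STRING,
  day_part STRING,
  dma STRING,
  first_seen STRING,
  geohash STRING,
  last_seen STRING,
  -- unnamed array element: the type goes directly inside ARRAY<...>
  location_backfill ARRAY<
    STRUCT<
      backfill_type: BIGINT,
      brq: BIGINT,
      first_seen: STRING,
      last_seen: STRING,
      num_days: BIGINT>>
)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
  's3://<location>'
TBLPROPERTIES (
  'parquet.compress' = 'SNAPPY',
  'transient_lastDdlTime' = '<sometime>')
```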

Call PL/SQL function with record type as input

I have to call a PL/SQL function in an Oracle DB with the signature given below:
FUNCTION funcName (input IN input_type) RETURN funcName_RETURN;
input_type is defined as below:
create or replace TYPE "INPUT_ROW" AS OBJECT
(
Data1 VARCHAR2(255 BYTE),
Data2 VARCHAR2(255 BYTE)
)
create or replace TYPE "INPUT_TABLE"
AS VARRAY (50000) OF INPUT_ROW
create or replace TYPE "INPUT_TYPE" AS OBJECT
(
file_date DATE,
all_rows INPUT_TABLE
)
I am trying to call this function from another PL/SQL block to insert data with multiple rows.
Not sure how you want to call your function, but let's assume it's in SQL:
select funcname( input_type (
date '2019-02-05' -- file_date
, input_table (
input_row('some val 1', 'another val 1')
, input_row('some val 2', 'another val 2')
) -- all rows
) -- input
) as funcname
from dual
This instantiates all the required objects with hard-coded values. Perhaps you want to pick them up from some table? If so, the principle is the same: instantiate each object from the pertinent data source.
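If you'd rather call it from another PL/SQL block, as the question suggests, the same object construction works there too. A sketch, assuming funcName_RETURN is a type you can declare a local variable of (the values below are the same hard-coded placeholders as in the SQL version):

```sql
DECLARE
  l_input  input_type;
  l_result funcname_return;  -- assumed declarable return type
BEGIN
  -- Build the nested object: a date plus a varray of rows
  l_input := input_type(
    DATE '2019-02-05',                            -- file_date
    input_table(
      input_row('some val 1', 'another val 1'),
      input_row('some val 2', 'another val 2')
    )                                             -- all_rows
  );

  l_result := funcname(l_input);
END;
/
```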

ORA-01722: invalid number with to_char timestamp

First, the table I'm trying to insert into is this table:
CREATE TABLE Message
(
MessageID varchar2(80) NOT NULL,
Message varchar2(500),
SendDate date NOT NULL,
SendID varchar2(50) NOT NULL,
Request_ID varchar2(50) NOT NULL,
PRIMARY KEY (MessageID)
);
and my insert query is this (Spring, MyBatis):
INSERT INTO message (
messageid
, message
, senddate
, sendId
, request_Id
)VALUES(
#{sendidjbuser} + TO_CHAR(systimestamp, 'yyyymmddhh24missff3')
, #{message}
, sysdate
, #{sendidjbuser}
, #{requestIdjbuser}
)
I tried this at the command line, and this part of the above query was the problem:
INSERT INTO message (messageId) VALUES('sendId' + TO_CHAR(systimestamp, 'yyyymmddmissff3'))
I'm on Oracle 11. I just tried inserting only TO_CHAR(systimestamp, 'yyyymmddmissff3') without concatenating it to a string, and it worked. But I do need that part to work. Is there a right way to do that?
In Oracle, use || or the CONCAT() function to concatenate strings. You are using +, which Oracle treats as numeric addition, so it tries to convert both operands to numbers and raises ORA-01722: invalid number.
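Applied to the insert statement from the question, the fix is a single operator change; the rest of the statement stays as it was:

```sql
INSERT INTO message (
  messageid
, message
, senddate
, sendId
, request_Id
) VALUES (
  -- || concatenates; + would force a numeric conversion and fail
  #{sendidjbuser} || TO_CHAR(systimestamp, 'yyyymmddhh24missff3')
, #{message}
, sysdate
, #{sendidjbuser}
, #{requestIdjbuser}
)
```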

Elasticsearch Hadoop integration - java.lang.ClassCastException

I downloaded the Elasticsearch 2.1.2 JAR and followed the guide to configure it in Hadoop (v5.4.4). Everything looks OK, but I am getting a 'cast' error while reading from the Elasticsearch source. Below is the error message:
Failed with exception java.io.IOException:org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassCastException: org.elasticsearch.hadoop.mr.WritableArrayWritable cannot be cast to org.apache.hadoop.io.Text
Below is the table created in Hive:
CREATE EXTERNAL TABLE Log_Event_ICS_ES(
product_version string,
agent_host string,
product_name string,
temp_time_stamp bigint,
log_message string,
org_id string,
log_datetime timestamp,
message string,
log_source_provider string,
log_source_name string,
log_message_for_trending string,
index_only_message string,
log_level string,
code_source string,
log_type string,
full_message string,
session_log_operation string,
source_received_time timestamp
)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource' = 'log_event_2015-05-11/log_event',
'es.nodes' = '',
'es.port' = ''
)
Select query: select * from log_event_ics_es
Any idea?

Identifier is too long while loading from SQL*Loader

I have a table structure like this
CREATE TABLE acn_scr_upload_header
(
FILE_RECORD_DESCRIPTOR varchar2(5) NOT NULL,
schedule_no Number(10) NOT NULL,
upld_time_stamp Date NOT NULL,
seq_no number NOT NULL,
filename varchar2(100) ,
schedule_date_time Date
);
When I try to load my file via SQL*Loader, I get an error on this value in the column filename: Stock_Count_Request_01122014010101.csv. The error is:
Error on table ACN_SCR_UPLOAD_HEADER, column FILENAME.
ORA-00972: identifier is too long.
If I try to insert the same value into the table using an INSERT statement it works fine.
My data file Stock_Count_Request_01122014010101.csv looks like:
FHEAD,1,12345,20141103
FDETL,7,100,W,20141231,SC100,B,N,1,5
FTAIL,8,6
and the control file is:
LOAD DATA
INFILE '$IN_DIR/$FILENAME'
APPEND
INTO TABLE ACN_SCR_UPLOAD_HEADER
WHEN FILE_RECORD_DESCRIPTOR = 'FHEAD'
FIELDS TERMINATED BY ","
TRAILING NULLCOLS
(
FILE_RECORD_DESCRIPTOR position(1),
LINE_NO FILLER,
schedule_no ,
schedule_date_time,
upld_time_stamp sysdate,
seq_no "TJX_STOCK_COUNT_REQ_UPLD_SEQ.NEXTVAL",
FILENAME constant ""
)