Confluent HDFS Connector - hadoop

I want to move Kafka log data into HDFS files, so I made the following HDFS Connector configuration:
/quickstart-hdfs.properties
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=kafka_log_test
hdfs.url=hdfs://10.100.216.60:9000
flush.size=100000
hive.integration=true
hive.metastore.uris=thrift://localhost:9083
schema.compatibility=BACKWARD
format.class=io.confluent.connect.hdfs.parquet.ParquetFormat
partitioner.class=io.confluent.connect.hdfs.partitioner.HourlyPartitioner
/connect-avro-standalone.properties
bootstrap.servers=localhost:9092
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
When I run the HDFS Connector, it only writes the Avro schema into the .avro file, not the data.
/kafka_log_test+0+0000000018+0000000020.avro
avro.schema {"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}],"connect.version":1,"connect.name":"myrecord"}
The topic has lots of data, but the Confluent HDFS Connector doesn't move it to HDFS.
How can I resolve this problem?

By definition, unless the messages were compacted or expired, a file named 0+0000000018+0000000020 will contain the records from partition 0 between offsets 18 and 20.
You should use the tojson command of avro-tools rather than getmeta to see the actual records.
Or you can use Spark or Pig to read that file.
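For example, a quick check might look like this (the avro-tools version and the HDFS directory are placeholders; adjust them to your environment):
hdfs dfs -get /topics/kafka_log_test/<partition dir>/kafka_log_test+0+0000000018+0000000020.avro .
java -jar avro-tools-1.8.2.jar tojson kafka_log_test+0+0000000018+0000000020.avro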
You might also want to verify that the connector keeps running after you start it, because setting hive.metastore.uris=thrift://localhost:9083 on a machine that is not the Hive Metastore server will cause the Connect task to fail. The URI should point at the actual Hive host, just as you've done for the NameNode.
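One way to check, assuming your Connect version exposes the status endpoint on the default REST port 8083 and using the connector name from your properties file:
curl http://localhost:8083/connectors/hdfs-sink/status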
Also, it shouldn't be possible to get a .avro file extension with format.class=io.confluent.connect.hdfs.parquet.ParquetFormat anyway, so you might want to verify you are looking in the correct HDFS path. Note: Connect writes to a +tmp location temporarily before writing the final output files.
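A hedged sketch of what to look for, assuming the connector's default topics.dir of /topics (temporary files show up under a +tmp directory until a file is committed):
hdfs dfs -ls -R /topics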

Related

How to copy a file from a GCS bucket in Dataproc to HDFS using google cloud?

I had uploaded the data file to the GCS bucket of my project in Dataproc. Now I want to copy that file to HDFS. How can I do that?
For a single "small" file
You can copy a single file from Google Cloud Storage (GCS) to HDFS using the hdfs copy command. Note that you need to run this from a node within the cluster:
hdfs dfs -cp gs://<bucket>/<object> <hdfs path>
This works because hdfs://<master node> is the default filesystem. You can explicitly specify the scheme and NameNode if desired:
hdfs dfs -cp gs://<bucket>/<object> hdfs://<master node>/<hdfs path>
Note that GCS objects use the gs: scheme. Paths should appear the same as they do when you use gsutil.
For a "large" file or large directory of files
When you use hdfs dfs, data is piped through your local machine. If you have a large dataset to copy, you will likely want to do this in parallel on the cluster using DistCp:
hadoop distcp gs://<bucket>/<directory> <HDFS target directory>
Consult the DistCp documentation for details.
Consider leaving data on GCS
Finally, consider leaving your data on GCS. Because the GCS connector implements Hadoop's distributed filesystem interface, it can be used as a drop-in replacement for HDFS in most cases. Notable exceptions are when you rely on (most) atomic file/directory operations or want to use a latency-sensitive application like HBase. The Dataproc HDFS migration guide gives a good overview of data migration.
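For example, on a Dataproc node (where the GCS connector is preinstalled), the same gs: paths work directly with the Hadoop tooling, so jobs can read the data in place rather than copying it:
hadoop fs -ls gs://<bucket>/<directory>
hadoop fs -cat gs://<bucket>/<object>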

Write to HDFS/Hive using NiFi

I'm using NiFi 1.6.0.
I'm trying to write to HDFS and to Hive (Cloudera) with NiFi.
On "PutHDFS" I'm configure the "Hadoop Confiugration Resources" with hdfs-site.xml, core-site.xml files, set the directories and when I'm trying to Start it I got the following error:
"Failed to properly initialize processor, If still shcedule to run,
NIFI will attempt to initalize and run the Processor again after the
'Administrative Yield Duration' has elapsed. Failure is due to
java.lang.reflect.InvocationTargetException:
java.lang.reflect.InvicationTargetException"
On "PutHiveStreaming" I'm configure the "Hive Metastore URI" with
thrift://..., the database and the table name and on "Hadoop
Confiugration Resources" I'm put the Hive-site.xml location and when
I'm trying to Start it I got the following error:
"Hive streaming connect/write error, flow file will be penalized and routed to retry.
org.apache.nifi.util.hive.HiveWritter$ConnectFailure: Failed connectiong to EndPoint {metaStoreUri='thrift://myserver:9083', database='mydbname', table='mytablename', partitionVals=[]}:".
How can I solve the errors?
Thanks.
For #1, if you got your *-site.xml files from the cluster, it's possible that they refer to components like the DataNodes by internal IPs, which you won't be able to reach directly from the NiFi host. Try setting dfs.client.use.datanode.hostname to true in the hdfs-site.xml on the client, for example:
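A minimal sketch of the override in the client-side hdfs-site.xml (the property name is standard Hadoop; everything else in your file stays as it is):
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>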
For #2, I'm not sure PutHiveStreaming will work against Cloudera, IIRC they use Hive 1.1.x and PutHiveStreaming is based on 1.2.x, so there may be some Thrift incompatibilities. If that doesn't seem to be the issue, make sure the client can connect to the metastore port (looks like 9083).
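A quick connectivity check from the NiFi host, using the metastore host and port from your error message:
nc -vz myserver 9083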

Need help debugging kafka source to hdfs sink with flume

I'm trying to send data from Kafka (eventually we'll use Kafka running on a different instance) to HDFS. I think Flume or some other ingestion tool is necessary to get data into HDFS, so we're using Cloudera's Flume service and HDFS.
This is my flume-conf file; the other conf file is empty:
tier1.sources=source1
tier1.channels=channel1
tier1.sinks=sink1
tier1.sources.source1.type=org.apache.flume.source.kafka.KafkaSource
tier1.sources.source1.zookeeperConnect=localhost:2181
tier1.sources.source1.topic=test
tier1.sources.source1.groupId=flume
tier1.sources.source1.channels=channel1
tier1.sources.source1.interceptors=i1
tier1.sources.source1.interceptors.i1.type=timestamp
tier1.sources.source1.kafka.consumer.timeout.ms=100
tier1.channels.channel1.type=memory
tier1.channels.channel1.capacity=10000
tier1.channels.channel1.transactionCapacity=1000
tier1.sinks.sink1.type=hdfs
tier1.sinks.sink1.hdfs.path=/tmp/kafka/test/data
tier1.sinks.sink1.hdfs.rollInterval=5
tier1.sinks.sink1.hdfs.rollSize=0
tier1.sinks.sink1.hdfs.rollCount=0
tier1.sinks.sink1.hdfs.fileType=DataStream
When I start a Kafka consumer, it gets messages from the Kafka producer just fine on localhost:2181. But I don't see any errors from the Flume agent, nothing gets put into HDFS, and I also can't find any log files.
This is how I start the agent:
flume-ng agent --conf /opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/lib/flume-ng/conf --conf-file flume-conf --name agent1 -Dflume.root.logger=DEBUG,INFO,console
Help please?
Fixed it.
You have to change --name agent1 to --name tier1 so the agent name matches the tier1 prefix used in the configuration file; the corrected start command is shown below.
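With only that change, the start command from the question becomes:
flume-ng agent --conf /opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/lib/flume-ng/conf --conf-file flume-conf --name tier1 -Dflume.root.logger=DEBUG,INFO,console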

Reading a file in Spark in cluster mode in Amazon EC2

I'm trying to execute a Spark program in cluster mode on Amazon EC2 using
spark-submit --master spark://<master-ip>:7077 --deploy-mode cluster --class com.mycompany.SimpleApp ./spark.jar
And the class has a line that tries to read a file:
JavaRDD<String> logData = sc.textFile("/user/input/CHANGES.txt").cache();
I'm unable to read this txt file in cluster mode even though I can read it in standalone mode. In cluster mode it tries to read from HDFS, so I put the file into HDFS (the persistent HDFS at /root/persistent-hdfs) using
hadoop fs -mkdir -p /wordcount/input
hadoop fs -put /app/hadoop/tmp/input.txt /wordcount/input/input.txt
And I can see the file using hadoop fs -ls /wordcount/input, but Spark is still unable to read it. Any idea what I'm doing wrong? Thanks.
You might want to check the following points:
Is the file really in the persistent HDFS?
It seems that you just copied the input file from /app/hadoop/tmp/input.txt to /wordcount/input/input.txt, all on the same node's disk. I believe you misunderstand the functionality of the hadoop commands.
Instead, you should try putting the file explicitly into the persistent HDFS (/root/persistent-hdfs/), and then loading it using the hdfs://... prefix, for example:
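A minimal sketch, assuming the spark-ec2 layout from your cluster; the persistent HDFS port comes from fs.default.name in /root/persistent-hdfs/conf/core-site.xml:
/root/persistent-hdfs/bin/hadoop fs -mkdir -p /wordcount/input
/root/persistent-hdfs/bin/hadoop fs -put /app/hadoop/tmp/input.txt /wordcount/input/input.txt
and then in the driver:
JavaRDD<String> logData = sc.textFile("hdfs://<master-ip>:<persistent-hdfs-port>/wordcount/input/input.txt").cache();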
Is the persistent HDFS server up?
Please take a look here, it seems Spark only starts the ephemeral HDFS server by default. In order to switch to the persistent HDFS server, you must do the following:
1) Stop the ephemeral HDFS server: /root/ephemeral-hdfs/bin/stop-dfs.sh
2) Start the persistent HDFS server: /root/persistent-hdfs/bin/start-dfs.sh
Please try these things; I hope they serve you well.

how do i backup hbase using distcp?

I would like to do a backup of HBase files using DistCp, then point HBase to the newly copied files and work with the stored tables.
I realize that there are tools out there which are recommended for this job. However, I'd like to know what I need to do after I've copied the files to get HBase to recognize them.
For example, I'd like to start the HBase shell and scan the stored tables from the newly copied files.
DistCp (distributed copy) is a tool used for large inter/intra-cluster copying. So if you want to back up your clusterA to clusterB, you'll have to:
do the copy from clusterA to clusterB using distcp
start an HBase Master and some RegionServers
enjoy the command line interface on clusterB
This means having 2 clusters, each with HDFS and HBase.
But if you want to back up your data within the same cluster, it is simpler:
do the intra copy in a different folder: hadoop distcp hdfs://nn:8020/hbase hdfs://nn:8020/backuptest
stop all the HBase processes and change the property hbase.rootdir from "hbase" to "backuptest" (see the snippet after this list)
restart all the processes
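For example, the hbase-site.xml change might look like this (using the same NameNode address as in the distcp command above):
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://nn:8020/backuptest</value>
</property>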
