Integrating Flume with Kafka

Published: 2016-02-12 · Category: Big Data
Flume official download page: https://flume.apache.org/download.html
Version 1.6.0 (the latest at the time of writing) is recommended, because it bundles the Kafka sink plugin, which can be configured and used directly.

1. Download and extract the apache-flume-1.6.0-bin.tar.gz package

Extract the archive with tar -zxvf apache-flume-1.6.0-bin.tar.gz.
Flume is installed as soon as it is extracted; all that remains is to edit conf/flume-conf.properties and start the agent with the features you need.
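For reference, the whole step as a sequence of shell commands, a minimal sketch assuming the tarball is fetched from the Apache archive into the current directory (the mirror URL is an assumption; any Flume 1.6.0 binary tarball will do):

# fetch the 1.6.0 binary tarball (mirror URL is an assumption)
wget https://archive.apache.org/dist/flume/1.6.0/apache-flume-1.6.0-bin.tar.gz
# unpack; the extracted directory is the complete installation
tar -zxvf apache-flume-1.6.0-bin.tar.gz
# the agent configuration edited in the next step lives under conf/
ls apache-flume-1.6.0-bin/conf/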

2. Configure Flume to connect to Kafka

With the Kafka cluster already up and running (set up previously), all that is needed here is a proper conf/flume-conf.properties. The configuration is as follows:

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type=avro
a1.sources.r1.bind=master
a1.sources.r1.port=41414

# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = testflume
a1.sinks.k1.brokerList = 192.168.230.129:9092,192.168.230.130:9092,192.168.230.131:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000000
a1.channels.c1.transactionCapacity = 10000

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Note: the source must be configured for your actual needs; here an avro source is set up to listen on port 41414 of the master host.

Note: this is the configuration the agent is started with, and the Flume client used later must target the same host and port.

In the sink configuration above:
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink is fixed and must not be changed;
a1.sinks.k1.topic = testflume is the topic the sink writes to, so change it to whatever topic you need;
a1.sinks.k1.brokerList = 192.168.230.129:9092,192.168.230.130:9092,192.168.230.131:9092 must list the brokers of your actual Kafka cluster.
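If automatic topic creation is disabled on the brokers, the testflume topic must exist before the sink writes to it. A minimal sketch using the kafka-topics.sh tool shipped with Kafka 0.8.1.1 (the partition and replication counts are assumptions; pick values that fit your cluster):

[root@slave1 kafka_2.10-0.8.1.1]# ./bin/kafka-topics.sh --create --zookeeper master:2181,slave1:2181,slave2:2181 --replication-factor 2 --partitions 3 --topic testflume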

Note: a1 here is the agent name defined in the configuration file; it is not an arbitrary string.

At this point the Flume-to-Kafka integration itself is complete; what remains is to test it.

3. Start Flume connected to Kafka

[root@master flume-1.6.0]# bin/flume-ng agent -c ./conf/ -f conf/flume-conf.properties -Dflume.root.logger=DEBUG,console -n a1
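Before sending any data, it is worth confirming that the avro source actually came up and is listening. A quick check from the master host (netstat is an assumption; ss -tlnp reports the same on newer systems):

[root@master ~]# netstat -tlnp | grep 41414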

4. Start a Kafka consumer to receive the data

[root@slave1 kafka_2.10-0.8.1.1]# ./bin/kafka-console-consumer.sh --zookeeper master:2181,slave1:2181,slave2:2181 --from-beginning --topic testflume

Note: the topic passed to the consumer must match the one configured for the Kafka sink, testflume here.

5. Run the test program

[Figure: program structure diagram]
package com.matrix.flume;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

import java.nio.charset.Charset;

public class MyApp {
    public static void main(String[] args) {
        // Connect to the avro source configured above (host "master", port 41414).
        MyRpcClientFacade client = new MyRpcClientFacade();
        client.init("master", 41414);

        // Send 10 sample events; they should show up on the Kafka consumer.
        String sampleData = "Hello Flume!";
        for (int i = 0; i < 10; i++) {
            client.sendDataToFlume(sampleData);
        }

        client.cleanUp();
    }
}

class MyRpcClientFacade {
    private RpcClient client;
    private String hostname;
    private int port;

    public void init(String hostname, int port) {
        this.hostname = hostname;
        this.port = port;
        // The default client is the avro RPC client, matching the avro source.
        this.client = RpcClientFactory.getDefaultInstance(hostname, port);
    }

    public void sendDataToFlume(String data) {
        // Wrap the string in a Flume event with a UTF-8 body.
        Event event = EventBuilder.withBody(data, Charset.forName("UTF-8"));
        try {
            client.append(event);
        } catch (EventDeliveryException e) {
            // The event was not delivered; rebuild the client and carry on.
            client.close();
            client = null;
            client = RpcClientFactory.getDefaultInstance(hostname, port);
        }
    }

    public void cleanUp() {
        // Release the RPC connection when done.
        client.close();
    }
}
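To compile and run the test program, the Flume client SDK (flume-ng-sdk and its dependencies) must be on the classpath. A minimal sketch that simply reuses the jars from the Flume lib directory (the paths are assumptions; adjust them to your layout):

[root@master ~]# javac -cp "flume-1.6.0/lib/*" com/matrix/flume/MyApp.java
[root@master ~]# java -cp ".:flume-1.6.0/lib/*" com.matrix.flume.MyApp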

6. Check the data received by the Kafka consumer
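Given the loop in MyApp, the console consumer started in step 4 should print the sample message once per delivered event, ten times in total:

Hello Flume!
Hello Flume!
... (ten lines in total)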

Author: Matrix
Original link: https://matrixsparse.github.io/2016/02/12/Flume与Kafka整合/
Copyright: unless otherwise noted, all posts on this blog are licensed under CC BY-NC-SA 4.0. Please credit the source when reposting!
