agentx.sources.execsource.channels = memorychanne2
agentx.sinks.filesink.channel = memorychanne2
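The two lines above only bind the channel. For context, here is a hedged completion of the relevant part of agentx.conf; the types, the exec command, and the sink directory are assumptions inferred from the outputs shown later in this walkthrough, not taken from the original file:

```properties
# Assumed completion of agentx.conf -- values below are inferred from the
# walkthrough's output, not copied from the original configuration.
agentx.channels.memorychanne2.type = memory
agentx.sources.execsource.type = exec
# the exec command is a guess based on the file contents in step (5)
agentx.sources.execsource.command = echo 'Hello netcat and exec Flume!'
agentx.sinks.filesink.type = file_roll
agentx.sinks.filesink.sink.directory = /home/hadoop/flume/files
```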
(2) Put agentx.conf in Flume's working directory (mine is /usr/hadoop/flume), then start the agent:
[hadoop@Master flume]$ ls
agent1.conf agent2.conf agent3.conf agentx.conf conf derby.log logs
[hadoop@Master flume]$ flume-ng agent --conf conf --conf-file agentx.conf --name agentx
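If the logger sink's events should appear directly on the console rather than only in the log file, the agent can also be started with an explicit root-logger override (optional here; step (4) below reads the events from the log):

```
flume-ng agent --conf conf --conf-file agentx.conf --name agentx \
    -Dflume.root.logger=INFO,console
```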
Info: Sourcing environment configuration script /usr/hadoop/flume/conf/flume-env.sh
Info: Including Hadoop libraries found via (/usr/hadoop/bin/hadoop) for HDFS access
Info: Excluding /usr/hadoop/libexec/../lib/slf4j-api-1.4.3.jar from classpath
Info: Excluding /usr/hadoop/libexec/../lib/slf4j-log4j12-1.4.3.jar from classpath
... (middle of the output omitted) ...
(3) Then, from the slave node slave1, connect to the master's netcat source and type a few lines (telnet can be installed first with `yum install -y telnet`; here curl's telnet:// support is used instead):
[hadoop@Slave1 ~]$ curl telnet://172.16.2.17:3000
Test from slave1
OK
about netcat and
OK
exec!
OK
^C
[hadoop@Slave1 ~]$
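In the transcript above, each line typed on slave1 is acknowledged with "OK" by Flume's netcat source. As an illustration (not the actual Flume code), a minimal Python stand-in for this line-in, "OK"-out protocol looks like:

```python
import socket
import threading

events = []

def netcat_source(srv):
    # Minimal stand-in for Flume's netcat source: each received line becomes
    # an event, and the source acknowledges every line with "OK".
    conn, _ = srv.accept()
    with conn, conn.makefile("rwb") as f:
        for line in f:
            events.append(line.decode().strip())
            f.write(b"OK\n")
            f.flush()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))        # ephemeral port; the tutorial binds port 3000
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=netcat_source, args=(srv,))
t.start()

# Client side, playing the role of `curl telnet://172.16.2.17:3000` on slave1.
acks = []
with socket.create_connection(("127.0.0.1", port)) as cli:
    with cli.makefile("rwb") as cf:
        for msg in (b"Test from slave1\n", b"about netcat and\n", b"exec!\n"):
            cf.write(msg)
            cf.flush()
            acks.append(cf.readline().decode().strip())  # "OK" ack per line

t.join()
srv.close()
print(events, acks)
```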
(4) Check the logger sink's output in the log:
05 Jan 2015 10:32:58,302 INFO [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.LoggerSink.process:70) - Event: { headers:{} body: 54 65 73 74 20 66 72 6F 6D 20 73 6C 61 76 65 31 Test from slave1 }
05 Jan 2015 10:33:07,306 INFO [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.LoggerSink.process:70) - Event: { headers:{} body: 61 62 6F 75 74 20 6E 65 74 63 61 74 20 61 6E 64 about netcat and }
05 Jan 2015 10:33:23,936 INFO [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.LoggerSink.process:70) - Event: { headers:{} body: 65 78 65 63 21 exec! }
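Each logged event shows its body twice: as hex bytes and as text. The hex column can be decoded directly, for example in Python:

```python
# Hex body column from the first logged event above.
body_hex = "54 65 73 74 20 66 72 6F 6D 20 73 6C 61 76 65 31"
body = bytes.fromhex(body_hex.replace(" ", ""))
print(body.decode("ascii"))   # → Test from slave1
```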
(5) Check the output files:
[hadoop@Master flume]$ cd /home/hadoop/flume/files/
[hadoop@Master files]$ ls
1420384890083-1 1420386984869-1 1420388077115-1 1420423829240-1
[hadoop@Master files]$ cat 1420423829240-1
Hello netcat and exec Flume!