
hadoop-lzo

Prerequisites

1. Hadoop installed

2. JDK and Maven installed (configure Maven to use the Aliyun mirror ahead of time, otherwise downloads are painfully slow!)

3. Install the required system libraries:

yum -y install lzo-devel zlib-devel gcc autoconf automake libtool
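The Aliyun mirror mentioned above goes into Maven's settings.xml (usually ~/.m2/settings.xml or $MAVEN_HOME/conf/settings.xml). A minimal sketch; the id and name values are arbitrary:

```xml
<!-- Mirror entry for settings.xml; routes central-repo requests to Aliyun -->
<mirrors>
  <mirror>
    <id>aliyun</id>
    <name>Aliyun public mirror</name>
    <url>http://maven.aliyun.com/nexus/content/groups/public</url>
    <mirrorOf>central</mirrorOf>
  </mirror>
</mirrors>
```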

LZO setup walkthrough

Download and install

# Download
wget http://www.oberhumer.com/opensource/lzo/download/lzo-2.10.tar.gz

# Extract
tar -zxvf lzo-2.10.tar.gz
cd lzo-2.10

# Configure the install prefix
./configure --prefix=/usr/local/hadoop/lzo/

# Build and install
make && make install
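A quick sanity check that the headers and library landed where --prefix pointed (the path below is the one used above; adjust if yours differs):

```shell
# Verify the LZO install prefix used by ./configure above.
PREFIX=/usr/local/hadoop/lzo
for d in include lib; do
    if [ -d "$PREFIX/$d" ]; then
        echo "found: $PREFIX/$d"
    else
        echo "missing: $PREFIX/$d (did make install succeed?)"
    fi
done
```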

Building hadoop-lzo from source

Download the hadoop-lzo source from https://github.com/twitter/hadoop-lzo/archive/master.zip
After extracting, edit pom.xml and set the Hadoop version:

<hadoop.current.version>2.6.0-cdh5.16.2</hadoop.current.version>

Note: CDH versions of Hadoop also need the repositories below added to the repositories section of pom.xml:

<repository>
    <id>cloudera</id>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
</repository>

<repository>
    <id>aliyun</id>
    <url>http://maven.aliyun.com/nexus/content/groups/public</url>
</repository>

Declare two temporary environment variables so the build can find the LZO headers and library:

export C_INCLUDE_PATH=/usr/local/hadoop/lzo/include
export LIBRARY_PATH=/usr/local/hadoop/lzo/lib

Enter the hadoop-lzo-master directory and run the Maven build:

mvn package -Dmaven.test.skip=true

In the target directory, hadoop-lzo-0.4.21-SNAPSHOT.jar is the successfully built hadoop-lzo component.

Rename the built jar to hadoop-lzo-0.4.20.jar, place it under $HADOOP_HOME/share/hadoop/common/, and distribute it to every node.
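Distribution can be as simple as scp in a loop. A dry-run sketch; hadoop102 and hadoop103 are hypothetical worker hostnames, so replace them with your own and drop the leading echo to actually copy:

```shell
# Dry-run distribution of the renamed jar to worker nodes.
# hadoop102/hadoop103 are placeholder hostnames -- substitute your own.
JAR=hadoop-lzo-0.4.20.jar
DEST=${HADOOP_HOME:-/opt/hadoop}/share/hadoop/common
for host in hadoop102 hadoop103; do
    echo scp "$DEST/$JAR" "$host:$DEST/"   # drop 'echo' to really copy
done
```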

Hadoop configuration files

Add the following:

core-site.xml

<property>
    <name>io.compression.codecs</name>
    <value>
        org.apache.hadoop.io.compress.GzipCodec,
        org.apache.hadoop.io.compress.DefaultCodec,
        org.apache.hadoop.io.compress.BZip2Codec,
        org.apache.hadoop.io.compress.SnappyCodec,
        com.hadoop.compression.lzo.LzoCodec,
        com.hadoop.compression.lzo.LzopCodec
    </value>
</property>

<property>
    <name>io.compression.codec.lzo.class</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>

mapred-site.xml

<property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
</property>
<property>
    <name>mapreduce.map.output.compress.codec</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>

Remember to restart the cluster afterwards so the new jar and configuration take effect.
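For a plain sbin-script deployment, restarting usually means bouncing HDFS and YARN. A guarded sketch using the stock Hadoop 2.x scripts (adjust for your deployment, e.g. Cloudera Manager-managed clusters restart through CM instead):

```shell
# Restart HDFS and YARN if the standard Hadoop sbin scripts are on PATH.
if command -v stop-dfs.sh >/dev/null 2>&1; then
    stop-yarn.sh
    stop-dfs.sh
    start-dfs.sh
    start-yarn.sh
else
    echo "Hadoop sbin scripts not on PATH; restart the daemons manually."
fi
```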

Testing LZO

For this test, lzop was installed in advance; the test file was compressed locally and then uploaded to HDFS.

yum -y install lzop

Generate test data with Python 3 (the test job is wordcount):

import random

# Write tab-separated random pairs; reduce the row count for a quicker test.
filename = 'test.txt'
with open(filename, 'w') as file_object:
    for _ in range(1000000000):
        file_object.write(str(random.randint(1, 100000)) + "\t" + str(random.randint(1, 5)) + "\n")

Compress the file and upload the .lzo data:

lzop test.txt
hadoop fs -put test.txt.lzo /wc/test.txt.lzo

Test without an index

hadoop jar hadoop-mapreduce-examples-2.6.0-cdh5.16.2.jar wordcount \
-Dmapreduce.map.output.compress=true \
-Dmapreduce.map.output.compress.codec=com.hadoop.compression.lzo.LzopCodec \
-Dmapreduce.output.fileoutputformat.compress=true \
-Dmapreduce.output.fileoutputformat.compress.codec=com.hadoop.compression.lzo.LzopCodec \
/wc/test.txt.lzo /wc/out

20/04/10 17:39:51 INFO mapreduce.JobSubmitter: number of splits:1

Only one split: without an index, an .lzo file is not splittable, so the whole file is handed to a single mapper.

Test with an index

# Build an index for the .lzo file first
hadoop jar hadoop-lzo-0.4.21-SNAPSHOT.jar com.hadoop.compression.lzo.LzoIndexer /wc/test.txt.lzo

hadoop jar hadoop-mapreduce-examples-2.6.0-cdh5.16.2.jar wordcount \
-Dmapreduce.job.inputformat.class=com.hadoop.mapreduce.LzoTextInputFormat \
-Dmapreduce.map.output.compress=true \
-Dmapreduce.map.output.compress.codec=com.hadoop.compression.lzo.LzopCodec \
-Dmapreduce.output.fileoutputformat.compress=true \
-Dmapreduce.output.fileoutputformat.compress.codec=com.hadoop.compression.lzo.LzopCodec \
/wc/test.txt.lzo /wc/out

20/04/10 18:18:46 INFO input.FileInputFormat: Total input paths to process : 1
20/04/10 18:18:46 INFO mapreduce.JobSubmitter: number of splits:4

With the index in place (and LzoTextInputFormat as the input format), the same file is split into 4 pieces and can be processed by multiple mappers in parallel.