[Experience Share] Quickly Setting Up a Hadoop Environment

  Two things matter most in Hadoop: the distributed file system HDFS and the MapReduce computation model. Below I walk through how I set up a Hadoop environment.
  

  
Hadoop Test Environment
  


  • 4 test machines in total: 1 namenode and 3 datanodes

  • OS version: RHEL 5.5 x86_64
  • Hadoop: 0.20.203.0
  • JDK: jdk1.7.0

  • Role        IP address
  • namenode    192.168.57.75
  • datanode1   192.168.57.76
  • datanode2   192.168.57.78
  • datanode3   192.168.57.79
  

  
I. Preparation Before Deploying Hadoop
  


  • 1. Know that Hadoop depends on Java and SSH
  • Java 1.5.x or later must be installed.
  • ssh must be installed, and sshd must be kept running, so the Hadoop scripts can manage the remote Hadoop daemons.

  • 2. Create a common Hadoop account
  • All nodes should use the same username; add it with:
  • useradd hadoop
  • passwd hadoop

  • 3. Configure hostnames in /etc/hosts
  • tail -n 4 /etc/hosts
  • 192.168.57.75  namenode
  • 192.168.57.76  datanode1
  • 192.168.57.78  datanode2
  • 192.168.57.79  datanode3

  • 4. All of the above must be configured identically on every node (namenode and datanodes); a scripted sketch of steps 2 and 3 follows this list.
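
  • A minimal sketch automating steps 2 and 3 from one admin host, assuming root SSH access to every node; the password is a placeholder, and passwd --stdin is RHEL-specific:
  • for ip in 192.168.57.75 192.168.57.76 192.168.57.78 192.168.57.79; do
  •   ssh root@$ip 'useradd hadoop; echo hadoop123 | passwd --stdin hadoop'   # hadoop123 is a placeholder password; change it
  •   ssh root@$ip 'tee -a /etc/hosts >/dev/null' <<'EOF'
  • 192.168.57.75  namenode
  • 192.168.57.76  datanode1
  • 192.168.57.78  datanode2
  • 192.168.57.79  datanode3
  • EOF
  • done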
  

  
II. SSH Configuration
  
  


  • 1. Generate the private key id_rsa and public key id_rsa.pub
  • [hadoop@hadoop1 ~]$ ssh-keygen -t rsa
  • Generating public/private rsa key pair.
  • Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
  • Enter passphrase (empty for no passphrase):
  • Enter same passphrase again:
  • Your identification has been saved in /home/hadoop/.ssh/id_rsa.
  • Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
  • The key fingerprint is:
  • d6:63:76:43:e2:5b:8e:85:ab:67:a2:7c:a6:8f:23:f9 hadoop@hadoop1.test.com
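
  • The same key can also be generated non-interactively, in one line; the empty passphrase suits a lab setup, not production:
  • ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa   # -P '' sets an empty passphrase, -f fixes the output path, so nothing prompts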

  • 2. Confirm that the private key id_rsa and public key id_rsa.pub were created
  • [hadoop@hadoop1 ~]$ ls .ssh/
  • authorized_keys  id_rsa  id_rsa.pub  known_hosts

  • 3. Upload the public key to each datanode server (and to localhost)
  • [hadoop@hadoop1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@datanode1
  • hadoop@datanode1's password:
  • Now try logging into the machine, with "ssh 'hadoop@datanode1'", and check in:

  •   .ssh/authorized_keys

  • to make sure we haven't added extra keys that you weren't expecting.

  • [hadoop@hadoop1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@datanode2
  • hadoop@datanode2's password:
  • Now try logging into the machine, with "ssh 'hadoop@datanode2'", and check in:

  •   .ssh/authorized_keys

  • to make sure we haven't added extra keys that you weren't expecting.

  • [hadoop@hadoop1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@datanode3
  • hadoop@datanode3's password:
  • Now try logging into the machine, with "ssh 'hadoop@datanode3'", and check in:

  •   .ssh/authorized_keys

  • to make sure we haven't added extra keys that you weren't expecting.

  • [hadoop@hadoop1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@localhost
  • hadoop@localhost's password:
  • Now try logging into the machine, with "ssh 'hadoop@localhost'", and check in:

  •   .ssh/authorized_keys

  • to make sure we haven't added extra keys that you weren't expecting.


  • 4. Verify passwordless login (a scripted check follows these transcripts)
  • [hadoop@hadoop1 ~]$ ssh datanode1
  • Last login: Thu Feb  2 09:01:16 2012 from 192.168.57.71
  • [hadoop@hadoop2 ~]$ exit
  • logout

  • [hadoop@hadoop1 ~]$ ssh datanode2
  • Last login: Thu Feb  2 09:01:18 2012 from 192.168.57.71
  • [hadoop@hadoop3 ~]$ exit
  • logout

  • [hadoop@hadoop1 ~]$ ssh datanode3
  • Last login: Thu Feb  2 09:01:20 2012 from 192.168.57.71
  • [hadoop@hadoop4 ~]$ exit
  • logout

  • [hadoop@hadoop1 ~]$ ssh localhost
  • Last login: Thu Feb  2 09:01:24 2012 from 192.168.57.71
  • [hadoop@hadoop1 ~]$ exit
  • logout
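
  • The same verification can be scripted; a small sketch that prints each remote hostname, where any password prompt or failure message means key distribution went wrong on that node:
  • for h in datanode1 datanode2 datanode3 localhost; do
  •   ssh -o BatchMode=yes hadoop@$h hostname || echo "passwordless ssh to $h failed"   # BatchMode forbids password prompts
  • done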
  

  III. Java Environment Configuration
  


  • 1. Download a suitable JDK
  • // this is the RPM package for 64-bit Linux systems
  • wget http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.rpm

  • 2. Install the JDK
  • rpm -ivh jdk-7-linux-x64.rpm

  • 3. Verify Java
  • [root@hadoop1 ~]# java -version
  • java version "1.7.0"
  • Java(TM) SE Runtime Environment (build 1.7.0-b147)
  • Java HotSpot(TM) 64-Bit Server VM (build 21.0-b17, mixed mode)
  • [root@hadoop1 ~]# ls /usr/java/
  • default  jdk1.7.0  latest

  • 4. Configure the Java environment variables
  • # vim /etc/profile  // add the following to /etc/profile:

  • #add for hadoop
  • export JAVA_HOME=/usr/java/jdk1.7.0
  • export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
  • export PATH=$PATH:$JAVA_HOME/bin

  • // reload so the variables take effect
  • source /etc/profile

  • 5. Copy /etc/profile to each datanode
  • [root@hadoop1 src]# scp /etc/profile root@datanode1:/etc/
  • The authenticity of host 'datanode1 (192.168.57.86)' can't be established.
  • RSA key fingerprint is b5:00:d1:df:73:4c:94:f1:ea:1f:b5:cd:ed:3a:cc:e1.
  • Are you sure you want to continue connecting (yes/no)? yes
  • Warning: Permanently added 'datanode1,192.168.57.86' (RSA) to the list of known hosts.
  • root@datanode1's password:
  • profile                                       100% 1624     1.6KB/s   00:00
  • [root@hadoop1 src]# scp /etc/profile root@datanode2:/etc/
  • The authenticity of host 'datanode2 (192.168.57.87)' can't be established.
  • RSA key fingerprint is 57:cf:96:15:78:a3:94:93:30:16:8e:66:47:cd:f9:cd.
  • Are you sure you want to continue connecting (yes/no)? yes
  • Warning: Permanently added 'datanode2,192.168.57.87' (RSA) to the list of known hosts.
  • root@datanode2's password:
  • profile                                       100% 1624     1.6KB/s   00:00
  • [root@hadoop1 src]# scp /etc/profile root@datanode3:/etc/
  • The authenticity of host 'datanode3 (192.168.57.88)' can't be established.
  • RSA key fingerprint is 31:73:e8:3c:20:0c:1e:b2:59:5c:d1:01:4b:26:41:70.
  • Are you sure you want to continue connecting (yes/no)? yes
  • Warning: Permanently added 'datanode3,192.168.57.88' (RSA) to the list of known hosts.
  • root@datanode3's password:
  • profile                                       100% 1624     1.6KB/s   00:00

  • 6. Copy the JDK package to every datanode and install it there (a remote-install loop follows the transfer log)
  • [root@hadoop1 ~]# scp -r /home/hadoop/src/ hadoop@datanode1:/home/hadoop/
  • hadoop@datanode1's password:
  • hadoop-0.20.203.0rc1.tar.gz                   100%   58MB  57.8MB/s   00:01
  • jdk-7-linux-x64.rpm                           100%   78MB  77.9MB/s   00:01
  • [root@hadoop1 ~]# scp -r /home/hadoop/src/ hadoop@datanode2:/home/hadoop/
  • hadoop@datanode2's password:
  • hadoop-0.20.203.0rc1.tar.gz                   100%   58MB  57.8MB/s   00:01
  • jdk-7-linux-x64.rpm                           100%   78MB  77.9MB/s   00:01
  • [root@hadoop1 ~]# scp -r /home/hadoop/src/ hadoop@datanode3:/home/hadoop/
  • hadoop@datanode3's password:
  • hadoop-0.20.203.0rc1.tar.gz                   100%   58MB  57.8MB/s   00:01
  • jdk-7-linux-x64.rpm                           100%   78MB  77.9MB/s   00:01
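
  • With the RPM copied over, the install can be finished remotely in one pass; a sketch assuming root SSH access to the datanodes and the paths used above:
  • for h in datanode1 datanode2 datanode3; do
  •   ssh root@$h 'rpm -ivh /home/hadoop/src/jdk-7-linux-x64.rpm && /usr/java/jdk1.7.0/bin/java -version'   # install, then confirm the JDK runs
  • done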
  

  IV. Hadoop Configuration
  
// Note: perform all of these steps as the hadoop user
  


  • 1. Lay out the installation directory
  • [hadoop@hadoop1 ~]$ pwd
  • /home/hadoop
  • [hadoop@hadoop1 ~]$ ll
  • total 59220
  • lrwxrwxrwx  1 hadoop hadoop       17 Feb  1 16:59 hadoop -> hadoop-0.20.203.0
  • drwxr-xr-x 12 hadoop hadoop     4096 Feb  1 17:31 hadoop-0.20.203.0
  • -rw-r--r--  1 hadoop hadoop 60569605 Feb  1 14:24 hadoop-0.20.203.0rc1.tar.gz


  • 2. Configure hadoop-env.sh to point at the Java installation
  • vim hadoop/conf/hadoop-env.sh
  • export JAVA_HOME=/usr/java/jdk1.7.0

  • 3. Configure core-site.xml  // tells clients where the filesystem's namenode lives

  • [hadoop@hadoop1 ~]$ cat hadoop/conf/core-site.xml
  • <?xml version="1.0"?>
  • <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  • <configuration>
  •   <property>
  •     <name>fs.default.name</name>
  •     <value>hdfs://namenode:9000</value>
  •   </property>
  • </configuration>

  • 4. Configure mapred-site.xml  // tells the nodes which master runs the jobtracker

  • [hadoop@hadoop1 ~]$ cat hadoop/conf/mapred-site.xml
  • <?xml version="1.0"?>
  • <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  • <configuration>
  •   <property>
  •     <name>mapred.job.tracker</name>
  •     <value>namenode:9001</value>
  •   </property>
  • </configuration>

  • 5. Configure hdfs-site.xml  // sets the HDFS block replication factor

  • [hadoop@hadoop1 ~]$ cat hadoop/conf/hdfs-site.xml
  • <?xml version="1.0"?>
  • <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  • <configuration>
  •   <property>
  •     <name>dfs.replication</name>
  •     <value>3</value>
  •   </property>
  • </configuration>

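  • One caveat: nothing above sets hadoop.tmp.dir, so HDFS metadata and blocks default to /tmp/hadoop-${user.name} (visible in the format log in step 8), and /tmp is commonly cleared on reboot. A hedged addition to core-site.xml, with an illustrative path, pins the data to persistent storage:
  • <property>
  •   <name>hadoop.tmp.dir</name>
  •   <!-- illustrative path; any persistent directory writable by the hadoop user works -->
  •   <value>/home/hadoop/hadoop-tmp</value>
  • </property>
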
  • 6. Configure the masters and slaves files
  • [hadoop@hadoop1 ~]$ cat hadoop/conf/masters
  • namenode
  • [hadoop@hadoop1 ~]$ cat hadoop/conf/slaves
  • datanode1
  • datanode2
  • datanode3

  • 7. Copy the hadoop directory to every datanode (an rsync variant for later config changes follows)
  • [hadoop@hadoop1 ~]$ scp -r hadoop hadoop@datanode1:/home/hadoop/
  • [hadoop@hadoop1 ~]$ scp -r hadoop hadoop@datanode2:/home/hadoop/
  • [hadoop@hadoop1 ~]$ scp -r hadoop hadoop@datanode3:/home/hadoop
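
  • scp -r re-copies the whole tree every time; for later configuration changes, an rsync loop ships only the differences (a sketch, assuming rsync is installed on every node):
  • for h in datanode1 datanode2 datanode3; do
  •   rsync -az --delete ~/hadoop/conf/ hadoop@$h:~/hadoop/conf/   # sync just the conf directory
  • done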

  • 8. Format HDFS
  • [hadoop@hadoop1 hadoop]$ bin/hadoop namenode -format
  • 12/02/02 11:31:15 INFO namenode.NameNode: STARTUP_MSG:
  • /************************************************************
  • STARTUP_MSG: Starting NameNode
  • STARTUP_MSG:   host = hadoop1.test.com/127.0.0.1
  • STARTUP_MSG:   args = [-format]
  • STARTUP_MSG:   version = 0.20.203.0
  • STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May  4 07:57:50 PDT 2011
  • ************************************************************/
  • Re-format filesystem in /tmp/hadoop-hadoop/dfs/name ? (Y or N)  Y  // answer with an uppercase Y; the prompt is case-sensitive
  • 12/02/02 11:31:17 INFO util.GSet: VM type       = 64-bit
  • 12/02/02 11:31:17 INFO util.GSet: 2% max memory = 19.33375 MB
  • 12/02/02 11:31:17 INFO util.GSet: capacity      = 2^21 = 2097152 entries
  • 12/02/02 11:31:17 INFO util.GSet: recommended=2097152, actual=2097152
  • 12/02/02 11:31:17 INFO namenode.FSNamesystem: fsOwner=hadoop
  • 12/02/02 11:31:18 INFO namenode.FSNamesystem: supergroup=supergroup
  • 12/02/02 11:31:18 INFO namenode.FSNamesystem: isPermissionEnabled=true
  • 12/02/02 11:31:18 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
  • 12/02/02 11:31:18 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
  • 12/02/02 11:31:18 INFO namenode.NameNode: Caching file names occuring more than 10 times
  • 12/02/02 11:31:18 INFO common.Storage: Image file of size 112 saved in 0 seconds.
  • 12/02/02 11:31:18 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.
  • 12/02/02 11:31:18 INFO namenode.NameNode: SHUTDOWN_MSG:
  • /************************************************************
  • SHUTDOWN_MSG: Shutting down NameNode at hadoop1.test.com/127.0.0.1
  • ************************************************************/
  • [hadoop@hadoop1 hadoop]$

  • 9. Start the Hadoop daemons
  • [hadoop@hadoop1 hadoop]$ bin/start-all.sh
  • starting namenode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-hadoop1.test.com.out
  • datanode1: starting datanode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop2.test.com.out
  • datanode2: starting datanode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop3.test.com.out
  • datanode3: starting datanode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop4.test.com.out
  • starting jobtracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-hadoop1.test.com.out
  • datanode1: starting tasktracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop2.test.com.out
  • datanode2: starting tasktracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop3.test.com.out
  • datanode3: starting tasktracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop4.test.com.out

  • 10. Verify with jps
  • // on the namenode
  • [hadoop@hadoop1 logs]$ jps
  • 2883 JobTracker
  • 3002 Jps
  • 2769 NameNode

  • // on the datanodes
  • [hadoop@hadoop2 ~]$ jps
  • 2743 TaskTracker
  • 2670 DataNode
  • 2857 Jps

  • [hadoop@hadoop3 ~]$ jps
  • 2742 TaskTracker
  • 2856 Jps
  • 2669 DataNode

  • [hadoop@hadoop4 ~]$ jps
  • 2742 TaskTracker
  • 2852 Jps
  • 2659 DataNode

  • Hadoop monitoring web UI (a scripted check of all nodes follows below)
  • http://192.168.57.75:50070/dfshealth.jsp
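
  • These checks can also be scripted from the namenode; a sketch reusing the passwordless SSH set up earlier, where curl merely confirms the web UI answers:
  • for h in datanode1 datanode2 datanode3; do echo "== $h =="; ssh hadoop@$h jps; done
  • curl -s -o /dev/null -w "%{http_code}\n" http://192.168.57.75:50070/dfshealth.jsp   # expect 200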
  

  

  
V. Simple HDFS Verification
  


  • Hadoop file system commands take the form:
  • hadoop fs -cmd <args>
  • // create a directory
  • [hadoop@hadoop1 hadoop]$ bin/hadoop fs -mkdir /test-hadoop
  • // list a directory
  • [hadoop@hadoop1 hadoop]$ bin/hadoop fs -ls /
  • Found 2 items
  • drwxr-xr-x   - hadoop supergroup          0 2012-02-02 13:32 /test-hadoop
  • drwxr-xr-x   - hadoop supergroup          0 2012-02-02 11:32 /tmp
  • // list a directory recursively, including subdirectories
  • [hadoop@hadoop1 hadoop]$ bin/hadoop fs -lsr /
  • drwxr-xr-x   - hadoop supergroup          0 2012-02-02 13:32 /test-hadoop
  • drwxr-xr-x   - hadoop supergroup          0 2012-02-02 11:32 /tmp
  • drwxr-xr-x   - hadoop supergroup          0 2012-02-02 11:32 /tmp/hadoop-hadoop
  • drwxr-xr-x   - hadoop supergroup          0 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred
  • drwx------   - hadoop supergroup          0 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred/system
  • -rw-------   2 hadoop supergroup          4 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred/system/jobtracker.info
  • // upload a file
  • [hadoop@hadoop1 hadoop]$ bin/hadoop fs -put /home/hadoop/hadoop-0.20.203.0rc1.tar.gz /test-hadoop
  • [hadoop@hadoop1 hadoop]$ bin/hadoop fs -lsr /
  • drwxr-xr-x   - hadoop supergroup          0 2012-02-02 13:34 /test-hadoop
  • -rw-r--r--   2 hadoop supergroup   60569605 2012-02-02 13:34 /test-hadoop/hadoop-0.20.203.0rc1.tar.gz
  • drwxr-xr-x   - hadoop supergroup          0 2012-02-02 11:32 /tmp
  • drwxr-xr-x   - hadoop supergroup          0 2012-02-02 11:32 /tmp/hadoop-hadoop
  • drwxr-xr-x   - hadoop supergroup          0 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred
  • drwx------   - hadoop supergroup          0 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred/system
  • -rw-------   2 hadoop supergroup          4 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred/system/jobtracker.info
  • // fetch a file
  • [hadoop@hadoop1 hadoop]$ bin/hadoop fs -get /test-hadoop/hadoop-0.20.203.0rc1.tar.gz /tmp/
  • [hadoop@hadoop1 hadoop]$ ls /tmp/*.tar.gz
  • /tmp/1.tar.gz  /tmp/hadoop-0.20.203.0rc1.tar.gz
  • // delete a file
  • [hadoop@hadoop1 hadoop]$ bin/hadoop fs -rm /test-hadoop/hadoop-0.20.203.0rc1.tar.gz
  • Deleted hdfs://namenode:9000/test-hadoop/hadoop-0.20.203.0rc1.tar.gz
  • [hadoop@hadoop1 hadoop]$ bin/hadoop fs -lsr /
  • drwxr-xr-x   - hadoop supergroup          0 2012-02-02 13:57 /test-hadoop
  • drwxr-xr-x   - hadoop supergroup          0 2012-02-02 11:32 /tmp
  • drwxr-xr-x   - hadoop supergroup          0 2012-02-02 11:32 /tmp/hadoop-hadoop
  • drwxr-xr-x   - hadoop supergroup          0 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred
  • drwx------   - hadoop supergroup          0 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred/system
  • -rw-------   2 hadoop supergroup          4 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred/system/jobtracker.info
  • drwxr-xr-x   - hadoop supergroup          0 2012-02-02 13:36 /user
  • -rw-r--r--   2 hadoop supergroup        321 2012-02-02 13:36 /user/hadoop
  • // delete a directory
  • [hadoop@hadoop1 hadoop]$ bin/hadoop fs -rmr /test-hadoop
  • Deleted hdfs://namenode:9000/test-hadoop
  • [hadoop@hadoop1 hadoop]$ bin/hadoop fs -lsr /
  • drwxr-xr-x   - hadoop supergroup          0 2012-02-02 11:32 /tmp
  • drwxr-xr-x   - hadoop supergroup          0 2012-02-02 11:32 /tmp/hadoop-hadoop
  • drwxr-xr-x   - hadoop supergroup          0 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred
  • drwx------   - hadoop supergroup          0 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred/system
  • -rw-------   2 hadoop supergroup          4 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred/system/jobtracker.info
  • drwxr-xr-x   - hadoop supergroup          0 2012-02-02 13:36 /user
  • -rw-r--r--   2 hadoop supergroup        321 2012-02-02 13:36 /user/hadoop

  • // hadoop fs help (excerpt)
  • [hadoop@hadoop1 hadoop]$ bin/hadoop fs -help
  • hadoop fs is the command to execute fs commands. The full syntax is:

  • hadoop fs [-fs <local | file system URI>] [-conf <configuration file>]
  •     [-D <property=value>] [-ls <path>] [-lsr <path>] [-du <path>]
  •     [-dus <path>] [-mv <src> <dst>] [-cp <src> <dst>] [-rm [-skipTrash] <src>]
  •     [-rmr [-skipTrash] <src>] [-put <localsrc> ... <dst>] [-copyFromLocal <localsrc> ... <dst>]
  •     [-moveFromLocal <localsrc> ... <dst>] [-get [-ignoreCrc] [-crc] <src> <localdst>
  •     [-getmerge <src> <localdst> [addnl]] [-cat <src>]
  •     [-copyToLocal [-ignoreCrc] [-crc] <src> <localdst>] [-moveToLocal <src> <localdst>]
  •     [-mkdir <path>] [-report] [-setrep [-R] [-w] <rep> <path/file>]
  •     [-touchz <path>] [-test -[ezd] <path>] [-stat [format] <path>]
  •     [-tail [-f] <path>] [-text <path>]
  •     [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
  •     [-chown [-R] [OWNER][:[GROUP]] PATH...]
  •     [-chgrp [-R] GROUP PATH...]
  •     [-count[-q] <path>]
  •     [-help [cmd]]
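
  • The commands above can be rolled into one round-trip sanity script: upload a file, pull it back, and compare checksums. A sketch, run from ~/hadoop as in the transcripts; the test directory and file names are arbitrary:
  • #!/bin/bash
  • set -e                                           # stop at the first failed command
  • SRC=/home/hadoop/hadoop-0.20.203.0rc1.tar.gz     # any local file will do
  • bin/hadoop fs -mkdir /sanity-test
  • bin/hadoop fs -put $SRC /sanity-test/
  • bin/hadoop fs -get /sanity-test/$(basename $SRC) /tmp/roundtrip.tar.gz
  • md5sum $SRC /tmp/roundtrip.tar.gz                # the two digests should match
  • bin/hadoop fs -rmr /sanity-test                  # clean up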
  

  
  Conclusion
  
Setting up a Hadoop environment involves many fiddly steps and assumes some Linux system administration knowledge. Note that the environment built above is only enough to get a general feel for Hadoop; before putting HDFS into production service, the Hadoop configuration files need further tuning. Follow-up posts will cover those details, so stay tuned.


