
[Experience Share] Hadoop Cluster (5): Hive Installation

Posted on 2018-10-28 14:45:49
  As a longtime DBA, the friendliest member of the Hadoop family is Hive. SQL is familiar ground, after all, and there is no more dreading hand-written MapReduce jobs.
  First, a quick introduction to Hive.
  Hive is a data warehouse solution built on Hadoop. Because Hadoop itself offers excellent scalability and fault tolerance for both storage and computation, a data warehouse built with Hive inherits those same properties.
  Simply put, Hive layers a SQL interface over Hadoop: it translates SQL into MapReduce jobs and runs them on the cluster, so data developers and analysts can do statistics and analysis over massive data sets in plain SQL, instead of going to the trouble of writing MapReduce programs in a programming language.
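  For instance, the single line below is something Hive compiles into a MapReduce job with no Java involved (test.web_logs and dt are hypothetical names, purely for illustration):

    -- count rows per day; Hive turns this into map and reduce stages behind the scenes
    select dt, count(*) from test.web_logs group by dt;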

  Now for the installation itself. Prerequisites: HDFS and YARN are already installed and running. For the underlying setup, see:
  Hadoop Cluster (1): Zookeeper Setup
  Hadoop Cluster (2): HDFS Setup
  Hadoop Cluster (3): HBase Setup


  •   Download Hive. I used hive-1.2.1, which can no longer be downloaded; pick a newer release as needed.
      http://hive.apache.org/downloads.html
      tar -xzvf apache-hive-1.2.1-bin.tar.gz

  • Edit the database-related settings in hive-site.xml; the main ones are below. A real production deployment needs many more parameters (the LZO compression and Kerberos settings mentioned later, for example); these are only the bare minimum to get Hive running.

    ## javax.jdo.option.ConnectionURL: set the value to your MySQL address, e.g.:
    <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
    </property>

    ## javax.jdo.option.ConnectionDriverName: set the value to the MySQL driver class; after my edit it reads:
    <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.jdbc.Driver</value>
    </property>

    ## javax.jdo.option.ConnectionUserName: set the value to the MySQL login user:
    <property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <value>root</value>
    </property>

    ## javax.jdo.option.ConnectionPassword: set the value to the MySQL login password:
    <property>
      <name>javax.jdo.option.ConnectionPassword</name>
      <value>change to your password</value>
    </property>

    ## hive.metastore.schema.verification: set the value to false:
    <property>
      <name>hive.metastore.schema.verification</name>
      <value>false</value>
    </property>
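  One easy-to-miss prerequisite for these settings: Hive does not ship with the MySQL JDBC driver, so the connector jar must be dropped into Hive's lib directory by hand. A minimal sketch, assuming a hypothetical connector version and the install path used in this post:

    # the exact mysql-connector jar version is an assumption; use whatever matches your MySQL
    cp mysql-connector-java-5.1.40-bin.jar /usr/local/hadoop/apache-hive-1.2.1/lib/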

  •   Create the local directories
      

    mkdir -p /data1/hiveLogs-security/;chown -R hive:hadoop /data1/hiveLogs-security/  
    mkdir -p  /data1/hiveData-security/;chown -R hive:hadoop /data1/hiveData-security/
      
    mkdir -p /tmp/hive-security/operation_logs; chown -R hive:hadoop /tmp/hive-security/operation_logs
      


  •   Create the HDFS directories
      

    hadoop fs -mkdir /tmp  
    hadoop fs -mkdir -p /user/hive/warehouse
      
    hadoop fs -chmod g+w /tmp
      
    hadoop fs -chmod g+w /user/hive/warehouse
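    To sanity-check the directories and group-write bits before moving on (output varies by cluster):
    hadoop fs -ls -d /tmp /user/hive/warehouse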
      


  •   Initialize hive
      

    [hive@aznbhivel01 ~]$ schematool -initSchema -dbType mysql  
    Metastore connection URL:        jdbc:mysql://172.16.13.88:3306/hive_beta?useUnicode=true&characterEncoding=UTF-8&createDatabaseIfNotExist=true
      
    Metastore Connection Driver :    com.mysql.jdbc.Driver
      
    Metastore connection User:       envision
      
    Starting metastore schema initialization to 1.2.0
      
    Initialization script hive-schema-1.2.0.mysql.sql
      
    Initialization script completed
      
    schemaTool completed
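    To double-check the initialization, log in to MySQL and look at the freshly created metastore schema; a quick sketch, with table names as created by the standard hive-schema-1.2.0.mysql.sql script:

    mysql> use hive_beta;
    mysql> show tables;    -- expect metastore tables such as DBS, TBLS, SDS, COLUMNS_V2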
      


  • The first launch of hive hits an error. In fact many of these errors are configuration and startup issues a Hive novice simply does not anticipate; to an old hand, none of this is a problem. The cause here: the Hive Metastore Server process had not been started.
      [hive@aznbhivel01 ~]$ hive
      Logging initialized using configuration in file:/usr/local/hadoop/apache-hive-1.2.1/conf/hive-log4j.properties
      Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
      at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:528)
      at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
      at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
      ......
      Caused by: java.lang.reflect.InvocationTargetException
      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
      at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
      at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
      ... 14 more
      Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure:  ···

  7. Fix: start the Hive Metastore Server process. Running the command below leads straight into the next problem:
  [hive@aznbhivel01 ~]$ hive --service metastore &
  Starting Hive Metastore Server
  org.apache.thrift.transport.TTransportException: java.io.IOException: Login failure for hive/aznbhivel01.liang.com@ENVISIONCN.COM from keytab /etc/security/keytab/hive.keytab: javax.security.auth.login.LoginException: Unable to obtain password from user
  at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server.<init>(HadoopThriftAuthBridge.java:358)
  at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge.createServer(HadoopThriftAuthBridge.java:102)
  at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:5990)
  at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:5909)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:606)
  at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
  at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
  Caused by: java.io.IOException: Login failure for hive/aznbhivel01.liang.com@ENVISIONCN.COM from keytab /etc/security/keytab/hive.keytab: javax.security.auth.login.LoginException: Unable to obtain password from user
  8. The keytab could not be read; fix the ownership and permissions on the hive.keytab file. After the fix:
  

$ ll /etc/security/keytab/  
total 100
  
-r-------- 1 hbase  hadoop 18002 Dec  5 17:06 hbase.keytab
  
-r-------- 1 hdfs   hadoop 18002 Dec  5 17:04 hdfs.keytab
  
-r-------- 1 hive   hadoop 18002 Dec  5 17:06 hive.keytab
  
-r-------- 1 mapred hadoop 18002 Dec  5 17:06 mapred.keytab
  
-r-------- 1 yarn   hadoop 18002 Dec  5 17:06 yarn.keytab
  

  9. Restart the metastore:
  

$ hive --service metastore &  
[2] 41285
  
[1]   Killed                  hive --service metastore  (wd: ~)
  
(wd now: /etc/security)
  
[hive@aznbhivel01 security]$ Starting Hive Metastore Server
  

  10. Then start hiveserver2:
  hive --service hiveserver2 &
  11. Startup still failed, which was puzzling. The message itself is clear: the KDC cannot find this server in the Kerberos database. But kinit had already run and succeeded, and the log right before the failure even reports a successful login. Regenerating the keytab did not help either. Finally I wondered whether the cause was that hive-site.xml used an IP address; changing it to the hostname ("thrift://aznbhivel01.liang.com:9083") resolved this problem.
  

~~~~ configuration to change ~~~~
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://aznbhivel01.liang.com:9083</value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
  </property>
  

~~~~ log ~~~~
2017-12-07 16:16:04,300 DEBUG [main]: security.UserGroupInformation (UserGroupInformation.java:login(221)) - hadoop login
  
2017-12-07 16:16:04,302 DEBUG [main]: security.UserGroupInformation (UserGroupInformation.java:commit(156)) - hadoop login commit
  
2017-12-07 16:16:04,303 DEBUG [main]: security.UserGroupInformation (UserGroupInformation.java:commit(170)) - using kerberos user:hive/aznbhivel01.liang.com@LIANG.COM
  
2017-12-07 16:16:04,303 DEBUG [main]: security.UserGroupInformation (UserGroupInformation.java:commit(192)) - Using user: "hive/aznbhivel01.liang.com@LIANG.COM" with name hive/aznbhivel01.liang.com@LIANG.COM
  
2017-12-07 16:16:04,303 DEBUG [main]: security.UserGroupInformation (UserGroupInformation.java:commit(202)) - User entry: "hive/aznbhivel01.liang.com@LIANG.COM"
  
2017-12-07 16:16:04,304 INFO  [main]: security.UserGroupInformation (UserGroupInformation.java:loginUserFromKeytab(965)) - Login successful for user hive/aznbhivel01.liang.com@LIANG.COM using keytab file /etc/security/keytab/hive.keytab
  
........
  
Client Addresses  Null
  
2017-12-07 16:16:04,408 INFO  [main]: hive.metastore (HiveMetaStoreClient.java:open(386)) - Trying to connect to metastore with URI thrift://172.16.13.88:9083
  
2017-12-07 16:16:04,446 DEBUG [main]: security.UserGroupInformation (UserGroupInformation.java:logPrivilegedAction(1681)) - PrivilegedAction as:hive/aznbhivel01.liang.com@LIANG.COM (auth:KERBEROS) from:org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
  
2017-12-07 16:16:04,448 DEBUG [main]: transport.TSaslTransport (TSaslTransport.java:open(261)) - opening transport org.apache.thrift.transport.TSaslClientTransport@5bb4d6c0
  
2017-12-07 16:16:04,460 ERROR [main]: transport.TSaslTransport (TSaslTransport.java:open(315)) - SASL negotiation failure
  
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)]
  at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
  at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
  

  12. Then another permission error, without saying which permission is wrong. HDFS already shows files written by hive, so the HDFS side looks correct. Keep digging...
  

2017-12-07 20:30:59,168 DEBUG [IPC Client (1612726596) connection to AZcbetannL02.liang.com/172.16.13.77:9000 from hive/aznbhivel01.liang.com@LIANG.COM]: ipc.Client (Client.java:receiveRpcResponse(1089)) - IPC Client (1612726596) connection to AZcbetannL02.liang.com/172.16.13.77:9000 from hive/aznbhivel01.liang.com@LIANG.COM got value #8  
2017-12-07 20:30:59,168 DEBUG [main]: ipc.ProtobufRpcEngine (ProtobufRpcEngine.java:invoke(250)) - Call: getFileInfo took 2ms
  
2017-12-07 20:30:59,168 INFO  [main]: server.HiveServer2 (HiveServer2.java:stop(305)) - Shutting down HiveServer2
  
2017-12-07 20:30:59,169 INFO  [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(368)) - Exception caught when calling stop of HiveServer2 before retrying start
  
java.lang.NullPointerException
  at org.apache.hive.service.server.HiveServer2.stop(HiveServer2.java:309)
  at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:366)
  at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:74)
  at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:588)
  at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:461)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:606)
  at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
  at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
  
2017-12-07 20:30:59,170 WARN  [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(376)) - Error starting HiveServer2 on attempt 1, will retry in 60 seconds
  
java.lang.RuntimeException: Error applying authorization policy on hive configuration: java.lang.RuntimeException: java.io.IOException: Permission denied
  at org.apache.hive.service.cli.CLIService.init(CLIService.java:114)
  at org.apache.hive.service.CompositeService.init(CompositeService.java:59)
  at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:100)
  at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:345)
  at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:74)
  at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:588)
  at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:461)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:606)
  at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
  at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
  
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: Permission denied
  at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:528)
  at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:127)
  at org.apache.hive.service.cli.CLIService.init(CLIService.java:112)
  ... 12 more
  
Caused by: java.lang.RuntimeException: java.io.IOException: Permission denied
  at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:521)
  ... 14 more
  
Caused by: java.io.IOException: Permission denied
  at java.io.UnixFileSystem.createFileExclusively(Native Method)
  at java.io.File.createNewFile(File.java:1006)
  at java.io.File.createTempFile(File.java:1989)
  at org.apache.hadoop.hive.ql.session.SessionState.createTempFile(SessionState.java:824)
  at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:519)
  ... 14 more
  

  A strace approach found via Google shows exactly which permission is the problem:
  strace is your friend if you are on Linux. Try the following from the
  shell in which you are starting hive...
  strace -f -e trace=file service hive-server2 start 2>&1 | grep ermission
  You should see the file it can't read/write.
  That pinpointed it: the permissions on the /tmp/hive-security path were wrong. After fixing them, this problem was behind us.
  

  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>/tmp/hive-security</value>
  </property>
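  The fix is the same ownership change used for the other local directories earlier; a sketch, assuming this cluster's hive:hadoop convention:

    # 770 is an assumption; the key point is that the hive user must be able to create temp files here
    chown -R hive:hadoop /tmp/hive-security
    chmod -R 770 /tmp/hive-security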
  

  13. On to the next problem:
  

2017-12-07 20:58:56,749 DEBUG [IPC Parameter Sending Thread #0]: ipc.Client (Client.java:run(1032)) - IPC Client (1612726596) connection to AZcbetannL02.liang.com/172.16.13.77:9000 from hive/aznbhivel01.liang.com@LIANG.COM sending #8  
2017-12-07 20:58:56,750 DEBUG [IPC Client (1612726596) connection to AZcbetannL02.liang.com/172.16.13.77:9000 from hive/aznbhivel01.liang.com@LIANG.COM]: ipc.Client (Client.java:receiveRpcResponse(1089)) - IPC Client (1612726596) connection to AZcbetannL02.liang.com/172.16.13.77:9000 from hive/aznbhivel01.liang.com@LIANG.COM got value #8
  
2017-12-07 20:58:56,750 DEBUG [main]: ipc.ProtobufRpcEngine (ProtobufRpcEngine.java:invoke(250)) - Call: getFileInfo took 2ms
  
2017-12-07 20:58:56,763 ERROR [main]: session.SessionState (SessionState.java:setupAuth(749)) - Error setting up authorization: java.lang.ClassNotFoundException: org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizerFactory
  
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassNotFoundException: org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizerFactory
  at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:391)
  

  Googling the keyword org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizerFactory turned up this article:
  https://www.cnblogs.com/wyl9527/p/6835620.html
  It covers a Ranger plugin install: after running the enable script the hive service must be restarted, several new configuration files appear, and hiveserver2-site.xml is modified as follows:
  

  <property>
    <name>hive.security.authorization.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.security.authorization.manager</name>
    <value>org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizerFactory</value>
  </property>
  <property>
    <name>hive.security.authenticator.manager</name>
    <value>org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator</value>
  </property>
  <property>
    <name>hive.conf.restricted.list</name>
    <value>hive.security.authorization.enabled,hive.security.authorization.manager,hive.security.authenticator.manager</value>
  </property>

  We are not using Ranger for authorization here, so how to disable it? Simply deleting the hiveserver2-site.xml file did the trick. One more step forward: hiveserver2 started, and the hive CLI got in, only to hit the next error.
  14. hive now starts normally and queries can be entered through the hive CLI; commands execute fine, but query results never come back properly:
  

[hive@aznbhivel01 hive]$ hive  
Logging initialized using configuration in file:/usr/local/hadoop/apache-hive-1.2.1/conf/hive-log4j.properties
  
hive> show databases;
  
OK
  
Failed with exception java.io.IOException:java.lang.RuntimeException: Error in configuring object
  
Time taken: 0.867 seconds
  

  A fix found on Baidu:
  http://blog.csdn.net/wodedipang_/article/details/72720257
  But my configuration has none of the conditions that article describes. Suspicion fell on the permissions of this directory:
  

  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>/tmp/hive-security</value>
    <description>Local scratch space for Hive jobs</description>
  </property>
  

  In the end hive.log had the real error: a missing jar.
  

Caused by: java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.  at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:139)
  at org.apache.hadoop.io.compress.CompressionCodecFactory.(CompressionCodecFactory.java:179)
  at org.apache.hadoop.mapred.TextInputFormat.configure(TextInputFormat.java:45)
  ... 26 more

  
Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found
  at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101)
  at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:132)
  ... 28 more
  

  15. hadoop's core-site.xml configures LZO compression codecs, so the corresponding jar must be present for MapReduce jobs to run:
  

  <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,com.hadoop.compression.lzo.LzoCodec,org.apache.hadoop.io.compress.SnappyCodec,com.hadoop.compression.lzo.LzopCodec</value>
  </property>
  <property>
    <name>io.compression.codec.lzo.class</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
  </property>
  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>hadoop</value>
  </property>
  <property>
    <name>lzo.text.input.format.ignore.nonlzo</name>
    <value>false</value>
  </property>

  Copying the needed jar over from another healthy environment solved it.
  Note that the lzo jar is needed not only on the hive server but on every YARN/MapReduce node; otherwise any job that touches lzo compression will break during the MapReduce phase, not just jobs launched by hive. A distribution sketch follows the listing below.
  

# pwd  
/usr/local/hadoop/hadoop-release/share/hadoop/common
  
# ls |grep lzo
  
hadoop-lzo-0.4.21-SNAPSHOT.jar
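  A minimal way to push the jar to every node. nodes.txt is a hypothetical file listing all YARN/MapReduce hosts (one per line); the target path matches the layout above:

    # nodes.txt is an assumption; adjust to however you track cluster hosts
    for h in $(cat nodes.txt); do
      scp hadoop-lzo-0.4.21-SNAPSHOT.jar $h:/usr/local/hadoop/hadoop-release/share/hadoop/common/
    done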
  

  With that, the hive installation is complete.
  Having crawled out of one pit after another, enjoy the output of a hive query:
  

hive> select count(*) from test.testxx;
  
Query>  
Total jobs = 1
  
Launching Job 1 out of 1
  
Number of reduce tasks determined at compile time: 1
  
Starting Job = job_1513915190261_0008, Tracking URL = https://aznbrmnl02.liang.com:8089/proxy/application_1513915190261_0008/
  
Kill Command = /usr/local/hadoop/hadoop-2.7.1/bin/hadoop job  -kill job_1513915190261_0008
  
Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 1
  
2017-12-24 12:19:11,332 Stage-1 map = 0%,  reduce = 0%
  
2017-12-24 12:19:19,666 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.85 sec
  
2017-12-24 12:19:30,064 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 5.43 sec
  
MapReduce Total cumulative CPU time: 5 seconds 430 msec
  
Ended Job = job_1513915190261_0008
  
MapReduce Jobs Launched:
  
Stage-Stage-1: Map: 3  Reduce: 1   Cumulative CPU: 5.43 sec   HDFS Read: 32145 HDFS Write: 5 SUCCESS
  
Total MapReduce CPU Time Spent: 5 seconds 430 msec
  
OK
  
2526
  
Time taken: 38.834 seconds, Fetched: 1 row(s)
  

  Points worth noting
  1. MySQL's character set was latin1. That is fine while installing hive, but later on, especially once Chinese text is stored, it cannot be displayed properly. So the recommendation is: after installing hive, switch the character set to UTF-8.
  

mysql> SHOW VARIABLES LIKE 'character%';  
+--------------------------+----------------------------+
  
| Variable_name            | Value                      |
  
+--------------------------+----------------------------+
  
| character_set_client     | utf8                       |
  
| character_set_connection | utf8                       |
  
| character_set_database   | latin1                     |
  
| character_set_filesystem | binary                     |
  
| character_set_results    | utf8                       |
  
| character_set_server     | latin1                     |
  
| character_set_system     | utf8                       |
  
| character_sets_dir       | /usr/share/mysql/charsets/ |
  
+--------------------------+----------------------------+
  
8 rows in set (0.00 sec)
  

  2. Change the character set:
  

# vi /etc/my.cnf
[mysqld]
  datadir=/var/lib/mysql
  
socket=/var/lib/mysql/mysql.sock
  
default-character-set = utf8
  
character_set_server = utf8
  
# Disabling symbolic-links is recommended to prevent assorted security risks
  
symbolic-links=0
  
log-error=/var/log/mysqld.log
  
pid-file=/var/run/mysqld/mysqld.pid
  

  3. After the change:
  

mysql> SHOW VARIABLES LIKE 'character%';  
+--------------------------+----------------------------+
  
| Variable_name            | Value                      |
  
+--------------------------+----------------------------+
  
| character_set_client     | utf8                       |
  
| character_set_connection | utf8                       |
  
| character_set_database   | utf8                       |
  
| character_set_filesystem | binary                     |
  
| character_set_results    | utf8                       |
  
| character_set_server     | utf8                       |
  
| character_set_system     | utf8                       |
  
| character_sets_dir       | /usr/share/mysql/charsets/ |
  
+--------------------------+----------------------------+
  
8 rows in set (0.00 sec)
  

  Ways to connect to hive
  a. The hive CLI directly; with Kerberos enabled, remember to kinit first (see the sketch after the commands below).
  

su - hive  
hive
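  A sketch of the Kerberos login, reusing the keytab path from this cluster; the principal shown is this host's hive service principal and is an assumption in any other environment:

    # principal and keytab below match this post's cluster; substitute your own
    kinit -kt /etc/security/keytab/hive.keytab hive/aznbhivel01.liang.com@LIANG.COM
    klist    # confirm a valid ticket before starting the hive CLI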
  

  b. Connect with beeline:
  beeline -u "jdbc:hive2://hive-hostname:10000/default;principal=hive/_HOST@LIANG.COM"
  With a hiveserver2 HA architecture, connect like this:
  beeline -u "jdbc:hive2://zookeeper1-ip:2181,zookeeper2-ip:2181,zookeeper3-ip:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2_zk;principal=hive/_HOST@LIANG.COM"
  Without Kerberos or another security mechanism, beeline must be told which user to log in as:
  beeline -u "jdbc:hive2://127.0.0.1:10000/default;" -n hive
  One more thing: does a Hive query always go through MapReduce?
  Since hive 0.10.0, for efficiency, simple queries (a plain select with no count, sum, or group by) skip map/reduce entirely: hive reads the HDFS files directly and filters them. The benefit is that no new MR job is launched, so execution is noticeably faster; the downside is an unfriendly experience, because with a large data set you can wait a long time with no feedback at all.
  Changing this is simple. hive-site.xml has a parameter named
  hive.fetch.task.conversion
  Set it to more and simple queries bypass map/reduce; set it to minimal and even trivial selects go through map/reduce. A sketch of the setting follows.
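  For example, to keep simple selects off map/reduce (value per the description above):

    <property>
      <name>hive.fetch.task.conversion</name>
      <value>more</value>
    </property>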
  ---- Update 2018.2.11 ----
  To re-initialize hive's MySQL database, log in to MySQL and drop the old database first, otherwise you will hit the error below:
  # su - hive
  [hive@aznbhivel01 ~]$ schematool -initSchema -dbType mysql
  Metastore connection URL:        jdbc:mysql://10.24.101.88:3306/hive_beta?useUnicode=true&characterEncoding=UTF-8&createDatabaseIfNotExist=true
  Metastore Connection Driver :    com.mysql.jdbc.Driver
  Metastore connection User:       envision
  Starting metastore schema initialization to 1.2.0
  Initialization script hive-schema-1.2.0.mysql.sql
  Error: Specified key was too long; max key length is 3072 bytes (state=42000,code=1071)
  org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
   schemaTool failed
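  Dropping the old database is a one-liner; hive_beta is the metastore database name from the connection URL above:

    mysql> drop database hive_beta;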
  After dropping the old hive database, re-running the initialization goes straight through:
  [hive@aznbhivel01 ~]$ schematool -initSchema -dbType mysql
  Metastore connection URL:        jdbc:mysql://10.24.101.88:3306/hive_beta?useUnicode=true&characterEncoding=UTF-8&createDatabaseIfNotExist=true
  Metastore Connection Driver :    com.mysql.jdbc.Driver
  Metastore connection User:       envision
  Starting metastore schema initialization to 1.2.0
  Initialization script hive-schema-1.2.0.mysql.sql
  Initialization script completed
  schemaTool completed
  Starting and stopping Hive:
  1. Start the metastore:
  nohup /usr/local/hadoop/hive-release/bin/hive --service metastore --hiveconf hive.log4j.file=/usr/local/hadoop/hive-release/conf/meta-log4j.properties > /data1/hiveLogs-security/metastore.log 2>&1 &
  2. Start hiveserver2:
  nohup /usr/local/hadoop/hive-release/bin/hive --service hiveserver2 > /data1/hiveLogs-security/hiveserver2.log  2>&1 &
  3. Stop HiveServer2:
  kill -9 `ps ax --cols 2000 | grep java | grep HiveServer2 | grep -v 'ps ax' | awk '{print $1;}'`
  4. Stop the metastore:
  kill -9 `ps ax --cols 2000 | grep java | grep MetaStore | grep -v 'ps ax' | awk '{print $1;}'`


