## Hive Single-Node Deployment

Hive only needs to be installed on a single node.
1. Upload the Hive tar package.
2. Extract the archive into your installation directory (here, `/opt/modules/hive-1.2.1`).
3. Install the MySQL database (switch to the root user first). MySQL can be installed on any machine, as long as that machine can reach the nodes of the Hadoop cluster. (The MySQL installation steps below are for reference only; different MySQL versions each have their own installation procedure.)
3.1. Install the MySQL client.

3.2. Install the MySQL server.
3.3. Edit the MySQL configuration file:

```
[root@node02 ~]# vi /etc/my.cnf
[mysqld]
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
```
3.4. Start the MySQL service (e.g. `service mysqld start` on CentOS 6).

3.5. Set MySQL to start on boot (e.g. `chkconfig mysqld on`).
3.6. Initialize the database:

```
[root@controller ~]# mysql_install_db
```
3.7. Configure the database account and password:

```
[root@node02 ~]# mysql_secure_installation
```

(Note: remove the anonymous users and allow the user to connect remotely when prompted.)
3.8. Log in to MySQL and configure the database:

```
[root@node02 ~]# mysql -u root -p
mysql> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select user,host from user;
+------+-----------+
| user | host      |
+------+-----------+
| root | 127.0.0.1 |
| root | localhost |
+------+-----------+
2 rows in set (0.01 sec)

mysql> grant all privileges on *.* to 'root'@'%' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> create database hive;
Query OK, 1 row affected (0.01 sec)

mysql> grant all privileges on *.* to 'root'@'192.168.230.1' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)
```
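The grant pattern above is the same statement repeated per client host: once for `'%'` (any host) and once for the specific client `192.168.230.1`. A small sketch of that pattern; the user, password, and host list are the tutorial's values, and `grant_statements` is a hypothetical helper, not part of MySQL:

```python
# Illustrative sketch: build the GRANT statements used in step 3.8.
# Substitute your own user, password, and hosts in a real deployment.
def grant_statements(user, password, hosts):
    """Return one GRANT per host, followed by a single FLUSH PRIVILEGES."""
    stmts = []
    for host in hosts:
        stmts.append(
            "grant all privileges on *.* to '%s'@'%s' identified by '%s';"
            % (user, host, password)
        )
    stmts.append("flush privileges;")
    return stmts

for s in grant_statements("root", "123456", ["%", "192.168.230.1"]):
    print(s)
```

Granting to `'%'` already covers every host; the extra grant for `192.168.230.1` is redundant but harmless.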
4. Configure Hive

4.1. Configure the HIVE_HOME environment variable.
4.2. Configure the metastore database settings:

```
[root@node02 ~]# vi /opt/modules/hive-1.2.1/conf/hive-site.xml
```

Add the following content:
```xml
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/usr/hive/warehouse</value>
  </property>
  <property>
    <name>hive.metastore.local</name>
    <value>true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
  </property>
</configuration>
```
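The file above uses the standard Hadoop-style `<property>`/`<name>`/`<value>` layout, which is easy to sanity-check before starting Hive. A minimal sketch that parses the fragment and pulls out the metastore connection settings (the `load_properties` helper and the inlined XML string are illustrative, not part of Hive):

```python
# Sketch: parse a hive-site.xml fragment and extract the JDBC settings
# Hive will use to reach its MySQL metastore.
import xml.etree.ElementTree as ET

HIVE_SITE = """
<configuration>
  <property><name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value></property>
  <property><name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value></property>
  <property><name>javax.jdo.option.ConnectionUserName</name><value>root</value></property>
  <property><name>javax.jdo.option.ConnectionPassword</name><value>123456</value></property>
</configuration>
"""

def load_properties(xml_text):
    """Map each property <name> to its <value>."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value") for p in root.findall("property")}

props = load_properties(HIVE_SITE)
# createDatabaseIfNotExist=true lets Hive create the 'hive' database on first start.
print(props["javax.jdo.option.ConnectionURL"])
```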
4.3. Configure the Hive environment variables:

```
[root@node02 ~]# vi /etc/profile
export HIVE_HOME=/opt/modules/hive-1.2.1
export PATH=$PATH:$HIVE_HOME/bin
[root@node02 ~]# source /etc/profile
```
5. After installing Hive and MySQL, copy the MySQL JDBC connector jar into the `$HIVE_HOME/lib` directory.
6. Fix the jline version conflict: copy `jline-2.12.jar` from Hive's lib directory into Hadoop, replacing the older jline jar that ships with it, for example:

```
[root@node02 ~]# rm -f $HADOOP_HOME/share/hadoop/yarn/lib/jline-0.9.94.jar
[root@node02 ~]# cd $HIVE_HOME/lib
[root@node02 lib]# cp jline-2.12.jar $HADOOP_HOME/share/hadoop/yarn/lib/
```
7. Start Hive:

```
[root@node02 hive-1.2.1]# bin/hive
17/01/23 20:47:38 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist

Logging initialized using configuration in jar:file:/opt/modules/hive-1.2.1/lib/hive-common-1.2.1.jar!/hive-log4j.properties
hive> show databases;
OK
default
Time taken: 7.458 seconds, Fetched: 1 row(s)
hive> create database hivetest;
OK
Time taken: 1.034 seconds
```

(The warning is harmless: `hive.metastore.local` was removed in newer Hive versions and is simply ignored.)
8. Create a table (tables are managed/internal by default):

```sql
create table trade_detail(id bigint, account string, income double, expenses double, time string)
row format delimited fields terminated by '\t';
```
8.1. Create a partitioned table:

```sql
create table td_part(id bigint, account string, income double, expenses double, time string)
partitioned by (logdate string)
row format delimited fields terminated by '\t';
```
8.2. Create an external table:

```sql
create external table td_ext(id bigint, account string, income double, expenses double, time string)
row format delimited fields terminated by '\t'
location '/td_ext';
```
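The practical difference between the managed table in step 8 and the external table here shows up on `DROP TABLE`: Hive deletes a managed table's data directory under the warehouse, but leaves an external table's `location` untouched. A toy model of that rule, using the warehouse dir from hive-site.xml and the `/td_ext` location above (the `Table` class and set-based "filesystem" are illustrative only):

```python
# Toy model of Hive's DROP TABLE semantics: managed (internal) tables
# lose their data directory; external tables keep theirs.
WAREHOUSE = "/usr/hive/warehouse"

class Table:
    def __init__(self, name, external=False, location=None):
        self.name = name
        self.external = external
        # Managed tables live under the warehouse dir by default.
        self.location = location or "%s/%s" % (WAREHOUSE, name)

def drop_table(table, filesystem):
    """Drop the table; delete its data directory only if it is managed."""
    if not table.external:
        filesystem.discard(table.location)

fs = {"/usr/hive/warehouse/trade_detail", "/td_ext"}
drop_table(Table("trade_detail"), fs)                               # managed: data removed
drop_table(Table("td_ext", external=True, location="/td_ext"), fs)  # external: data kept
print(fs)  # {'/td_ext'}
```

External tables are the usual choice when the data in HDFS is shared with other tools and must survive the table definition.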
## 9. Create a partitioned table
```sql
create table book (id bigint, name string)
partitioned by (pubdate string)
row format delimited fields terminated by '\t';
```
Load data into a partition of the table:

```sql
load data local inpath './book.txt' overwrite into table book partition (pubdate='2010-08-22');
```
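Each partition created by a load like the one above becomes a `key=value` subdirectory under the table's warehouse directory. A small sketch of the resulting HDFS path, assuming the `hive.metastore.warehouse.dir` configured earlier (`partition_path` is a hypothetical helper, not a Hive API):

```python
# Sketch: the HDFS directory a partitioned load writes into.
def partition_path(warehouse, table, **partition):
    """Build <warehouse>/<table>/<key>=<value>/... for the given partition spec."""
    parts = "/".join("%s=%s" % kv for kv in sorted(partition.items()))
    return "%s/%s/%s" % (warehouse, table, parts)

print(partition_path("/usr/hive/warehouse", "book", pubdate="2010-08-22"))
# /usr/hive/warehouse/book/pubdate=2010-08-22
```

This is why partition pruning works: a query filtered on `pubdate` only has to read the matching subdirectories.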
```sql
load data local inpath '/root/data.am' into table beauty partition (nation="USA");
```
```sql
select nation, avg(size) from beauties group by nation order by avg(size);
```
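For reference, the aggregation that query performs, grouping rows by `nation`, averaging `size`, and ordering by the average (ascending, Hive's default), looks like this outside Hive. The sample rows are made up for illustration:

```python
# What the HiveQL above computes, on made-up sample rows (nation, size).
from collections import defaultdict

rows = [("USA", 170.0), ("USA", 174.0), ("CN", 165.0), ("CN", 167.0)]

groups = defaultdict(list)
for nation, size in rows:
    groups[nation].append(size)

# avg(size) per nation, ordered by the average.
result = sorted(
    ((nation, sum(sizes) / len(sizes)) for nation, sizes in groups.items()),
    key=lambda pair: pair[1],
)
print(result)  # [('CN', 166.0), ('USA', 172.0)]
```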