HBase is a column-family-oriented NoSQL database for storing structured and unstructured data. It does not support SQL natively, but thanks to its strong write performance it is widely used for real-time data storage.
Hive maps HDFS files to tables and lets you manage them with SQL-like (HQL) statements. It does not support real-time updates and is mainly used for offline data warehousing.
Hive also provides an integration with HBase, which makes it possible to query HBase tables with HQL statements.
Hive version: Apache 1.2.2
HBase version: Apache 1.2.6
1. Copy the required HBase jars into Hive's lib directory
cp /root/software/hbase-1.2.6/lib/hbase-it-1.2.6.jar /root/software/hive-1.2.2/lib
cp /root/software/hbase-1.2.6/lib/hbase-server-1.2.6.jar /root/software/hive-1.2.2/lib
cp /root/software/hbase-1.2.6/lib/hbase-hadoop2-compat-1.2.6.jar /root/software/hive-1.2.2/lib
cp /root/software/hbase-1.2.6/lib/hbase-hadoop-compat-1.2.6.jar /root/software/hive-1.2.2/lib
cp /root/software/hbase-1.2.6/lib/hbase-client-1.2.6.jar /root/software/hive-1.2.2/lib
cp /root/software/hbase-1.2.6/lib/hbase-common-1.2.6.jar /root/software/hive-1.2.2/lib
cp /root/software/hbase-1.2.6/lib/hbase-protocol-1.2.6.jar /root/software/hive-1.2.2/lib
cp /root/software/hbase-1.2.6/lib/htrace-core-3.1.0-incubating.jar /root/software/hive-1.2.2/lib
2. Copy hbase-site.xml into Hive's conf directory
cp /root/software/hbase-1.2.6/conf/hbase-site.xml /root/software/hive-1.2.2/conf/hbase-site.xml
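Hive uses this file to find the HBase cluster, most importantly the ZooKeeper quorum. As a rough sketch, a minimal hbase-site.xml carrying just that setting might look like the following (the hostname `master` is an assumption based on the tracking URL later in this article; a real file usually carries more properties):

```xml
<configuration>
  <!-- ZooKeeper ensemble that HBase clients (including Hive) connect to;
       hostname is an assumption for this example -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master</value>
  </property>
</configuration>
```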
3. Edit the hive-env.sh file
vim /root/software/hive-1.2.2/conf/hive-env.sh
# add the following line:
export HBASE_HOME=/root/software/hbase-1.2.6
1. Create an HBase table and insert data
# create the table
hbase(main):001:0> create 't1', { NAME => 'f1' }
# insert data
hbase(main):003:0> put 't1', '0001', 'f1:name', 'zhangsan'
hbase(main):004:0> put 't1', '0001', 'f1:age', '30'
hbase(main):006:0> put 't1', '0002', 'f1:name', 'lisi'
hbase(main):007:0> put 't1', '0002', 'f1:age', '29'
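Conceptually, each `put` writes one cell addressed by (row key, column family:qualifier). A tiny in-memory Python sketch (the `put` helper is hypothetical, not an HBase API) of the layout these commands produce:

```python
# Sketch of HBase's logical data model: a table is a map of
# row key -> {"family:qualifier": value}. The put() helper below is a
# hypothetical stand-in for the shell command, not a real client call.
t1 = {}

def put(table, rowkey, column, value):
    """Record one cell, mirroring `put 't1', rowkey, column, value`."""
    table.setdefault(rowkey, {})[column] = value

put(t1, "0001", "f1:name", "zhangsan")
put(t1, "0001", "f1:age", "30")
put(t1, "0002", "f1:name", "lisi")
put(t1, "0002", "f1:age", "29")

# t1 now holds two rows, each with two cells in family f1.
```

Note that HBase stores every value as raw bytes; the shell shows them as strings, which is why `age` is inserted as the string '30' rather than a number.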
2. Create the Hive table
hive> CREATE EXTERNAL TABLE hbase_t1(id int, name string, age int)
      STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
      WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,f1:name,f1:age")
      TBLPROPERTIES ("hbase.table.name" = "t1");
# "hbase.columns.mapping" = ":key,f1:name,f1:age" pairs the Hive columns with HBase columns
# (id <--> rowkey, name <--> f1:name, age <--> f1:age)
# "hbase.table.name" = "t1" points the external table at the HBase table t1
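The mapping string is positional: the i-th comma-separated entry binds to the i-th Hive column, and the special entry `:key` denotes the HBase row key. A small Python sketch of that pairing rule (the `pair_columns` helper is hypothetical, for illustration only):

```python
def pair_columns(hive_columns, mapping):
    """Pair each Hive column with its hbase.columns.mapping entry by position.

    ':key' denotes the HBase row key; every other entry is 'family:qualifier'.
    """
    entries = mapping.split(",")
    if len(entries) != len(hive_columns):
        # Hive rejects the DDL when the counts differ
        raise ValueError("mapping entry count must match Hive column count")
    return dict(zip(hive_columns, entries))

pairs = pair_columns(["id", "name", "age"], ":key,f1:name,f1:age")
# → {'id': ':key', 'name': 'f1:name', 'age': 'f1:age'}
```

Because the table is declared EXTERNAL, dropping it in Hive removes only the metadata; the underlying HBase table t1 and its data are left untouched.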
3. Query the HBase table data through Hive SQL
hive> select * from hbase_t1;
OK
1	zhangsan	30
2	lisi	29
Time taken: 0.313 seconds, Fetched: 2 row(s)
hive> select count(*) from hbase_t1;
Query ID = root_20200303102934_f5bea28c-3c06-4d43-ac15-be436853189e
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1583248050275_0001, Tracking URL = http://master:8088/proxy/application_1583248050275_0001/
Kill Command = /root/software/hadoop-2.6.5/bin/hadoop job -kill job_1583248050275_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2020-03-03 10:29:49,791 Stage-1 map = 0%, reduce = 0%
2020-03-03 10:30:08,544 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 14.12 sec
2020-03-03 10:30:17,965 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 16.93 sec
MapReduce Total cumulative CPU time: 16 seconds 930 msec
Ended Job = job_1583248050275_0001
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1  Cumulative CPU: 16.93 sec  HDFS Read: 7030  HDFS Write: 2  SUCCESS
Total MapReduce CPU Time Spent: 16 seconds 930 msec
OK
2
Time taken: 44.955 seconds, Fetched: 1 row(s)