
[Database] Setting Up an HBase Pseudo-Distributed Platform (CentOS 6.3)


  After finishing the "Hadoop pseudo-distributed platform" setup described in the previous post, the next step is an HBase pseudo-distributed platform. With a Hadoop environment already in place, setting up HBase is easy.

  I. HBase Installation

  1. Download the latest HBase release, 1.2.3, from the official site. To skip the compile-and-install step, I downloaded the binary package hbase-1.2.3-bin.tar.gz, which only needs to be unpacked. (If this link is slow, switch to one of the other official download mirrors.)

[hadoop@master tar]$ tar -xzf hbase-1.2.3-bin.tar.gz
[hadoop@master tar]$ mv hbase-1.2.3 /usr/local/hadoop/hbase
[hadoop@master tar]$ cd /usr/local/hadoop/hbase/
[hadoop@master hbase]$ ./bin/hbase version
HBase 1.2.3
Source code repository git://kalashnikov.att.net/Users/stack/checkouts/hbase.git revision=bd63744624a26dc3350137b564fe746df7a721a4
Compiled by stack on Mon Aug 29 15:13:42 PDT 2016
From source with checksum 0ca49367ef6c3a680888bbc4f1485d18

If the command above prints output like this, the installation succeeded; next, configure the environment variables.

  2. Configure Environment Variables

Edit ~/.bashrc and append the following to the PATH entry:

:$HADOOP_HOME/hbase/bin

The ~/.bashrc file then contains:

export HADOOP_HOME=/usr/local/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$HADOOP_HOME/hbase/bin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

[hadoop@master hadoop]$ source ~/.bashrc
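The PATH change can be sanity-checked with a small POSIX-shell sketch. The paths are the ones used in this guide, and HBASE_ON_PATH is just an illustrative variable name:

```shell
# Rebuild PATH the same way ~/.bashrc does, then verify that the
# hbase bin directory really ends up on it.
HADOOP_HOME=/usr/local/hadoop
PATH="$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$HADOOP_HOME/hbase/bin"
case ":$PATH:" in
  *":$HADOOP_HOME/hbase/bin:"*) HBASE_ON_PATH=yes ;;
  *)                            HBASE_ON_PATH=no  ;;
esac
echo "hbase bin on PATH: $HBASE_ON_PATH"
```

If this prints "no" after sourcing ~/.bashrc, the PATH line was not applied.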

  II. HBase Standalone Mode

  1. Edit the configuration file hbase/conf/hbase-env.sh

# export JAVA_HOME=/usr/java/jdk1.6.0/   change to:
export JAVA_HOME=/usr/local/java/
# export HBASE_MANAGES_ZK=true   uncomment:
export HBASE_MANAGES_ZK=true
# add the following line (SSH listens on port 322 in this setup):
export HBASE_SSH_OPTS="-p 322"
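These edits can also be applied non-interactively, which is handy when repeating the setup. Below is a sketch using GNU sed against a throwaway sample file; /tmp/hbase-env-sample.sh is a made-up scratch path, so point the commands at hbase/conf/hbase-env.sh for the real thing:

```shell
# Create a sample file containing the two stock lines we want to change.
cat > /tmp/hbase-env-sample.sh <<'EOF'
# export JAVA_HOME=/usr/java/jdk1.6.0/
# export HBASE_MANAGES_ZK=true
EOF

# Rewrite JAVA_HOME, uncomment HBASE_MANAGES_ZK, append the SSH port option.
sed -i \
  -e 's|^# export JAVA_HOME=.*|export JAVA_HOME=/usr/local/java/|' \
  -e 's|^# export HBASE_MANAGES_ZK=.*|export HBASE_MANAGES_ZK=true|' \
  /tmp/hbase-env-sample.sh
echo 'export HBASE_SSH_OPTS="-p 322"' >> /tmp/hbase-env-sample.sh
cat /tmp/hbase-env-sample.sh
```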

  2. Edit the configuration file hbase/conf/hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:/usr/local/hadoop/tmp/hbase/hbase-tmp</value>
  </property>
</configuration>

  3. Start HBase

[hadoop@master hbase]$ start-hbase.sh
starting master, logging to /usr/local/hadoop/hbase/bin/../logs/hbase-hadoop-master-master.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0

jps now shows one additional process, HMaster:

[hadoop@master hbase]$ jps
12178 ResourceManager
11540 NameNode
4277 Jps
11943 SecondaryNameNode
12312 NodeManager
11707 DataNode
3933 HMaster
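This check lends itself to scripting. A sketch follows; the jps_output sample is pasted from the listing above, and in a live check you would use jps_output=$(jps) instead:

```shell
# Decide whether an HMaster process appears in jps-style output
# (each line is "<pid> <ClassName>", so match on the trailing name).
jps_output="12178 ResourceManager
11540 NameNode
11943 SecondaryNameNode
3933 HMaster"
if printf '%s\n' "$jps_output" | grep -q ' HMaster$'; then
  HMASTER_RUNNING=yes
else
  HMASTER_RUNNING=no
fi
echo "HMaster running: $HMASTER_RUNNING"
```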

  4. Using the HBase Shell

[hadoop@master hbase]$ hbase shell
2016-11-07 10:11:02,187 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.3, rbd63744624a26dc3350137b564fe746df7a721a4, Mon Aug 29 15:13:42 PDT 2016
hbase(main):001:0> status
1 active master, 0 backup masters, 1 servers, 0 dead, 2.0000 average load
hbase(main):002:0> exit

Running the HBase shell without HBase started will produce errors.

  5. Stop HBase

[hadoop@master hbase]$ stop-hbase.sh
stopping hbase......................

  III. HBase Pseudo-Distributed Mode

Pseudo-distributed mode differs from standalone mode mainly in the configuration files.

  1. Edit the configuration file hbase/conf/hbase-env.sh

# export JAVA_HOME=/usr/java/jdk1.6.0/   change to:
export JAVA_HOME=/usr/local/java/
# export HBASE_MANAGES_ZK=true   uncomment:
export HBASE_MANAGES_ZK=true
# export HBASE_CLASSPATH=   change to:
export HBASE_CLASSPATH=/usr/local/hadoop/etc/hadoop/
# add the following line (SSH listens on port 322 in this setup):
export HBASE_SSH_OPTS="-p 322"

The ZooKeeper instance bundled with HBase is sufficient here; a standalone ZooKeeper is only worth running for a fully distributed cluster.

  2. Edit the configuration file hbase/conf/hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://10.1.2.108:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>

Note that pointing hbase.rootdir at an HDFS path like this assumes the Hadoop platform is pseudo-distributed, with a single NameNode.
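To double-check which root directory HBase will actually use, hbase.rootdir can be pulled out of the XML with plain sed. A sketch follows; GNU sed is assumed, and /tmp/hbase-site-sample.xml is a scratch copy rather than the real config path:

```shell
# Write the hbase-site.xml from above to a scratch file.
cat > /tmp/hbase-site-sample.xml <<'EOF'
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://10.1.2.108:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
EOF

# Find the <name>hbase.rootdir</name> line, read the next line,
# and strip the <value> tags around it.
ROOTDIR=$(sed -n '/<name>hbase.rootdir<\/name>/{n;s/.*<value>\(.*\)<\/value>.*/\1/p;}' /tmp/hbase-site-sample.xml)
echo "hbase.rootdir = $ROOTDIR"
```

A value still starting with file: would mean the standalone configuration is in effect rather than the HDFS-backed one.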

  3. Start HBase

[hadoop@master hbase]$ start-hbase.sh
localhost: starting zookeeper, logging to /usr/local/hadoop/hbase/bin/../logs/hbase-hadoop-zookeeper-master.out
master running as process 3933. Stop it first.
starting regionserver, logging to /usr/local/hadoop/hbase/bin/../logs/hbase-hadoop-1-regionserver-master.out

jps now shows HMaster and HRegionServer processes (the "master running as process 3933. Stop it first." message above means the HMaster from the standalone run was still up):

[hadoop@master hbase]$ jps
7312 Jps
12178 ResourceManager
11540 NameNode
11943 SecondaryNameNode
12312 NodeManager
11707 DataNode
3933 HMaster
7151 HRegionServer

  4. Using the HBase Shell

[hadoop@master hbase]$ hbase shell
2016-11-07 10:35:05,262 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.3, rbd63744624a26dc3350137b564fe746df7a721a4, Mon Aug 29 15:13:42 PDT 2016

1) View cluster status and version information

hbase(main):001:0> status
1 active master, 0 backup masters, 1 servers, 0 dead, 1.0000 average load
hbase(main):002:0> version
1.2.3, rbd63744624a26dc3350137b564fe746df7a721a4, Mon Aug 29 15:13:42 PDT 2016

2) Create a user table with three column families

hbase(main):003:0> create 'user','user_id','address','info'
0 row(s) in 2.3570 seconds
=> Hbase::Table - user

3) List all tables

hbase(main):005:0> create 'tmp', 't1', 't2'
0 row(s) in 1.2320 seconds
=> Hbase::Table - tmp
hbase(main):006:0> list
TABLE
tmp
user
2 row(s) in 0.0100 seconds
=> ["tmp", "user"]
hbase(main):007:0>

4) Describe a table

hbase(main):008:0> describe 'user'
Table user is ENABLED
user
COLUMN FAMILIES DESCRIPTION
{NAME => 'address', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
{NAME => 'info', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
{NAME => 'user_id', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
3 row(s) in 0.2060 seconds
hbase(main):009:0>

5) Drop a table

hbase(main):010:0> disable 'tmp'
0 row(s) in 2.2580 seconds
hbase(main):011:0> drop 'tmp'
0 row(s) in 1.2560 seconds
hbase(main):012:0>
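Since drop always requires a prior disable, the two-step sequence is easy to generate for any table. A sketch follows; TABLE and DROP_SCRIPT are illustrative names, and the final pipe into hbase shell is shown only as a comment because it needs a running cluster:

```shell
# Build the disable-then-drop command pair for one table.
TABLE=tmp
DROP_SCRIPT=$(printf "disable '%s'\ndrop '%s'\n" "$TABLE" "$TABLE")
echo "$DROP_SCRIPT"
# To execute it against a running cluster:
#   echo "$DROP_SCRIPT" | hbase shell
```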

  5. Stop HBase

[hadoop@master hbase]$ stop-hbase.sh
stopping hbase......................
localhost: no zookeeper to stop because no pid file /tmp/hbase-hadoop-zookeeper.pid

The shutdown order for the whole stack is: stop HBase, then stop YARN, then stop HDFS.

  6. Web UI

The HBase pages can be reached from the HDFS web page at http://10.1.2.108:50070

or directly at http://10.1.2.108:60010/master.jsp

Original article; when reposting, please include the original address: http://www.cnblogs.com/lxmhhy/p/6026047.html

For discussion and knowledge exchange, contact QQ: 1130010617. Thank you.