HBase Notes

2019-08-30 19:35

HBase Installation and Configuration

May 16, 2015, 10:44

[1] Install the JDK (user: root)

Create the directory /usr/share/java_1.6 and upload jdk-6u45-linux-x64.bin into it. Then run:

cd /usr/share/java_1.6

chmod +x jdk-6u45-linux-x64.bin

./jdk-6u45-linux-x64.bin
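Before the installer can be run it must be executable; the chmod step above can be verified with test -x. A minimal sketch, using a placeholder file under /tmp instead of the real jdk-6u45 binary:

```shell
# Placeholder standing in for jdk-6u45-linux-x64.bin
f=/tmp/jdk-installer-demo.bin
touch "$f"
chmod +x "$f"                              # same step as in the install
test -x "$f" && echo "installer is executable"
```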

[2] Add the Java environment variables (user: etl)

Edit /home/etl/.bash_profile and add the following three lines:

export JAVA_HOME=/usr/share/java_1.6/jdk1.6.0_45
export PATH=$JAVA_HOME/bin:$PATH

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
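Appending these exports by hand is easy to do twice. A hedged sketch of an idempotent variant, where /tmp/demo_bash_profile stands in for /home/etl/.bash_profile so it can run anywhere:

```shell
# Append each export to the profile only if it is not already there.
profile=/tmp/demo_bash_profile              # stand-in for /home/etl/.bash_profile
rm -f "$profile" && touch "$profile"
add_line() {
    # -x matches the whole line, -F treats the $ signs as literal text
    grep -qxF "$1" "$profile" || echo "$1" >> "$profile"
}
add_line 'export JAVA_HOME=/usr/share/java_1.6/jdk1.6.0_45'
add_line 'export PATH=$JAVA_HOME/bin:$PATH'
add_line 'export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar'
add_line 'export JAVA_HOME=/usr/share/java_1.6/jdk1.6.0_45'   # duplicate, skipped
wc -l < "$profile"                          # 3 lines, not 4
```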

[3] Install HBase (user: etl)

Upload hbase-0.98.7-hadoop2-bin.tar.gz to /home/etl/_jyy/. Then run:

cd /home/etl/_jyy/

tar xfz hbase-0.98.7-hadoop2-bin.tar.gz

[4] Configure HBase (user: etl; the two data directories below must be created manually)

Edit /home/etl/_jyy/hbase-0.98.7-hadoop2/conf/hbase-site.xml and set the following properties:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>/home/etl/_jyy/BIDATA/hadoop/hbase_data/hbase/</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/etl/_jyy/BIDATA/hadoop/hbase_data/zookeeper/</value>
  </property>
</configuration>
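Step [4] notes that the two data directories must be created by hand; mkdir -p creates the whole path and is safe to re-run. A sketch, with /tmp/hbase_demo standing in for /home/etl/_jyy/BIDATA so it is self-contained:

```shell
# Create both HBase data directories in one go (parents included).
BASE=/tmp/hbase_demo                        # stand-in for /home/etl/_jyy/BIDATA
mkdir -p "$BASE/hadoop/hbase_data/hbase" \
         "$BASE/hadoop/hbase_data/zookeeper"
ls "$BASE/hadoop/hbase_data"
```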

Edit /home/etl/.bash_profile and add an hbase alias:

alias hbase=\

[5] Start HBase (user: etl)

cd /home/etl/_jyy/hbase-0.98.7-hadoop2/bin/

./start-hbase.sh

[6] Stop HBase (user: etl)

Set the following in /hbase/hbase-0.98.7-hadoop2/conf/hbase-env.sh, and create the corresponding directory:

export HBASE_PID_DIR=/hbase/hbase-0.98.7-hadoop2/pids

cd /hbase/hbase-0.98.7-hadoop2/bin/

./stop-hbase.sh
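stop-hbase.sh locates the master through a pid file under HBASE_PID_DIR; if that directory is missing, the pid file is never written and the stop script cannot find the process. A sketch of the pid-file check, using the current shell's pid and a /tmp directory instead of a real HMaster:

```shell
# Stand-in for /hbase/hbase-0.98.7-hadoop2/pids
PID_DIR=/tmp/demo_pids
mkdir -p "$PID_DIR"
echo $$ > "$PID_DIR/hbase-demo-master.pid"   # this shell plays the HMaster
# kill -0 only tests that the pid is alive; it sends no signal
kill -0 "$(cat "$PID_DIR/hbase-demo-master.pid")" && echo "process alive"
```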

HBase Table Operations (DDL)

May 16, 2015, 10:51

[The Oracle model]

Model name: DM_集团拍照TOP35客户 (TM_CORP_SNMBR_TOP35_M)
Erwin: 广东移动市公司数据集市项目-物理模型-广州.ER1
Subject Area:

====================================================================================================
No.| Column EN      | Data Type   |PK |NULL    | Column CN / Meaning                      |Notes
====================================================================================================
 1 |STAT_MO         |NUMBER(10)   |Yes|NOT NULL|统计月份 (statistics month)               |
 2 |LOC_LVL1_CD     |VARCHAR2(20) |Yes|NOT NULL|归属层次1 (owning level 1)                |
 3 |DATA_TYP_CD     |NUMBER(10)   |Yes|NOT NULL|数据类型编码 (data type code)             |
 4 |SNAP_USR_CNT    |NUMBER(14)   |No |NULL    |拍照用户数 (snapshot user count)          |metric value
 5 |RETN_USR_CNT    |NUMBER(14)   |No |NULL    |保有客户数 (retained customer count)      |
 6 |SNAP_ARPU       |NUMBER(16,4) |No |NULL    |拍照ARPU (snapshot ARPU)                  |
 7 |RETN_ARPU       |NUMBER(16,4) |No |NULL    |保有ARPU (retained ARPU)                  |
 8 |G4_PNTRN_RT     |NUMBER(14,4) |No |NULL    |4G渗透率 (4G penetration rate)            |
 9 |BIND_PNTRN_RT   |NUMBER(14,4) |No |NULL    |捆绑渗透率 (bundle penetration rate)      |
10 |SPAY_PNTRN_RT   |NUMBER(14,4) |No |NULL    |统付渗透率 (unified-pay penetration rate) |
====================================================================================================
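The model's primary key is three columns (STAT_MO, LOC_LVL1_CD, DATA_TYP_CD), so an HBase rowkey built from STAT_MO alone would collide across rows. A hedged sketch of deriving a composite rowkey from the CSV rows used later in these notes; the '|' separator and the output file name are illustrative assumptions, not from the original:

```shell
# Prefix each CSV row with STAT_MO|LOC_LVL1_CD|DATA_TYP_CD as a composite key.
printf '201410,GZ01,1,100,200,1.1,1.1,1.1,1.1,1.1\n201411,GZ02,1,100,200,1.1,1.1,1.1,1.1,1.1\n' |
awk -F, '{ printf "%s|%s|%s,%s\n", $1, $2, $3, $0 }' > /tmp/with_rowkey.txt
cat /tmp/with_rowkey.txt
```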

[Test environment] Host: 10.200.1.5, user: hadoop, password: 123456

Run steps [1]-[6] below from the HBase shell.

[1] Enter the shell: hbase shell

[hadoop@bogon ~]$ hbase shell

2014-11-17 10:39:16,520 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available

HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell

Version 0.98.3-hadoop2, rd5e65a9144e315bb0a964e7730871af32f5018d5, Sat May 31 19:56:09 PDT 2014

hbase(main):001:0>

[2] Disable the table: disable 'TM_CORP_SNMBR_TOP35_M'

hbase(main):001:0> disable 'TM_CORP_SNMBR_TOP35_M'

2014-11-17 10:59:04,516 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

0 row(s) in 2.2420 seconds

[3] Drop the table: drop 'TM_CORP_SNMBR_TOP35_M'

Dropping is a two-step operation: first disable the table, then drop it. For example, to drop table t1: disable 't1', then drop 't1'.

hbase(main):001:0> drop 'TM_CORP_SNMBR_TOP35_M'

2014-11-17 10:59:54,311 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

0 row(s) in 1.2140 seconds

[4] Create the table: create 'TM_CORP_SNMBR_TOP35_M','CF'

hbase(main):001:0> create 'TM_CORP_SNMBR_TOP35_M','CF'

2014-11-17 11:01:22,580 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

0 row(s) in 1.0910 seconds

=> Hbase::Table - TM_CORP_SNMBR_TOP35_M

create 'test_hadoop','m_id','address','info'

[5] Delete the column family m_id:

First disable the table (note that deleting a column family discards any data already stored in that family):

hbase(main):030:0> disable 'member'

hbase(main):033:0> is_enabled 'member'

hbase(main):034:0> alter 'member',{NAME=>'m_id',METHOD=>'delete'}

[6] Check whether the table exists: list 'TM_CORP_SNMBR_TOP35_M'

hbase(main):002:0> list 'TM_CORP_SNMBR_TOP35_M'

TABLE

TM_CORP_SNMBR_TOP35_M
1 row(s) in 0.1010 seconds

=> ["TM_CORP_SNMBR_TOP35_M"]
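The disable/alter sequence from step [5] can also be scripted rather than typed interactively: hbase shell accepts a file of commands. A sketch that writes such a script; the final enable is an addition of mine so the table is usable again after the alter, and the script is only printed here since running it needs a live HBase:

```shell
# Write the step-[5] statements to a script file for `hbase shell`.
cat > /tmp/drop_cf.hbase <<'EOF'
disable 'member'
alter 'member', {NAME => 'm_id', METHOD => 'delete'}
enable 'member'
EOF
# hbase shell /tmp/drop_cf.hbase   # requires a running HBase; not run here
cat /tmp/drop_cf.hbase
```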

Exit the shell and run steps [7]-[9] from the command line:

[7] Generate the test data file, TM_CORP_SNMBR_TOP35_M.txt:

201410,GZ01,1,100,200,1.1,1.1,1.1,1.1,1.1
201411,GZ02,1,100,200,1.1,1.1,1.1,1.1,1.1
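The test-data file described above can be produced with a here-doc; wc -l confirms the row count. A sketch, writing to /tmp instead of the real input path:

```shell
# Recreate the two sample rows of TM_CORP_SNMBR_TOP35_M.txt.
cat > /tmp/TM_CORP_SNMBR_TOP35_M.txt <<'EOF'
201410,GZ01,1,100,200,1.1,1.1,1.1,1.1,1.1
201411,GZ02,1,100,200,1.1,1.1,1.1,1.1,1.1
EOF
wc -l < /tmp/TM_CORP_SNMBR_TOP35_M.txt   # 2
```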

Insert a few records with put:

put 'test_hadoop','scutshuxue','info:age','24'
put 'test_hadoop','scutshuxue','info:birthday','1987-06-17'
put 'test_hadoop','scutshuxue','info:company','alibaba'

[8] Remove the contents of the output directory: rm -rf /BIDATA/hadoop/jyy/output

[9] Generate the HFiles:

[hadoop@bogon ~]$ hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=CF:STAT_MO,CF:LOC_LVL1_CD,CF:DATA_TYP_CD,CF:SNAP_USR_CNT,CF:RETN_USR_CNT,CF:SNAP_ARPU,CF:RETN_ARPU,CF:G4_PNTRN_RT,CF:BIND_PNTRN_RT,CF:SPAY_PNTRN_RT -Dimporttsv.rowkey.columns=CF:STAT_MO '-Dimporttsv.separator=,' -Dimporttsv.mapper.class=org.apache.hadoop.hbase.mapreduce.TsvImporterTextMapper -Dimporttsv.bulk.output=/BIDATA/hadoop/jyy/output TM_CORP_SNMBR_TOP35_M /BIDATA/hadoop/jyy/input/TM_CORP_SNMBR_TOP35_M.txt

2014-11-17 11:30:48,411 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-11-17 11:30:48,514 INFO [main] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2014-11-17 11:30:48,514 INFO [main] zookeeper.ZooKeeper: Client environment:host.name=bogon

2014-11-17 11:30:48,514 INFO [main] zookeeper.ZooKeeper: Client environment:java.version=1.7.0_60

2014-11-17 11:30:48,514 INFO [main] zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation

2014-11-17 11:30:48,514 INFO [main] zookeeper.ZooKeeper: Client environment:java.home=/BIDATA/hadoop/jdk7/jre

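ImportTsv with -Dimporttsv.bulk.output only writes HFiles into the output directory; loading them into the table is a separate step the notes do not show. In HBase 0.98 that is typically the completebulkload tool (this command is my addition, and it is only printed here since running it needs a live cluster):

```shell
# Build the bulk-load command for the HFiles produced in step [9].
CMD='hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /BIDATA/hadoop/jyy/output TM_CORP_SNMBR_TOP35_M'
echo "$CMD" > /tmp/bulkload_cmd.txt
cat /tmp/bulkload_cmd.txt   # printed rather than executed
```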

