CDH Cluster Deployment

Tags: cdh

  • Official documentation notes
    CDH 5.12.1 official documentation link
    Installation docs for Spark2, Kafka, and similar add-ons

    • Linux
      This build uses CentOS 7.2. The official docs state "RHEL / CentOS / OEL 7.0 is not supported.", so CentOS 7.0 cannot run a 5.12.1 install.
    • JDK
      Only 64 bit JDKs from Oracle are supported. Oracle JDK 7 is supported across all versions of Cloudera Manager 5 and CDH 5. Oracle JDK 8 is supported in C5.3.x and higher. In short: use an Oracle JDK.
    • database
      Use UTF8 encoding for all custom databases.
      Cloudera Manager installation fails if GTID-based replication is enabled in MySQL, so leave GTID replication off.
      MySQL 5.6 and 5.7
    • Disk space
      5 GB on the partition hosting /var.
      500 MB on the partition hosting /usr.
  • Deployment

    • Provision Alibaba Cloud instances

    • Base cluster configuration
      Note: run every step below on every machine in the cluster.

      1 Disable the firewall
      Run: service iptables stop
      Verify: service iptables status

      2 Disable the firewall's autostart
      Run: chkconfig iptables off
      Verify: chkconfig --list | grep iptables
      (On CentOS 7 the default firewall is firewalld, so use systemctl stop firewalld and systemctl disable firewalld instead.)

      Disable SELinux:
      vi /etc/selinux/config
      SELINUX=disabled

      Flush the firewall rules:

      iptables -L   # check whether any rules remain
      iptables -F   # flush them

      3 Set the hostname
      Run: (1) hostname hadoopcm-01
      (2)vi /etc/sysconfig/network
      NETWORKING=yes
      NETWORKING_IPV6=yes
      HOSTNAME=hadoopcm-01

      4 Bind IPs to hostnames (critical, on every machine)
      Run: (1) vi /etc/hosts

        		127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
        		::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
      
        		172.16.101.54   hadoopcm-01.xxx.com hadoopcm-01
        		172.16.101.55   hadoopnn-01.xxx.com hadoopnn-01
        		172.16.101.56   hadoopnn-02.xxx.com hadoopnn-02
        		172.16.101.58   hadoopdn-01.xxx.com hadoopdn-01
        		172.16.101.59   hadoopdn-02.xxx.com hadoopdn-02
        		172.16.101.60   hadoopdn-03.xxx.com hadoopdn-03
      

      If your company's DNS resolves node names, also include the FQDNs in the second column above.
      Verify: ping hadoopcm-01

      Sync the file to every machine in the cluster: scp /etc/hosts root@hadoopnn-01:/etc/hosts

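Before pushing /etc/hosts around, it helps to confirm every expected hostname actually appears in the file. A minimal sketch; the `check_hosts` helper and the host list are illustrative, not part of the original procedure:

```shell
# check_hosts: succeed only if every given hostname appears as a whole word
# in the supplied hosts file; print the first missing name otherwise.
check_hosts() {
  hosts_file=$1; shift
  for h in "$@"; do
    grep -qw "$h" "$hosts_file" || { echo "missing: $h"; return 1; }
  done
  echo "all hosts present"
}

# Example: validate the six cluster entries before scp-ing the file out.
check_hosts /etc/hosts hadoopcm-01 hadoopnn-01 hadoopnn-02 \
  hadoopdn-01 hadoopdn-02 hadoopdn-03 || echo "fix /etc/hosts first"
```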

      5 Install the Oracle JDK (do not use the distribution's OpenJDK)
      (1) Download it and extract into the target directory
      [root@hadoopcm-01 tmp]# tar -xzvf jdk-7u79-linux-x64.gz -C /usr/java/
      [root@hadoop001 java]# chown -R root:root jdk1.8.0_144

      (2) vi /etc/profile and append the following (make sure JAVA_HOME matches the directory you actually extracted):

      export JAVA_HOME=/usr/java/jdk1.8.0_45
      export PATH=.:$JAVA_HOME/bin:$PATH
      

      (3)source /etc/profile
      Verify: java -version

      which java

      6 Create a hadoop user with password admin (this touches /etc/passwd, /etc/shadow and /etc/group). This step is optional: you can install as root, and in the finished CDH cluster each service process runs under its own dedicated user anyway.
      Requirement: root, or a user with passwordless sudo.

        6.1 No LDAP and you have root: easy.
        6.2 You get the machines with root and install as root; a month later they join the company LDAP: the install already went fine, but you now need a sudo user for management.
        6.3 LDAP is never added and root stays available, but you want to install and manage as a separate user: that user must have sudo.
        6.4 The machines come with LDAP from day one: no problem, just get a sudo user.
      
        [root@hadoopcm-01 ~]# adduser hadoop
        [root@hadoopcm-01 ~]# passwd hadoop
        Changing password for user hadoop.
        New password: 
        BAD PASSWORD: it is too short
        BAD PASSWORD: is too simple
        Retype new password: 
        passwd: all authentication tokens updated successfully.
      
      
        [root@hadoopcm-01 etc]# vi /etc/sudoers
      
        ## Allow root to run any commands anywhere
        root    ALL=(ALL)       ALL
      
        hadoop  ALL=(root)	NOPASSWD:ALL
      
        hadoop  ALL=(ALL)      ALL
      
        jpwu	ALL=(root)	NOPASSWD:ALL
        jpwu    ALL=(ALL)       NOPASSWD:ALL
      
        ### Verify sudo privileges
        [root@hadoopcm-01 etc]# sudo su hadoop
        [hadoop@hadoopcm-01 ~]$ sudo ls -l /root
        total 4
        -rw------- 1 root root 8 Apr  2 09:45 dead.letter
      

      7 Check Python:
      CDH 4.x: the system default was Python 2.6.6; an upgrade to 2.7.5 broke HDFS HA and cost two months to dig out of.
      CDH 5.x: Python 2.6.6 or 2.7 both work.
      # Python 2.6.6 is the recommended version
      python --version

      CentOS 6.x ships Python 2.6.x
      CentOS 7.x ships Python 2.7.x

      Note: even if the cluster itself runs 2.7.x, your own Python services may still need, say, 3.5.1 installed separately.

      8 Time zone + clock synchronization
      https://www.cloudera.com/documentation/enterprise/5-10-x/topics/install_cdh_enable_ntp.html

        [root@hadoopcm-01 cdh5.7.0]# grep ZONE /etc/sysconfig/clock
        ZONE="Asia/Shanghai"

        Ops rule: identical time zones + synchronized clocks

        ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
        yum install -y ntpdate

        Configure cluster time synchronization through the ntp service.

      The restrict rule below covers the whole 172.16.101.0/24 subnet (hosts .1 to .255; our nodes sit at .54 to .90).

      NTP server (master node) configuration:
      cp /etc/ntp.conf /etc/ntp.conf.bak
      cp /etc/sysconfig/ntpd /etc/sysconfig/ntpd.bak
      echo "restrict 172.16.101.0 mask 255.255.255.0 nomodify notrap" >> /etc/ntp.conf
      echo "SYNC_HWCLOCK=yes" >> /etc/sysconfig/ntpd

      service ntpd restart

      NTP client configuration:
      On every other node add a cron job (crontab -e) with the following entry:
      */30 * * * * /usr/sbin/ntpdate 172.16.101.54

      [root@hadoop002 ~]# /usr/sbin/ntpdate 192.168.1.131
      16 Sep 11:44:06 ntpdate[5027]: no server suitable for synchronization found
      If you see this error, the server's firewall is still running: stop it and flush the rules.

      9 Disable transparent huge pages
      echo never > /sys/kernel/mm/transparent_hugepage/defrag
      echo never > /sys/kernel/mm/transparent_hugepage/enabled

      echo 'echo never > /sys/kernel/mm/transparent_hugepage/defrag' >> /etc/rc.local
      echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.local
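A quick way to script the verification: the kernel file shows the active choice in brackets, e.g. `always madvise [never]`. A small sketch (the `thp_disabled` helper name is mine, not from the original):

```shell
# thp_disabled: succeed if the given THP setting file shows [never] active.
# On a real host, pass /sys/kernel/mm/transparent_hugepage/enabled.
thp_disabled() {
  grep -q '\[never\]' "$1"
}

thp_disabled /sys/kernel/mm/transparent_hugepage/enabled \
  && echo "THP disabled" || echo "THP still enabled"
```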

      10 swap (disk space used as overflow memory)
      echo 'vm.swappiness = 10' >> /etc/sysctl.conf
      sysctl -p   # apply

      vm.swappiness ranges from 0 to 100:
      0 does not disable swap; it just makes the kernel maximally reluctant to use it.
      100 makes the kernel swap most aggressively.

      Latency-sensitive clusters: set swappiness=0, accept that a job may die under memory pressure, then quickly add memory or raise its parameters and restart the job.
      Latency-tolerant clusters: set swappiness=10-30; jobs must not die, they are simply allowed to run slower.

      Example with 4 GB of RAM and 8 GB of swap:
      swappiness=0: RAM fills to roughly 3.5-3.9 GB before swap is touched.
      swappiness=30: with about 3 GB of RAM in use, around 2 GB may already sit in swap.
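Appending to /etc/sysctl.conf blindly adds a duplicate line every time the step is rerun. An idempotent variant (the `set_sysctl` helper is an illustrative sketch, not from the original) updates the key in place when it already exists:

```shell
# set_sysctl: write "key = value" into a sysctl-style file, replacing any
# existing assignment for that key instead of appending a duplicate.
set_sysctl() {
  key=$1 val=$2 file=$3
  if grep -q "^$key" "$file" 2>/dev/null; then
    sed -i "s|^$key.*|$key = $val|" "$file"
  else
    echo "$key = $val" >> "$file"
  fi
}

# Usage on a real host (as root):
#   set_sysctl vm.swappiness 10 /etc/sysctl.conf && sysctl -p
```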

    • MySQL installation
      1. Extract and create directories
      [root@hadoop39 local]# tar xzvf mysql-5.7.11-linux-glibc2.5-x86_64.tar.gz
      [root@hadoop39 local]# mv mysql-5.7.11-linux-glibc2.5-x86_64 mysql

      [root@hadoop39 local]# mkdir mysql/arch mysql/data mysql/tmp

      2. Create my.cnf (full file below)
      [root@hadoop39 local]# vi /etc/my.cnf

      [client]
      port            = 3306
      socket          = /usr/local/mysql/data/mysql.sock
      default-character-set=utf8mb4
      
      [mysqld]
      port            = 3306
      socket          = /usr/local/mysql/data/mysql.sock
      
      
      skip-slave-start
      
      
      skip-external-locking
      key_buffer_size = 256M
      sort_buffer_size = 2M
      read_buffer_size = 2M
      read_rnd_buffer_size = 4M
      query_cache_size= 32M
      max_allowed_packet = 16M
      myisam_sort_buffer_size=128M
      tmp_table_size=32M
      
      table_open_cache = 512
      thread_cache_size = 8
      wait_timeout = 86400
      interactive_timeout = 86400
      max_connections = 600
      
      # Try number of CPU's*2 for thread_concurrency
      #thread_concurrency = 32 
      
      #isolation level and default engine 
      default-storage-engine = INNODB
      transaction-isolation = READ-COMMITTED
      
      server-id  = 1739
      basedir     = /usr/local/mysql
      datadir     = /usr/local/mysql/data
      pid-file     = /usr/local/mysql/data/hostname.pid
      
      #open performance schema
      log-warnings
      sysdate-is-now
      
      binlog_format = ROW
      log_bin_trust_function_creators=1
      log-error  = /usr/local/mysql/data/hostname.err
      log-bin = /usr/local/mysql/arch/mysql-bin
      expire_logs_days = 7
      
      innodb_write_io_threads=16
      
      relay-log  = /usr/local/mysql/relay_log/relay-log
      relay-log-index = /usr/local/mysql/relay_log/relay-log.index
      relay_log_info_file= /usr/local/mysql/relay_log/relay-log.info
      
      #need to sync tables
      replicate-wild-do-table=omsprd.%
      replicate_wild_do_table=wmsb01.%
      replicate_wild_do_table=wmsb02.%
      replicate_wild_do_table=wmsb03.%
      replicate_wild_do_table=wmsb04.%
      replicate_wild_do_table=wmsb05.%
      replicate_wild_do_table=wmsb06.%
      replicate_wild_do_table=wmsb07.%
      replicate_wild_do_table=wmsb08.%
      replicate_wild_do_table=wmsb09.%
      replicate_wild_do_table=wmsb10.%
      replicate_wild_do_table=wmsb11.%
      replicate_wild_do_table=wmsb27.%
      replicate_wild_do_table=wmsb31.%
      replicate_wild_do_table=wmsb32.%
      replicate_wild_do_table=wmsb33.%
      replicate_wild_do_table=wmsb34.%
      replicate_wild_do_table=wmsb35.%
      
      
      log_slave_updates=1
      gtid_mode=OFF
      enforce_gtid_consistency=OFF
      
      
      # slave
      slave-parallel-type=LOGICAL_CLOCK
      slave-parallel-workers=4
      master_info_repository=TABLE
      relay_log_info_repository=TABLE
      relay_log_recovery=ON
      
      
      
      
      #other logs
      #general_log =1
      #general_log_file  = /usr/local/mysql/data/general_log.err
      #slow_query_log=1
      #slow_query_log_file=/usr/local/mysql/data/slow_log.err
      
      #for replication slave
      sync_binlog = 500
      
      
      #for innodb options 
      innodb_data_home_dir = /usr/local/mysql/data/
      innodb_data_file_path = ibdata1:1G;ibdata2:1G:autoextend
      
      innodb_log_group_home_dir = /usr/local/mysql/arch
      innodb_log_files_in_group = 4
      innodb_log_file_size = 1G
      innodb_log_buffer_size = 200M
      
      innodb_buffer_pool_size = 8G
      #innodb_additional_mem_pool_size = 50M #deprecated in 5.6
      tmpdir = /usr/local/mysql/tmp
      
      innodb_lock_wait_timeout = 1000
      #innodb_thread_concurrency = 0
      innodb_flush_log_at_trx_commit = 2
      
      innodb_locks_unsafe_for_binlog=1
      
      #innodb io features: add for mysql5.5.8
      performance_schema
      innodb_read_io_threads=4
      innodb-write-io-threads=4
      innodb-io-capacity=200
      #purge threads change default(0) to 1 for purge
      innodb_purge_threads=1
      innodb_use_native_aio=on
      
      #case-sensitive file names and separate tablespace
      innodb_file_per_table = 1
      lower_case_table_names=1
      
      [mysqldump]
      quick
      max_allowed_packet = 128M
      
      [mysql]
      no-auto-rehash
      default-character-set=utf8mb4
      
      [mysqlhotcopy]
      interactive-timeout
      
      [myisamchk]
      key_buffer_size = 256M
      sort_buffer_size = 256M
      read_buffer = 2M
      write_buffer = 2M
      

      3. Create the group and user
      [root@hadoop39 local]# groupadd -g 101 dba
      [root@hadoop39 local]# useradd -u 514 -g dba -G root -d /usr/local/mysql mysqladmin
      [root@hadoop39 local]# id mysqladmin
      uid=514(mysqladmin) gid=101(dba) groups=101(dba),0(root)

      ## Usually there is no need to set a password for mysqladmin; just sudo to it from root or an LDAP user
      #[root@hadoop39 local]# passwd mysqladmin
      Changing password for user mysqladmin.
      New UNIX password:
      BAD PASSWORD: it is too simplistic/systematic
      Retype new UNIX password:
      passwd: all authentication tokens updated successfully.

      ## if user mysqladmin already exists, run the following usermod instead
      #[root@hadoop39 local]# usermod -u 514 -g dba -G root -d /usr/local/mysql mysqladmin

      4. Copy the shell-profile skeleton files into mysqladmin's home directory, ready for the per-user environment configured in the next step
      [root@hadoop39 local]# cp /etc/skel/.* /usr/local/mysql ###important

      5. Configure the environment variables
      [root@hadoop39 local]# vi mysql/.bash_profile

      # .bash_profile
      # Get the aliases and functions
      if [ -f ~/.bashrc ]; then
        . ~/.bashrc
      fi

      # User specific environment and startup programs
      export MYSQL_BASE=/usr/local/mysql
      export PATH=${MYSQL_BASE}/bin:$PATH

      unset USERNAME

      #stty erase ^H
      # set umask to 022
      umask 022
      PS1=`uname -n`":"'$USER'":"'$PWD'":>"; export PS1

      ##end

      6. Fix ownership and permissions, then switch to user mysqladmin to install
      [root@hadoop39 local]# chown mysqladmin:dba /etc/my.cnf
      [root@hadoop39 local]# chmod 640 /etc/my.cnf

      [root@hadoop39 local]# chown -R mysqladmin:dba /usr/local/mysql
      [root@hadoop39 local]# chmod -R 755 /usr/local/mysql

      7. Configure the service and enable start on boot
      [root@hadoop39 local]# cd /usr/local/mysql
      # copy the service script into init.d, renamed to mysql
      [root@hadoop39 mysql]# cp support-files/mysql.server /etc/rc.d/init.d/mysql
      # make it executable
      [root@hadoop39 mysql]# chmod +x /etc/rc.d/init.d/mysql
      # remove any stale service registration
      [root@hadoop39 mysql]# chkconfig --del mysql
      # register the service
      [root@hadoop39 mysql]# chkconfig --add mysql
      [root@hadoop39 mysql]# chkconfig --level 345 mysql on
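To script the verification, one can parse a `chkconfig --list` output line and require runlevels 3 through 5 to be on (the `levels_on` helper name is mine, a sketch only):

```shell
# levels_on: given one line of `chkconfig --list` output, succeed only if
# runlevels 3, 4 and 5 are all "on".
levels_on() {
  line=$1
  for lvl in 3 4 5; do
    case "$line" in *"$lvl:on"*) ;; *) return 1 ;; esac
  done
}

# Example against the expected post-install state:
levels_on "mysql 0:off 1:off 2:on 3:on 4:on 5:on 6:off" \
  && echo "mysql autostart OK"
```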

      8. Install libaio and initialize the MySQL data directory
      [root@hadoop39 mysql]# yum -y install libaio
      [root@hadoop39 mysql]# sudo su - mysqladmin

      hadoop39.jiuye:mysqladmin:/usr/local/mysql:> bin/mysqld \
      --defaults-file=/etc/my.cnf \
      --user=mysqladmin \
      --basedir=/usr/local/mysql/ \
      --datadir=/usr/local/mysql/data/ \
      --initialize

      If you initialize with --initialize-insecure instead, root@localhost is created with an empty password; with --initialize it gets a random password that is written into the log-error file.
      (In 5.6 the password went into ~/.mysql_secret, which was easier to overlook.)

      9. Read the temporary password
      hadoop39.jiuye:mysqladmin:/usr/local/mysql/data:>cat hostname.err |grep password
      2017-07-22T02:15:29.439671Z 1 [Note] A temporary password is generated for root@localhost: kFCqrXeh2y(0
      hadoop39.jiuye:mysqladmin:/usr/local/mysql/data:>

      10. Start MySQL
      /usr/local/mysql/bin/mysqld_safe --defaults-file=/etc/my.cnf &

      11. Log in and change the root password
      hadoop39.jiuye:mysqladmin:/usr/local/mysql/data:>mysql -uroot -p'kFCqrXeh2y(0'
      mysql: [Warning] Using a password on the command line interface can be insecure.
      Welcome to the MySQL monitor. Commands end with ; or \g.
      Your MySQL connection id is 2
      Server version: 5.7.11-log

      Copyright 2000, 2016, Oracle and/or its affiliates. All rights reserved.

      Oracle is a registered trademark of Oracle Corporation and/or its
      affiliates. Other names may be trademarks of their respective
      owners.

      Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

      mysql> alter user root@localhost identified by 'syncdb123!';
      Query OK, 0 rows affected (0.05 sec)

      mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'syncdb123!';
      Query OK, 0 rows affected, 1 warning (0.02 sec)

      mysql> flush privileges;
      Query OK, 0 rows affected (0.00 sec)

      mysql> exit;
      Bye

      12. Restart
      hadoop39.jiuye:mysqladmin:/usr/local/mysql:> service mysql restart

      hadoop39.jiuye:mysqladmin:/usr/local/mysql/data:>mysql -uroot -psyncdb123!
      mysql: [Warning] Using a password on the command line interface can be insecure.
      Welcome to the MySQL monitor. Commands end with ; or \g.
      Your MySQL connection id is 2
      Server version: 5.7.11-log MySQL Community Server (GPL)

      Copyright 2000, 2016, Oracle and/or its affiliates. All rights reserved.

      Oracle is a registered trademark of Oracle Corporation and/or its
      affiliates. Other names may be trademarks of their respective
      owners.

      Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

      mysql>

    • Configure httpd (rpm + parcels)
      1. Install and start the httpd service
      [root@hadoop-01 ~]# rpm -qa | grep httpd
      [root@hadoop-01 ~]# yum install -y httpd
      [root@hadoop-01 ~]# chkconfig --list | grep httpd
      httpd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
      [root@hadoop-01 ~]# chkconfig httpd on
      [root@hadoop-01 ~]# chkconfig --list | grep httpd
      httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
      [root@hadoop-01 ~]# service httpd start

      2. Create the parcels directory
      [root@hadoop-01 rpminstall]# cd /var/www/html
      [root@hadoop-01 html]# mkdir parcels
      [root@hadoop-01 html]# cd parcels

      3. Download the parcel files
      From http://archive.cloudera.com/cdh5/parcels/5.7.0/, download these three files:
      CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel, CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel.sha1, manifest.json
      (with good connectivity, wget them directly; otherwise download them to your desktop and upload with rz)
      into /var/www/html/parcels/, then rename CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel.sha1 to drop the trailing "1";
      otherwise CDH keeps treating the parcel as an unfinished download during installation.
      [root@hadoop-01 parcels]# wget http://archive.cloudera.com/cdh5/parcels/5.7.0/CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel
      [root@hadoop-01 parcels]# wget http://archive.cloudera.com/cdh5/parcels/5.7.0/CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel.sha1
      [root@hadoop-01 parcels]# wget http://archive.cloudera.com/cdh5/parcels/5.7.0/manifest.json
      [root@hadoop-01 parcels]# ll
      total 1230064
      -rw-r--r-- 1 root root 1445356350 Nov 16 21:14 CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel
      -rw-r--r-- 1 root root         41 Sep 22 04:25 CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel.sha1
      -rw-r--r-- 1 root root      56892 Sep 22 04:27 manifest.json
      [root@hadoop-01 parcels]# mv CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel.sha1 CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel.sha
      Verify the download is not corrupted:
      [root@sht-sgmhadoopcm-01 parcels]# sha1sum CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel
      b6d4bafacd1cfad6a9e1c8f951929c616ca02d8f  CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel
      [root@sht-sgmhadoopcm-01 parcels]# cat CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel.sha
      b6d4bafacd1cfad6a9e1c8f951929c616ca02d8f
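The two manual commands above can be collapsed into one check. A sketch assuming the `.sha` sidecar file holds the bare checksum, as in the output above (the `verify_parcel` helper name is mine):

```shell
# verify_parcel: compare a parcel's sha1sum against its sidecar .sha file.
verify_parcel() {
  parcel=$1
  expected=$(awk '{print $1}' "${parcel}.sha")
  actual=$(sha1sum "$parcel" | awk '{print $1}')
  if [ "$expected" = "$actual" ]; then
    echo "OK: $parcel"
  else
    echo "CORRUPT: $parcel (expected $expected, got $actual)"
    return 1
  fi
}

# Usage on the repo host:
#   verify_parcel CDH-5.7.0-1.cdh5.7.0.p0.45-el6.parcel
```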

      4. From http://archive.cloudera.com/cm5/repo-as-tarball/5.7.0/, download the cm5.7.0-centos6.tar.gz package
      [root@hadoop-01 ~]$ cd /opt/rpminstall
      [root@hadoop-01 rpminstall]$ wget http://archive.cloudera.com/cm5/repo-as-tarball/5.7.0/cm5.7.0-centos6.tar.gz
      [root@hadoop-01 rpminstall]$ ll
      total 1523552
      -rw-r--r-- 1 root root 815597424 Sep 22 02:00 cm5.7.0-centos6.tar.gz

      5. Extract the cm5.7.0-centos6.tar.gz package; the path must mirror the official download URL layout
      [root@hadoop-01 rpminstall]$ tar -zxf cm5.7.0-centos6.tar.gz -C /var/www/html/
      [root@hadoop-01 rpminstall]$ cd /var/www/html/
      [root@hadoop-01 html]$ ll
      total 8
      drwxrwxr-x 3 1106  592 4096 Oct 27 10:09 cm
      drwxr-xr-x 2 root root 4096 Apr  2 15:55 parcels

      6. Create the same directory path as on the official site, then mv
      [root@hadoop-01 html]$ mkdir -p cm5/redhat/6/x86_64/
      [root@hadoop-01 html]$ mv cm cm5/redhat/6/x86_64/

      7. Configure a local yum repo so the CDH installer pulls packages locally instead of from the official site
      [root@hadoop-01 html]$ vim /etc/yum.repos.d/cloudera-manager.repo
      [cloudera-manager]
      name = Cloudera Manager, Version 5.7.0
      baseurl = http://172.16.101.54/cm5/redhat/6/x86_64/cm/5/
      gpgcheck = 0
      ### Note: the cloudera-manager.repo file must be configured on every machine

      8. Check the two URLs below in a browser; if both load, the setup succeeded (on Alibaba Cloud, use the internal IP)
      http://172.16.101.54/parcels/
      http://172.16.101.54/cm5/redhat/6/x86_64/cm/5/
      cm: the daemons + server + agent packages (closed source)
      parcel: a single .parcel file bundling Apache Hadoop, ZooKeeper, HBase, Hue, Oozie and so on; "parcel" is simply English for "package"

      9. When are these two URLs actually used? (important)
      In section 2.05 of the CDH5.7.0 Installation.docx, on the "Select Repository" screen:
      a. Click "More Options"; in the dialog, find the "Remote Parcel Repository URLs" list,
      b. delete the URLs, keep a single entry, and replace it with http://172.16.101.54/parcels/
      c. click "Save Changes" and wait a moment for the page to refresh,
      d. under "Select the version of CDH", choose the CDH parcel,
      e. under "Additional Parcels", select "None" for all of them,
      f. under "Custom Repository", paste http://172.16.101.54/cm5/redhat/6/x86_64/cm/5/

      10. Official reference links
      http://archive.cloudera.com/cdh5/parcels/5.8.2/
      http://archive.cloudera.com/cm5/repo-as-tarball/5.8.2/
      http://archive.cloudera.com/cm5/redhat/6/x86_64/cm/5.8.2/

    • Using RPM Packages Installation and Start CM Server
      1. Install the server RPMs on the CM instance
      cd /var/www/html/cm5/redhat/6/x86_64/cm/5/RPMS/x86_64
      yum install -y cloudera-manager-daemons-5.7.0-1.cm570.p0.76.el6.x86_64.rpm
      yum install -y cloudera-manager-server-5.7.0-1.cm570.p0.76.el6.x86_64.rpm

      2. Configure the JDBC driver (mysql-connector-java.jar)
      cd /usr/share/java
      wget http://cdn.mysql.com//Downloads/Connector-J/mysql-connector-java-5.1.37.zip   [link may no longer be valid]
      unzip mysql-connector-java-5.1.37.zip
      cd mysql-connector-java-5.1.37
      cp mysql-connector-java-5.1.37-bin.jar ../mysql-connector-java.jar
      The file must be renamed to exactly "mysql-connector-java.jar"

      3. On the CM machine, with MySQL installed, configure the cmf user and database
      create database cmf DEFAULT CHARACTER SET utf8;
      grant all on cmf.* TO 'cmf'@'localhost' IDENTIFIED BY 'cmf_password';
      flush privileges;
      To recreate the database from scratch:
      mysql> drop database cmf;
      mysql> CREATE DATABASE cmf /*!40100 DEFAULT CHARACTER SET utf8 */;

      4. Point cloudera-scm-server at MySQL
      [root@hadoopcm-01 ~]# cd /etc/cloudera-scm-server/
      [root@hadoopcm-01 cloudera-scm-server]# vi db.properties

      # Copyright (c) 2012 Cloudera, Inc. All rights reserved.
      #
      # This file describes the database connection.
      #
      # The database type
      # Currently 'mysql', 'postgresql' and 'oracle' are valid databases.
      com.cloudera.cmf.db.type=mysql
      # The database host
      # If a non standard port is needed, use 'hostname:port'
      com.cloudera.cmf.db.host=localhost
      # The database name
      com.cloudera.cmf.db.name=cmf
      # The database user
      com.cloudera.cmf.db.user=cmf
      # The database user's password
      com.cloudera.cmf.db.password=cmf_password

      Note: on CDH 5.9.1/5.10, add one more line changing db=init to db=external.

      5. Start the CM server
      service cloudera-scm-server start

      6. Follow the server log on the CM instance in real time
      cd /var/log/cloudera-scm-server/
      tail -f cloudera-scm-server.log
      2017-03-17 21:32:05,253 INFO WebServerImpl:org.mortbay.log: Started [email protected]:7180
      2017-03-17 21:32:05,253 INFO WebServerImpl:com.cloudera.server.cmf.WebServerImpl: Started Jetty server.
      #mark: Cloudera Manager metadata is now stored in the cmf database.

      7. Wait about a minute, then open http://172.16.101.54:7180 and log in:
      User: admin  Password: admin
      Installation complete.

Copyright notice: this is an original post by the author, licensed under CC 4.0 BY-SA; include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/hucuoshi8718/article/details/87929741
