Quickly Building a Hadoop Cluster with Docker (by 程裕强)

Tags: Docker, Docker study notes, Hadoop basics tutorial, Hadoop cluster

1. Creating the Image

1.1 Dockerfile

[root@hadron docker]# cd hadoop/
[root@hadron hadoop]# cat Dockerfile 
FROM centos7-ssh
ADD jdk-8u144-linux-x64.tar.gz /usr/local/
RUN mv /usr/local/jdk1.8.0_144 /usr/local/jdk1.8
ENV JAVA_HOME /usr/local/jdk1.8
ENV PATH $JAVA_HOME/bin:$PATH

ADD hadoop-2.7.4.tar.gz /usr/local
RUN mv /usr/local/hadoop-2.7.4 /usr/local/hadoop
ENV HADOOP_HOME /usr/local/hadoop
ENV PATH $HADOOP_HOME/bin:$PATH

RUN yum install -y which sudo

1.2 Building the Image

[root@hadron hadoop]# docker build -t="hadoop" .
Sending build context to Docker daemon 452.2 MB
Step 1 : FROM centos7-ssh
 ---> 9fd1b9b60b8a
Step 2 : ADD jdk-8u144-linux-x64.tar.gz /usr/local/
 ---> 5f9ccbf28306
Removing intermediate container ead3b58b742c
Step 3 : RUN mv /usr/local/jdk1.8.0_144 /usr/local/jdk1.8
 ---> Running in fbb66c308560
 ---> a90f2adeeb43
Removing intermediate container fbb66c308560
Step 4 : ENV JAVA_HOME /usr/local/jdk1.8
 ---> Running in 2838722b055c
 ---> 8110b0338156
Removing intermediate container 2838722b055c
Step 5 : ENV PATH $JAVA_HOME/bin:$PATH
 ---> Running in 0a8469fb58c2
 ---> 6476d6abfc71
Removing intermediate container 0a8469fb58c2
Step 6 : ADD hadoop-2.7.4.tar.gz /usr/local
 ---> 171a1424d7bc
Removing intermediate container 9a3abffca38e
Step 7 : RUN mv /usr/local/hadoop-2.7.4 /usr/local/hadoop
 ---> Running in 0ec4bbd4c87e
 ---> 3a5f0c590232
Removing intermediate container 0ec4bbd4c87e
Step 8 : ENV HADOOP_HOME /usr/local/hadoop
 ---> Running in cf9c7b8d2be9
 ---> 0c55f791f81b
Removing intermediate container cf9c7b8d2be9
Step 9 : ENV PATH $HADOOP_HOME/bin:$PATH
 ---> Running in d19152ccdeaf
 ---> 1e90c8eeda4b
Removing intermediate container d19152ccdeaf
Step 10 : RUN yum install -y which sudo
 ---> Running in 711fc680e8ed
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * extras: mirrors.163.com
 * updates: mirrors.cn99.com
Package sudo-1.8.6p7-23.el7_3.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package which.x86_64 0:2.20-7.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package          Arch              Version               Repository       Size
================================================================================
Installing:
 which            x86_64            2.20-7.el7            base             41 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 41 k
Installed size: 75 k
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : which-2.20-7.el7.x86_64                                      1/1 
install-info: No such file or directory for /usr/share/info/which.info.gz
  Verifying  : which-2.20-7.el7.x86_64                                      1/1 

Installed:
  which.x86_64 0:2.20-7.el7                                                     

Complete!
 ---> 8d5814823951
Removing intermediate container 711fc680e8ed
Successfully built 8d5814823951
[root@hadron hadoop]# 
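The pipework commands in section 2 assume the three containers already exist. A minimal sketch of how they can be created from the image just built (the container names and the sshd command are taken from the `docker ps -a` output in section 3.4; the exact port flags for hadoop0 are assumptions inferred from the port mappings shown there):

```shell
# Each container runs sshd in the foreground, matching the COMMAND column
# of `docker ps` later in this article. hadoop0 exposes YARN (8088),
# the HDFS web UI (50070), and ssh (22, random host port).
docker run -d --name hadoop0 -p 8088:8088 -p 50070:50070 -p 22 hadoop /usr/sbin/sshd -D
docker run -d --name hadoop1 hadoop /usr/sbin/sshd -D
docker run -d --name hadoop2 hadoop /usr/sbin/sshd -D
```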

2. Configuring IP Addresses

`/24` means the subnet mask is 255.255.255.0.
The IP after the `@` is the gateway of the Docker host.
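The prefix-to-netmask relationship can be checked with plain bash arithmetic (a standalone illustration, not part of the original setup):

```shell
# Convert a CIDR prefix length to a dotted-quad netmask.
prefix_to_netmask() {
  local mask=$(( (0xFFFFFFFF << (32 - $1)) & 0xFFFFFFFF ))
  echo "$(( (mask >> 24) & 255 )).$(( (mask >> 16) & 255 )).$(( (mask >> 8) & 255 )).$(( mask & 255 ))"
}
prefix_to_netmask 24   # prints 255.255.255.0
prefix_to_netmask 16   # prints 255.255.0.0
```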

[root@hadron hadoop]# pipework br1 hadoop0 192.168.3.30/24@192.168.3.1
Link veth1pl8545 exists and is up
You have new mail in /var/spool/mail/root
[root@hadron hadoop]# ping 192.168.3.30
PING 192.168.3.30 (192.168.3.30) 56(84) bytes of data.
64 bytes from 192.168.3.30: icmp_seq=1 ttl=64 time=0.158 ms
64 bytes from 192.168.3.30: icmp_seq=2 ttl=64 time=0.068 ms
64 bytes from 192.168.3.30: icmp_seq=3 ttl=64 time=0.079 ms
^C
--- 192.168.3.30 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.068/0.101/0.158/0.041 ms
[root@hadron hadoop]# pipework br1 hadoop1 192.168.3.31/24@192.168.3.1
Link veth1pl8687 exists and is up
[root@hadron hadoop]# ping 192.168.3.31
PING 192.168.3.31 (192.168.3.31) 56(84) bytes of data.
64 bytes from 192.168.3.31: icmp_seq=1 ttl=64 time=0.148 ms
64 bytes from 192.168.3.31: icmp_seq=2 ttl=64 time=0.070 ms
^C
--- 192.168.3.31 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.070/0.109/0.148/0.039 ms
[root@hadron hadoop]# pipework br1 hadoop2 192.168.3.32/24@192.168.3.1
Link veth1pl8817 exists and is up
[root@hadron hadoop]# ping 192.168.3.32
PING 192.168.3.32 (192.168.3.32) 56(84) bytes of data.
64 bytes from 192.168.3.32: icmp_seq=1 ttl=64 time=0.143 ms
64 bytes from 192.168.3.32: icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from 192.168.3.32: icmp_seq=3 ttl=64 time=0.036 ms
^C
--- 192.168.3.32 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.036/0.072/0.143/0.050 ms
[root@hadron hadoop]# 

3. Configuring the Hadoop Cluster

3.1 Connecting to the Containers

Open three new terminal windows and attach to hadoop0, hadoop1, and hadoop2 respectively, so each node can be operated on directly.
(1)hadoop0

[root@hadron docker]# docker exec -it hadoop0 /bin/bash
[root@hadoop0 /]# ls
anaconda-post.log  bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
[root@hadoop0 /]# pwd
/
[root@hadoop0 /]# 

(2)hadoop1

[root@hadron docker]# docker exec -it hadoop1 /bin/bash
[root@hadoop1 /]# ls
anaconda-post.log  bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
[root@hadoop1 /]# 

(3)hadoop2

[root@hadron docker]# docker exec -it hadoop2 /bin/bash
[root@hadoop2 /]# ls
anaconda-post.log  bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
[root@hadoop2 /]# 

3.2 Configuring /etc/hosts

[root@hadoop0 /]# vi /etc/hosts
[root@hadoop0 /]# cat /etc/hosts
127.0.0.1   localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.3.30    hadoop0
192.168.3.31    hadoop1
192.168.3.32    hadoop2
[root@hadoop0 /]#

Distribute the hosts file to the other nodes:

[root@hadoop0 /]# scp /etc/hosts hadoop1:/etc
The authenticity of host 'hadoop1 (192.168.3.31)' can't be established.
RSA key fingerprint is 53:74:e1:1e:26:63:bb:14:c2:42:94:b6:63:ec:83:15.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop1,192.168.3.31' (RSA) to the list of known hosts.
root@hadoop1's password: 
hosts                                                                                                                                      100%  213     0.2KB/s   00:00    
[root@hadoop0 /]# scp /etc/hosts hadoop2:/etc
The authenticity of host 'hadoop2 (192.168.3.32)' can't be established.
RSA key fingerprint is 53:74:e1:1e:26:63:bb:14:c2:42:94:b6:63:ec:83:15.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop2,192.168.3.32' (RSA) to the list of known hosts.
root@hadoop2's password: 
hosts                                                                                                                                      100%  213     0.2KB/s   00:00    
[root@hadoop0 /]# 

3.3 Configuring Passwordless SSH Login

(1)hadoop0

[root@hadoop0 /]# cd ~
[root@hadoop0 ~]# vi sshUtil.sh 
[root@hadoop0 ~]# cat sshUtil.sh 
#!/bin/bash
ssh-keygen -q -t rsa -N "" -f /root/.ssh/id_rsa
ssh-copy-id -i localhost
ssh-copy-id -i hadoop0
ssh-copy-id -i hadoop1
ssh-copy-id -i hadoop2
[root@hadoop0 ~]# sh sshUtil.sh 
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 53:74:e1:1e:26:63:bb:14:c2:42:94:b6:63:ec:83:15.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@localhost's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'localhost'"
and check to make sure that only the key(s) you wanted were added.

The authenticity of host 'hadoop0 (192.168.3.30)' can't be established.
RSA key fingerprint is 53:74:e1:1e:26:63:bb:14:c2:42:94:b6:63:ec:83:15.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.

/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@hadoop1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'hadoop1'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@hadoop2's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'hadoop2'"
and check to make sure that only the key(s) you wanted were added.

[root@hadoop0 ~]# scp sshUtil.sh hadoop1:/root
sshUtil.sh                                                                                                                                 100%  154     0.2KB/s   00:00    
[root@hadoop0 ~]# scp sshUtil.sh hadoop2:/root
sshUtil.sh                                                                                                                                 100%  154     0.2KB/s   00:00    
[root@hadoop0 ~]#
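The yes/no fingerprint prompts above can be suppressed so the script runs without interaction apart from the password prompts. A hedged variant of sshUtil.sh, assuming an OpenSSH `ssh-copy-id` that accepts `-o`:

```shell
#!/bin/bash
# Reuse an existing key pair instead of failing on a second run.
[ -f /root/.ssh/id_rsa ] || ssh-keygen -q -t rsa -N "" -f /root/.ssh/id_rsa
for h in localhost hadoop0 hadoop1 hadoop2; do
  # StrictHostKeyChecking=no auto-accepts the host key, skipping the
  # "Are you sure you want to continue connecting" prompt.
  ssh-copy-id -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa.pub "$h"
done
```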

(2)hadoop1

[root@hadoop1 /]# cd ~
[root@hadoop1 ~]# sh sshUtil.sh 
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 53:74:e1:1e:26:63:bb:14:c2:42:94:b6:63:ec:83:15.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@localhost's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'localhost'"
and check to make sure that only the key(s) you wanted were added.

The authenticity of host 'hadoop0 (192.168.3.30)' can't be established.
RSA key fingerprint is 53:74:e1:1e:26:63:bb:14:c2:42:94:b6:63:ec:83:15.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@hadoop0's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'hadoop0'"
and check to make sure that only the key(s) you wanted were added.

The authenticity of host 'hadoop1 (192.168.3.31)' can't be established.
RSA key fingerprint is 53:74:e1:1e:26:63:bb:14:c2:42:94:b6:63:ec:83:15.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.

The authenticity of host 'hadoop2 (192.168.3.32)' can't be established.
RSA key fingerprint is 53:74:e1:1e:26:63:bb:14:c2:42:94:b6:63:ec:83:15.
Are you sure you want to continue connecting (yes/no)? 123456
Please type 'yes' or 'no': yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@hadoop2's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'hadoop2'"
and check to make sure that only the key(s) you wanted were added.

[root@hadoop1 ~]# 

(3)hadoop2

[root@hadoop2 /]# sh /root/sshUtil.sh 
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 53:74:e1:1e:26:63:bb:14:c2:42:94:b6:63:ec:83:15.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@localhost's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'localhost'"
and check to make sure that only the key(s) you wanted were added.

The authenticity of host 'hadoop0 (192.168.3.30)' can't be established.
RSA key fingerprint is 53:74:e1:1e:26:63:bb:14:c2:42:94:b6:63:ec:83:15.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@hadoop0's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'hadoop0'"
and check to make sure that only the key(s) you wanted were added.

The authenticity of host 'hadoop1 (192.168.3.31)' can't be established.
RSA key fingerprint is 53:74:e1:1e:26:63:bb:14:c2:42:94:b6:63:ec:83:15.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@hadoop1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'hadoop1'"
and check to make sure that only the key(s) you wanted were added.

The authenticity of host 'hadoop2 (192.168.3.32)' can't be established.
RSA key fingerprint is 53:74:e1:1e:26:63:bb:14:c2:42:94:b6:63:ec:83:15.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.

[root@hadoop2 /]# 
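With all three nodes done, passwordless login can be verified non-interactively; `BatchMode=yes` makes ssh fail instead of prompting when key authentication is not in place (a quick check, not part of the original steps):

```shell
for h in hadoop0 hadoop1 hadoop2; do
  # Should print each remote hostname without asking for a password.
  ssh -o BatchMode=yes "$h" hostname || echo "passwordless login to $h NOT working"
done
```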

3.4 Restarting the Containers

[root@hadron ~]# docker stop hadoop2
hadoop2
[root@hadron ~]# docker stop hadoop1
hadoop1
[root@hadron ~]# docker stop hadoop0
hadoop0
[root@hadron ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND               CREATED             STATUS                      PORTS               NAMES
c0adf86c7b54        hadoop              "/usr/sbin/sshd -D"   5 days ago          Exited (0) 23 seconds ago                       hadoop2
c480310285cd        hadoop              "/usr/sbin/sshd -D"   5 days ago          Exited (0) 7 seconds ago                        hadoop1
5dc1bd0178b4        hadoop              "/usr/sbin/sshd -D"   5 days ago          Exited (0) 3 seconds ago                        hadoop0
f5a002ad0f0e        centos7-ssh         "/usr/sbin/sshd -D"   5 days ago          Exited (0) 5 days ago                           centos7-demo
[root@hadron ~]# 
[root@hadron ~]# docker start hadoop0
hadoop0
[root@hadron ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND               CREATED             STATUS                          PORTS                                                                     NAMES
c0adf86c7b54        hadoop              "/usr/sbin/sshd -D"   5 days ago          Exited (0) About a minute ago                                                                             hadoop2
c480310285cd        hadoop              "/usr/sbin/sshd -D"   5 days ago          Exited (0) About a minute ago                                                                             hadoop1
5dc1bd0178b4        hadoop              "/usr/sbin/sshd -D"   5 days ago          Up 1 seconds                    0.0.0.0:8088->8088/tcp, 0.0.0.0:50070->50070/tcp, 0.0.0.0:32774->22/tcp   hadoop0
f5a002ad0f0e        centos7-ssh         "/usr/sbin/sshd -D"   5 days ago          Exited (0) 5 days ago                                                                                     centos7-demo
[root@hadron ~]# 
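One caveat when stopping and starting the containers: addresses assigned with pipework are configured directly in the container's network namespace and do not survive a restart. After `docker start`, the pipework commands from section 2 must be re-run before the 192.168.3.x addresses respond again. A sketch:

```shell
# Restart all three nodes.
for h in hadoop0 hadoop1 hadoop2; do
  docker start "$h"
done
# Re-assign the pipework addresses (same bridge and gateway as in section 2):
# pipework br1 hadoop0 192.168.3.30/24@<gateway>
# pipework br1 hadoop1 192.168.3.31/24@<gateway>
# pipework br1 hadoop2 192.168.3.32/24@<gateway>
```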
Copyright notice: This is an original article by the blogger, licensed under CC 4.0 BY-SA. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/chengyuqiang/article/details/78887393
