1 Introduction
The previous chapter covered backing up the Hadoop HDFS NameNode manually; this chapter focuses on automating the NameNode backup.
2 Best Practices
2.1 Environment Preparation
This chapter builds on an existing Hadoop HDFS deployment; if you do not have one yet, please create it first.
In addition, we recommend studying the preceding chapters beforehand; otherwise you may find it hard to follow this content.
2.2 Configuring Automatic Backup
2.2.1 Configuring the Backup Storage
This chapter uses Samba storage; if you have not set it up yet, refer to that chapter.
Note: mount the backup storage at the "/backup/nameNode" directory.
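To make the mount survive a reboot, a CIFS entry in /etc/fstab can be used. The sketch below is an assumption for illustration: the share path "//fileserver/backup" and the credentials file "/etc/samba/backup.cred" are placeholders you must replace with your own values.

```
# hypothetical Samba share mounted at the backup directory
//fileserver/backup  /backup/nameNode  cifs  credentials=/etc/samba/backup.cred,_netdev  0  0
```

After editing /etc/fstab, `mount /backup/nameNode` applies the entry immediately.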
2.2.2 Configuring the Backup Script
mkdir ~/scripts/
vim ~/scripts/backupNameNode.sh
Add the following content:
#!/bin/bash

backupDIR="/data/dfs/nn/current"
backupStorageDIR="/backup/nameNode"
dataTime=`date +"%Y-%m-%d %H:%M:%S"`
logFile="/var/log/nameNodeBackup.log"

# Entering the backup storage verifies the mount is available
cd $backupStorageDIR
if [ $? != 0 ]; then
    echo $dataTime" Directory mount error!" | tee -a $logFile
    exit 1
fi

if [ ! -d $backupDIR ]; then
    echo $dataTime" backupDIR not found!" | tee -a $logFile
    exit 1
fi

if [ ! -d $backupStorageDIR ]; then
    echo $dataTime" backupStorageDIR not found!" | tee -a $logFile
    exit 1
fi

# backup VERSION
cd $backupDIR
scp VERSION $backupStorageDIR

# backup fsimage
hdfs dfsadmin -fetchImage $backupStorageDIR 2> $backupStorageDIR/nn-backup.log

# Backup archive
cd $backupStorageDIR
tar --remove-files -cvjf "nn-backup-"`date +"%Y%m%d%H%M%S"`".tar.bz2" VERSION fsimage_* nn-backup.log

# Clean up expired backups
# find $backupStorageDIR -type f -name nn-backup-\*.tar.bz2 -ctime +90 -exec ls {} \;
find $backupStorageDIR -type f -name nn-backup-\*.tar.bz2 -ctime +90 -exec rm -f {} \;
– The variable "backupDIR" holds the NameNode metadata directory; in this example, "/data/dfs/nn/current"
– The variable "backupStorageDIR" holds the backup storage directory; in this example, "/backup/nameNode"
– The variable "logFile" holds the backup log path; in this example, "/var/log/nameNodeBackup.log"
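The final `find` line in the script removes archives older than 90 days. The retention behavior can be tried safely in a throwaway directory, with no HDFS needed. One caveat: the script matches on `-ctime` (inode change time), which `touch` cannot backdate, so this sketch uses `-mtime` instead; the directory and file names below are made up for the demonstration.

```shell
# Throwaway directory standing in for /backup/nameNode
demo=$(mktemp -d)

# One "expired" archive (modification time 100 days ago) and one fresh archive
touch -d "100 days ago" "$demo/nn-backup-20240101000000.tar.bz2"
touch "$demo/nn-backup-$(date +%Y%m%d%H%M%S).tar.bz2"

# Same cleanup expression as the script, but with -mtime so touch -d takes effect
find "$demo" -type f -name 'nn-backup-*.tar.bz2' -mtime +90 -exec rm -f {} \;

# Only the fresh archive should remain
ls "$demo"
```

Running the commented-out `-exec ls` variant from the script first is a good way to preview which archives would be deleted before enabling the `rm` line.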
2.2.3 Configuring the Script Trigger
crontab -e
Add the following configuration:
0 2 * * * bash ~/scripts/backupNameNode.sh
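The five fields mean: minute 0, hour 2, every day of month, every month, every day of week, i.e. the backup runs daily at 02:00. Since cron mails script output rather than logging it, one optional variant (an assumption, not part of the original setup) appends both stdout and stderr to the script's log file:

```
0 2 * * * bash ~/scripts/backupNameNode.sh >> /var/log/nameNodeBackup.log 2>&1
```

`crontab -l` confirms the entry was saved.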