In everyday work, duplicate files are a common nuisance: the same file may sit in both directory A and directory B, and worse, the same file may go by a different name in each place. With only a few files this is still manageable; worst case you compare them one by one by hand, though even then it is hard to guarantee your eyes are sharp enough. With a large number of files, it becomes mission impossible. I have recently been reading *Python for Unix and Linux System Administration*, which has material on data comparison; building on that, and on my own practical needs, I put together the following.
The script consists of the following modules: diskwalk, checksum, find_dupes, and delete. The diskwalk module walks a directory tree: given a path, it returns every file underneath it. The checksum module computes a file's MD5 digest. find_dupes imports diskwalk and checksum and uses the MD5 values to decide whether two files have identical content. delete is the module that removes the duplicates. The details follow:
1. diskwalk.py
import os, sys

class diskwalk(object):
    """Walk a directory tree and collect the full path of every file in it."""
    def __init__(self, path):
        self.path = path

    def paths(self):
        path_collection = []
        for dirpath, dirnames, filenames in os.walk(self.path):
            for filename in filenames:
                # Join the directory with the filename to get the file's full path
                path_collection.append(os.path.join(dirpath, filename))
        return path_collection

if __name__ == '__main__':
    for file_path in diskwalk(sys.argv[1]).paths():
        print(file_path)
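Besides running it as a script, you can import diskwalk from the other modules. A minimal usage sketch (the /tmp path is just an example):

from diskwalk import diskwalk

# Collect every file under /tmp and report how many were found
files = diskwalk('/tmp').paths()
print("Found %d files" % len(files))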
2. checksum.py
import hashlib, sys

def create_checksum(path):
    """Return the MD5 digest of the file at the given path."""
    fp = open(path, 'rb')  # binary mode, so the digest is not affected by newline translation
    checksum = hashlib.md5()
    while True:
        buffer = fp.read(8192)  # read in 8 KB chunks so large files do not exhaust memory
        if not buffer:
            break
        checksum.update(buffer)
    fp.close()
    return checksum.digest()

if __name__ == '__main__':
    print(create_checksum(sys.argv[1]))
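Note that digest() returns the raw 16-byte digest, which is fine as a dictionary key; if you want the familiar 32-character hex string instead, use hexdigest(). A quick sketch of comparing two files by content (both paths are hypothetical):

from checksum import create_checksum

# Identical content produces identical digests, regardless of filename
if create_checksum('a.txt') == create_checksum('b.txt'):
    print("same content")
else:
    print("different content")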
3. find_dupes.py
from checksum import create_checksum
from diskwalk import diskwalk
from os.path import getsize
import sys

def findDupes(path):
    record = {}  # maps (size, md5 digest) -> the first file seen with that key
    dup = {}     # maps each duplicate file -> the original it matches
    for file_path in diskwalk(path).paths():
        # Keying on size as well as MD5 makes an accidental collision even less likely
        compound_key = (getsize(file_path), create_checksum(file_path))
        if compound_key in record:
            dup[file_path] = record[compound_key]
        else:
            record[compound_key] = file_path
    return dup

if __name__ == '__main__':
    for dup_file, orig_file in findDupes(sys.argv[1]).items():
        print("The duplicate file is %s" % dup_file)
        print("The original file is %s\n" % orig_file)