Counting the words in a book ought to be a trivially simple task, but in practice complications crop up everywhere. Some you anticipate: the typesetting uses full-width punctuation, and while the program strips punctuation by default, what if the typesetter didn't use spaces consistently? Some you don't anticipate: a slip of the finger produces some bizarre string. Long ago I noticed that Notepad++ and Word report different word counts for the same file, with Notepad++ usually giving the larger number. Which one is right? Let it go; knowing the rough figure is good enough. After all, on the gaokao essay, coming up a few words short of 800 won't actually cost you points.
The love-hate relationship between dictionaries and lists is something I appreciate more deeply all the time.
words.txt is here; emma.txt is here.
Exercise 1: Write a program that reads a file, breaks each line into words, strips whitespace and punctuation from the words, and converts them to lowercase. Hint: The string module provides a string named whitespace, which contains space, tab, newline, etc., and punctuation which contains the punctuation characters. Let’s see if we can make Python swear:
>>> import string
>>> string.punctuation
'!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~'
Also, you might consider using the string methods strip, replace and translate.
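As a quick sanity check of that hint, here is a minimal sketch (the helper name `clean` is my own) showing how `string.punctuation` and `string.whitespace` combine with `strip` to normalize one token:

```python
import string

# Characters to trim; strip() removes them only from the ends
# of the token, never from the middle.
useless = string.punctuation + string.whitespace

def clean(word):
    """Lowercase a token and trim surrounding punctuation/whitespace."""
    return word.strip(useless).lower()

print(clean('"Emma,"'))  # -> emma
print(clean('End.'))     # -> end
```

Note that `strip` with an argument treats it as a set of characters, not a substring, so the order of `useless` does not matter.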
Exercise 2: Go to Project Gutenberg (http://gutenberg.org) and download your favorite out-of-copyright book in plain text format. Modify your program from the previous exercise to read the book you downloaded, skip over the header information at the beginning of the file, and process the rest of the words as before. Then modify the program to count the total number of words in the book, and the number of times each word is used. Print the number of different words used in the book. Compare different books by different authors, written in different eras. Which author uses the most extensive vocabulary?
Exercise 3: Modify the program from the previous exercise to print the 20 most frequently used words in the book.
Exercise 4: Modify the previous program to read a word list (see Section 9.1) and then print all the words in the book that are not in the word list. How many of them are typos? How many of them are common words that should be in the word list, and how many of them are really obscure?
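For Exercise 4, a dictionary whose values are never used is really just playing the role of a set. A sketch of the membership check, assuming both inputs are already-cleaned lowercase words (the function name is mine):

```python
def unknown_words(book_words, word_list):
    """Return the distinct book words that are absent from the word list."""
    known = set(word_list)  # set gives O(1) membership tests
    return sorted(set(book_words) - known)

print(unknown_words(['emma', 'woodhouse', 'the'], ['emma', 'the']))
# -> ['woodhouse']
```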
import string

fin = open('words.txt')
mydict = {}
for line in fin:
    word = line.strip()
    mydict[word] = ''

file = open('emma.txt', encoding='utf-8')
essay = file.read().lower()
essay = essay.replace('-', ' ')
pun = {}
str_all = '“' + '”' + string.punctuation
for x in str_all:  # build a dict mapping every punctuation character to ''
    pun[x] = ''
useless = essay.maketrans(pun)  # maketrans normally wants equal-length from/to strings; a dict sidesteps that neatly
l = essay.translate(useless).split()  # words containing '-' get mangled, but each piece still counts as a word
print('this book has', len(l), 'words')

book = {}
for item in l:  # file -> string -> word list -> counting dict: word as key, count as value
    book[item] = book.get(item, 0) + 1
list_words1 = sorted(list(zip(book.values(), book.keys())), reverse=True)  # dict -> list, with key and value swapped
print('this book has', len(list_words1), 'different words')
print('times', 'word', sep='\t')
count = 1
word_len = 0  # minimum word length to print
for times, word in list_words1:  # print the top 20 words above that length (unfiltered, trivial words of 3 letters or fewer flood the screen)
    if len(word) > word_len:
        print(times, word, sep='\t')
        count += 1
    if count > 20:
        break

count = 0
for word in book:
    if word not in mydict:
        # print(word, end=' ')
        count += 1
print(count, 'words in book not in dict')  # the result is grim: 590 in total
# this book has 164065 words
# this book has 7479 different words
# times word
# 5379 the
# 5322 to
# 4965 and
# 4412 of
# 3191 i
# 3187 a
# 2544 it
# 2483 her
# 2401 was
# 2365 she
# 2246 in
# 2172 not
# 2069 you
# 1995 be
# 1815 that
# 1813 he
# 1626 had
# 1448 as
# 1446 but
# 1373 for
# 590 words in book not in dict
# ----------------------------- Solution 2 ----------------------------- the only real difference is how the words are split
import string

def set_book(fin1):
    useless = string.punctuation + string.whitespace + '“' + '”'
    d = {}
    for line in fin1:
        line = line.replace('-', ' ')
        for word in line.split():
            word = word.strip(useless)
            word = word.lower()
            d[word] = d.get(word, 0) + 1
    return d

def set_dict(fin2):
    d = {}
    for line in fin2:
        word = line.strip()
        d[word] = d.get(word, 0) + 1
    return d

fin1 = open('emma.txt', encoding='utf-8')
fin2 = open('words.txt')
book = set_book(fin1)
mydict = set_dict(fin2)
l = sorted(list(zip(book.values(), book.keys())), reverse=True)
count = 0
for key in book:
    count = count + book[key]
print('this book has', count, 'words')
print('this book has', len(book), 'different words')
num = 20
print(num, 'most common words in this book')
print('times', 'word', sep='\t')
for times, word in l:
    print(times, word, sep='\t')
    num -= 1
    if num < 1:
        break

count = 0
for word in book:
    if word not in mydict:
        # print(word, end=' ')
        count += 1
# print()
print(count, 'words in book not in dict')
# this book has 164120 words
# this book has 7531 different words
# 20 most common words in this book
# times word
# 5379 the
# 5322 to
# 4965 and
# 4412 of
# 3191 i
# 3187 a
# 2544 it
# 2483 her
# 2401 was
# 2364 she
# 2246 in
# 2172 not
# 2069 you
# 1995 be
# 1815 that
# 1813 he
# 1626 had
# 1448 as
# 1446 but
# 1373 for
# 683 words in book not in dict
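Incidentally, the counting dict both solutions build by hand (`d[word] = d.get(word, 0) + 1`) is exactly what `collections.Counter` provides, and its `most_common` method replaces the zip-and-sort dance. A minimal sketch with made-up data:

```python
from collections import Counter

words = ['the', 'to', 'the', 'and', 'the', 'to']
book = Counter(words)            # maps word -> frequency
print(book.most_common(2))       # -> [('the', 3), ('to', 2)]
```

One difference worth knowing: `most_common` breaks ties by first-insertion order, whereas sorting `zip(values, keys)` tuples breaks ties alphabetically by word, so the tail of a top-20 list can differ between the two approaches.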