python - Efficiently check permutations of a small string against a dictionary -
I'm trying to see what words can be made from a scrambled string, compared against a dictionary. I've got the long-string case handled already, but I think the short-string case is dragging the program into the 20-second range. I'm testing 1000 scrambles against a dictionary of 170,000 "words".
For the short scrambled-word case I thought it would be more efficient to create every permutation of the string and compare those against the dictionary entries, like so:
from itertools import permutations

wordstore = {
    7: [],
    8: ['acowbtec', 'acowbtce', 'acowbetc', 'aocwbtec', 'acwobetc', 'acotbecw', 'caowbtec', 'caowbtce', 'caowbetc',
        'zsdfvsvv', 'sdffbrfv', 'sdjfjsjf', 'sjnshsnj', 'adhnsrhn', 'sdfbhxdf', 'zsdfgzdf', 'cnzsdfgf', 'sdbdzvff',
        'dbgtbzdf', 'zsvrvrdz', 'zdrvrvrn', 'nhcncnby', 'mmmnyndd', 'zswewedf', 'zeswffee', 'sefdedee', 'sefeefee',
        'iuygfjhg', 'uytmjnbb', 'uythbgvf', 'ytrgfdcv', 'ytregfcv', 'ytrevcxd', 'ytrevcxs', 'ytrewgfd', 'trewgfds',
        'uytrgfdd', 'uytrenhg', 'ytrebgfd', 'jhgfdbvc', 'mnbvyhtr', 'ytrehbgv', 'uytrwwsz', 'mnbtrexx', 'uytrebgv',
        'fgfgfvdw', 'werfdcse', 'mnbvcdes', 'kjhgfnbv', 'sdfhgfdw', 'yujhredq', 'wsxrtyhn', 'jfrvsdxw', 'jmrtgedw',
        'ujrtgedw', 'ujtgedws', 'yhvedsgy', 'yhygdfex', 'kjjkjuhy', 'rffdddwe', 'esrdtfgd', 'uytrewww', 'vfcdtred',
        'kjhgfnbv', 'uytrbvcd', 'jhgfhgfd', 'adfgdfgg', 'mnbvtred', 'jhgfrewb', 'hgfdtred', 'dsfgdfgg', 'dfgdgggg']
}

scrambles = set(''.join(p) for p in permutations('acowbtec', 8))
for x in scrambles.intersection(wordstore[8]):
    print('found ', x)
I created a small, simple set to test against here.
As you can see, it's rather straightforward, but it's way too slow. Here are the relevant cProfile sections from a test against the larger data set:
  ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
       1    9.324    9.324   29.804   29.804  wordplayer.py:2(<module>)
     990    9.053    0.009   16.147    0.016  wordplayer.py:28(<listcomp>)
     990    2.205    0.002    2.205    0.002  {method 'intersection' of 'set' objects}
39916800    7.093    0.000    7.093    0.000  {method 'join' of 'str' objects}
I don't understand the cProfile results. It looks like on a per-call basis nothing is slow, but overall these calls take all the time. Any ideas on how I can speed this up?
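The per-call times are tiny, but the call counts explode: the 39,916,800 calls to str.join correspond to 11! permutations of an 11-letter scramble. A quick sketch of how the permutation count grows with string length:

```python
from math import factorial

# Permutations of an n-letter string grow as n!:
for n in range(8, 12):
    print(n, factorial(n))
# 8 letters give 40,320 permutations, but 11 letters give
# 39,916,800 -- matching the join call count in the profile.
```

So any approach that materializes every permutation is doomed for longer strings, regardless of how fast each individual call is.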
Update:
With Dan's advice I have drastically sped up the program, but my initialization still doesn't seem right. How is it supposed to be done?
with open(file1) as f:
    for line in f:
        line = line.rstrip()
        try:
            wordstore[len(line)].setdefault(''.join(sorted(line)), []).append(line)
        except:
            wordstore[len(line)] = {}
            wordstore[len(line)].setdefault(''.join(sorted(line)), []).append(line)
rather generating permutations, search strings after normalizing them using sorted order. start linear search , use hash index:
>>> eight = ['acowbtec', 'acowbtce', 'acowbetc', 'aocwbtec', 'acwobetc', 'acotbecw', 'caowbtec', 'caowbtce', 'caowbetc',
...          'zsdfvsvv', 'sdffbrfv', 'sdjfjsjf', 'sjnshsnj', 'adhnsrhn', 'sdfbhxdf', 'zsdfgzdf', 'cnzsdfgf', 'sdbdzvff',
...          'dbgtbzdf', 'zsvrvrdz', 'zdrvrvrn', 'nhcncnby', 'mmmnyndd', 'zswewedf', 'zeswffee', 'sefdedee', 'sefeefee',
...          'iuygfjhg', 'uytmjnbb', 'uythbgvf', 'ytrgfdcv', 'ytregfcv', 'ytrevcxd', 'ytrevcxs', 'ytrewgfd', 'trewgfds',
...          'uytrgfdd', 'uytrenhg', 'ytrebgfd', 'jhgfdbvc', 'mnbvyhtr', 'ytrehbgv', 'uytrwwsz', 'mnbtrexx', 'uytrebgv',
...          'fgfgfvdw', 'werfdcse', 'mnbvcdes', 'kjhgfnbv', 'sdfhgfdw', 'yujhredq', 'wsxrtyhn', 'jfrvsdxw', 'jmrtgedw',
...          'ujrtgedw', 'ujtgedws', 'yhvedsgy', 'yhygdfex', 'kjjkjuhy', 'rffdddwe', 'esrdtfgd', 'uytrewww', 'vfcdtred',
...          'kjhgfnbv', 'uytrbvcd', 'jhgfhgfd', 'adfgdfgg', 'mnbvtred', 'jhgfrewb', 'hgfdtred', 'dsfgdfgg', 'dfgdgggg']
>>> list(map(lambda s: ''.join(sorted(s)), eight))
['abcceotw', 'abcceotw', 'abcceotw', 'abcceotw', 'abcceotw', 'abcceotw', 'abcceotw', 'abcceotw', 'abcceotw', 'dfssvvvz', 'bdfffrsv', 'dffjjjss', 'hjjnnsss', 'adhhnnrs', 'bddffhsx', 'ddffgszz', 'cdffgnsz', 'bddffsvz', 'bbddfgtz', 'drrsvvzz', 'dnrrrvvz', 'bcchnnny', 'ddmmmnny', 'deefswwz', 'eeeffswz', 'ddeeeefs', 'eeeeeffs', 'fgghijuy', 'bbjmntuy', 'bfghtuvy', 'cdfgrtvy', 'cefgrtvy', 'cdertvxy', 'cerstvxy', 'defgrtwy', 'defgrstw', 'ddfgrtuy', 'eghnrtuy', 'bdefgrty', 'bcdfghjv', 'bhmnrtvy', 'beghrtvy', 'rstuwwyz', 'bemnrtxx', 'begrtuvy', 'dfffggvw', 'cdeefrsw', 'bcdemnsv', 'bfghjknv', 'ddffghsw', 'dehjqruy', 'hnrstwxy', 'dfjrsvwx', 'degjmrtw', 'degjrtuw', 'degjstuw', 'deghsvyy', 'defghxyy', 'hjjjkkuy', 'dddeffrw', 'ddefgrst', 'ertuwwwy', 'cddefrtv', 'bfghjknv', 'bcdrtuvy', 'dffgghhj', 'addffggg', 'bdemnrtv', 'befghjrw', 'ddefghrt', 'ddffgggs', 'ddfggggg']
>>> ''.join(sorted('acowbtec'))
'abcceotw'
A linear search is fast enough for a data set this size, but it is also possible to use a dictionary and index the strings by their sorted versions.
>>> [v for v in eight if ''.join(sorted(v)) == ''.join(sorted('acowbtec'))]
['acowbtec', 'acowbtce', 'acowbetc', 'aocwbtec', 'acwobetc', 'acotbecw', 'caowbtec', 'caowbtce', 'caowbetc']
timeit reports that the linear search takes:
>>> timeit.timeit(setup="eight = ['acowbtec', 'acowbtce', 'acowbetc', 'aocwbtec', 'acwobetc', 'acotbecw', 'caowbtec', 'caowbtce', 'caowbetc','zsdfvsvv', 'sdffbrfv', 'sdjfjsjf', 'sjnshsnj', 'adhnsrhn', 'sdfbhxdf', 'zsdfgzdf', 'cnzsdfgf', 'sdbdzvff','dbgtbzdf', 'zsvrvrdz', 'zdrvrvrn', 'nhcncnby', 'mmmnyndd', 'zswewedf', 'zeswffee', 'sefdedee', 'sefeefee','iuygfjhg', 'uytmjnbb', 'uythbgvf', 'ytrgfdcv', 'ytregfcv', 'ytrevcxd', 'ytrevcxs', 'ytrewgfd', 'trewgfds','uytrgfdd', 'uytrenhg', 'ytrebgfd', 'jhgfdbvc', 'mnbvyhtr', 'ytrehbgv', 'uytrwwsz', 'mnbtrexx', 'uytrebgv','fgfgfvdw', 'werfdcse', 'mnbvcdes', 'kjhgfnbv', 'sdfhgfdw', 'yujhredq', 'wsxrtyhn', 'jfrvsdxw', 'jmrtgedw','ujrtgedw', 'ujtgedws', 'yhvedsgy', 'yhygdfex', 'kjjkjuhy', 'rffdddwe', 'esrdtfgd', 'uytrewww', 'vfcdtred','kjhgfnbv', 'uytrbvcd', 'jhgfhgfd', 'adfgdfgg', 'mnbvtred', 'jhgfrewb', 'hgfdtred', 'dsfgdfgg', 'dfgdgggg']", stmt="[v for v in eight if ''.join(sorted(v)) == ''.join(sorted('acowbtec'))]", number=1000)
0.22520709037780762
That's about 0.23 seconds for 1000 iterations.
Creating an index of {sorted: [unsorted]} and looking up the sorted query string in that dictionary makes performing many queries faster than running each one as a separate linear search.
Building the index is simply:
>>> index = {}
>>> for v in eight:
...     index.setdefault(''.join(sorted(v)), []).append(v)
...
>>> index
{'hjjnnsss': ['sjnshsnj'], 'bbddfgtz': ['dbgtbzdf'], 'ddffgggs': ['dsfgdfgg'], 'defghxyy': ['yhygdfex'], 'begrtuvy': ['uytrebgv'], 'dffjjjss': ['sdjfjsjf'], 'cefgrtvy': ['ytregfcv'], 'dddeffrw': ['rffdddwe'], 'befghjrw': ['jhgfrewb'], 'eeeeeffs': ['sefeefee'], 'ddfgrtuy': ['uytrgfdd'], 'cdfgrtvy': ['ytrgfdcv'], 'deefswwz': ['zswewedf'], 'cerstvxy': ['ytrevcxs'], 'bdemnrtv': ['mnbvtred'], 'bbjmntuy': ['uytmjnbb'], 'ddmmmnny': ['mmmnyndd'], 'ddfggggg': ['dfgdgggg'], 'bcchnnny': ['nhcncnby'], 'ddeeeefs': ['sefdedee'], 'bcdfghjv': ['jhgfdbvc'], 'dfffggvw': ['fgfgfvdw'], 'bemnrtxx': ['mnbtrexx'], 'bhmnrtvy': ['mnbvyhtr'], 'cdeefrsw': ['werfdcse'], 'dnrrrvvz': ['zdrvrvrn'], 'cdertvxy': ['ytrevcxd'], 'bdefgrty': ['ytrebgfd'], 'dffgghhj': ['jhgfhgfd'], 'ddffgszz': ['zsdfgzdf'], 'cdffgnsz': ['cnzsdfgf'], 'fgghijuy': ['iuygfjhg'], 'hjjjkkuy': ['kjjkjuhy'], 'bddffhsx': ['sdfbhxdf'], 'ddefgrst': ['esrdtfgd'], 'degjrtuw': ['ujrtgedw'], 'bcdemnsv': ['mnbvcdes'], 'bfghjknv': ['kjhgfnbv', 'kjhgfnbv'], 'defgrtwy': ['ytrewgfd'], 'rstuwwyz': ['uytrwwsz'], 'bdfffrsv': ['sdffbrfv'], 'ddefghrt': ['hgfdtred'], 'bfghtuvy': ['uythbgvf'], 'eeeffswz': ['zeswffee'], 'drrsvvzz': ['zsvrvrdz'], 'ddffghsw': ['sdfhgfdw'], 'abcceotw': ['acowbtec', 'acowbtce', 'acowbetc', 'aocwbtec', 'acwobetc', 'acotbecw', 'caowbtec', 'caowbtce', 'caowbetc'], 'dfjrsvwx': ['jfrvsdxw'], 'eghnrtuy': ['uytrenhg'], 'addffggg': ['adfgdfgg'], 'cddefrtv': ['vfcdtred'], 'bcdrtuvy': ['uytrbvcd'], 'degjmrtw': ['jmrtgedw'], 'bddffsvz': ['sdbdzvff'], 'adhhnnrs': ['adhnsrhn'], 'ertuwwwy': ['uytrewww'], 'degjstuw': ['ujtgedws'], 'dfssvvvz': ['zsdfvsvv'], 'hnrstwxy': ['wsxrtyhn'], 'beghrtvy': ['ytrehbgv'], 'deghsvyy': ['yhvedsgy'], 'defgrstw': ['trewgfds'], 'dehjqruy': ['yujhredq']}
timeit states that building it takes:
>>> timeit.timeit(setup="eight = ['acowbtec', 'acowbtce', 'acowbetc', 'aocwbtec', 'acwobetc', 'acotbecw', 'caowbtec', 'caowbtce', 'caowbetc','zsdfvsvv', 'sdffbrfv', 'sdjfjsjf', 'sjnshsnj', 'adhnsrhn', 'sdfbhxdf', 'zsdfgzdf', 'cnzsdfgf', 'sdbdzvff','dbgtbzdf', 'zsvrvrdz', 'zdrvrvrn', 'nhcncnby', 'mmmnyndd', 'zswewedf', 'zeswffee', 'sefdedee', 'sefeefee','iuygfjhg', 'uytmjnbb', 'uythbgvf', 'ytrgfdcv', 'ytregfcv', 'ytrevcxd', 'ytrevcxs', 'ytrewgfd', 'trewgfds','uytrgfdd', 'uytrenhg', 'ytrebgfd', 'jhgfdbvc', 'mnbvyhtr', 'ytrehbgv', 'uytrwwsz', 'mnbtrexx', 'uytrebgv','fgfgfvdw', 'werfdcse', 'mnbvcdes', 'kjhgfnbv', 'sdfhgfdw', 'yujhredq', 'wsxrtyhn', 'jfrvsdxw', 'jmrtgedw','ujrtgedw', 'ujtgedws', 'yhvedsgy', 'yhygdfex', 'kjjkjuhy', 'rffdddwe', 'esrdtfgd', 'uytrewww', 'vfcdtred','kjhgfnbv', 'uytrbvcd', 'jhgfhgfd', 'adfgdfgg', 'mnbvtred', 'jhgfrewb', 'hgfdtred', 'dsfgdfgg', 'dfgdgggg']", stmt="index={}\nfor v in eight:index.setdefault(''.join(sorted(v)), []).append(v)", number=1000)
0.14768695831298828
That's about 0.15 seconds for 1000 iterations.
Then querying it is just:
>>> index[''.join(sorted('acowbtec'))] ['acowbtec', 'acowbtce', 'acowbetc', 'aocwbtec', 'acwobetc', 'acotbecw', 'caowbtec', 'caowbtce', 'caowbetc']
timeit states that a query takes:
>>> timeit.timeit(setup="index = {'hjjnnsss': ['sjnshsnj'], 'bbddfgtz': ['dbgtbzdf'], 'ddffgggs': ['dsfgdfgg'], 'defghxyy': ['yhygdfex'], 'begrtuvy': ['uytrebgv'], 'dffjjjss': ['sdjfjsjf'], 'cefgrtvy': ['ytregfcv'], 'dddeffrw': ['rffdddwe'], 'befghjrw': ['jhgfrewb'], 'eeeeeffs': ['sefeefee'], 'ddfgrtuy': ['uytrgfdd'], 'cdfgrtvy': ['ytrgfdcv'], 'deefswwz': ['zswewedf'], 'cerstvxy': ['ytrevcxs'], 'bdemnrtv': ['mnbvtred'], 'bbjmntuy': ['uytmjnbb'], 'ddmmmnny': ['mmmnyndd'], 'ddfggggg': ['dfgdgggg'], 'bcchnnny': ['nhcncnby'], 'ddeeeefs': ['sefdedee'], 'bcdfghjv': ['jhgfdbvc'], 'dfffggvw': ['fgfgfvdw'], 'bemnrtxx': ['mnbtrexx'], 'bhmnrtvy': ['mnbvyhtr'], 'cdeefrsw': ['werfdcse'], 'dnrrrvvz': ['zdrvrvrn'], 'cdertvxy': ['ytrevcxd'], 'bdefgrty': ['ytrebgfd'], 'dffgghhj': ['jhgfhgfd'], 'ddffgszz': ['zsdfgzdf'], 'cdffgnsz': ['cnzsdfgf'], 'fgghijuy': ['iuygfjhg'], 'hjjjkkuy': ['kjjkjuhy'], 'bddffhsx': ['sdfbhxdf'], 'ddefgrst': ['esrdtfgd'], 'degjrtuw': ['ujrtgedw'], 'bcdemnsv': ['mnbvcdes'], 'bfghjknv': ['kjhgfnbv', 'kjhgfnbv'], 'defgrtwy': ['ytrewgfd'], 'rstuwwyz': ['uytrwwsz'], 'bdfffrsv': ['sdffbrfv'], 'ddefghrt': ['hgfdtred'], 'bfghtuvy': ['uythbgvf'], 'eeeffswz': ['zeswffee'], 'drrsvvzz': ['zsvrvrdz'], 'ddffghsw': ['sdfhgfdw'], 'abcceotw': ['acowbtec', 'acowbtce', 'acowbetc', 'aocwbtec', 'acwobetc', 'acotbecw', 'caowbtec', 'caowbtce', 'caowbetc'], 'dfjrsvwx': ['jfrvsdxw'], 'eghnrtuy': ['uytrenhg'], 'addffggg': ['adfgdfgg'], 'cddefrtv': ['vfcdtred'], 'bcdrtuvy': ['uytrbvcd'], 'degjmrtw': ['jmrtgedw'], 'bddffsvz': ['sdbdzvff'], 'adhhnnrs': ['adhnsrhn'], 'ertuwwwy': ['uytrewww'], 'degjstuw': ['ujtgedws'], 'dfssvvvz': ['zsdfvsvv'], 'hnrstwxy': ['wsxrtyhn'], 'beghrtvy': ['ytrehbgv'], 'deghsvyy': ['yhvedsgy'], 'defgrstw': ['trewgfds'], 'dehjqruy': ['yujhredq']}", stmt="index[''.join(sorted('acowbtec'))]", number=1000)
0.0015790462493896484
That's about 0.002 seconds for 1000 iterations.
Both of these steps are very efficient.
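Putting the two steps together, here is a minimal end-to-end sketch (the helper names build_index and anagrams are mine, and the word list is a shortened sample):

```python
def build_index(words):
    """Map each word's sorted-letter signature to the words sharing it."""
    index = {}
    for w in words:
        index.setdefault(''.join(sorted(w)), []).append(w)
    return index

def anagrams(index, scramble):
    """Return every indexed word that is a rearrangement of scramble."""
    return index.get(''.join(sorted(scramble)), [])

words = ['acowbtec', 'acowbtce', 'zsdfvsvv', 'sdffbrfv']  # sample entries
index = build_index(words)
print(anagrams(index, 'acowbtec'))   # ['acowbtec', 'acowbtce']
print(anagrams(index, 'zzzzzzzz'))   # []
```

The index is built once, in a single pass over the dictionary, and each query after that is one sort of a short string plus one hash lookup, independent of the dictionary size.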
The way to remove the try-except in:
wordstore = {}
with open(file1) as f:
    for line in f:
        line = line.rstrip()
        try:
            wordstore[len(line)].setdefault(''.join(sorted(line)), []).append(line)
        except:
            wordstore[len(line)] = {}
            wordstore[len(line)].setdefault(''.join(sorted(line)), []).append(line)
is to use setdefault twice:
wordstore = {}
with open(file1) as f:
    for line in f:
        line = line.rstrip()
        wordstore.setdefault(len(line), {}).setdefault(''.join(sorted(line)), []).append(line)
Another option is to use a defaultdict, which requires this:
from collections import defaultdict

wordstore = defaultdict(lambda: defaultdict(list))
with open(file1) as f:
    for line in f:
        line = line.rstrip()
        wordstore[len(line)][''.join(sorted(line))].append(line)
It has shorter lines, but the defaultdict initialization is harder to understand than the setdefault version, and plain subscription hides the magic that setdefault makes explicit. Also, every access creates an entry if one doesn't already exist.
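That last point is easy to demonstrate: with a defaultdict, merely looking up a missing key inserts an empty entry as a side effect. A small sketch (the wordstore contents here are made up for illustration):

```python
from collections import defaultdict

wordstore = defaultdict(lambda: defaultdict(list))
wordstore[4]['aert'].append('rate')

# Subscripting a length/signature that was never inserted still
# creates an empty bucket as a side effect:
_ = wordstore[9]['zzzzzzzzz']
print(9 in wordstore)            # True -- an empty entry now exists

# Using .get() for read-only lookups avoids the side effect,
# since dict.get never invokes the default factory:
hits = wordstore.get(5, {}).get('abcde', [])
print(hits)                      # []
print(5 in wordstore)            # False
```

So if the structure is queried with arbitrary scrambles after it is built, prefer .get() over subscription for the lookups, or the table slowly fills with empty entries.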