view tests/test-manifest.py @ 30155:b7a966ce89ed

changelog: disable delta chains

This patch disables delta chains on changelogs. After this patch, new
entries on changelogs - including existing changelogs - will be stored as
the fulltext of that data (likely compressed). No delta computation will
be performed.

An overview of delta chains and data justifying this change follows.

Revlogs try to store entries as a delta against a previous entry (either a
parent revision in the case of generaldelta or the previous physical
revision when not using generaldelta). Most of the time this is the
correct thing to do: it frequently results in less CPU usage and smaller
storage.

Delta chains are most effective when the base revision being deltaed
against is similar to the current data. This tends to occur naturally for
manifests and file data, since only small parts of each tend to change
with each revision. Changelogs, however, are a different story.

Changelog entries represent changesets/commits. And unless commits in a
repository are homogeneous (same author, changing same files, similar
commit messages, etc), a delta from one entry to the next tends to be
relatively large compared to the size of the entry. This means that delta
chains tend to be short. How short? Here is the full vs delta revision
breakdown on some real world repos:

Repo              % Full    % Delta    Max Length
hg                45.8      54.2       6
mozilla-central   42.4      57.6       8
mozilla-unified   42.5      57.5       17
pypy              46.1      53.9       6
python-zstandard  46.1      53.9       3

(I threw in python-zstandard as an example of a repo that is homogeneous.
It contains a small Python project with changes all from the same author.)

Contrast this with the manifest revlog for these repos, where 99+% of
revisions are deltas and delta chains run into the thousands.

So delta chains aren't as useful on changelogs. But even a short delta
chain may provide benefits. Let's measure that.

Delta chains may require less CPU to read revisions if the CPU time spent
reading smaller deltas is less than the CPU time used to decompress larger
individual entries. We can measure this via `hg perfrevlog -c -d 1`, which
iterates a revlog and resolves each revision's fulltext. Here are the
results of that command on a repo using delta chains in its changelog and
on a repo without delta chains:

hg (forward)
! wall 0.407008 comb 0.410000 user 0.410000 sys 0.000000 (best of 25)
! wall 0.390061 comb 0.390000 user 0.390000 sys 0.000000 (best of 26)

hg (reverse)
! wall 0.515221 comb 0.520000 user 0.520000 sys 0.000000 (best of 19)
! wall 0.400018 comb 0.400000 user 0.390000 sys 0.010000 (best of 25)

mozilla-central (forward)
! wall 4.508296 comb 4.490000 user 4.490000 sys 0.000000 (best of 3)
! wall 4.370222 comb 4.370000 user 4.350000 sys 0.020000 (best of 3)

mozilla-central (reverse)
! wall 5.758995 comb 5.760000 user 5.720000 sys 0.040000 (best of 3)
! wall 4.346503 comb 4.340000 user 4.320000 sys 0.020000 (best of 3)

mozilla-unified (forward)
! wall 4.957088 comb 4.950000 user 4.940000 sys 0.010000 (best of 3)
! wall 4.660528 comb 4.650000 user 4.630000 sys 0.020000 (best of 3)

mozilla-unified (reverse)
! wall 6.119827 comb 6.110000 user 6.090000 sys 0.020000 (best of 3)
! wall 4.675136 comb 4.670000 user 4.670000 sys 0.000000 (best of 3)

pypy (forward)
! wall 1.231122 comb 1.240000 user 1.230000 sys 0.010000 (best of 8)
! wall 1.164896 comb 1.160000 user 1.160000 sys 0.000000 (best of 9)

pypy (reverse)
! wall 1.467049 comb 1.460000 user 1.460000 sys 0.000000 (best of 7)
! wall 1.160200 comb 1.170000 user 1.160000 sys 0.010000 (best of 9)

The data clearly shows that it takes less wall and CPU time to resolve
revisions when there are no delta chains in the changelogs, regardless of
the direction of traversal. Furthermore, not using a delta chain means
that fulltext resolution in reverse is as fast as iterating forward. So
not using delta chains on the changelog is a clear CPU win for reading
operations.

An example of a user-visible operation showing this speed-up is revset
evaluation. Here are results for
`hg perfrevset 'author(gps) or author(mpm)'`:

hg
! wall 1.655506 comb 1.660000 user 1.650000 sys 0.010000 (best of 6)
! wall 1.612723 comb 1.610000 user 1.600000 sys 0.010000 (best of 7)

mozilla-central
! wall 17.629826 comb 17.640000 user 17.600000 sys 0.040000 (best of 3)
! wall 17.311033 comb 17.300000 user 17.260000 sys 0.040000 (best of 3)

What about 00changelog.i size?

Repo              Delta Chains    No Delta Chains
hg                7,033,250       6,976,771
mozilla-central   82,978,748      81,574,623
mozilla-unified   88,112,349      86,702,162
pypy              20,740,699      20,659,741

The data shows that removing delta chains from the changelog makes the
changelog smaller.

Delta chains are also used during changegroup generation. This operation
essentially converts a series of revisions to one large delta chain. And
changegroup generation is smart: if the delta in the revlog matches what
the changegroup is emitting, it will reuse the delta instead of
recalculating it. We can measure the impact removing changelog delta
chains has on changegroup generation via `hg perfchangegroupchangelog`:

hg
! wall 1.589245 comb 1.590000 user 1.590000 sys 0.000000 (best of 7)
! wall 1.788060 comb 1.790000 user 1.790000 sys 0.000000 (best of 6)

mozilla-central
! wall 17.382585 comb 17.380000 user 17.340000 sys 0.040000 (best of 3)
! wall 20.161357 comb 20.160000 user 20.120000 sys 0.040000 (best of 3)

mozilla-unified
! wall 18.722839 comb 18.720000 user 18.680000 sys 0.040000 (best of 3)
! wall 21.168075 comb 21.170000 user 21.130000 sys 0.040000 (best of 3)

pypy
! wall 4.828317 comb 4.830000 user 4.820000 sys 0.010000 (best of 3)
! wall 5.415455 comb 5.420000 user 5.410000 sys 0.010000 (best of 3)

The data shows eliminating delta chains makes the changelog part of
changegroup generation slower. This is expected since we now have to
compute deltas for revisions where we could recycle the delta before.

It is worth putting this regression into the context of overall
changegroup times. Here is the rough total CPU time spent in changegroup
generation for various repos while using delta chains on the changelog:

Repo              CPU Time (s)    CPU Time w/ compression (s)
hg                4.50            7.05
mozilla-central   111.1           222.0
pypy              28.68           75.5

Before compression, removing delta chains from the changelog adds ~4.4%
overhead to hg changegroup generation, 1.3% to mozilla-central, and 2.0%
to pypy. When you factor in zlib compression, these percentages are
roughly halved.

While the increased CPU usage for changegroup generation is unfortunate, I
think it is acceptable because the percentage is small, server operators
(those likely impacted most by this) have other mechanisms to mitigate CPU
consumption (namely reducing the zlib compression level and using
pre-generated clone bundles), and because there is room to optimize this
in the future. For example, we could use nullid as the base revision,
effectively encoding the full revision for each entry in the changegroup.
When doing this, `hg perfchangegroupchangelog` nearly halves:

mozilla-unified
! wall 21.168075 comb 21.170000 user 21.130000 sys 0.040000 (best of 3)
! wall 11.196461 comb 11.200000 user 11.190000 sys 0.010000 (best of 3)

This looks very promising as a future optimization opportunity.

It's worth noting the changes in test-acl.t to the changegroup part size.
This is because revision 6 in the changegroup had a delta chain of length
2 before this patch, and after this patch its base revision is nullrev.
When the base revision is nullrev, cg2packer.deltaparent() hardcodes the
*previous* revision from the changegroup as the delta parent. This caused
the delta in the changegroup to switch base revisions, the delta to
change, and the size to change accordingly. While the size increased in
this case, I think sizes will remain the same on average, as the delta
base for changelog revisions doesn't matter too much (as this patch
shows). So, I don't consider this a regression.
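
As a rough sanity check, the ~4.4% figure for hg above can be reproduced
from the quoted `hg perfchangegroupchangelog` wall times and the rough
pre-compression changegroup CPU time. A minimal Python sketch using only
the numbers quoted above (variable names are illustrative, nothing here is
re-measured):

    # Quoted hg wall times with and without changelog delta chains, and
    # the rough total changegroup CPU time for hg before compression.
    before, after = 1.589245, 1.788060
    total_cpu = 4.50
    overhead = (after - before) / total_cpu
    print('~%.1f%%' % (overhead * 100))  # prints ~4.4%
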
author Gregory Szorc <gregory.szorc@gmail.com>
date Thu, 13 Oct 2016 12:50:27 +0200
parents b9ed5a88710c
children 959ebff3505a
line source

from __future__ import absolute_import

import binascii
import itertools
import silenttestrunner
import unittest

from mercurial import (
    manifest as manifestmod,
    match as matchmod,
)

EMTPY_MANIFEST = ''
EMTPY_MANIFEST_V2 = '\0\n'

HASH_1 = '1' * 40
BIN_HASH_1 = binascii.unhexlify(HASH_1)
HASH_2 = 'f' * 40
BIN_HASH_2 = binascii.unhexlify(HASH_2)
HASH_3 = '1234567890abcdef0987654321deadbeef0fcafe'
BIN_HASH_3 = binascii.unhexlify(HASH_3)
A_SHORT_MANIFEST = (
    'bar/baz/qux.py\0%(hash2)s%(flag2)s\n'
    'foo\0%(hash1)s%(flag1)s\n'
    ) % {'hash1': HASH_1,
         'flag1': '',
         'hash2': HASH_2,
         'flag2': 'l',
         }

# Same data as A_SHORT_MANIFEST
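# (Descriptive note on the v2 encoding, as far as these constants exercise
# it: the text starts with a metadata block of lines beginning with '\0'
# ('\0\n' when empty), and each entry is
#   <stem length byte><path suffix>\0<flags>[\0<metadata>]\n<binary node>\n
# where the stem length byte gives the number of leading path bytes shared
# with the previous entry. See mercurial/manifest.py for the authoritative
# parser.)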
A_SHORT_MANIFEST_V2 = (
    '\0\n'
    '\x00bar/baz/qux.py\0%(flag2)s\n%(hash2)s\n'
    '\x00foo\0%(flag1)s\n%(hash1)s\n'
    ) % {'hash1': BIN_HASH_1,
         'flag1': '',
         'hash2': BIN_HASH_2,
         'flag2': 'l',
         }

# Same data as A_SHORT_MANIFEST
A_METADATA_MANIFEST = (
    '\0foo\0bar\n'
    '\x00bar/baz/qux.py\0%(flag2)s\0foo\0bar\n%(hash2)s\n' # flag and metadata
    '\x00foo\0%(flag1)s\0foo\n%(hash1)s\n' # no flag, but metadata
    ) % {'hash1': BIN_HASH_1,
         'flag1': '',
         'hash2': BIN_HASH_2,
         'flag2': 'l',
         }

A_STEM_COMPRESSED_MANIFEST = (
    '\0\n'
    '\x00bar/baz/qux.py\0%(flag2)s\n%(hash2)s\n'
    '\x04qux/foo.py\0%(flag1)s\n%(hash1)s\n' # simple case of 4 stem chars
    '\x0az.py\0%(flag1)s\n%(hash1)s\n' # tricky newline = 10 stem characters
    '\x00%(verylongdir)sx/x\0\n%(hash1)s\n'
    '\xffx/y\0\n%(hash2)s\n' # more than 255 stem chars
    ) % {'hash1': BIN_HASH_1,
         'flag1': '',
         'hash2': BIN_HASH_2,
         'flag2': 'l',
         'verylongdir': 255 * 'x',
         }

A_DEEPER_MANIFEST = (
    'a/b/c/bar.py\0%(hash3)s%(flag1)s\n'
    'a/b/c/bar.txt\0%(hash1)s%(flag1)s\n'
    'a/b/c/foo.py\0%(hash3)s%(flag1)s\n'
    'a/b/c/foo.txt\0%(hash2)s%(flag2)s\n'
    'a/b/d/baz.py\0%(hash3)s%(flag1)s\n'
    'a/b/d/qux.py\0%(hash1)s%(flag2)s\n'
    'a/b/d/ten.txt\0%(hash3)s%(flag2)s\n'
    'a/b/dog.py\0%(hash3)s%(flag1)s\n'
    'a/b/fish.py\0%(hash2)s%(flag1)s\n'
    'a/c/london.py\0%(hash3)s%(flag2)s\n'
    'a/c/paper.txt\0%(hash2)s%(flag2)s\n'
    'a/c/paris.py\0%(hash2)s%(flag1)s\n'
    'a/d/apple.py\0%(hash3)s%(flag1)s\n'
    'a/d/pizza.py\0%(hash3)s%(flag2)s\n'
    'a/green.py\0%(hash1)s%(flag2)s\n'
    'a/purple.py\0%(hash2)s%(flag1)s\n'
    'app.py\0%(hash3)s%(flag1)s\n'
    'readme.txt\0%(hash2)s%(flag1)s\n'
    ) % {'hash1': HASH_1,
         'flag1': '',
         'hash2': HASH_2,
         'flag2': 'l',
         'hash3': HASH_3,
         }

HUGE_MANIFEST_ENTRIES = 200001

A_HUGE_MANIFEST = ''.join(sorted(
    'file%d\0%s%s\n' % (i, h, f) for i, h, f in
    itertools.izip(xrange(HUGE_MANIFEST_ENTRIES),
                   itertools.cycle((HASH_1, HASH_2)),
                   itertools.cycle(('', 'x', 'l')))))

class basemanifesttests(object):
    def parsemanifest(self, text):
        raise NotImplementedError('parsemanifest not implemented by test case')

    def assertIn(self, thing, container, msg=None):
        # assertIn new in 2.7, use it if available, otherwise polyfill
        sup = getattr(unittest.TestCase, 'assertIn', False)
        if sup:
            return sup(self, thing, container, msg=msg)
        if not msg:
            msg = 'Expected %r in %r' % (thing, container)
        self.assert_(thing in container, msg)

    def testEmptyManifest(self):
        m = self.parsemanifest(EMTPY_MANIFEST)
        self.assertEqual(0, len(m))
        self.assertEqual([], list(m))

    def testEmptyManifestv2(self):
        m = self.parsemanifest(EMTPY_MANIFEST_V2)
        self.assertEqual(0, len(m))
        self.assertEqual([], list(m))

    def testManifest(self):
        m = self.parsemanifest(A_SHORT_MANIFEST)
        self.assertEqual(['bar/baz/qux.py', 'foo'], list(m))
        self.assertEqual(BIN_HASH_2, m['bar/baz/qux.py'])
        self.assertEqual('l', m.flags('bar/baz/qux.py'))
        self.assertEqual(BIN_HASH_1, m['foo'])
        self.assertEqual('', m.flags('foo'))
        self.assertRaises(KeyError, lambda : m['wat'])

    def testParseManifestV2(self):
        m1 = self.parsemanifest(A_SHORT_MANIFEST)
        m2 = self.parsemanifest(A_SHORT_MANIFEST_V2)
        # Should have same content as A_SHORT_MANIFEST
        self.assertEqual(m1.text(), m2.text())

    def testParseManifestMetadata(self):
        # Metadata is for future-proofing and should be accepted but ignored
        m = self.parsemanifest(A_METADATA_MANIFEST)
        self.assertEqual(A_SHORT_MANIFEST, m.text())

    def testParseManifestStemCompression(self):
        m = self.parsemanifest(A_STEM_COMPRESSED_MANIFEST)
        self.assertIn('bar/baz/qux.py', m)
        self.assertIn('bar/qux/foo.py', m)
        self.assertIn('bar/qux/foz.py', m)
        self.assertIn(256 * 'x' + '/x', m)
        self.assertIn(256 * 'x' + '/y', m)
        self.assertEqual(A_STEM_COMPRESSED_MANIFEST, m.text(usemanifestv2=True))

    def testTextV2(self):
        m1 = self.parsemanifest(A_SHORT_MANIFEST)
        v2text = m1.text(usemanifestv2=True)
        self.assertEqual(A_SHORT_MANIFEST_V2, v2text)

    def testSetItem(self):
        want = BIN_HASH_1

        m = self.parsemanifest(EMTPY_MANIFEST)
        m['a'] = want
        self.assertIn('a', m)
        self.assertEqual(want, m['a'])
        self.assertEqual('a\0' + HASH_1 + '\n', m.text())

        m = self.parsemanifest(A_SHORT_MANIFEST)
        m['a'] = want
        self.assertEqual(want, m['a'])
        self.assertEqual('a\0' + HASH_1 + '\n' + A_SHORT_MANIFEST,
                         m.text())

    def testSetFlag(self):
        want = 'x'

        m = self.parsemanifest(EMTPY_MANIFEST)
        # first add a file; a file-less flag makes no sense
        m['a'] = BIN_HASH_1
        m.setflag('a', want)
        self.assertEqual(want, m.flags('a'))
        self.assertEqual('a\0' + HASH_1 + want + '\n', m.text())

        m = self.parsemanifest(A_SHORT_MANIFEST)
        # first add a file; a file-less flag makes no sense
        m['a'] = BIN_HASH_1
        m.setflag('a', want)
        self.assertEqual(want, m.flags('a'))
        self.assertEqual('a\0' + HASH_1 + want + '\n' + A_SHORT_MANIFEST,
                         m.text())

    def testCopy(self):
        m = self.parsemanifest(A_SHORT_MANIFEST)
        m['a'] = BIN_HASH_1
        m2 = m.copy()
        del m
        del m2 # make sure we don't double free() anything

    def testCompaction(self):
        unhex = binascii.unhexlify
        h1, h2 = unhex(HASH_1), unhex(HASH_2)
        m = self.parsemanifest(A_SHORT_MANIFEST)
        m['alpha'] = h1
        m['beta'] = h2
        del m['foo']
        want = 'alpha\0%s\nbar/baz/qux.py\0%sl\nbeta\0%s\n' % (
            HASH_1, HASH_2, HASH_2)
        self.assertEqual(want, m.text())
        self.assertEqual(3, len(m))
        self.assertEqual(['alpha', 'bar/baz/qux.py', 'beta'], list(m))
        self.assertEqual(h1, m['alpha'])
        self.assertEqual(h2, m['bar/baz/qux.py'])
        self.assertEqual(h2, m['beta'])
        self.assertEqual('', m.flags('alpha'))
        self.assertEqual('l', m.flags('bar/baz/qux.py'))
        self.assertEqual('', m.flags('beta'))
        self.assertRaises(KeyError, lambda : m['foo'])

    def testSetGetNodeSuffix(self):
        clean = self.parsemanifest(A_SHORT_MANIFEST)
        m = self.parsemanifest(A_SHORT_MANIFEST)
        h = m['foo']
        f = m.flags('foo')
        want = h + 'a'
        # Merge code wants to set 21-byte fake hashes at times
        m['foo'] = want
        self.assertEqual(want, m['foo'])
        self.assertEqual([('bar/baz/qux.py', BIN_HASH_2),
                          ('foo', BIN_HASH_1 + 'a')],
                         list(m.iteritems()))
        # Sometimes it even tries a 22-byte fake hash, but we can
        # return 21 and it'll work out
        m['foo'] = want + '+'
        self.assertEqual(want, m['foo'])
        # make sure the suffix survives a copy
        match = matchmod.match('', '', ['re:foo'])
        m2 = m.matches(match)
        self.assertEqual(want, m2['foo'])
        self.assertEqual(1, len(m2))
        m2 = m.copy()
        self.assertEqual(want, m2['foo'])
        # suffix with iteration
        self.assertEqual([('bar/baz/qux.py', BIN_HASH_2),
                          ('foo', want)],
                         list(m.iteritems()))

        # shows up in diff
        self.assertEqual({'foo': ((want, f), (h, ''))}, m.diff(clean))
        self.assertEqual({'foo': ((h, ''), (want, f))}, clean.diff(m))

    def testMatchException(self):
        m = self.parsemanifest(A_SHORT_MANIFEST)
        match = matchmod.match('', '', ['re:.*'])
        def filt(path):
            if path == 'foo':
                assert False
            return True
        match.matchfn = filt
        self.assertRaises(AssertionError, m.matches, match)

    def testRemoveItem(self):
        m = self.parsemanifest(A_SHORT_MANIFEST)
        del m['foo']
        self.assertRaises(KeyError, lambda : m['foo'])
        self.assertEqual(1, len(m))
        self.assertEqual(1, len(list(m)))
        # now restore and make sure everything works right
        m['foo'] = 'a' * 20
        self.assertEqual(2, len(m))
        self.assertEqual(2, len(list(m)))

    def testManifestDiff(self):
        MISSING = (None, '')
        addl = 'z-only-in-left\0' + HASH_1 + '\n'
        addr = 'z-only-in-right\0' + HASH_2 + 'x\n'
        left = self.parsemanifest(
            A_SHORT_MANIFEST.replace(HASH_1, HASH_3 + 'x') + addl)
        right = self.parsemanifest(A_SHORT_MANIFEST + addr)
        want = {
            'foo': ((BIN_HASH_3, 'x'),
                    (BIN_HASH_1, '')),
            'z-only-in-left': ((BIN_HASH_1, ''), MISSING),
            'z-only-in-right': (MISSING, (BIN_HASH_2, 'x')),
            }
        self.assertEqual(want, left.diff(right))

        want = {
            'bar/baz/qux.py': (MISSING, (BIN_HASH_2, 'l')),
            'foo': (MISSING, (BIN_HASH_3, 'x')),
            'z-only-in-left': (MISSING, (BIN_HASH_1, '')),
            }
        self.assertEqual(want, self.parsemanifest(EMTPY_MANIFEST).diff(left))

        want = {
            'bar/baz/qux.py': ((BIN_HASH_2, 'l'), MISSING),
            'foo': ((BIN_HASH_3, 'x'), MISSING),
            'z-only-in-left': ((BIN_HASH_1, ''), MISSING),
            }
        self.assertEqual(want, left.diff(self.parsemanifest(EMTPY_MANIFEST)))
        copy = right.copy()
        del copy['z-only-in-right']
        del right['foo']
        want = {
            'foo': (MISSING, (BIN_HASH_1, '')),
            'z-only-in-right': ((BIN_HASH_2, 'x'), MISSING),
            }
        self.assertEqual(want, right.diff(copy))

        short = self.parsemanifest(A_SHORT_MANIFEST)
        pruned = short.copy()
        del pruned['foo']
        want = {
            'foo': ((BIN_HASH_1, ''), MISSING),
            }
        self.assertEqual(want, short.diff(pruned))
        want = {
            'foo': (MISSING, (BIN_HASH_1, '')),
            }
        self.assertEqual(want, pruned.diff(short))
        want = {
            'bar/baz/qux.py': None,
            'foo': (MISSING, (BIN_HASH_1, '')),
            }
        self.assertEqual(want, pruned.diff(short, True))

    def testReversedLines(self):
        backwards = ''.join(
            l + '\n' for l in reversed(A_SHORT_MANIFEST.split('\n')) if l)
        try:
            self.parsemanifest(backwards)
            self.fail('Should have raised ValueError')
        except ValueError as v:
            self.assertIn('Manifest lines not in sorted order.', str(v))

    def testNoTerminalNewline(self):
        try:
            self.parsemanifest(A_SHORT_MANIFEST + 'wat')
            self.fail('Should have raised ValueError')
        except ValueError as v:
            self.assertIn('Manifest did not end in a newline.', str(v))

    def testNoNewLineAtAll(self):
        try:
            self.parsemanifest('wat')
            self.fail('Should have raised ValueError')
        except ValueError as v:
            self.assertIn('Manifest did not end in a newline.', str(v))

    def testHugeManifest(self):
        m = self.parsemanifest(A_HUGE_MANIFEST)
        self.assertEqual(HUGE_MANIFEST_ENTRIES, len(m))
        self.assertEqual(len(m), len(list(m)))

    def testMatchesMetadata(self):
        '''Tests matches() for a few specific files to make sure that both
        the set of files and their flags and nodeids are correct in
        the resulting manifest.'''
        m = self.parsemanifest(A_HUGE_MANIFEST)

        match = matchmod.match('/', '',
                ['file1', 'file200', 'file300'], exact=True)
        m2 = m.matches(match)

        w = ('file1\0%sx\n'
             'file200\0%sl\n'
             'file300\0%s\n') % (HASH_2, HASH_1, HASH_1)
        self.assertEqual(w, m2.text())

    def testMatchesNonexistentFile(self):
        '''Tests matches() for a small set of specific files, including one
        nonexistent file to make sure it only matches against existing files.
        '''
        m = self.parsemanifest(A_DEEPER_MANIFEST)

        match = matchmod.match('/', '',
                ['a/b/c/bar.txt', 'a/b/d/qux.py', 'readme.txt', 'nonexistent'],
                exact=True)
        m2 = m.matches(match)

        self.assertEqual(
                ['a/b/c/bar.txt', 'a/b/d/qux.py', 'readme.txt'],
                m2.keys())

    def testMatchesNonexistentDirectory(self):
        '''Tests matches() for a relpath match on a directory that doesn't
        actually exist.'''
        m = self.parsemanifest(A_DEEPER_MANIFEST)

        match = matchmod.match('/', '', ['a/f'], default='relpath')
        m2 = m.matches(match)

        self.assertEqual([], m2.keys())

    def testMatchesExactLarge(self):
        '''Tests matches() for files matching a large list of exact files.
        '''
        m = self.parsemanifest(A_HUGE_MANIFEST)

        flist = m.keys()[80:300]
        match = matchmod.match('/', '', flist, exact=True)
        m2 = m.matches(match)

        self.assertEqual(flist, m2.keys())

    def testMatchesFull(self):
        '''Tests matches() for what should be a full match.'''
        m = self.parsemanifest(A_DEEPER_MANIFEST)

        match = matchmod.match('/', '', [''])
        m2 = m.matches(match)

        self.assertEqual(m.keys(), m2.keys())

    def testMatchesDirectory(self):
        '''Tests matches() on a relpath match on a directory, which should
        match against all files within said directory.'''
        m = self.parsemanifest(A_DEEPER_MANIFEST)

        match = matchmod.match('/', '', ['a/b'], default='relpath')
        m2 = m.matches(match)

        self.assertEqual([
            'a/b/c/bar.py', 'a/b/c/bar.txt', 'a/b/c/foo.py', 'a/b/c/foo.txt',
            'a/b/d/baz.py', 'a/b/d/qux.py', 'a/b/d/ten.txt', 'a/b/dog.py',
            'a/b/fish.py'], m2.keys())

    def testMatchesExactPath(self):
        '''Tests matches() on an exact match on a directory, which should
        result in an empty manifest because you can't perform an exact match
        against a directory.'''
        m = self.parsemanifest(A_DEEPER_MANIFEST)

        match = matchmod.match('/', '', ['a/b'], exact=True)
        m2 = m.matches(match)

        self.assertEqual([], m2.keys())

    def testMatchesCwd(self):
        '''Tests matches() on a relpath match with the current directory ('.')
        when not in the root directory.'''
        m = self.parsemanifest(A_DEEPER_MANIFEST)

        match = matchmod.match('/', 'a/b', ['.'], default='relpath')
        m2 = m.matches(match)

        self.assertEqual([
            'a/b/c/bar.py', 'a/b/c/bar.txt', 'a/b/c/foo.py', 'a/b/c/foo.txt',
            'a/b/d/baz.py', 'a/b/d/qux.py', 'a/b/d/ten.txt', 'a/b/dog.py',
            'a/b/fish.py'], m2.keys())

    def testMatchesWithPattern(self):
        '''Tests matches() for files matching a pattern that reside
        deeper than the specified directory.'''
        m = self.parsemanifest(A_DEEPER_MANIFEST)

        match = matchmod.match('/', '', ['a/b/*/*.txt'])
        m2 = m.matches(match)

        self.assertEqual(
                ['a/b/c/bar.txt', 'a/b/c/foo.txt', 'a/b/d/ten.txt'],
                m2.keys())

class testmanifestdict(unittest.TestCase, basemanifesttests):
    def parsemanifest(self, text):
        return manifestmod.manifestdict(text)

class testtreemanifest(unittest.TestCase, basemanifesttests):
    def parsemanifest(self, text):
        return manifestmod.treemanifest('', text)

if __name__ == '__main__':
    silenttestrunner.main(__name__)