view tests/test-revlog-raw.py @ 40326:fed697fa1734
sqlitestore: file storage backend using SQLite
This commit provides an extension which uses SQLite to store file
data (as opposed to revlogs).
As the inline documentation describes, there are still several
aspects to the extension that are incomplete. But it's a start.
The extension does support basic clone, checkout, and commit
workflows, which makes it suitable for simple use cases.
One notable missing feature is support for "bundlerepos." This is
probably responsible for most of the test failures when the extension
is activated as part of the test suite.
All revision data is stored in SQLite. Data is stored as zstd
compressed chunks (default if zstd is available), zlib compressed
chunks (default if zstd is not available), or raw chunks (if
configured or if a compressed delta is not smaller than the raw
delta). This makes things very similar to revlogs.
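
As a rough illustration of that policy, here is a minimal sketch (not the
extension's actual code; the function and marker names are illustrative):

    import zlib

    try:
        import zstandard
    except ImportError:
        zstandard = None

    def compresschunk(data):
        '''Return (compression marker, stored bytes) for one chunk.'''
        if zstandard is not None:
            # zstd is preferred when the bindings are available.
            compressed = zstandard.ZstdCompressor(level=3).compress(data)
            engine = b'zstd'
        else:
            compressed = zlib.compress(data)
            engine = b'zlib'

        # Store raw when compression does not actually shrink the chunk.
        if len(compressed) < len(data):
            return engine, compressed
        return b'none', data
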
Unlike revlogs, the extension doesn't yet enforce a limit on delta
chain length. This is an obvious limitation and should be addressed.
This is somewhat mitigated by the use of zstd, which is much faster
than zlib to decompress.
There is a dedicated table for storing deltas. Deltas are stored
by the SHA-1 hash of their uncompressed content. The "fileindex" table
has columns that reference the delta for each revision and the base
delta against which that delta should be applied. A recursive SQL query
is used to resolve the delta chain along with the delta data.
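
To make the shape of that concrete, here is a simplified sketch of such a
layout and of the recursive query, using illustrative table and column names
rather than the extension's exact schema:

    import sqlite3

    def makedb():
        '''Create an in-memory database with the simplified schema.'''
        db = sqlite3.connect(':memory:')
        db.execute('CREATE TABLE delta ('
                   '  id INTEGER PRIMARY KEY, '
                   '  hash BLOB UNIQUE NOT NULL, '  # SHA-1 of uncompressed delta
                   '  data BLOB NOT NULL)')
        db.execute('CREATE TABLE fileindex ('
                   '  id INTEGER PRIMARY KEY, '
                   '  path BLOB NOT NULL, '
                   '  node BLOB NOT NULL, '
                   '  deltaid INTEGER REFERENCES delta(id), '
                   '  deltabaseid INTEGER REFERENCES fileindex(id))')
        return db

    # Walk deltabaseid links from a revision back to its chain root and
    # return the delta payloads base-first, ready to be applied in order.
    _deltachainquery = '''
        WITH RECURSIVE deltachain(id, deltaid, deltabaseid, depth) AS (
            SELECT id, deltaid, deltabaseid, 0
            FROM fileindex
            WHERE id = ?
            UNION ALL
            SELECT fileindex.id, fileindex.deltaid, fileindex.deltabaseid,
                   deltachain.depth + 1
            FROM fileindex, deltachain
            WHERE fileindex.id = deltachain.deltabaseid
        )
        SELECT delta.data
        FROM deltachain
        JOIN delta ON delta.id = deltachain.deltaid
        ORDER BY deltachain.depth DESC
    '''

    def resolvedeltachain(db, rowid):
        '''Return delta data for a fileindex row, base delta first.'''
        return [data for (data,) in db.execute(_deltachainquery, (rowid,))]
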
By storing deltas by hash, we are able to de-duplicate delta storage!
With revlogs, the same deltas in different revlogs would result in
duplicate storage of that delta. In this scheme, inserting the
duplicate delta is a no-op and delta chains simply reference the
existing delta.
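
A hedged sketch of that de-duplication, building on the simplified schema
above (insertdelta is a hypothetical helper, not part of the extension):
because the delta table is keyed on the content hash, inserting a duplicate
is a no-op and both revisions reference the same row.

    import hashlib

    def insertdelta(db, delta):
        '''Insert a delta, returning the id of the (possibly existing) row.'''
        deltahash = hashlib.sha1(delta).digest()

        # A duplicate delta hits the UNIQUE(hash) constraint and is ignored;
        # the new revision simply references the existing row.
        db.execute('INSERT OR IGNORE INTO delta (hash, data) VALUES (?, ?)',
                   (deltahash, delta))

        return db.execute('SELECT id FROM delta WHERE hash = ?',
                          (deltahash,)).fetchone()[0]
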
When initially implementing this extension, I did not have
content-indexed deltas and deltas could be duplicated across files
(just like revlogs). When I implemented content-indexed deltas, the
size of the SQLite database for a full clone of mozilla-unified
dropped:
before: 2,554,261,504 bytes
after: 2,488,754,176 bytes
Surprisingly, this is still larger than the byte size of the revlog
files:
revlog files: 2,104,861,230 bytes
du -b: 2,254,381,614 bytes
I would have expected storage to be smaller since we're not limiting
delta chain length and since we're using zstd instead of zlib. I
suspect the SQLite indexes and per-column overhead account for the
bulk of the difference. (Keep in mind that revlog uses a 64-byte
packed struct for revision index data and stores deltas without
padding. Aside from the 12 unused bytes in the 32-byte node field,
revlogs are pretty efficient.) Another source of overhead is file
name storage. With revlogs, file names are stored in the filesystem.
But with SQLite, we need to store file names in the database. This is
roughly equivalent to the size of the fncache file, which for the
mozilla-unified repository is ~34MB.
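
For reference, the 64-byte packed struct mentioned above corresponds roughly
to revlog's version 1 index entry; a quick sketch of that layout (the
variable name here is illustrative, not revlog's own identifier):

    import struct

    # offset (48 bits) + flags (16 bits), compressed length, uncompressed
    # length, delta base rev, link rev, p1 rev, p2 rev, 20-byte node, and
    # 12 bytes of padding reserved for longer hashes.
    indexentry = struct.Struct('>Qiiiiii20s12x')

    assert indexentry.size == 64
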
Since the SQLite database isn't append-only and since delta chains
can reference any delta, this opens some interesting possibilities.
For example, we could store deltas in reverse, such that fulltexts
are stored for newer revisions and deltas are applied to reconstruct
older revisions. This is likely a better storage strategy for
version control, as new data tends to be more frequently accessed
than old data. We would obviously need wire protocol support for
transferring revision data from newest to oldest. And we would
probably need some kind of mechanism for "re-encoding" stores. But
it should be doable.
This extension is very much of experimental quality. There are a handful
of features that don't work. It probably isn't suitable for day-to-day
use. But it could be used in limited cases (e.g. read-only checkouts
like in CI). And it is also a good proving ground for alternate
storage backends. As we continue to define interfaces for all things
storage, it will be useful to have a viable alternate storage backend
to see how things shake out in practice.
test-storage.py passes on Python 2 and introduces no new test failures on
Python 3. Having the storage-level unit tests has proved to be insanely
useful when developing this extension. Those tests caught numerous bugs
during development and I'm convinced this style of testing is the way
forward for ensuring alternate storage backends work as intended. Of
course, test coverage isn't close to what it needs to be. But it is
a start. And what coverage we have gives me confidence that basic store
functionality is implemented properly.
Differential Revision: https://phab.mercurial-scm.org/D4928
author      Gregory Szorc <gregory.szorc@gmail.com>
date        Tue, 09 Oct 2018 08:50:13 -0700
parents     0a5b20c107a6
children    cca12a31ede5
line source
# test revlog interaction about raw data (flagprocessor)

from __future__ import absolute_import, print_function

import sys

from mercurial import (
    encoding,
    node,
    revlog,
    transaction,
    vfs,
)

# TESTTMP is optional. This makes it convenient to run without run-tests.py
tvfs = vfs.vfs(encoding.environ.get(b'TESTTMP', b'/tmp'))

# Enable generaldelta otherwise revlog won't use delta as expected by the test
tvfs.options = {b'generaldelta': True, b'revlogv1': True}

# The test wants to control whether to use delta explicitly, based on
# "storedeltachains".
revlog.revlog._isgooddeltainfo = lambda self, d, textlen: self._storedeltachains

def abort(msg):
    print('abort: %s' % msg)
    # Return 0 so run-tests.py could compare the output.
    sys.exit()

# Register a revlog processor for flag EXTSTORED.
#
# It simply prepends a fixed header, and replaces '1' to 'i'. So it has
# insertion and replacement, and may be interesting to test revlog's line-based
# deltas.
_extheader = b'E\n'

def readprocessor(self, rawtext):
    # True: the returned text could be used to verify hash
    text = rawtext[len(_extheader):].replace(b'i', b'1')
    return text, True

def writeprocessor(self, text):
    # False: the returned rawtext shouldn't be used to verify hash
    rawtext = _extheader + text.replace(b'1', b'i')
    return rawtext, False

def rawprocessor(self, rawtext):
    # False: do not verify hash. Only the content returned by "readprocessor"
    # can be used to verify hash.
    return False

revlog.addflagprocessor(revlog.REVIDX_EXTSTORED,
                        (readprocessor, writeprocessor, rawprocessor))

# Utilities about reading and appending revlog

def newtransaction():
    # A transaction is required to write revlogs
    report = lambda msg: None
    return transaction.transaction(report, tvfs, {'plain': tvfs}, b'journal')

def newrevlog(name=b'_testrevlog.i', recreate=False):
    if recreate:
        tvfs.tryunlink(name)
    rlog = revlog.revlog(tvfs, name)
    return rlog

def appendrev(rlog, text, tr, isext=False, isdelta=True):
    '''Append a revision. If isext is True, set the EXTSTORED flag so flag
    processor will be used (and rawtext is different from text). If isdelta is
    True, force the revision to be a delta, otherwise it's full text.
    '''
    nextrev = len(rlog)
    p1 = rlog.node(nextrev - 1)
    p2 = node.nullid
    if isext:
        flags = revlog.REVIDX_EXTSTORED
    else:
        flags = revlog.REVIDX_DEFAULT_FLAGS
    # Change storedeltachains temporarily, to override revlog's delta decision
    rlog._storedeltachains = isdelta
    try:
        rlog.addrevision(text, tr, nextrev, p1, p2, flags=flags)
        return nextrev
    except Exception as ex:
        abort('rev %d: failed to append: %s' % (nextrev, ex))
    finally:
        # Restore storedeltachains. It is always True, see revlog.__init__
        rlog._storedeltachains = True

def addgroupcopy(rlog, tr, destname=b'_destrevlog.i', optimaldelta=True):
    '''Copy revlog to destname using revlog.addgroup. Return the copied revlog.

    This emulates push or pull. They use changegroup. Changegroup requires
    repo to work. We don't have a repo, so a dummy changegroup is used.

    If optimaldelta is True, use optimized delta parent, so the destination
    revlog could probably reuse it. Otherwise it builds sub-optimal delta, and
    the destination revlog needs more work to use it.

    This exercises some revlog.addgroup (and revlog._addrevision(text=None))
    code path, which is not covered by "appendrev" alone.
    '''
    class dummychangegroup(object):
        @staticmethod
        def deltachunk(pnode):
            pnode = pnode or node.nullid
            parentrev = rlog.rev(pnode)
            r = parentrev + 1
            if r >= len(rlog):
                return {}
            if optimaldelta:
                deltaparent = parentrev
            else:
                # suboptimal deltaparent
                deltaparent = min(0, parentrev)
            if not rlog.candelta(deltaparent, r):
                deltaparent = -1
            return {b'node': rlog.node(r), b'p1': pnode, b'p2': node.nullid,
                    b'cs': rlog.node(rlog.linkrev(r)), b'flags': rlog.flags(r),
                    b'deltabase': rlog.node(deltaparent),
                    b'delta': rlog.revdiff(deltaparent, r)}

        def deltaiter(self):
            chain = None
            for chunkdata in iter(lambda: self.deltachunk(chain), {}):
                node = chunkdata[b'node']
                p1 = chunkdata[b'p1']
                p2 = chunkdata[b'p2']
                cs = chunkdata[b'cs']
                deltabase = chunkdata[b'deltabase']
                delta = chunkdata[b'delta']
                flags = chunkdata[b'flags']

                chain = node

                yield (node, p1, p2, cs, deltabase, delta, flags)

    def linkmap(lnode):
        return rlog.rev(lnode)

    dlog = newrevlog(destname, recreate=True)
    dummydeltas = dummychangegroup().deltaiter()
    dlog.addgroup(dummydeltas, linkmap, tr)
    return dlog

def lowlevelcopy(rlog, tr, destname=b'_destrevlog.i'):
    '''Like addgroupcopy, but use the low level revlog._addrevision directly.

    It exercises some code paths that are hard to reach easily otherwise.
    '''
    dlog = newrevlog(destname, recreate=True)
    for r in rlog:
        p1 = rlog.node(r - 1)
        p2 = node.nullid
        if r == 0 or (rlog.flags(r) & revlog.REVIDX_EXTSTORED):
            text = rlog.revision(r, raw=True)
            cachedelta = None
        else:
            # deltaparent cannot have EXTSTORED flag.
            deltaparent = max([-1] +
                              [p for p in range(r)
                               if rlog.flags(p) & revlog.REVIDX_EXTSTORED == 0])
            text = None
            cachedelta = (deltaparent, rlog.revdiff(deltaparent, r))
        flags = rlog.flags(r)
        ifh = dfh = None
        try:
            ifh = dlog.opener(dlog.indexfile, b'a+')
            if not dlog._inline:
                dfh = dlog.opener(dlog.datafile, b'a+')
            dlog._addrevision(rlog.node(r), text, tr, r, p1, p2, flags,
                              cachedelta, ifh, dfh)
        finally:
            if dfh is not None:
                dfh.close()
            if ifh is not None:
                ifh.close()
    return dlog

# Utilities to generate revisions for testing

def genbits(n):
    '''Given a number n, generate (2 ** (n * 2) + 1) numbers in range(2 ** n).
    i.e. the generated numbers have a width of n bits.

    The combination of two adjacent numbers will cover all possible cases.
    That is to say, given any x, y where both x, and y are in range(2 ** n),
    there is an x followed immediately by y in the generated sequence.
    '''
    m = 2 ** n

    # Gray Code. See https://en.wikipedia.org/wiki/Gray_code
    gray = lambda x: x ^ (x >> 1)
    reversegray = dict((gray(i), i) for i in range(m))

    # Generate (n * 2) bit gray code, yield lower n bits as X, and look for
    # the next unused gray code where higher n bits equal to X.
    # For gray codes whose higher bits are X, a[X] of them have been used.
    a = [0] * m

    # Iterate from 0.
    x = 0
    yield x
    for i in range(m * m):
        x = reversegray[x]
        y = gray(a[x] + x * m) & (m - 1)
        assert a[x] < m
        a[x] += 1
        x = y
        yield x

def gentext(rev):
    '''Given a revision number, generate dummy text'''
    return b''.join(b'%d\n' % j for j in range(-1, rev % 5))

def writecases(rlog, tr):
    '''Write some revisions interested to the test.

    The test is interested in 3 properties of a revision:

        - Is it a delta or a full text? (isdelta)
          This is to catch some delta application issues.
        - Does it have a flag of EXTSTORED? (isext)
          This is to catch some flag processor issues. Especially when
          interacted with revlog deltas.
        - Is its text empty? (isempty)
          This is less important. It is intended to try to catch some careless
          checks like "if text" instead of "if text is None".
          Note: if flag processor is involved, raw text may be not empty.

    Write 65 revisions. So that all combinations of the above flags for
    adjacent revisions are covered. That is to say,

        len(set(
            (r.delta, r.ext, r.empty, (r+1).delta, (r+1).ext, (r+1).empty)
            for r in range(len(rlog) - 1)
           )) is 64.

    Where "r.delta", "r.ext", and "r.empty" are booleans matching properties
    mentioned above.

    Return expected [(text, rawtext)].
    '''
    result = []
    for i, x in enumerate(genbits(3)):
        isdelta, isext, isempty = bool(x & 1), bool(x & 2), bool(x & 4)
        if isempty:
            text = b''
        else:
            text = gentext(i)
        rev = appendrev(rlog, text, tr, isext=isext, isdelta=isdelta)

        # Verify text, rawtext, and rawsize
        if isext:
            rawtext = writeprocessor(None, text)[0]
        else:
            rawtext = text
        if rlog.rawsize(rev) != len(rawtext):
            abort('rev %d: wrong rawsize' % rev)
        if rlog.revision(rev, raw=False) != text:
            abort('rev %d: wrong text' % rev)
        if rlog.revision(rev, raw=True) != rawtext:
            abort('rev %d: wrong rawtext' % rev)
        result.append((text, rawtext))

        # Verify flags like isdelta, isext work as expected
        # isdelta can be overridden to False if this or p1 has isext set
        if bool(rlog.deltaparent(rev) > -1) and not isdelta:
            abort('rev %d: isdelta is unexpected' % rev)
        if bool(rlog.flags(rev)) != isext:
            abort('rev %d: isext is ineffective' % rev)
    return result

# Main test and checking

def checkrevlog(rlog, expected):
    '''Check if revlog has expected contents. expected is [(text, rawtext)]'''
    # Test using different access orders. This could expose some issues
    # depending on revlog caching (see revlog._cache).
    for r0 in range(len(rlog) - 1):
        r1 = r0 + 1
        for revorder in [[r0, r1], [r1, r0]]:
            for raworder in [[True], [False], [True, False], [False, True]]:
                nlog = newrevlog()
                for rev in revorder:
                    for raw in raworder:
                        t = nlog.revision(rev, raw=raw)
                        if t != expected[rev][int(raw)]:
                            abort('rev %d: corrupted %stext'
                                  % (rev, raw and 'raw' or ''))

def maintest():
    expected = rl = None
    with newtransaction() as tr:
        rl = newrevlog(recreate=True)
        expected = writecases(rl, tr)
        checkrevlog(rl, expected)
        print('local test passed')
        # Copy via revlog.addgroup
        rl1 = addgroupcopy(rl, tr)
        checkrevlog(rl1, expected)
        rl2 = addgroupcopy(rl, tr, optimaldelta=False)
        checkrevlog(rl2, expected)
        print('addgroupcopy test passed')
        # Copy via revlog.clone
        rl3 = newrevlog(name=b'_destrevlog3.i', recreate=True)
        rl.clone(tr, rl3)
        checkrevlog(rl3, expected)
        print('clone test passed')
        # Copy via low-level revlog._addrevision
        rl4 = lowlevelcopy(rl, tr)
        checkrevlog(rl4, expected)
        print('lowlevelcopy test passed')

try:
    maintest()
except Exception as ex:
    abort('crashed: %s' % ex)