comparison mercurial/exchangev2.py @ 39640:039bf1eddc2e

exchangev2: fetch file revisions

Now that the server has an API for fetching file data, we can call into it
to fetch file revisions.

The implementation is relatively straightforward: we examine the manifests
that we fetched and find all new file revisions referenced by them. We build
up a mapping from file path to file nodes to manifest node. (The mapping to
the first manifest node allows us to map back to the first changelog
node/revision, which is used for the linkrev.) Once that map is built up, we
iterate over it in a deterministic manner and fetch and store file data.

The code is very similar to manifest fetching. So similar that we could
probably extract the common bits into a generic function.

With file data retrieval implemented, `hg clone` and `hg pull` are
effectively feature complete, at least as far as the completeness of data
transfer for essential repository data (changesets, manifests, files,
phases, and bookmarks) is concerned. We're still missing support for
obsolescence markers, the hgtags fnodes cache, and the branchmap cache. But
these are non-essential for the moment (and will be implemented later).

This is a good point to assess the state of exchangev2 in terms of
performance. I ran a local `hg clone` of the mozilla-unified repository
using both version 1 and version 2 of the wire protocols and exchange
methods. This effectively compares the performance of the wire protocol
overhead plus "getbundle" against the domain-specific commands. Wire
protocol version 2 doesn't have compression implemented yet, so I tested
version 1 with `server.compressionengines=none` to remove compression
overhead from the equation.

server before: user 220.420+0.000 sys 14.420+0.000
        after: user 321.980+0.000 sys 18.990+0.000

client before: real 561.650 secs (user 497.670+0.000 sys 28.160+0.000)
        after: real 1226.260 secs (user 944.240+0.000 sys 354.150+0.000)

We have substantial regressions on both the client and the server. This is
obviously not desirable. I'm aware of some reasons:

* Lack of hgtagsfnodes transfer (contributes significant CPU on the client).
* Lack of branch cache transfer (contributes significant CPU on the client).
* Little to no profiling / optimization has been performed on the wire
  protocol version 2 code.
* There appears to be a memory leak on the client, which is likely causing
  swapping on my machine.
* Using multiple threads on the client may be counter-productive because of
  Python.
* We're not compressing on the server.
* We're tracking file nodes on the client via manifest diffing rather than
  using linkrev shortcuts on the server.

I'm pretty confident that most of these issues are addressable. But even if
we can't get wire protocol version 2 to performance parity with "getbundle",
I still think it is important to have the set of low-level, data-specific
retrieval commands that we have implemented so far, because the existence of
such commands allows flexibility in how clients access server data.

Differential Revision: https://phab.mercurial-scm.org/D4491
author Gregory Szorc <gregory.szorc@gmail.com>
date Tue, 04 Sep 2018 10:42:24 -0700
parents d292328e0143
children aa7e312375cf
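To illustrate the mapping described in the commit message, here is a minimal
sketch, assuming a simplified input of (manifest node, [(path, file node), ...])
pairs. The helper names are hypothetical; the real implementation is
_derivefilesfrommanifests() and _fetchfiles() in the diff below.

# Sketch only: helper names and input shape are assumptions, not part of
# this change.
import collections

def derivefnodes(manifests):
    """Map file path -> {file node: first manifest node referencing it}."""
    fnodes = collections.defaultdict(dict)
    for manifestnode, entries in manifests:
        for path, fnode in entries:
            # setdefault() keeps the *first* manifest seen for a file node;
            # that manifest maps back to the first changeset, which supplies
            # the linkrev.
            fnodes[path].setdefault(fnode, manifestnode)
    return fnodes

def resolvelinkrevs(pathfnodes, linkrevs):
    """Map each file node of one path to the linkrev of the changeset that
    introduced the recorded manifest."""
    return {fnode: linkrevs[manifestnode]
            for fnode, manifestnode in pathfnodes.items()}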
@@ -5,10 +5,11 @@
 # This software may be used and distributed according to the terms of the
 # GNU General Public License version 2 or any later version.

 from __future__ import absolute_import

+import collections
 import weakref

 from .i18n import _
 from .node import (
     nullid,
@@ -56,11 +57,16 @@
     # Write bookmark updates.
     bookmarks.updatefromremote(repo.ui, repo, csetres['bookmarks'],
                                remote.url(), pullop.gettransaction,
                                explicit=pullop.explicitbookmarks)

-    _fetchmanifests(repo, tr, remote, csetres['manifestnodes'])
+    manres = _fetchmanifests(repo, tr, remote, csetres['manifestnodes'])
+
+    # Find all file nodes referenced by added manifests and fetch those
+    # revisions.
+    fnodes = _derivefilesfrommanifests(repo, manres['added'])
+    _fetchfiles(repo, tr, remote, fnodes, manres['linkrevs'])

 def _pullchangesetdiscovery(repo, remote, heads, abortwhenunrelated=True):
     """Determine which changesets need to be pulled."""

     if heads:
@@ -289,6 +295,100 @@

     progress.complete()

     return {
         'added': added,
+        'linkrevs': linkrevs,
     }
+
+def _derivefilesfrommanifests(repo, manifestnodes):
+    """Determine what file nodes are relevant given a set of manifest nodes.
+
+    Returns a dict mapping file paths to dicts of file node to first manifest
+    node.
+    """
+    ml = repo.manifestlog
+    fnodes = collections.defaultdict(dict)
+
+    for manifestnode in manifestnodes:
+        m = ml.get(b'', manifestnode)
+
+        # TODO this will pull in unwanted nodes because it takes the storage
+        # delta into consideration. What we really want is something that takes
+        # the delta between the manifest's parents. And ideally we would
+        # ignore file nodes that are known locally. For now, ignore both
+        # these limitations. This will result in incremental fetches requesting
+        # data we already have. So this is far from ideal.
+        md = m.readfast()
+
+        for path, fnode in md.items():
+            fnodes[path].setdefault(fnode, manifestnode)
+
+    return fnodes
+
+def _fetchfiles(repo, tr, remote, fnodes, linkrevs):
+    def iterrevisions(objs, progress):
+        for filerevision in objs:
+            node = filerevision[b'node']
+
+            if b'deltasize' in filerevision:
+                basenode = filerevision[b'deltabasenode']
+                delta = next(objs)
+            elif b'revisionsize' in filerevision:
+                basenode = nullid
+                revision = next(objs)
+                delta = mdiff.trivialdiffheader(len(revision)) + revision
+            else:
+                continue
+
+            yield (
+                node,
+                filerevision[b'parents'][0],
+                filerevision[b'parents'][1],
+                node,
+                basenode,
+                delta,
+                # Flags not yet supported.
+                0,
+            )
+
+            progress.increment()
+
+    progress = repo.ui.makeprogress(
+        _('files'), unit=_('chunks'),
+        total=sum(len(v) for v in fnodes.itervalues()))
+
+    # TODO make batch size configurable
+    batchsize = 10000
+    fnodeslist = [x for x in sorted(fnodes.items())]
+
+    for i in pycompat.xrange(0, len(fnodeslist), batchsize):
+        batch = [x for x in fnodeslist[i:i + batchsize]]
+        if not batch:
+            continue
+
+        with remote.commandexecutor() as e:
+            fs = []
+            locallinkrevs = {}
+
+            for path, nodes in batch:
+                fs.append((path, e.callcommand(b'filedata', {
+                    b'path': path,
+                    b'nodes': sorted(nodes),
+                    b'fields': {b'parents', b'revision'}
+                })))
+
+                locallinkrevs[path] = {
+                    node: linkrevs[manifestnode]
+                    for node, manifestnode in nodes.iteritems()}
+
+            for path, f in fs:
+                objs = f.result()
+
+                # Chomp off header objects.
+                next(objs)
+
+                store = repo.file(path)
+                store.addgroup(
+                    iterrevisions(objs, progress),
+                    locallinkrevs[path].__getitem__,
+                    weakref.proxy(tr))
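For orientation, the object stream consumed by iterrevisions() above looks
roughly like the following. This is inferred from this function alone; the
header object's contents and the exact payload framing are defined by the
server-side 'filedata' command and are assumptions here.

# Hypothetical stream for one 'filedata' call, as consumed above:
#
#   header_obj                                   # skipped with next(objs)
#   {b'node': n1, b'parents': (p1, p2),
#    b'revisionsize': 123}                       # a full revision follows...
#   b'<123 bytes of fulltext>'                   # ...consumed with next(objs)
#   {b'node': n2, b'parents': (p1, p2),
#    b'deltabasenode': n1, b'deltasize': 45}     # a delta follows...
#   b'<45 bytes of delta>'                       # ...consumed with next(objs)
#
# Fulltext revisions are converted to a delta against nullid using
# mdiff.trivialdiffheader() before being handed to addgroup().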