Kostia Balytskyi <ikostia@fb.com> [Thu, 10 Nov 2016 10:57:10 -0800] rev 30455
shelve: move rebasing logic to a separate function
Rebasing the restored shelved commit onto the right destination is done
differently in traditional and obs-based unshelve:
- for traditional unshelve, we just rebase it;
- for obs-based unshelve, we first need to check whether a successor of
the restored commit already exists in the destination (this can happen
when unshelving twice onto the same destination).
This is the reason this piece of logic should live in its own function:
it keeps the main function from becoming excessively complex.
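A rough sketch of the split-out helper under these constraints; every
name below (the helper itself and the callables it takes) is an
illustrative assumption, not the actual shelve.py code:

  # illustrative sketch only -- names are hypothetical
  def _rebaserestoredcommit(repo, shelvectx, dest, obsshelve,
                            findsuccessor, dorebase):
      if obsshelve:
          # obs-based unshelve: a successor of shelvectx may already sit
          # on dest (e.g. after unshelving twice onto the same destination)
          succ = findsuccessor(repo, shelvectx, dest)
          if succ is not None:
              return succ
      # traditional unshelve (and obs-based with no existing successor):
      # simply rebase the restored commit onto the destination
      return dorebase(repo, shelvectx, dest)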
Kostia Balytskyi <ikostia@fb.com> [Thu, 10 Nov 2016 10:51:06 -0800] rev 30454
shelve: move commit restoration logic to a separate function
Kostia Balytskyi <ikostia@fb.com> [Sun, 13 Nov 2016 03:35:52 -0800] rev 30453
shelve: move temporary commit creation to a separate function
Committing working copy changes before rebasing a shelved commit
on top of them is an independent piece of behavior, so it fits
naturally into its own function.
As with the previous series, this and a couple of the following
patches are part of the unshelve refactoring.
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 17 Nov 2016 20:30:00 -0800] rev 30452
commands: print chunk type in debugrevlog
Each data entry ("chunk") in a revlog has a type based on the first
byte of the data. This type indicates how to interpret the data.
This seems like a useful thing to be able to query through a debug
command. So let's add that to `hg debugrevlog`.
This does make `hg debugrevlog` slightly slower, as it has to read
more than just the index. However, even on the mozilla-unified
manifest (which is ~200MB spread over ~350K revisions), this takes
<400ms.
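For reference, the type is simply the first byte of each raw chunk; a
minimal sketch of tallying the types, where rawchunks (an iterable of
raw chunk bytes) is an assumption rather than a real revlog API:

  import collections

  def countchunktypes(rawchunks):
      # common type bytes at this point in time: 'x' = zlib-compressed,
      # 'u' = stored uncompressed, '\0' = data stored as-is
      counts = collections.Counter()
      for chunk in rawchunks:
          counts[chunk[:1] or b'\0'] += 1
      return counts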
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 17 Nov 2016 20:17:51 -0800] rev 30451
perf: add command for measuring revlog chunk operations
Upcoming commits will teach revlogs to use the new compression
engine API so that new compression formats can be supported in
revlogs more easily. We want to be sure this refactoring doesn't
regress performance, so this commit introduces "perfrevlogchunks" to
explicitly test the performance of reading, decompressing, and
recompressing revlog chunks.
Here is output when run on the mozilla-unified repo:
$ hg perfrevlogchunks -c
! read
! wall 0.346603 comb 0.350000 user 0.340000 sys 0.010000 (best of 28)
! read w/ reused fd
! wall 0.337707 comb 0.340000 user 0.320000 sys 0.020000 (best of 30)
! read batch
! wall 0.013206 comb 0.020000 user 0.000000 sys 0.020000 (best of 221)
! read batch w/ reused fd
! wall 0.013259 comb 0.030000 user 0.010000 sys 0.020000 (best of 222)
! chunk
! wall 1.909939 comb 1.910000 user 1.900000 sys 0.010000 (best of 6)
! chunk batch
! wall 1.750677 comb 1.760000 user 1.740000 sys 0.020000 (best of 6)
! compress
! wall 5.668004 comb 5.670000 user 5.670000 sys 0.000000 (best of 3)
$ hg perfrevlogchunks -m
! read
! wall 0.365834 comb 0.370000 user 0.350000 sys 0.020000 (best of 26)
! read w/ reused fd
! wall 0.350160 comb 0.350000 user 0.320000 sys 0.030000 (best of 28)
! read batch
! wall 0.024777 comb 0.020000 user 0.000000 sys 0.020000 (best of 119)
! read batch w/ reused fd
! wall 0.024895 comb 0.030000 user 0.000000 sys 0.030000 (best of 118)
! chunk
! wall 2.514061 comb 2.520000 user 2.480000 sys 0.040000 (best of 4)
! chunk batch
! wall 2.380788 comb 2.380000 user 2.360000 sys 0.020000 (best of 5)
! compress
! wall 9.815297 comb 9.820000 user 9.820000 sys 0.000000 (best of 3)
We already see some interesting data, such as how much slower
non-batched chunk reading is, and that zlib compression appears to be
>2x slower than decompression.
I didn't have the numbers when I wrote this commit message, but I ran
this on Mozilla's NFS-based Mercurial server and reading with a reused
file descriptor was faster there. So I think it is worth testing both
with and without file descriptor reuse so we can make informed
decisions about recycling file descriptors.
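A rough standalone approximation of the compress-vs-decompress
measurement, using only the standard library (the chunk data here is
synthetic; the real command reads chunks from an actual revlog):

  import timeit
  import zlib

  def bestof(fn, repeat=3):
      # best wall time over several runs, like the "(best of N)" lines above
      return min(timeit.repeat(fn, repeat=repeat, number=1))

  # synthetic stand-ins for raw revlog chunks
  chunks = [zlib.compress((b'chunk %d' % i) * 100) for i in range(1000)]

  print('chunk    %f' % bestof(lambda: [zlib.decompress(c) for c in chunks]))
  print('compress %f' % bestof(
      lambda: [zlib.compress(zlib.decompress(c)) for c in chunks]))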
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 17 Nov 2016 20:09:10 -0800] rev 30450
setup: add flag to build_ext to control building zstd
Downstream packagers will inevitably want to disable building the
vendored python-zstandard Python package. Rather than force them
to patch setup.py, let's give them a knob to use.
distutils Command classes support defining custom options. This requires
setting certain class attributes (yes, class attributes: instance
attributes don't work because the class type is consulted before it
is instantiated).
We already have a custom child class of build_ext, so we set these
class attributes, implement some scaffolding, and override
build_extensions to filter out the Extension instance for the zstd
extension when the `--no-zstd` argument is specified.
Example usage:
$ python setup.py build_ext --no-zstd
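A minimal sketch of that pattern; the option name matches the example
above, but the class name and the extension-name filter are assumptions
for illustration:

  from distutils.command.build_ext import build_ext

  class hgbuildext(build_ext):
      # options must be declared as *class* attributes; distutils inspects
      # the command class before instantiating it
      user_options = build_ext.user_options + [
          ('no-zstd', None, 'do not build the vendored zstd extension'),
      ]
      boolean_options = build_ext.boolean_options + ['no-zstd']

      def initialize_options(self):
          build_ext.initialize_options(self)
          self.no_zstd = False

      def build_extensions(self):
          if self.no_zstd:
              # filter out the zstd Extension before building anything
              self.extensions = [e for e in self.extensions
                                 if 'zstd' not in e.name]
          build_ext.build_extensions(self)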
Jun Wu <quark@fb.com> [Wed, 09 Nov 2016 16:01:34 +0000] rev 30449
drawdag: update test repos by drawing the changelog DAG in ASCII
Currently, we have "debugbuilddag", which is a powerful tool for building
test cases but is not intuitive. We may end up running "hg log" in the
test just to make the test more readable.
This patch adds a "drawdag" extension with a "debugdrawdag" command for a
similar testing purpose. Unlike the cryptic "debugbuilddag" command, it
reads an ASCII graph that is intuitive to humans, so the test case can be
more readable.
Unlike "debugbuilddag", "drawdag" does not require an empty repo, so it
can be used to add new changesets to an existing repo.
Since the "drawdag" logic is not that trivial and only makes sense for
testing purposes, the extension is added to the "tests" directory to keep
the core logic clean. If we find it useful (for example, to demonstrate
cases and help users understand some scenarios) and want to ship it by
default in the future, we can move it to a ship-by-default "debugdrawdag"
at that time.
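For example, a test can build a small branching history like this
(illustrative usage; drawdag reads the ASCII graph from stdin, with
children drawn above their parents):

  $ hg init repo
  $ cd repo
  $ hg debugdrawdag <<'EOS'
  >   C D
  >   |/
  >   B
  >   |
  >   A
  > EOS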
Mads Kiilerich <madski@unity3d.com> [Wed, 14 Jan 2015 01:15:26 +0100] rev 30448
posix: give checklink a fast path that caches the check file and is read only
util.checklink would create a symlink and remove it again. That would
sometimes happen multiple times. Write operations are relatively expensive
and cause disk wear and noise for applications monitoring file system
activity.
Instead of creating a symlink and deleting it again, just create it once
and leave it in .hg/cache/check-link. If the file already exists, just
verify that os.path.islink reports true. We will assume that this check is
as good as symlink creation not failing.
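A rough sketch of that fast path (illustrative only; the real code in
mercurial/posix.py also deals with races, the vfs layer, and fallbacks):

  import os

  def checklink(cachedir):
      name = os.path.join(cachedir, 'check-link')
      if os.path.islink(name):
          # the cached symlink survived: assume symlinks still work
          return True
      try:
          target = 'checkisexec'  # assumption: any file that exists will do
          open(os.path.join(cachedir, target), 'a').close()
          os.symlink(target, name)  # leave it behind for the next run
          return True
      except (OSError, AttributeError):
          return False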
Note: The symlink left in .hg/cache has to resolve to a file - otherwise 'make
dist' will fail ...
test-symlink-os-yes-fs-no.py does some monkey patching to simulate a platform
without symlink support. The slightly different testing method requires
additional monkeying.
Mads Kiilerich <madski@unity3d.com> [Thu, 17 Nov 2016 12:59:36 +0100] rev 30447
posix: move checklink test file to .hg/cache
This avoids unnecessary churn in the working directory.
It is not necessarily a fully valid assumption that .hg/cache is on the
same filesystem as the working directory, but I think it is an acceptable
approximation. It could also be the case that different parts of the
working directory are on different mount points, so checking in the root
folder could also be wrong.
Mads Kiilerich <madski@unity3d.com> [Wed, 14 Jan 2015 01:15:26 +0100] rev 30446
posix: give checkexec a fast path; keep the check files and test read only
Before, Mercurial would create a new temporary file every time, stat it, change
its exec mode, stat it again, and delete it. Most of this dance was done to
handle the rare and not-so-essential case of VFAT mounts on unix. The cost of
that was paid by the much more common and important case of using normal file
systems.
Instead, try to create and preserve .hg/cache/checkisexec and
.hg/cache/checknoexec, with and without the exec flag set. If both files
exist and have the correct exec flags, we can conclude that the file
system supports the exec flag. In the best case, the whole exec check can
thus be done with two stat calls. In the worst case, we delete the wrong
files and check as usual; that can happen because of a temporary loss of
the exec bit, or on file systems without support for the exec bit. In
that case we check as we did before, with the additional overhead of one
extra stat call.
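A rough sketch of that fast path with the slow fallback inlined
(illustrative; the real mercurial/posix.py code also recreates the cache
files and covers more corner cases):

  import os
  import stat
  import tempfile

  def checkexec(cachedir):
      yes = os.path.join(cachedir, 'checkisexec')
      no = os.path.join(cachedir, 'checknoexec')
      try:
          if (os.stat(yes).st_mode & stat.S_IXUSR and
                  not os.stat(no).st_mode & stat.S_IXUSR):
              return True                  # best case: just two stat calls
      except OSError:
          pass                             # cache files missing or wrong
      # slow path: flip the exec bit on a temporary file and see whether
      # the change sticks
      fd, tmp = tempfile.mkstemp(dir=cachedir, prefix='hg-checkexec-')
      try:
          os.close(fd)
          os.chmod(tmp, os.stat(tmp).st_mode | stat.S_IXUSR)
          return bool(os.stat(tmp).st_mode & stat.S_IXUSR)
      finally:
          os.unlink(tmp)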
It is possible that this different test algorithm will give different
behaviour in some cases on odd file systems. Again, I think these will be
rare, special cases, and I think the risk is worth taking.
test-clone.t happens to show the situation where checkisexec is left
behind from the old-style check, while checknoexec will only be created
the next time an exec check is performed.