Gregory Szorc <gregory.szorc@gmail.com> [Mon, 13 Nov 2017 21:48:35 -0800] rev 35118
bundle2: inline changegroup.readexactly()
Profiling reveals this loop is pretty tight. Literally any
function call elimination can make a big difference.
This commit inlines the relatively trivial changegroup.readexactly()
method inside the loop.
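For reference, readexactly() is essentially a read() plus a length check, so
inlining it means the chunk loop performs that read and check itself. A rough
sketch of the pattern (simplified names; nextchunksize() is a hypothetical
helper that reads the next framed size header, not the exact upstream code):

  def payloadchunks(fh, nextchunksize):
      """Yield framed payload chunks (sketch of the inlined loop)."""
      chunksize = nextchunksize(fh)
      while chunksize:
          # Inlined body of changegroup.readexactly(): do the read and
          # the length check directly, saving a function call per chunk.
          payload = fh.read(chunksize)
          if len(payload) < chunksize:
              raise ValueError('stream ended unexpectedly '
                               '(got %d bytes, expected %d)'
                               % (len(payload), chunksize))
          yield payload
          chunksize = nextchunksize(fh)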
The results with `hg perfbundleread` on a bundle of the Firefox repo
speak for themselves:
! read(8k)
! wall 0.679730 comb 0.680000 user 0.140000 sys 0.540000 (best of 15)
! read(16k)
! wall 0.577228 comb 0.570000 user 0.080000 sys 0.490000 (best of 17)
! read(32k)
! wall 0.516060 comb 0.520000 user 0.040000 sys 0.480000 (best of 20)
! read(128k)
! wall 0.496378 comb 0.490000 user 0.010000 sys 0.480000 (best of 20)
! bundle2 iterparts()
! wall 3.460903 comb 3.460000 user 2.760000 sys 0.700000 (best of 3)
! wall 3.056811 comb 3.050000 user 2.340000 sys 0.710000 (best of 4)
! bundle2 iterparts() seekable
! wall 4.312722 comb 4.310000 user 3.480000 sys 0.830000 (best of 3)
! wall 4.007676 comb 4.000000 user 3.170000 sys 0.830000 (best of 3)
! bundle2 part seek()
! wall 6.754764 comb 6.740000 user 3.970000 sys 2.770000 (best of 3)
! wall 6.267110 comb 6.250000 user 3.480000 sys 2.770000 (best of 3)
! bundle2 part read(8k)
! wall 3.668004 comb 3.660000 user 2.960000 sys 0.700000 (best of 3)
! wall 3.404164 comb 3.400000 user 2.650000 sys 0.750000 (best of 3)
! bundle2 part read(16k)
! wall 3.489196 comb 3.480000 user 2.750000 sys 0.730000 (best of 3)
! wall 3.197972 comb 3.200000 user 2.490000 sys 0.710000 (best of 4)
! bundle2 part read(32k)
! wall 3.388569 comb 3.380000 user 2.640000 sys 0.740000 (best of 3)
! wall 3.060557 comb 3.060000 user 2.340000 sys 0.720000 (best of 4)
! bundle2 part read(128k)
! wall 3.276415 comb 3.270000 user 2.560000 sys 0.710000 (best of 4)
! wall 2.952209 comb 2.950000 user 2.230000 sys 0.720000 (best of 4)
Differential Revision: https://phab.mercurial-scm.org/D1392
Gregory Szorc <gregory.szorc@gmail.com> [Mon, 13 Nov 2017 22:05:54 -0800] rev 35117
bundle2: inline debug logging
Profiling revealed that repeated calls to indebug() were
consuming a fair amount of CPU during bundle2 reading, with
most of the time spent in ui.configbool().
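For context, indebug() is a tiny wrapper that consults ui.configbool() on
every single call. A rough sketch of it and of the hoisted, inlined form
(names assumed, not the exact upstream change):

  # What indebug() does on each call (approximately):
  def indebug(ui, message):
      if ui.configbool('devel', 'bundle2.debug'):
          ui.debug('bundle2-input: %s\n' % message)

  # Inlined: pay for the config check and attribute lookups once, then
  # branch on cheap locals inside the hot loop.
  def readchunks(ui, chunks):
      dolog = ui.configbool('devel', 'bundle2.debug')
      debug = ui.debug
      for chunk in chunks:
          if dolog:
              debug('bundle2-input: payload chunk size: %i\n' % len(chunk))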
Inlining indebug() and avoiding extra attribute lookups speeds
things up substantially. Using `hg perfbundleread` with a Firefox
bundle:
! read(8k)
! wall 0.679730 comb 0.680000 user 0.140000 sys 0.540000 (best of 15)
! read(16k)
! wall 0.577228 comb 0.570000 user 0.080000 sys 0.490000 (best of 17)
! read(32k)
! wall 0.516060 comb 0.520000 user 0.040000 sys 0.480000 (best of 20)
! read(128k)
! wall 0.496378 comb 0.490000 user 0.010000 sys 0.480000 (best of 20)
! bundle2 iterparts()
! wall 6.983756 comb 6.980000 user 6.220000 sys 0.760000 (best of 3)
! wall 3.460903 comb 3.460000 user 2.760000 sys 0.700000 (best of 3)
! bundle2 iterparts() seekable
! wall 8.132131 comb 8.110000 user 7.160000 sys 0.950000 (best of 3)
! wall 4.312722 comb 4.310000 user 3.480000 sys 0.830000 (best of 3)
! bundle2 part seek()
! wall 10.860942 comb 10.840000 user 7.790000 sys 3.050000 (best of 3)
! wall 6.754764 comb 6.740000 user 3.970000 sys 2.770000 (best of 3)
! bundle2 part read(8k)
! wall 7.258035 comb 7.260000 user 6.470000 sys 0.790000 (best of 3)
! wall 3.668004 comb 3.660000 user 2.960000 sys 0.700000 (best of 3)
! bundle2 part read(16k)
! wall 7.099891 comb 7.080000 user 6.310000 sys 0.770000 (best of 3)
! wall 3.489196 comb 3.480000 user 2.750000 sys 0.730000 (best of 3)
! bundle2 part read(32k)
! wall 6.964685 comb 6.950000 user 6.130000 sys 0.820000 (best of 3)
! wall 3.388569 comb 3.380000 user 2.640000 sys 0.740000 (best of 3)
! bundle2 part read(128k)
! wall 6.852867 comb 6.850000 user 6.060000 sys 0.790000 (best of 3)
! wall 3.276415 comb 3.270000 user 2.560000 sys 0.710000 (best of 4)
Differential Revision: https://phab.mercurial-scm.org/D1391
Gregory Szorc <gregory.szorc@gmail.com> [Mon, 13 Nov 2017 21:10:37 -0800] rev 35116
bundle2: don't use seekable bundle2 parts by default (issue5691)
The last commit removed the last use of the bundle2 part seek() API
in the generic bundle2 part iteration code. This means we can now
switch to using unseekable bundle2 parts by default and have the
special consumers that actually need the behavior request it.
This commit changes unbundle20.iterparts() to expose non-seekable
unbundlepart instances by default. If seekable parts are needed,
callers can pass "seekable=True." The bundlerepo class needs
seekable parts, so it does this.
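A hedged usage sketch (consumer names are illustrative):

  def consumeparts(unbundler, process):
      # Default: parts are plain, forward-only unbundlepart instances.
      for part in unbundler.iterparts():
          process(part)

  def consumeparts_rewinding(unbundler, process):
      # A consumer like bundlerepo that needs to rewind opts in explicitly.
      for part in unbundler.iterparts(seekable=True):
          process(part)
          part.seek(0)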
The interrupt handler is also changed to use a regular unbundlepart.
So, by default, all consumers except bundlerepo will see unseekable
parts.
Because the behavior of the iterparts() benchmark changed, we add
a variation to test seekable parts vs unseekable parts. And because
parts no longer have seek() unless "seekable=True," we update the
"part seek" benchmark.
Speaking of benchmarks, this change has the following impact on
`hg perfbundleread` on an uncompressed bundle of the Firefox repo
(6,070,036,163 bytes):
! read(8k)
! wall 0.722709 comb 0.720000 user 0.150000 sys 0.570000 (best of 14)
! read(16k)
! wall 0.602208 comb 0.590000 user 0.080000 sys 0.510000 (best of 17)
! read(32k)
! wall 0.554018 comb 0.560000 user 0.050000 sys 0.510000 (best of 18)
! read(128k)
! wall 0.520086 comb 0.530000 user 0.020000 sys 0.510000 (best of 20)
! bundle2 forwardchunks()
! wall 2.996329 comb 3.000000 user 2.300000 sys 0.700000 (best of 4)
! bundle2 iterparts()
! wall 8.070791 comb 8.060000 user 7.180000 sys 0.880000 (best of 3)
! wall 6.983756 comb 6.980000 user 6.220000 sys 0.760000 (best of 3)
! bundle2 iterparts() seekable
! wall 8.132131 comb 8.110000 user 7.160000 sys 0.950000 (best of 3)
! bundle2 part seek()
! wall 10.370142 comb 10.350000 user 7.430000 sys 2.920000 (best of 3)
! wall 10.860942 comb 10.840000 user 7.790000 sys 3.050000 (best of 3)
! bundle2 part read(8k)
! wall 8.599892 comb 8.580000 user 7.720000 sys 0.860000 (best of 3)
! wall 7.258035 comb 7.260000 user 6.470000 sys 0.790000 (best of 3)
! bundle2 part read(16k)
! wall 8.265361 comb 8.250000 user 7.360000 sys 0.890000 (best of 3)
! wall 7.099891 comb 7.080000 user 6.310000 sys 0.770000 (best of 3)
! bundle2 part read(32k)
! wall 8.290308 comb 8.280000 user 7.330000 sys 0.950000 (best of 3)
! wall 6.964685 comb 6.950000 user 6.130000 sys 0.820000 (best of 3)
! bundle2 part read(128k)
! wall 8.204900 comb 8.150000 user 7.210000 sys 0.940000 (best of 3)
! wall 6.852867 comb 6.850000 user 6.060000 sys 0.790000 (best of 3)
The significant speedup is due to not incurring the overhead of tracking
payload offset data. Of course, this overhead is proportional to
bundle2 part size. So a multi-gigabyte changegroup part is on the
extreme side of the spectrum for real-world impact.
In addition to the CPU efficiency wins, not tracking offset data
also means not using memory to hold that data. Using a bundle based on
the example BSD repository in issue 5691, this change has a drastic
impact on memory usage during `hg unbundle` (`hg clone` would behave
similarly). Before, memory usage incrementally increased for the
duration of bundle processing. In other words, as we advanced through
the changegroup and bundle2 part, we kept allocating more memory to
hold offset data. After this change, we still increase memory during
changegroup application. But the rate of increase is significantly
slower. (The bulk of the remaining gradual increase appears to be the
storing of revlog sizes in the transaction object to facilitate
rollback.)
The RSS at the end of filelog application is as follows:
Before: ~752 MB
After: ~567 MB
So, we were storing ~185 MB of offset data that we never even used.
Talk about wasteful!
.. api::
bundle2 parts are no longer seekable by default.
.. perf::
bundle2 read I/O throughput significantly increased.
.. perf::
Significant memory use reductions when reading from bundle2 bundles.
On the BSD repository, peak RSS during changegroup application
decreased by ~185 MB from ~752 MB to ~567 MB.
Differential Revision: https://phab.mercurial-scm.org/D1390
Gregory Szorc <gregory.szorc@gmail.com> [Mon, 13 Nov 2017 20:12:00 -0800] rev 35115
bundle2: only seek to beginning of part in bundlerepo
For reasons still not fully understood by me, bundlerepo
requires its changegroup bundle2 part to be seeked to the beginning
after part iteration. As far as I can tell, it is the only
bundle2 part consumer that relies on this behavior.
This seeking was performed in the generic iterparts() API. Again,
I don't fully understand why it was here and not in bundlerepo.
Probably historical reasons.
What I do know is that all other bundle2 part consumers don't
need this special behavior (assuming the tests are comprehensive).
So, we move the code from bundle2's iterparts() to bundlerepo's
consumption of iterparts().
Differential Revision: https://phab.mercurial-scm.org/D1389
Gregory Szorc <gregory.szorc@gmail.com> [Mon, 13 Nov 2017 20:03:02 -0800] rev 35114
bundle2: implement consume() API on unbundlepart
We want bundle parts to not be seekable by default. That means
eliminating the generic seek() method.
A common pattern in bundle2.py is to seek to the end of the part
data. This is mainly used by the part iteration code to ensure
the underlying stream is advanced to the next bundle part.
In this commit, we establish a dedicated API for consuming a
bundle2 part's data. We switch users of seek() to it.
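A minimal sketch of what consume() amounts to, written here as a free
function over a part object (the 32768-byte chunk size matches the
description below; everything else is assumed):

  def consumepart(part):
      # Read and discard fixed-size chunks until the part payload is
      # exhausted, instead of seek(0, os.SEEK_END), which buffered all
      # remaining data.
      while part.read(32768):
          pass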
The old implementation of seek(0, os.SEEK_END) would effectively
call self.read(). The new implementation calls self.read(32768)
in a loop. The old implementation would therefore assemble a
buffer to hold all remaining data being seeked over. For seeking
over large bundle parts, this would involve a large allocation and
a lot of overhead to collect intermediate data! This overhead can
be seen in the results for `hg perfbundleread`:
! bundle2 iterparts()
! wall 10.891305 comb 10.820000 user 7.990000 sys 2.830000 (best of 3)
! wall 8.070791 comb 8.060000 user 7.180000 sys 0.880000 (best of 3)
! bundle2 part seek()
! wall 12.991478 comb 10.390000 user 7.720000 sys 2.670000 (best of 3)
! wall 10.370142 comb 10.350000 user 7.430000 sys 2.920000 (best of 3)
Of course, skipping over large payload data likely isn't very common.
So I doubt the performance wins will be observed in the wild.
Differential Revision: https://phab.mercurial-scm.org/D1388
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 12 Nov 2017 19:46:15 -0800] rev 35113
bundle2: implement generic part payload decoder
The previous commit extracted _payloadchunks() to a new derived class.
There was still a reference to this method in unbundlepart, making
unbundlepart unusable on its own.
This commit implements a generic version of a bundle2 part payload
decoder, without offset tracking. seekableunbundlepart._payloadchunks()
has been refactored to consume it, adding offset tracking like before.
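Schematically, a generic payload decoder of this sort reads a framed chunk
size, then that many bytes, until a zero-size terminator. A hedged sketch
(the '>i' size framing is assumed; upstream also gives negative sizes a
special meaning, which this sketch ignores):

  import struct

  _fpayloadsize = '>i'  # assumed: big-endian 32-bit chunk size header

  def decodepayloadchunks(fh):
      headersize = struct.calcsize(_fpayloadsize)
      while True:
          header = fh.read(headersize)
          if len(header) < headersize:
              raise ValueError('stream ended unexpectedly')
          chunksize = struct.unpack(_fpayloadsize, header)[0]
          if chunksize == 0:
              break
          chunk = fh.read(chunksize)
          if len(chunk) < chunksize:
              raise ValueError('stream ended unexpectedly')
          yield chunk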
We also implement unbundlepart._payloadchunks(), which is a thin
wrapper for it. Since we never instantiate unbundlepart directly,
this new method is not used. This will be changed in subsequent
commits.
The new implementation also inlines some simple code from unpackermixin
and adds some local variables to prevent extra function calls and
attribute lookups. `hg perfbundleread` on an uncompressed Firefox
bundle seems to show a minor win:
! bundle2 iterparts()
! wall 12.593258 comb 12.250000 user 8.870000 sys 3.380000 (best of 3)
! wall 10.891305 comb 10.820000 user 7.990000 sys 2.830000 (best of 3)
! bundle2 part seek()
! wall 13.173163 comb 11.100000 user 8.390000 sys 2.710000 (best of 3)
! wall 12.991478 comb 10.390000 user 7.720000 sys 2.670000 (best of 3)
! bundle2 part read(8k)
! wall 9.483612 comb 9.480000 user 8.420000 sys 1.060000 (best of 3)
! wall 8.599892 comb 8.580000 user 7.720000 sys 0.860000 (best of 3)
! bundle2 part read(16k)
! wall 9.159815 comb 9.150000 user 8.220000 sys 0.930000 (best of 3)
! wall 8.265361 comb 8.250000 user 7.360000 sys 0.890000 (best of 3)
! bundle2 part read(32k)
! wall 9.141308 comb 9.130000 user 8.220000 sys 0.910000 (best of 3)
! wall 8.290308 comb 8.280000 user 7.330000 sys 0.950000 (best of 3)
! bundle2 part read(128k)
! wall 8.880587 comb 8.850000 user 7.960000 sys 0.890000 (best of 3)
! wall 8.204900 comb 8.150000 user 7.210000 sys 0.940000 (best of 3)
Function call overhead in Python strikes again!
Of course, bundle2 decoding CPU overhead is likely small compared to
decompression and changegroup application. But every little bit helps.
Differential Revision: https://phab.mercurial-scm.org/D1387
Gregory Szorc <gregory.szorc@gmail.com> [Mon, 13 Nov 2017 19:22:11 -0800] rev 35112
bundle2: extract logic for seeking bundle2 part into own class
Currently, unbundlepart classes support bi-directional seeking.
Most consumers of unbundlepart only ever seek forward - typically
as part of moving to the end of the bundle part so they can move
on to the next one. But regardless of the actual usage of the
part, instances maintain an index mapping offsets within the
underlying raw payload to offsets within the decoded payload.
Maintaining the mapping of offset data can be expensive in terms of
memory use. Furthermore, many bundle2 consumers don't have access
to an underlying seekable stream. This includes all compressed
bundles. So maintaining offset data when the underlying stream
can't be seeked anyway is wasteful. And since many bundle2 streams
can't be seeked, it seems like a bad idea to expose a seek API
in bundle2 parts by default. If you provide them, people will
attempt to use them.
Seekable bundle2 parts should be the exception, not the rule. This
commit starts the process of dividing unbundlepart into two classes: a
base class that supports linear, one-time reads and a child class
that supports bi-directional seeking. In this first commit, we
split various methods and attributes out into a new
"seekableunbundlepart" class. Previous instantiators of "unbundlepart"
now instantiate "seekableunbundlepart." This preserves backwards
compatibility. The coupling between the classes is still tight:
"unbundlepart" cannot be used on its own. This will be addressed
in subsequent commits.
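Schematically, the split looks something like this (class names from the
commit; bodies elided and attribute names assumed):

  class unbundlepart(object):
      """Base class supporting linear, one-time reads of part payload."""

      def read(self, size=None):
          raise NotImplementedError

  class seekableunbundlepart(unbundlepart):
      """Adds bi-directional seeking by indexing payload offsets."""

      def __init__(self):
          # map of raw-payload offsets to decoded-payload offsets; this is
          # the bookkeeping whose cost the series avoids by default
          self._chunkindex = []

      def seek(self, offset, whence=0):
          raise NotImplementedError

      def tell(self):
          raise NotImplementedError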
Differential Revision: https://phab.mercurial-scm.org/D1386
Augie Fackler <augie@google.com> [Wed, 29 Nov 2017 17:49:08 -0500] rev 35111
merge with i18n
Wagner Bruna <wbruna@softwareexpress.com.br> [Tue, 21 Nov 2017 13:50:25 -0200] rev 35110
i18n-pt_BR: synchronized with cabc840ffdee
Gregory Szorc <gregory.szorc@gmail.com> [Mon, 13 Nov 2017 19:20:34 -0800] rev 35109
perf: add command to benchmark bundle reading
Upcoming commits will be refactoring bundle2 I/O code.
This commit establishes a `hg perfbundleread` command that measures
how long it takes to read a bundle using various mechanisms.
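The simplest mechanism is timing raw read() calls at a fixed chunk size; a
minimal illustration of the idea (not the perf.py implementation):

  import time

  def timeread(path, chunksize):
      # Time reading the whole bundle file in fixed-size chunks, mirroring
      # the read(8k)/read(16k)/... rows below.
      start = time.time()
      with open(path, 'rb') as fh:
          while fh.read(chunksize):
              pass
      return time.time() - start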
As a baseline, here's output from an uncompressed bundle1
bundle of my Firefox repo (7,098,622,890 bytes):
! read(8k)
! wall 0.763481 comb 0.760000 user 0.160000 sys 0.600000 (best of 6)
! read(16k)
! wall 0.644512 comb 0.640000 user 0.110000 sys 0.530000 (best of 16)
! read(32k)
! wall 0.581172 comb 0.590000 user 0.060000 sys 0.530000 (best of 18)
! read(128k)
! wall 0.535183 comb 0.530000 user 0.010000 sys 0.520000 (best of 19)
! cg1 deltaiter()
! wall 0.873500 comb 0.880000 user 0.840000 sys 0.040000 (best of 12)
! cg1 getchunks()
! wall 6.283797 comb 6.270000 user 5.570000 sys 0.700000 (best of 3)
! cg1 read(8k)
! wall 1.097173 comb 1.100000 user 0.400000 sys 0.700000 (best of 10)
! cg1 read(16k)
! wall 0.810750 comb 0.800000 user 0.200000 sys 0.600000 (best of 13)
! cg1 read(32k)
! wall 0.671215 comb 0.670000 user 0.110000 sys 0.560000 (best of 15)
! cg1 read(128k)
! wall 0.597857 comb 0.600000 user 0.020000 sys 0.580000 (best of 15)
And from an uncompressed bundle2 bundle (6,070,036,163 bytes):
! read(8k)
! wall 0.676997 comb 0.680000 user 0.160000 sys 0.520000 (best of 15)
! read(16k)
! wall 0.592706 comb 0.590000 user 0.080000 sys 0.510000 (best of 17)
! read(32k)
! wall 0.529395 comb 0.530000 user 0.050000 sys 0.480000 (best of 16)
! read(128k)
! wall 0.491270 comb 0.490000 user 0.010000 sys 0.480000 (best of 19)
! bundle2 forwardchunks()
! wall 2.997131 comb 2.990000 user 2.270000 sys 0.720000 (best of 4)
! bundle2 iterparts()
! wall 12.247197 comb 10.670000 user 8.170000 sys 2.500000 (best of 3)
! bundle2 part seek()
! wall 11.761675 comb 10.500000 user 8.240000 sys 2.260000 (best of 3)
! bundle2 part read(8k)
! wall 9.116163 comb 9.110000 user 8.240000 sys 0.870000 (best of 3)
! bundle2 part read(16k)
! wall 8.984362 comb 8.970000 user 8.110000 sys 0.860000 (best of 3)
! bundle2 part read(32k)
! wall 8.758364 comb 8.740000 user 7.860000 sys 0.880000 (best of 3)
! bundle2 part read(128k)
! wall 8.749040 comb 8.730000 user 7.830000 sys 0.900000 (best of 3)
We already see some interesting data, notably that bundle2 has
significant overhead compared to bundle1. This matters for e.g. stream
clone bundles, which can be applied at >1Gbps.
Differential Revision: https://phab.mercurial-scm.org/D1385
Zuzanna Mroczek <zuza@fb.com> [Mon, 20 Nov 2017 01:40:26 -0800] rev 35108
sshpeer: add a configurable hint for the ssh error message
Add the ability to configure an error hint to be shown when there are problems with SSH. An example of such a hint could be "Please see http://company/internalwiki/ssh.html".
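Presumably the hint can also be set persistently in an hgrc, the equivalent
of the --config form used in the test plan below (section/option name taken
from that command line):

  [ui]
  ssherrorhint = Please see http://company/internalwiki/ssh.html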
Test Plan:
- Ran hg pull with a broken link and verified the output has no hint by default:
```
pulling from ssh://brokenrepository.com//repo
remote: ssh: Could not resolve hostname brokenrepository.com: Name or service not known
abort: no suitable response from remote hg!
```
- Ran hg pull --config ui.ssherrorhint="Please see http://company/internalwiki/ssh.html":
```
pulling from ssh://brokenrepository.com//repo
remote: ssh: Could not resolve hostname brokenrepository.com: Name or service not known
abort: no suitable response from remote hg!
(Please see http://company/internalwiki/ssh.html)
```
Differential Revision: https://phab.mercurial-scm.org/D1431
rlevasseur@google.com [Thu, 16 Nov 2017 15:01:21 -0800] rev 35107
docs: add args/returns docs for some cmdutil, context, and registrar functions
When writing my first extension, I found it hard to figure out these functions.
I figured documenting their inputs/outputs would help future authors who
are new to the codebase.
Differential Revision: https://phab.mercurial-scm.org/D1440
Pulkit Goyal <7895pulkit@gmail.com> [Tue, 21 Nov 2017 04:37:51 +0530] rev 35106
commands: add value for cmdtype argument for read only commands
In the previous release we added an argument `cmdtype` to registrar.command().
It is an enum that tells whether a command is read-only, a recoverable write, or
an unrecoverable write. This patch adds the value of the cmdtype argument for
commands which are read-only.
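A hedged sketch of the registration pattern (the exact spelling of the enum
value is assumed here to be registrar.command.readonly; the command itself is
purely illustrative):

  from mercurial import registrar

  cmdtable = {}
  command = registrar.command(cmdtable)

  @command('debugcountchangesets', [], 'hg debugcountchangesets',
           cmdtype=registrar.command.readonly)
  def debugcountchangesets(ui, repo):
      """print the number of changesets (reads the repo, never writes)"""
      ui.write('%d\n' % len(repo))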
Differential Revision: https://phab.mercurial-scm.org/D1468
Phil Cohen <phillco@fb.com> [Wed, 15 Nov 2017 21:07:30 -0800] rev 35105
error: add InMemoryMergeConflictsError
We'll raise this exception in the merge code, and in-memory users like rebase
can catch it and retry without IMM.
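A hedged sketch of that retry pattern (domerge is a hypothetical callable
standing in for the rebase/merge entry point):

  from mercurial import error

  def mergewithfallback(domerge):
      try:
          # Attempt the merge in memory first.
          return domerge(inmemory=True)
      except error.InMemoryMergeConflictsError:
          # Conflicts can't be handled in memory; retry on disk.
          return domerge(inmemory=False)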
Differential Revision: https://phab.mercurial-scm.org/D1210
Augie Fackler <augie@google.com> [Mon, 20 Nov 2017 18:05:15 -0500] rev 35104
lfs: generate a large file by using `python` instead of yes | head
yes(1) on some systems (like gcc112) feels compelled to inform you of
broken pipes, such as those triggered by head(1). This works around
the problem portably.
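A hedged sketch of the portable approach (file name and size are
illustrative):

  # Write the repeated content directly from Python instead of relying on
  # `yes | head`, which emits broken-pipe noise on some platforms.
  with open('largefile', 'wb') as fh:
      fh.write(b'y\n' * (1024 * 1024))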
Augie Fackler <augie@google.com> [Mon, 20 Nov 2017 18:00:02 -0500] rev 35103
setup: add hgext.lfs to list of Python packages
This is needed for lfs to get installed. Probably could stand to go
into an earlier patch, but I just want to get this stuff pushed.
Matt Harbison <matt_harbison@yahoo.com> [Sat, 18 Nov 2017 12:54:06 -0500] rev 35102
test-lfs: add tests demonstrating the interaction with largefiles
Obviously the original series needs to be accepted first, but there are concerns
about how well these extensions will play together before proceeding. It looks
like the answer is: surprisingly well. There are some merge surprises
(largefiles seems to combine the choice of "keep tracking as a large/normal
file" with taking the content of the large/normal file) and some existing diff
weirdness (largefiles diffs the standins, not the large file). Converting the
repo to normal files seamlessly transitions to lfs on the fly. I didn't test
going the other way, because I'm not sure why anyone would want to do that.
I flagged the lack of a repo requirement after converting, because some of the
unsubmitted changes I have add a requirement on commit, but this somehow misses
the convert case.
I flagged an issue where devel-warnings are emitted on convert, which is a
separate issue.
Matt Harbison <matt_harbison@yahoo.com> [Tue, 14 Nov 2017 01:09:48 -0500] rev 35101
test-lfs: cast the flags printed to an int
On Windows, the flag values in the subsequent tests were printed with an 'L'
suffix.
Matt Harbison <matt_harbison@yahoo.com> [Tue, 14 Nov 2017 01:03:22 -0500] rev 35100
lfs: register config options
I'm not sure at what point we can get rid of the deprecated options, but for the
sake of making progress, they are registered too.
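Registration presumably follows the standard registrar.configitem() pattern;
a hedged sketch with illustrative option names (not the full list):

  from mercurial import registrar

  configtable = {}
  configitem = registrar.configitem(configtable)

  configitem('lfs', 'url', default=None)
  configitem('lfs', 'threshold', default=None)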
Matt Harbison <matt_harbison@yahoo.com> [Tue, 14 Nov 2017 00:14:52 -0500] rev 35099
lfs: quiesce check-module-import warnings
Specifically, 'symbol import follows non-symbol import: mercurial.i18n'
Matt Harbison <matt_harbison@yahoo.com> [Tue, 14 Nov 2017 00:06:23 -0500] rev 35098
lfs: import the Facebook git-lfs client extension
The purpose of this is the same as the built-in largefiles extension: to handle
huge files outside of the normal storage system, generally to keep the amount of
data cloned to a lower amount. There are several benefits of implementing the
git-lfs protocol, instead of using the largefiles extension:
- Bitbucket and Github support (and probably wider support in 3rd party
hosting sites in general). [1][2]
- The number of hg internals monkey patched is several orders of magnitude
lower, so it will be easier to reason about and maintain. Future commands
will likely just work, without requiring various wrappers.
- The "standin" files are only written to the filelog, not the disk. That
should avoid weird edge cases where the largefile and standin files get out
of sync. [3] It also avoids the occasional printing of the "hidden" standin
file in various messages.
- Filesets like size() will work, even if the file isn't present. (It always
says 41 bytes for largefiles, whether present or not.)
The only place that I see where largefiles comes out on top is that it works
with `hg serve` for simple sharing, without external infrastructure. Getting
lfs-test-server working was a hassle, and took a while to figure out. Maybe we
can do something to make it work in the future.
Long term, I expect that this will be highly preferred over largefiles. But if
we are to recommend this to largefile users, there are some UI issues to
bikeshed. Until they are resolved, I've marked this experimental, and am not
putting a pointer to this in the largefiles help. The (non-exhaustive) list of
issues I've seen so far are:
- It isn't sufficient to just enable the largefiles extension; you have to
explicitly add a file with --large before it will pay attention to the
configured sizes and patterns on future adds. The justification being that
once you use it, you're stuck with it. I've seen people confused by this,
and haven't liked it myself. But it's also saved me a few times. Should we
do something like have a specific enabling config setting that must be set
in the local repo config, so that enabling this extension in the user or
system hgrc doesn't silently start storing lfs files?
- The largefiles extension adds a repo requirement when the first largefile is
committed, so that the extension must always be enabled in the future. This
extension is not doing that, and since I only enabled it locally to avoid
infecting other repos, I got a cryptic error about missing flag processors
when I cloned. Is there no repo requirement due to shallow/narrow clone
considerations (or other future advanced things)?
- In the (small amount of) reading I've done about the git implementation, it
seems that the files and sizes are stored in a tracked .gitattributes file.
I think a tracked file for this would be extremely useful for consistency
across developers, but this kind of touches on the tracked hgrc file
proposal a few months back.
- The git client can specify file patterns, not just sizes.
- The largefiles extension has a cache directory in the local repo, but also a
system wide one. We should probably implement a system wide cache too, so
that multiple clones don't have to refetch the files from the server.
- Jun mentioned other missing features, like SSH authentication, gc, etc.
The code corresponds to c0492b73c7ef in hg-experimental. [4] The only tweaks
are to load the extension in the tests with 'lfs=' instead of
'lfs=$TESTDIR/../hgext3rd/lfs', change the import in the *.py test to hgext
(from hgext3rd), add the 'testedwith' declaration, and mark it experimental for
now. The infinite-push, p4fastimport, and remotefilelog tests were left behind.
The devel-warnings for unregistered config options are not corrected yet, nor
are the import check warnings.
[1] https://www.mercurial-scm.org/pipermail/mercurial/2017-November/050699.html
[2] https://bitbucket.org/site/master/issues/3843/largefiles-support-bb-3903
[3] https://bz.mercurial-scm.org/show_bug.cgi?id=5738
[4] https://bitbucket.org/facebook/hg-experimental
Matthieu Laneuville <matthieu.laneuville@octobus.net> [Sat, 18 Nov 2017 16:12:00 +0900] rev 35097
run-tests: outputdir also has to be changed if $TESTDIR is not $PWD
Following a18eef03d879, running run-tests.py from outside tests/ would lead to
the creation of .testtimes and test-*.t.err in $PWD instead of $TESTDIR. This
patch fixes that and updates the relevant test.
Anton Shestakov <av6@dwimlabs.net> [Mon, 20 Nov 2017 21:59:00 +0800] rev 35096
hgweb: use webutil.commonentry() for nodes (but not for jsdata yet) in /graph
This makes graphdata() simpler by using existing code that gets common
changeset properties for showing in hgweb. graphdata() is a nested function in
graph() that prepares entries for /graph view, but there are two different
lists of changesets prepared: "jsdata" for the JavaScript-rendered graph and
"nodes" for everything else.
For "jsdata", properties "node", "user", "age" and "desc" are passed through
various template filters because we don't have these filters in JavaScript, so
the data has to be prepared server-side. But now that commonentry() is used for
producing "nodes" list (and it doesn't apply any filters), these filters need
to be added to the appropriate templates (only raw at this moment, everything
else either doesn't implement graph or uses JavaScript).
This is a bit of refactoring that will hopefully simplify future patches. The
end result is to have /graph that only renders the actual graph with nodes and
vertices in JavaScript, and the rest is done server-side. This way server-side
code can focus on showing a list of changesets, which is easy because we
already have /log, /shortlog, etc, and JavaScript code can be simplified,
making it easier to add obsolescence graph and other features.
Anton Shestakov <av6@dwimlabs.net> [Mon, 20 Nov 2017 21:47:11 +0800] rev 35095
hgweb: check changeset's original branch in graphdata()
This piece of code checks if a changeset is the tip of its branch, but as can
be seen above in the context, "branch" was prepared for display in
hgweb by making it unicode and passing it through url.escape. It's better to
use the original ctx.branch().