Mon, 13 Nov 2017 19:22:11 -0800 bundle2: extract logic for seeking bundle2 part into own class
Gregory Szorc <gregory.szorc@gmail.com> [Mon, 13 Nov 2017 19:22:11 -0800] rev 35133
bundle2: extract logic for seeking bundle2 part into own class

Currently, unbundlepart classes support bi-directional seeking. Most consumers of unbundlepart only ever seek forward - typically as part of moving to the end of the bundle part so they can move on to the next one.

But regardless of the actual usage of the part, instances maintain an index mapping offsets within the underlying raw payload to offsets within the decoded payload. Maintaining the mapping of offset data can be expensive in terms of memory use.

Furthermore, many bundle2 consumers don't have access to an underlying seekable stream. This includes all compressed bundles. So maintaining offset data when the underlying stream can't be seeked anyway is wasteful. And since many bundle2 streams can't be seeked, it seems like a bad idea to expose a seek API in bundle2 parts by default. If you provide them, people will attempt to use them. Seekable bundle2 parts should be the exception, not the rule.

This commit starts the process of dividing unbundlepart into 2 classes: a base class that supports linear, one-time reads and a child class that supports bi-directional seeking. In this first commit, we split various methods and attributes out into a new "seekableunbundlepart" class. Previous instantiators of "unbundlepart" now instantiate "seekableunbundlepart." This preserves backwards compatibility. The coupling between the classes is still tight: "unbundlepart" cannot be used on its own. This will be addressed in subsequent commits.

Differential Revision: https://phab.mercurial-scm.org/D1386
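To illustrate the split described above, here is a minimal, self-contained sketch of the pattern (illustrative only; the class and attribute names are placeholders, not Mercurial's actual unbundlepart/seekableunbundlepart code): the base class supports only linear, one-time reads, while the subclass layers position tracking and seek() on top, so the bookkeeping cost is paid only when seeking is actually wanted.

```python
import io


class linearpart(object):
    """Reads a payload sequentially; keeps no offset index."""

    def __init__(self, fh):
        self._fh = fh
        self._pos = 0

    def read(self, size=-1):
        data = self._fh.read(size)
        self._pos += len(data)
        return data


class seekablepart(linearpart):
    """Adds tell()/seek() by tracking how far we have read."""

    def tell(self):
        return self._pos

    def seek(self, offset):
        if offset < self._pos:
            raise io.UnsupportedOperation('underlying stream cannot rewind')
        while self._pos < offset:
            if not self.read(min(32768, offset - self._pos)):
                break
```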
Mon, 13 Nov 2017 19:20:34 -0800 perf: add command to benchmark bundle reading
Gregory Szorc <gregory.szorc@gmail.com> [Mon, 13 Nov 2017 19:20:34 -0800] rev 35132
perf: add command to benchmark bundle reading

Upcoming commits will be refactoring bundle2 I/O code. This commit establishes a `hg perfbundleread` command that measures how long it takes to read a bundle using various mechanisms.

As a baseline, here's output from an uncompressed bundle1 bundle of my Firefox repo (7,098,622,890 bytes):

! read(8k)
! wall 0.763481 comb 0.760000 user 0.160000 sys 0.600000 (best of 6)
! read(16k)
! wall 0.644512 comb 0.640000 user 0.110000 sys 0.530000 (best of 16)
! read(32k)
! wall 0.581172 comb 0.590000 user 0.060000 sys 0.530000 (best of 18)
! read(128k)
! wall 0.535183 comb 0.530000 user 0.010000 sys 0.520000 (best of 19)
! cg1 deltaiter()
! wall 0.873500 comb 0.880000 user 0.840000 sys 0.040000 (best of 12)
! cg1 getchunks()
! wall 6.283797 comb 6.270000 user 5.570000 sys 0.700000 (best of 3)
! cg1 read(8k)
! wall 1.097173 comb 1.100000 user 0.400000 sys 0.700000 (best of 10)
! cg1 read(16k)
! wall 0.810750 comb 0.800000 user 0.200000 sys 0.600000 (best of 13)
! cg1 read(32k)
! wall 0.671215 comb 0.670000 user 0.110000 sys 0.560000 (best of 15)
! cg1 read(128k)
! wall 0.597857 comb 0.600000 user 0.020000 sys 0.580000 (best of 15)

And from an uncompressed bundle2 bundle (6,070,036,163 bytes):

! read(8k)
! wall 0.676997 comb 0.680000 user 0.160000 sys 0.520000 (best of 15)
! read(16k)
! wall 0.592706 comb 0.590000 user 0.080000 sys 0.510000 (best of 17)
! read(32k)
! wall 0.529395 comb 0.530000 user 0.050000 sys 0.480000 (best of 16)
! read(128k)
! wall 0.491270 comb 0.490000 user 0.010000 sys 0.480000 (best of 19)
! bundle2 forwardchunks()
! wall 2.997131 comb 2.990000 user 2.270000 sys 0.720000 (best of 4)
! bundle2 iterparts()
! wall 12.247197 comb 10.670000 user 8.170000 sys 2.500000 (best of 3)
! bundle2 part seek()
! wall 11.761675 comb 10.500000 user 8.240000 sys 2.260000 (best of 3)
! bundle2 part read(8k)
! wall 9.116163 comb 9.110000 user 8.240000 sys 0.870000 (best of 3)
! bundle2 part read(16k)
! wall 8.984362 comb 8.970000 user 8.110000 sys 0.860000 (best of 3)
! bundle2 part read(32k)
! wall 8.758364 comb 8.740000 user 7.860000 sys 0.880000 (best of 3)
! bundle2 part read(128k)
! wall 8.749040 comb 8.730000 user 7.830000 sys 0.900000 (best of 3)

We already see some interesting data, notably that bundle2 has significant overhead compared to bundle1. This matters for e.g. stream clone bundles, which can be applied at >1Gbps.

Differential Revision: https://phab.mercurial-scm.org/D1385
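For readers who want to reproduce the plain read(N) baseline outside of the perf extension, a rough sketch follows (hedged: the real `hg perfbundleread` uses Mercurial's perf harness and also exercises cg1/bundle2 decoding; the bundle file name here is an assumption):

```python
import time


def timeread(path, chunksize, attempts=3):
    """Return the best wall time to read the file in fixed-size chunks."""
    best = None
    for _ in range(attempts):
        start = time.time()
        with open(path, 'rb') as fh:
            while fh.read(chunksize):
                pass
        elapsed = time.time() - start
        best = elapsed if best is None else min(best, elapsed)
    return best


if __name__ == '__main__':
    for size in (8192, 16384, 32768, 131072):
        print('read(%dk): %.6f s' % (size // 1024, timeread('bundle.hg', size)))
```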
Mon, 20 Nov 2017 01:40:26 -0800 sshpeer: add a configurable hint for the ssh error message
Zuzanna Mroczek <zuza@fb.com> [Mon, 20 Nov 2017 01:40:26 -0800] rev 35131
sshpeer: add a configurable hint for the ssh error message

Add the possibility to configure an error hint to be shown in the case of problems with SSH. An example of such a hint is "Please see http://company/internalwiki/ssh.html".

Test Plan:

- Ran hg pull with a broken link and verified the output has no hint by default:

```
pulling from ssh://brokenrepository.com//repo
remote: ssh: Could not resolve hostname brokenrepository.com: Name or service not known
abort: no suitable response from remote hg!
```

- Ran hg pull --config ui.ssherrorhint="Please see http://company/internalwiki/ssh.html":

```
pulling from ssh://brokenrepository.com//repo
remote: ssh: Could not resolve hostname brokenrepository.com: Name or service not known
abort: no suitable response from remote hg!
(Please see http://company/internalwiki/ssh.html)
```

Differential Revision: https://phab.mercurial-scm.org/D1431
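For reference, the same hint can be set persistently in an hgrc instead of via --config; a minimal sketch based on the option name above (the URL is just the example from the test plan):

```
[ui]
ssherrorhint = Please see http://company/internalwiki/ssh.html
```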
Thu, 16 Nov 2017 15:01:21 -0800 docs: add args/returns docs for some cmdutil, context, and registrar functions
rlevasseur@google.com [Thu, 16 Nov 2017 15:01:21 -0800] rev 35130
docs: add args/returns docs for some cmdutil, context, and registrar functions When writing my first extension, I found it hard to figure out these functions. I figured documenting their inputs/outputs would help future authors who are new to the codebase. Differential Revision: https://phab.mercurial-scm.org/D1440
Tue, 21 Nov 2017 04:37:51 +0530 commands: add value for cmdtype argument for read only commands
Pulkit Goyal <7895pulkit@gmail.com> [Tue, 21 Nov 2017 04:37:51 +0530] rev 35129
commands: add value for cmdtype argument for read only commands In the previous release we added a `cmdtype` argument to registrar.command(), an enum that tells whether a command is a read-only, recoverable-write, or unrecoverable-write command. This patch adds the value of the cmdtype argument for commands which are read only. Differential Revision: https://phab.mercurial-scm.org/D1468
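A hedged sketch of what registering a read-only command looks like with this argument (the `readonly` attribute name on registrar.command is an assumption based on the description above; the command itself is made up for illustration):

```python
from mercurial import registrar

cmdtable = {}
command = registrar.command(cmdtable)


@command('showtip', [], 'hg showtip', cmdtype=command.readonly)
def showtip(ui, repo, **opts):
    """print the tip revision (reads repository data, never writes)"""
    ui.write('%d:%s\n' % (repo['tip'].rev(), repo['tip']))
```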
Wed, 15 Nov 2017 21:07:30 -0800 error: add InMemoryMergeConflictsError
Phil Cohen <phillco@fb.com> [Wed, 15 Nov 2017 21:07:30 -0800] rev 35128
error: add InMemoryMergeConflictsError We'll raise this exception in the merge code, and in-memory users like rebase can catch it and retry without IMM. Differential Revision: https://phab.mercurial-scm.org/D1210
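A small sketch of the retry pattern this enables (the helper here is a placeholder, not the actual merge/rebase code):

```python
from mercurial import error


def _mergeone(use_in_memory):
    """Placeholder merge step; pretend the in-memory path hits a conflict."""
    if use_in_memory:
        raise error.InMemoryMergeConflictsError('merge conflicts detected')
    return 'merged on disk'


def mergewithfallback():
    try:
        return _mergeone(use_in_memory=True)
    except error.InMemoryMergeConflictsError:
        # Retry without in-memory merge, as rebase is expected to do.
        return _mergeone(use_in_memory=False)
```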
Mon, 20 Nov 2017 18:05:15 -0500 lfs: generate a large file by using `python` instead of yes | head
Augie Fackler <augie@google.com> [Mon, 20 Nov 2017 18:05:15 -0500] rev 35127
lfs: generate a large file by using `python` instead of yes | head yes(1) on some systems (like gcc112) feels compelled to inform you of broken pipes, such as those triggered by head(1). This works around the problem portably.
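For illustration, a portable way to generate such a file with Python (the exact size, content, and file name used in the test are assumptions here):

```python
# Write a repetitive ~1 MB file without relying on `yes | head`, which can
# print "broken pipe" noise on some systems.
with open('largefile', 'wb') as fh:
    fh.write(b'y\n' * (512 * 1024))
```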
Mon, 20 Nov 2017 18:00:02 -0500 setup: add hgext.lfs to list of Python packages
Augie Fackler <augie@google.com> [Mon, 20 Nov 2017 18:00:02 -0500] rev 35126
setup: add hgext.lfs to list of Python packages This is needed for lfs to get installed. Probably could stand to go into an earlier patch, but I just want to get this stuff pushed.
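A sketch of the kind of change this is (illustrative excerpt only; Mercurial's real setup.py lists many more packages and options):

```python
from setuptools import setup

setup(
    name='example-with-lfs',
    packages=[
        'mercurial',
        'hgext',
        'hgext.lfs',  # without this line the extension is never installed
    ],
)
```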
Sat, 18 Nov 2017 12:54:06 -0500 test-lfs: add tests demonstrating the interaction with largefiles
Matt Harbison <matt_harbison@yahoo.com> [Sat, 18 Nov 2017 12:54:06 -0500] rev 35125
test-lfs: add tests demonstrating the interaction with largefiles

Obviously the original series needs to be accepted first, but there are concerns about how well these extensions will play together before proceeding. It looks like the answer is: surprisingly well.

There are some merge surprises (largefiles seems to combine the choice of "keep tracking as a large/normal file" with taking the content of the large/normal file) and some existing diff weirdness (largefiles diffs the standins, not the large file). Converting the repo to normal files seamlessly transitions to lfs on the fly. I didn't test going the other way, because I'm not sure why anyone would want to do that.

I flagged the lack of a repo requirement after converting, because some of the unsubmitted changes I have add a requirement on commit, but this somehow misses the convert case. I also flagged a separate issue where devel-warnings are emitted on convert.
Tue, 14 Nov 2017 01:09:48 -0500 test-lfs: cast the flags printed to an int
Matt Harbison <matt_harbison@yahoo.com> [Tue, 14 Nov 2017 01:09:48 -0500] rev 35124
test-lfs: cast the flags printed to an int On Windows, the flag values in the subsequent tests were printing with an 'L' suffix.
Tue, 14 Nov 2017 01:03:22 -0500 lfs: register config options
Matt Harbison <matt_harbison@yahoo.com> [Tue, 14 Nov 2017 01:03:22 -0500] rev 35123
lfs: register config options I'm not sure at what point we can get rid of the deprecated options, but for the sake of making progress, they are registered too.
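A hedged sketch of how an extension registers its config options (the option names shown are examples, not necessarily the full or exact lfs set):

```python
from mercurial import registrar

configtable = {}
configitem = registrar.configitem(configtable)

configitem('lfs', 'url', default=None)
configitem('lfs', 'threshold', default=None)
configitem('lfs', 'usercache', default=None)
```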
Tue, 14 Nov 2017 00:14:52 -0500 lfs: quiesce check-module-import warnings
Matt Harbison <matt_harbison@yahoo.com> [Tue, 14 Nov 2017 00:14:52 -0500] rev 35122
lfs: quiesce check-module-import warnings Specifically, 'symbol import follows non-symbol import: mercurial.i18n'
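The fix is purely import ordering; roughly, the checker wants the symbol import of `_` from mercurial.i18n grouped before the plain module imports, as in this schematic example (not the exact lfs import list):

```python
from __future__ import absolute_import

from mercurial.i18n import _
from mercurial import (
    error,
    util,
)
```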
Tue, 14 Nov 2017 00:06:23 -0500 lfs: import the Facebook git-lfs client extension
Matt Harbison <matt_harbison@yahoo.com> [Tue, 14 Nov 2017 00:06:23 -0500] rev 35121
lfs: import the Facebook git-lfs client extension

The purpose of this is the same as the built-in largefiles extension: to handle huge files outside of the normal storage system, generally to keep the amount of data cloned to a lower amount. There are several benefits of implementing the git-lfs protocol, instead of using the largefiles extension:

- Bitbucket and Github support (and probably wider support in 3rd party hosting sites in general). [1][2]
- The number of hg internals monkey patched is several orders of magnitude lower, so it will be easier to reason about and maintain. Future commands will likely just work, without requiring various wrappers.
- The "standin" files are only written to the filelog, not the disk. That should avoid weird edge cases where the largefile and standin files get out of sync. [3] It also avoids the occasional printing of the "hidden" standin file in various messages.
- Filesets like size() will work, even if the file isn't present. (It always says 41 bytes for largefiles, whether present or not.)

The only place that I see where largefiles comes out on top is that it works with `hg serve` for simple sharing, without external infrastructure. Getting lfs-test-server working was a hassle, and took a while to figure out. Maybe we can do something to make it work in the future.

Long term, I expect that this will be highly preferred over largefiles. But if we are to recommend this to largefile users, there are some UI issues to bikeshed. Until they are resolved, I've marked this experimental, and am not putting a pointer to this in the largefiles help. The (non exhaustive) list of issues I've seen so far are:

- It isn't sufficient to just enable the largefiles extension; you have to explicitly add a file with --large before it will pay attention to the configured sizes and patterns on future adds. The justification being that once you use it, you're stuck with it. I've seen people confused by this, and haven't liked it myself. But it's also saved me a few times. Should we do something like have a specific enabling config setting that must be set in the local repo config, so that enabling this extension in the user or system hgrc doesn't silently start storing lfs files?
- The largefiles extension adds a repo requirement when the first largefile is committed, so that the extension must always be enabled in the future. This extension is not doing that, and since I only enabled it locally to avoid infecting other repos, I got a cryptic error about missing flag processors when I cloned. Is there no repo requirement due to shallow/narrow clone considerations (or other future advanced things)?
- In the (small amount of) reading I've done about the git implementation, it seems that the files and sizes are stored in a tracked .gitattributes file. I think a tracked file for this would be extremely useful for consistency across developers, but this kind of touches on the tracked hgrc file proposal a few months back.
- The git client can specify file patterns, not just sizes.
- The largefiles extension has a cache directory in the local repo, but also a system wide one. We should probably implement a system wide cache too, so that multiple clones don't have to refetch the files from the server.
- Jun mentioned other missing features, like SSH authentication, gc, etc.

The code corresponds to c0492b73c7ef in hg-experimental. [4] The only tweaks are to load the extension in the tests with 'lfs=' instead of 'lfs=$TESTDIR/../hgext3rd/lfs', change the import in the *.py test to hgext (from hgext3rd), add the 'testedwith' declaration, and mark it experimental for now. The infinite-push, p4fastimport, and remotefilelog tests were left behind.

The devel-warnings for unregistered config options are not corrected yet, nor are the import check warnings.

[1] https://www.mercurial-scm.org/pipermail/mercurial/2017-November/050699.html
[2] https://bitbucket.org/site/master/issues/3843/largefiles-support-bb-3903
[3] https://bz.mercurial-scm.org/show_bug.cgi?id=5738
[4] https://bitbucket.org/facebook/hg-experimental
Sat, 18 Nov 2017 16:12:00 +0900 run-tests: outputdir also has to be changed if $TESTDIR is not $PWD
Matthieu Laneuville <matthieu.laneuville@octobus.net> [Sat, 18 Nov 2017 16:12:00 +0900] rev 35120
run-tests: outputdir also has to be changed if $TESTDIR is not $PWD Following a18eef03d879, running run-tests.py from outside tests/ would lead to the creation of .testtimes and test-*.t.err in $PWD instead of $TESTDIR. This patch fixes that and updates the relevant test.
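A hedged sketch of the intent (names are placeholders, not run-tests.py's actual option handling): derive the artifact directory from the test's own location rather than from the current working directory.

```python
import os


def outputdir(testfile, explicit_outputdir=None):
    # .testtimes and test-*.t.err should land next to the test file,
    # not wherever run-tests.py happened to be invoked from.
    if explicit_outputdir:
        return os.path.abspath(explicit_outputdir)
    return os.path.dirname(os.path.abspath(testfile))
```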
Mon, 20 Nov 2017 21:59:00 +0800 hgweb: use webutil.commonentry() for nodes (but not for jsdata yet) in /graph
Anton Shestakov <av6@dwimlabs.net> [Mon, 20 Nov 2017 21:59:00 +0800] rev 35119
hgweb: use webutil.commonentry() for nodes (but not for jsdata yet) in /graph

This makes graphdata() simpler by using existing code that gets common changeset properties for showing in hgweb.

graphdata() is a nested function in graph() that prepares entries for the /graph view, but there are two different lists of changesets prepared: "jsdata" for the JavaScript-rendered graph and "nodes" for everything else.

For "jsdata", the properties "node", "user", "age" and "desc" are passed through various template filters because we don't have these filters in JavaScript, so the data has to be prepared server-side. But now that commonentry() is used for producing the "nodes" list (and it doesn't apply any filters), these filters need to be added to the appropriate templates (only raw at this moment; everything else either doesn't implement graph or uses JavaScript).

This is a bit of refactoring that will hopefully simplify future patches. The end result is to have /graph only render the actual graph, with nodes and edges, in JavaScript, while the rest is done server-side. This way server-side code can focus on showing a list of changesets, which is easy because we already have /log, /shortlog, etc., and the JavaScript code can be simplified, making it easier to add an obsolescence graph and other features.
Mon, 20 Nov 2017 21:47:11 +0800 hgweb: check changeset's original branch in graphdata()
Anton Shestakov <av6@dwimlabs.net> [Mon, 20 Nov 2017 21:47:11 +0800] rev 35118
hgweb: check changeset's original branch in graphdata() This piece of code checks whether a changeset is the tip of its branch, but as can be seen above in the context, "branch" was prepared for display in hgweb by making it unicode and passing it through url.escape. It's better to use the original ctx.branch().