FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Wed, 14 Oct 2015 02:49:17 +0900] rev 26632
dirstate: move code paths for backup from dirstateguard to dirstate
This centralizes into dirstate the logic for writing in-memory changes
out correctly according to transaction activity.
Passing the 'repo' object to the newly added functions is needed so
that subsequent patches can examine current transaction activity,
because 'dirstate' itself doesn't have a direct reference to it.
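A rough sketch of the shape of the moved helpers (method names, bodies
and the backup file name are illustrative, not necessarily the real
ones): the backup logic lives on dirstate and receives 'repo' so that
later patches can consult the current transaction.

  class dirstate(object):
      # ... existing dirstate code; '_opener' is its '.hg' vfs ...

      def _savebackup(self, repo, suffix):
          # flush in-memory changes, then copy '.hg/dirstate' aside
          self.write()
          self._opener.write('dirstate' + suffix,
                             self._opener.tryread('dirstate'))

      def _restorebackup(self, repo, suffix):
          # drop in-memory state and put the saved copy back in place
          self.invalidate()
          self._opener.rename('dirstate' + suffix, 'dirstate')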
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Tue, 13 Oct 2015 12:25:43 -0700] rev 26631
localrepo: restore dirstate to one before rollbacking if not parent-gone
'localrepository.rollback()' explicitly restores dirstate, only if at
least one of the current parents of the working directory is removed
by the rollback (a.k.a. "parent-gone").
After DirstateTransactionPlan, 'dirstate.write()' will mark
'.hg/dirstate' as a file to be restored at rollback.
https://mercurial.selenic.com/wiki/DirstateTransactionPlan
Then, 'transaction.rollback()' restores '.hg/dirstate' regardless of
the parents of the working directory at that time, and this causes
unexpected dirstate changes if not "parent-gone" (e.g. "hg update" to
another branch after "hg commit", then "hg rollback").
To avoid this situation, this patch restores dirstate to the one
before the rollback if not "parent-gone".
before:
    b1. restore dirstate explicitly, if "parent-gone"
after:
    a1. save dirstate before actual rollbacking via dirstateguard
    a2. restore dirstate via 'transaction.rollback()'
    a3. if "parent-gone"
        - discard backup (a1)
        - restore dirstate from 'undo.dirstate'
    a4. otherwise, restore dirstate from backup (a1)
Even though restoring dirstate at (a3) after (a2) seems redundant,
this patch keeps this existing code path, because:
  - it isn't ensured that 'dirstate.write()' was invoked at least once
    while the transaction was running
    If it wasn't, '.hg/dirstate' isn't restored at (a2).
    In addition, a rude 3rd party extension invoking 'dirstate.write()'
    without 'repo' while the transaction is running (see subsequent
    patches for details) may break the consistency of a file backed up
    by the transaction.
  - this patch mainly focuses on changes for DirstateTransactionPlan
    Restoring dirstate at (a3) should be cheap enough compared to the
    rollback itself. The redundancy will be removed in the next step.
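A self-contained toy model of steps (a1)-(a4), using plain file copies
in place of dirstateguard and the transaction machinery (all names
below are illustrative, not Mercurial APIs):

  import os
  import shutil

  def toy_rollback(hgdir, parent_gone, txn_dirstate_backup=None):
      dirstate = os.path.join(hgdir, 'dirstate')
      guard = dirstate + '.guard'
      shutil.copyfile(dirstate, guard)                    # (a1) save dirstate
      if txn_dirstate_backup:                             # (a2) transaction.rollback()
          shutil.copyfile(txn_dirstate_backup, dirstate)  #      restores its own backup
      if parent_gone:                                     # (a3) wdir parent was stripped
          os.unlink(guard)                                #      discard the (a1) backup
          shutil.copyfile(os.path.join(hgdir, 'undo.dirstate'), dirstate)
      else:                                               # (a4) keep pre-rollback dirstate
          shutil.move(guard, dirstate)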
The newly added test is almost meaningless at this point. It will be
used to detect regressions while implementing delayed dirstate
write-out.
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Wed, 14 Oct 2015 02:40:04 +0900] rev 26630
parsers: make pack_dirstate take now in integer for consistency
On recent OSes, 'stat.st_mtime' is a double precision floating point
value that can represent nanoseconds, but it is not wide enough for
actual file timestamps: nowadays, only 52 - 32 = 20 bits are available
for the fractional part of a second.
Therefore, casting it to 'int' may cause an unexpected result. See
also changeset 13272104bb07, which fixes issue4836, for details.
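A small illustration of that precision limit (plain Python with an
arbitrary timestamp, not Mercurial code):

  N = 1444754399               # some file timestamp as time_t, i.e. N
  almost = N + 0.999999999     # nanosecond-precision mtime just below N+1
  print(repr(almost))          # 1444754400.0 -- the nearest double rounds up
  print(int(almost))           # N+1, although truncation to time_t gives N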
For example, a changed file A may unexpectedly be treated as "clean"
in the steps below. "rounded now" is the value obtained by rounding
via 'int(st.st_mtime)' or so.
---------------------+--------------------+------------------------
"now"                |                    | timestamp of A (time_t)
float  rounded time_t|       action       | FS      dirstate
------ ------- ------+--------------------+-------- ---------------
N+.nnn N       N     |                    | ---     ---
                     | update file A      | N
                     | dirstate.normal(A) |         N
N+.999 N+1     N     |                    |
                     | dirstate.write()   |         N  (*1)
                     |   :                |
                     | change file A      | N
                     |   :                |
N+1.00 N+1     N+1   |                    |
                     | "hg status" (*2)   | N       N
------ ------- ------+--------------------+-------- ---------------
The timestamp N of A in dirstate isn't dropped at (*1), because
"rounded now" is N+1 at that time, even though 'st_mtime' in 'time_t'
is still N. File A is then unexpectedly treated as "clean" at (*2).
For consistent handling of 'stat.st_mtime', this patch makes
'pack_dirstate()' take the 'now' argument as an integer instead of a
floating point value.
This patch makes 'PyArg_ParseTuple()' in 'pack_dirstate()' use format
'i' (= checking for type mismatch or overflow), even though it is
ensured that 'now' is in the range of a 32-bit signed integer by
masking with '_rangemask' (= 0x7fffffff) on the caller side.
This check should be cheap enough compared to the packing itself, and
is useful for detecting legacy code that invokes 'pack_dirstate()'
with 'now' as a floating point value.
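A minimal sketch of the caller-side convention mentioned above
(variable names are illustrative; the real call site is in
'dirstate.write()'):

  _rangemask = 0x7fffffff            # 31 bits, as quoted above

  st_mtime = 1444754399.25           # hypothetical stat.st_mtime (float)
  now = int(st_mtime) & _rangemask   # an int that fits PyArg_ParseTuple's 'i'
  # 'now' can then be passed as the last argument of pack_dirstate()
  # without risking a type mismatch or overflow error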
Pierre-Yves David <pierre-yves.david@fb.com> [Tue, 29 Sep 2015 00:18:49 -0700] rev 26629
destupdate: include the 'check' logic
After moving logic from 'merge.update' into 'destutil.destupdate', we
are now moving logic from 'commands.update' into
'destutil.destupdate'. This will make the function actually useful for
predicting (and altering) the update behavior.
Pierre-Yves David <pierre-yves.david@fb.com> [Mon, 05 Oct 2015 03:50:47 -0700] rev 26628
destupdate: move the check related to the "clean" logic in the function
We want this function to exactly predict the behavior of update.
Moreover, we would like to move all high level behavior logic out of
the merge module, so this is a step forward.
Now that the 'destupdate' function both computes and validates the
destination, we can use it directly at the command level, ensuring
that the 'hg update' command never calls 'merge.update' without a
defined destination. This is a first (but significant) step toward
having 'merge.update' always fed with a properly validated destination
and free of high level logic.
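A hedged sketch of the calling pattern this enables (the function
signatures here are assumptions, not the exact ones in commands.py and
destutil.py): the command resolves and validates the destination
first, then hands an explicit revision to the merge machinery.

  rev = destutil.destupdate(repo, clean=clean, check=check)  # may abort
  merge.update(repo, rev, branchmerge=False, force=clean)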
Mads Kiilerich <madski@unity3d.com> [Mon, 12 Oct 2015 19:22:34 +0200] rev 26627
largefiles: better handling of merge of largefiles that are not available
Before, when merging revisions with missing largefiles, the missing
largefiles would be fetched as a part of the merge. If that failed
(for example because the main repository was temporarily unavailable),
the largefile would be left missing. However, the next commit would
abort and (seemingly) fail when markcommitted tried to mark the
standin file as normal and thus had to hash a largefile that didn't
exist. (Actually, the commit would succeed, but the largefile update
that follows right after the commit transaction would abort - quite
confusing.)
To fix that, make sure that synclfdirstate only marks files as normal if they
actually exist.
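A hedged sketch of that guard (simplified; the real synclfdirstate
code in the largefiles extension differs in detail): only record a
largefile as "normal" when it is actually present on disk.

  import os

  def marksynced(lfdirstate, repo, lfile):
      abslfile = repo.wjoin(lfile)      # working-directory path of the largefile
      if os.path.exists(abslfile):
          lfdirstate.normal(lfile)      # safe: hashing the file will succeed
      else:
          lfdirstate.drop(lfile)        # assumption: treat a missing file as unknown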
Pierre-Yves David <pierre-yves.david@fb.com> [Sun, 11 Oct 2015 22:13:03 -0700] rev 26626
patchbomb: check that targets exist at the publicurl
Advertising that the patches are available to be pulled requires that
to be true, so we check revision availability on the remote before
sending any email.
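A hedged sketch of the pre-flight check (helper names, the 'nodes'
variable and the message text are illustrative, not the exact
patchbomb code):

  from mercurial import error, hg
  from mercurial.node import short

  publicpeer = hg.peer(repo, {}, publicurl)
  # 'nodes' is assumed to hold the changesets about to be emailed
  missing = [n for n, known in zip(nodes, publicpeer.known(nodes)) if not known]
  if missing:
      raise error.Abort('public url %s is missing %s'
                        % (publicurl, short(missing[0])))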
Mads Kiilerich <madski@unity3d.com> [Mon, 12 Oct 2015 20:13:12 +0200] rev 26625
windows: read all global config files, not just the first (issue4491) (BC)
On windows, hgrc.d/*.rc would not be read if mercurial.ini was found. That was
far from obvious from the documentation and different from the behavior on
posix systems.
As a consequence of this, TortoiseHg cacert configuration placed in hgrc.d
would not be read if an old global mercurial.ini still existed.
"hg config -g" could also crash when no global configuration files could be
found.
Instead, make windows behave like posix and read all global configuration
files.
The documentation was in a way right that individual config settings
in the global Mercurial.ini would override settings from, for example,
.hgrc.d\*.rc, but only because the .d files would not be read at all
if a Mercurial.ini was found. The ordering in the documentation is
thus changed to match the code.
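A minimal sketch of the posix-like lookup described above (the helper
name and paths are illustrative, and the relative ordering shown here
is not authoritative):

  import glob
  import os

  def globalrcpath(installdir):
      # collect every global config file instead of stopping at the first
      # hit; Mercurial reads the returned files in order, with later files
      # overriding earlier ones
      paths = [os.path.join(installdir, 'mercurial.ini')]
      paths.extend(sorted(glob.glob(os.path.join(installdir, 'hgrc.d', '*.rc'))))
      return [p for p in paths if os.path.isfile(p)]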
Ryan McElroy <rmcelroy@fb.com> [Fri, 09 Oct 2015 14:48:59 -0700] rev 26624
strip: factor out revset calculation for strip -B
This will allow reusing it in evolve and overriding it in other extensions.
Gregory Szorc <gregory.szorc@gmail.com> [Fri, 09 Oct 2015 11:22:01 -0700] rev 26623
clonebundles: support for seeding clones from pre-generated bundles
Cloning can be an expensive operation for servers because the server
generates a bundle from existing repository data at request time. For
a large repository like mozilla-central, this consumes 4+ minutes
of CPU time on the server. It also results in significant network
utilization. Multiplied across hundreds or even thousands of clients,
the ensuing load can make it difficult to scale the Mercurial server.
Although bundle generation is deterministic until the next changeset
is added, the bundles generated to service clone requests are not
cached. Each clone thus performs redundant work. This is wasteful.
This patch introduces the "clonebundles" extension and related
client-side functionality to help alleviate this deficiency. The
client-side feature is behind an experimental flag and is not enabled by
default.
It works as follows:
1) Server operator generates a bundle and makes it available on a
   server (likely HTTP).
2) Server operator defines the URL of a bundle file in a
   .hg/clonebundles.manifest file.
3) Client `hg clone`ing sees the server is advertising bundle URLs.
4) Client fetches and applies the advertised bundle.
5) Client performs equivalent of `hg pull` to fetch changes made since
   the bundle was created.
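For illustration, a .hg/clonebundles.manifest could look like the
following (the URLs and attribute names are made up; attribute
semantics are defined in subsequent patches):

  https://hg.example.com/bundles/myrepo-full.gzip-v2.hg BUNDLESPEC=gzip-v2
  https://hg.example.com/bundles/myrepo-full.bzip2-v1.hg BUNDLESPEC=bzip2-v1 REQUIRESNI=true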
Essentially, the server performs the expensive work of generating a
bundle once and all subsequent clones fetch a static file from
somewhere. Scaling static file serving is a much more manageable
problem than scaling a Python application like Mercurial. Assuming your
repository grows less than 1% per day, the end result is that 99+% of
the CPU and network load from clones is eliminated, allowing Mercurial
servers
to scale more easily. Serving static files also means data can be
transferred to clients as fast as they can consume it, rather than as
fast as servers can generate it. This makes clones faster.
Mozilla has implemented functionality similar to this patch on
hg.mozilla.org using a custom extension. We are hosting bundle files in
Amazon S3 and CloudFront (a CDN) and have successfully offloaded
>1 TB/day in data transfer from hg.mozilla.org, freeing up significant
bandwidth and CPU resources. The positive impact has been stellar and
I believe it has proved its value for inclusion in Mercurial core. I
feel it is important for the client-side support to be enabled in core
by default because it means clients will get faster, more reliable
clones and server operators will be able to reduce load without
requiring any client-side configuration changes (assuming clients are
up to date, of course).
The scope of this feature is narrowly and specifically tailored to
cloning, despite "serve pulls from pre-generated bundles" being a valid
and useful feature. I would eventually like for Mercurial servers to
support transferring *all* repository data via statically hosted files.
You could imagine a server that siphons all pushed data to bundle files
and instructs clients to apply a stream of bundles to reconstruct all
repository data. This feature, while useful and powerful, is
significantly more work to implement because it requires the server
component to have awareness of discovery and a mapping of which
changesets are in which files. Full clone bundles, by contrast, are
much simpler.
The wire protocol command is named "clonebundles" instead of something
more generic like "staticbundles" to leave the door open for a new, more
powerful and more generic server-side component with minimal backwards
compatibility implications. The name "bundleclone" is already used by
Mozilla's extension and would cause problems, since Mozilla's
extension has subtle behavioral differences.
Mozilla's experience with this idea has taught us that some form of
"content negotiation" is required. Not all clients will support all
bundle formats or even URLs (advanced TLS requirements, etc). To ensure
the highest uptake possible, a server needs to advertise multiple
versions of bundles and clients need to be able to choose the most
appropriate one from that list. The "attributes" in each
server-advertised entry facilitate this filtering and sorting. Their
use will become apparent in subsequent patches.
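As a hypothetical sketch of that selection (attribute names and the
preference rule are made up for illustration; the real filtering and
sorting arrive in subsequent patches):

  SUPPORTED_SPECS = {'gzip-v2', 'bzip2-v1'}     # what this client can apply

  def selectentry(entries, snisupported=True):
      # entries: parsed manifest lines, e.g. {'URL': ..., 'BUNDLESPEC': ...}
      usable = [e for e in entries
                if e.get('BUNDLESPEC') in SUPPORTED_SPECS
                and (snisupported or e.get('REQUIRESNI') != 'true')]
      # prefer gzip-v2 bundles, fall back to anything else usable
      usable.sort(key=lambda e: e.get('BUNDLESPEC') != 'gzip-v2')
      return usable[0]['URL'] if usable else None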
Initial inspiration and credit for the idea of cloning from static files
belongs to Augie Fackler and his "lookaside clone" extension proof of
concept.