Tue, 02 May 2017 19:05:58 +0200 caches: call 'repo.updatecache()' in 'repo.destroyed()'
Pierre-Yves David <pierre-yves.david@ens-lyon.org> [Tue, 02 May 2017 19:05:58 +0200] rev 32264
caches: call 'repo.updatecache()' in 'repo.destroyed()'

Regenerating the caches after a 'strip' or a 'rollback' is useful, so we call the generic cache-warming function there; caches other than just the branchmap will be refreshed through it in the future. To do so, we make 'repo.updatecache()' callable with no arguments: in that case, all caches are reloaded.
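The shape of the idea, as a minimal hypothetical sketch (the method bodies and the '_warmbranchmap' helper are invented for illustration, not taken from Mercurial's source):

    # Sketch: 'updatecache' takes an optional transaction; with none
    # (e.g. after strip/rollback) every cache is rebuilt from scratch.
    class localrepository(object):
        def updatecache(self, tr=None):
            if tr is None:
                self._warmbranchmap()          # full reload of all caches
            elif tr.changes.get('revs'):
                self._warmbranchmap()          # incremental refresh

        def destroyed(self):
            # no transaction is available after a destructive operation,
            # so warm everything unconditionally
            self.updatecache()

        def _warmbranchmap(self):
            pass  # placeholder for the real branchmap update logic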
Tue, 02 May 2017 21:39:43 +0200 caches: introduce a function to warm cache
Pierre-Yves David <pierre-yves.david@ens-lyon.org> [Tue, 02 May 2017 21:39:43 +0200] rev 32263
caches: introduce a function to warm cache

We have multiple caches that benefit from being kept up to date. For example, in a server setup we want to make sure the branch cache is hot for read-only clients. Right now each cache tries to update itself at every place where new data is added, but that approach is error prone (we might miss a spot) and fragile: when nested transactions are involved, such cache updates can happen before the top-level transaction is committed, writing caches for uncommitted data to disk. Having a single entry point, run at the end of each successful transaction, ensures the caches are refreshed at the right time. We start by updating the branchmap cache; others will follow.
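The single-entry-point pattern could look roughly like this (illustrative Python; 'addpostclose' is modelled on the transaction API, the rest of the names are made up for the example):

    # Sketch: one callback runs once, after the top-level transaction
    # closes, so caches are only written for committed data.
    class transaction(object):
        def __init__(self):
            self.changes = {'revs': set()}
            self._postclosecallbacks = []

        def addpostclose(self, callback):
            self._postclosecallbacks.append(callback)

        def close(self):
            # ... all pending data is written and committed first ...
            for callback in self._postclosecallbacks:
                callback(self)

    def warmcaches(tr):
        if tr.changes['revs']:
            print('warming branchmap for %d new revisions'
                  % len(tr.changes['revs']))

    tr = transaction()
    tr.addpostclose(warmcaches)
    tr.changes['revs'].update([100, 101])
    tr.close()  # prints: warming branchmap for 2 new revisions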
Tue, 02 May 2017 18:45:51 +0200 transaction: track newly introduced revisions
Pierre-Yves David <pierre-yves.david@ens-lyon.org> [Tue, 02 May 2017 18:45:51 +0200] rev 32262
transaction: track newly introduced revisions

Revisions are not the data whose tracking will unlock the most new capabilities, but they are the simplest thing to track and still enable some nice improvements around caching. We plug in at the changelog level to make sure we do not miss any revision additions. The 'revs' set is configured at the repository level because the transaction itself does not need to know that much about the business logic.
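Plugging in at the changelog level might look like this hypothetical sketch (the real changelog is considerably more involved):

    # Sketch: every code path that introduces a revision goes through
    # the changelog's add method, so recording the rev there cannot
    # miss an addition.
    class transaction(object):
        def __init__(self):
            self.changes = {'revs': set()}

    class changelog(object):
        def __init__(self, tr):
            self._tr = tr
            self._numrevs = 0

        def add(self, data):
            rev = self._numrevs
            self._numrevs += 1
            self._tr.changes['revs'].add(rev)  # record the new revision
            return rev

    tr = transaction()
    cl = changelog(tr)
    cl.add('changeset data')
    assert tr.changes['revs'] == {0}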
Tue, 02 May 2017 18:31:18 +0200 transaction: introduce "changes" dictionary to precisely track updates
Pierre-Yves David <pierre-yves.david@ens-lyon.org> [Tue, 02 May 2017 18:31:18 +0200] rev 32261
transaction: introduce "changes" dictionary to precisely track updates

The transaction already tracks some data intended for hooks (in 'hookargs'), but that information is minimal because it is optimised for passing to other processes through environment variables. There are multiple places where we could use more complete, lower-level information locally (e.g. cache updates, better reporting of changes to hooks, etc.). For this purpose we introduce a 'changes' dictionary on the transaction, intended to track every change happening to the repository (e.g. new revs, bookmark moves, phase moves, obsolescence markers, etc.). For now we just add the 'changes' dictionary; we'll add more tracking and usage over time.
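The dictionary's shape might end up looking like this sketch (the keys follow the examples above; the value types are assumptions made for illustration):

    # Sketch: 'hookargs' stays minimal because it is exported to hook
    # processes via environment variables; 'changes' can carry rich
    # Python objects for in-process consumers (cache updates, ...).
    class transaction(object):
        def __init__(self, hookargs=None):
            self.hookargs = hookargs or {}
            self.changes = {
                'revs': set(),      # newly introduced revision numbers
                'bookmarks': {},    # name -> (old node, new node)
                'phases': {},       # rev -> (old phase, new phase)
                'obsmarkers': [],   # newly added obsolescence markers
            }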
Thu, 11 May 2017 10:50:05 -0700 clone: add a server-side option to disable full getbundles (pull-based clones)
Siddharth Agarwal <sid0@fb.com> [Thu, 11 May 2017 10:50:05 -0700] rev 32260
clone: add a server-side option to disable full getbundles (pull-based clones)

For large enough repositories, pull-based clones take too long. An attempt to use one usually indicates a configuration problem, some other issue, or an outdated Mercurial client. Add a config option to disable them.
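If memory serves, the option added here is 'server.disablefullbundle'; treat the exact name as an assumption rather than something confirmed by this log. A server operator would enable it with something like:

    [server]
    # refuse full getbundles (pull-based clones); clients are expected
    # to use stream clones or pre-generated bundles instead
    disablefullbundle = true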
Mon, 08 May 2017 20:01:06 -0700 clone: warn when streaming was requested but couldn't be performed
Siddharth Agarwal <sid0@fb.com> [Mon, 08 May 2017 20:01:06 -0700] rev 32259
clone: warn when streaming was requested but couldn't be performed

This helps both users and the people who support them figure out why a stream clone couldn't be performed. In an upcoming patch we're going to add a way for servers to hard abort on a full getbundle. In those cases servers might expect clients to perform a stream clone, so it's important to communicate why one couldn't be done.
Mon, 08 May 2017 18:47:24 -0700 clone: test streaming disabled because client is missing requirement
Siddharth Agarwal <sid0@fb.com> [Mon, 08 May 2017 18:47:24 -0700] rev 32258
clone: test streaming disabled because client is missing requirement

Turns out we had no coverage for this important case.
Mon, 08 May 2017 17:30:51 -0700 bundle2: don't check for whether we can do stream clones
Siddharth Agarwal <sid0@fb.com> [Mon, 08 May 2017 17:30:51 -0700] rev 32257
bundle2: don't check for whether we can do stream clones

At the moment this isn't used, and all stream clones use the legacy protocol. In an upcoming diff, canperformstreamclone will print out a message if a stream clone was requested but couldn't happen for some reason. Removing this call ensures the message isn't printed twice.
Sat, 13 May 2017 03:37:50 +0900 debugcommands: add debugpickmergetool to examine which merge tool is chosen
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Sat, 13 May 2017 03:37:50 +0900] rev 32256
debugcommands: add debugpickmergetool to examine which merge tool is chosen

Before this patch, there was no convenient way to know which merge tool would be chosen for each managed file without actually merging.
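Expected usage is along these lines (the output format is approximated from memory, and the file and tool names are invented):

    $ hg debugpickmergetool
    file1.c = :merge
    file2.txt = vimdiff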
Sat, 13 May 2017 03:31:42 +0900 filemerge: add internal merge tool to dump files forcibly
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Sat, 13 May 2017 03:31:42 +0900] rev 32255
filemerge: add internal merge tool to dump files forcibly

The internal merge tool :dump implies premerge, so files aren't dumped if premerge succeeds. This undocumented behavior might confuse users who want files to always be dumped, but simply making :dump skip premerge could break backward compatibility for existing automation. This patch adds a new internal merge tool, :forcedump, which works the same as :dump but always omits premerge. Internal tools annotated with "nomerge" are expected to merge "change and delete" correctly, but _forcedump() can't, so it is annotated with "mergeonly" to always omit premerge, even though it doesn't actually merge files. This patch also adds an explanation of premerge to :dump, to clarify how :dump actually works. BTW, this patch specifies internal tools with the "internal:" prefix in the newly added test scenario in test-merge-tools.t, even though that prefix is deprecated; this is only for consistency with the other tests in test-merge-tools.t.
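A sketch of how this would be used (the command line is an assumption about typical usage, not taken from the patch):

    $ hg merge --tool :forcedump

Each conflicting file should then be dumped with the .local, .other and .base suffixes, even in cases where premerge alone would have resolved it.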