graftcopies: use _filter() for filtering out invalid copies
`graftcopies()` (formerly called `duplicatecopies()`) checked that the
copy destination existed in the working copy, but it didn't check that
the copy source existed in the parent of the working copy. In
`test-graft.t` this shows up as warnings about not finding ancestors of
the copied files, and as empty commits getting created.
This patch uses the existing `_filter()` function for filtering out
invalid copies. In addition to the aforementioned types, that also
includes copies where the source and destination are the same.
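As a rough sketch of the rule being enforced here (the names below are
illustrative, not Mercurial's actual API; the real work happens in the
`_filter()` helper operating on context objects):

    # Hypothetical helper mirroring the filtering described above:
    # `copies` maps destination -> source, `wctx` is the working copy
    # and `parent` is its first parent.
    def filter_invalid_copies(copies, wctx, parent):
        return {
            dst: src
            for dst, src in copies.items()
            if dst in wctx     # destination must exist in the working copy
            and src in parent  # source must exist in the working copy's parent
            and dst != src     # drop self-copies
        }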
Differential Revision: https://phab.mercurial-scm.org/D7859
copies: replace duplicatecopies() by function that takes contexts
The callers mostly have context objects, so let's avoid looking up the
same context objects inside `duplicatecopies()`.
I also renamed the function to `graftcopies()` since I think that
better matches its purpose. I did it in the same commit so it's easier
for extensions to switch between the functions.
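The shape of the change, roughly (the signatures below are illustrative;
the exact historical parameter names may differ):

    # Before: callers passed a repo plus revision identifiers, and the
    # function looked the contexts up itself.
    def duplicatecopies(repo, wctx, rev, fromrev, skiprev=None):
        ctx = repo[rev]
        fromctx = repo[fromrev]
        ...

    # After: callers hand over the context objects they already have.
    def graftcopies(wctx, ctx, base):
        ...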
Differential Revision: https://phab.mercurial-scm.org/D7858
graft: extract repo[None] to a variable
I plan to allow the caller to pass in an overlayworkingctx, so this
prepares for that.
Differential Revision: https://phab.mercurial-scm.org/D7857
rust-core: fix typo in comment
Differential Revision: https://phab.mercurial-scm.org/D7895
sha1dc: use buffer protocol when parsing arguments
Without this, functions won't accept bytearray, memoryview,
or other types that can be exposed as bytes to the C API.
The most resilient way to obtain a bytes-like object from
the C API is using the Py_buffer interface.
This commit converts use of s#/y# to s*/y* and uses
Py_buffer for accessing the underlying bytes array.
I checked how hashlib is implemented in CPython, and this
implementation agrees with its use of the Py_buffer interface as well
as its use of BufferError for bad buffer types. Sadly, there's no good
way to test for ndim > 1 without writing our own C-backed Python type.
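For reference, CPython's hashlib already accepts anything exposing the
buffer protocol; a quick Python-level illustration, using hashlib as a
stand-in for the sha1dc module:

    import hashlib

    data = b'some bytes'

    # bytes, bytearray and memoryview are all accepted because the C code
    # obtains the data through a Py_buffer rather than requiring bytes.
    for view in (data, bytearray(data), memoryview(data)):
        assert hashlib.sha1(view).hexdigest() == hashlib.sha1(data).hexdigest()

    # Bad buffers (e.g. multi-dimensional ones) are rejected with
    # BufferError, which is the behavior followed here as well.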
Differential Revision: https://phab.mercurial-scm.org/D7879
lfs: avoid quadratic performance in processing server responses
This is also adapted from the Facebook repo[1]. Unlike there, we were already
reading the download stream in chunks and immediately writing it to disk, so we
basically avoided the problem on download. There shouldn't be a lot of data to
read on upload, but it's better to get rid of this pattern.
[1] https://github.com/facebookexperimental/eden/commit/82df66ffe97e21f3ee73dfec093c87500fc1f6a7
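The quadratic pattern alluded to here is, presumably, growing a bytes
object with `+=` per chunk; a generic illustration of the difference
(not the lfs code itself):

    def read_all_quadratic(stream, chunk_size=8192):
        # Each += copies everything received so far again: O(n^2) overall.
        data = b''
        while True:
            chunk = stream.read(chunk_size)
            if not chunk:
                break
            data += chunk
        return data

    def read_all_linear(stream, chunk_size=8192):
        # Collect chunks and join once at the end: O(n) overall.
        chunks = []
        while True:
            chunk = stream.read(chunk_size)
            if not chunk:
                break
            chunks.append(chunk)
        return b''.join(chunks)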
Differential Revision: https://phab.mercurial-scm.org/D7882
lfs: check content length after downloading content
Adapted from the Facebook repo[1]. The intent is to distinguish between the
connection dying and getting served a corrupt blob.
The original message:
HTTP makes no provision to tell your client that you failed halfway through
producing your response and won't have the answer they're looking for. So, if
an LFS server fails while producing a response, then we'll report an OID
mismatch.
We can do a little better and disambiguate between "the server sent us the
wrong blob" (very scary) and "the server crashed" (merely annoying) by looking
at the content length of the response we got back. If it's not what was
advertised, we can reasonably safely assume the server crashed.
[1] https://github.com/facebookexperimental/eden/commit/2a4a6fab4e882ed89b948bfc1e7d56d7c3c99dd2
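A minimal sketch of that check (illustrative names, not the actual lfs
blobstore code), assuming a response object that exposes its headers:

    def check_content_length(response, blob):
        # Compare the advertised Content-Length with what actually arrived
        # before bothering to verify the blob's OID.
        expected = response.headers.get('Content-Length')
        if expected is not None and len(blob) != int(expected):
            # Short read: the server most likely failed mid-response.
            raise RuntimeError(
                'server advertised %s bytes but sent %d; assuming it crashed'
                % (expected, len(blob))
            )
        # Past this point, an OID mismatch really does mean a wrong or
        # corrupt blob.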
Differential Revision: https://phab.mercurial-scm.org/D7881
lfs: rename a variable to clarify its use
This is the response object, not a request.
Differential Revision: https://phab.mercurial-scm.org/D7880