#require serve

  $ cat <<EOF >> $HGRCPATH
  > [extensions]
  > schemes=
  > 
  > [schemes]
  > l = http://localhost:$HGPORT/
  > parts = http://{1}:$HGPORT/
  > z = file:\$PWD/
  > EOF
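
the schemes extension rewrites these prefixes when a URL is used: "l://"
expands to "http://localhost:$HGPORT/", and "{1}" in the "parts" template
is filled in with the first path component of the short URL
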
  $ hg init test
  $ cd test
  $ echo a > a
  $ hg ci -Am initial
  adding a

invalid scheme

  $ hg log -R z:z
  abort: no '://' in scheme url 'z:z'
  [255]
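
the short form must itself contain '://', so the bare "z:z" is rejected by
the schemes extension before any expansion takes place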

http scheme

  $ hg serve -n test -p $HGPORT -d --pid-file=hg.pid -A access.log -E errors.log
  $ cat hg.pid >> $DAEMON_PIDS
  $ hg incoming l://
  comparing with l://
  searching for changes
  no changes found
  [1]
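
the exit code [1] is expected: "hg incoming" returns 1 when there are no
incoming changesets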

check that {1} syntax works

  $ hg incoming --debug parts://localhost
  using http://localhost:$HGPORT/
  sending capabilities command
  comparing with parts://localhost/
  query 1; heads
  sending batch command
  searching for changes
  all remote heads known locally
  no changes found
  [1]
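
the "using http://localhost:$HGPORT/" debug line shows the expanded URL,
confirming that "{1}" was filled in from the "localhost" component of
"parts://localhost"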

check that paths are expanded

  $ PWD=`pwd` hg incoming z://
  comparing with z://
  searching for changes
  no changes found
  [1]
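
the hgrc above escapes the dollar sign (\$PWD), so the literal text "$PWD"
is stored in the scheme template; setting PWD explicitly here makes sure the
variable is present in the environment when the file: path is expanded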

check that debugexpandscheme outputs the canonical form

  $ hg debugexpandscheme bb://user/repo
  https://bitbucket.org/user/repo
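
"bb" is not defined in the [schemes] section above; it is one of the
prefixes the extension ships by default, mapping to https://bitbucket.org/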

expanding an unknown scheme emits the input

  $ hg debugexpandscheme foobar://this/that
  foobar://this/that

expanding a canonical URL emits the input

  $ hg debugexpandscheme https://bitbucket.org/user/repo
  https://bitbucket.org/user/repo

errors

  $ cat errors.log
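
no output is expected: an empty errors.log means the server handled the
requests above without logging an error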

  $ cd ..