view tests/test-archive-symlinks.t @ 49269:395f28064826

worker: avoid potential partial write of pickled data

Previously, the code wrote the pickled data using os.write(). However, os.write() can write fewer bytes than it was passed. To trigger the problem, the pickled data had to be larger than 2147479552 bytes on my system.

Instead, open a file object and pass it to pickle.dump(). This also has the advantage that it doesn't buffer the whole pickled data in memory.

Note that the opened file must be buffered: pickle doesn't support unbuffered streams, since an unbuffered stream's write() method might write fewer bytes than it was passed (just like os.write()), while pickle.dump() relies on all bytes being written (see https://github.com/python/cpython/issues/93050).

A side effect of using a file object and a with statement is that wfd is now closed explicitly, whereas before it was apparently closed implicitly at process exit.
author Manuel Jacob <me@manueljacob.de>
date Sun, 22 May 2022 03:50:34 +0200
parents c4d03b6d9576
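
A minimal sketch of the before/after pattern the changeset description
refers to; this is illustrative, not the actual worker code, and the
pipe and payload names here are stand-ins:

    import os
    import pickle

    rfd, wfd = os.pipe()            # stand-in for the worker's pipe
    data = {'some': 'payload'}      # stand-in for the pickled payload

    # Before (fragile): os.write() may write fewer bytes than it is
    # given, silently truncating the pickled stream for very large
    # payloads:
    #   os.write(wfd, pickle.dumps(data))

    # After (robust): wrap the fd in a *buffered* file object and let
    # pickle stream through it. The buffered layer retries short
    # writes, the whole pickle is never held in memory at once, and
    # the with statement closes wfd explicitly.
    with os.fdopen(wfd, 'wb') as wf:
        pickle.dump(data, wf)

    # The read side mirrors this with pickle.load():
    with os.fdopen(rfd, 'rb') as rf:
        assert pickle.load(rf) == data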

#require symlink

  $ origdir=`pwd`

  $ hg init repo
  $ cd repo
  $ ln -s nothing dangling
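
the symlink is deliberately dangling: "nothing" does not exist, so each
archive format has to store the link itself rather than follow it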

use a fixed commit date to avoid tar warnings about an old timestamp

  $ hg ci -d '2000-01-01 00:00:00 +0000' -qAm 'add symlink'

  $ hg archive -t files ../archive
  $ hg archive -t tar -p tar ../archive.tar
  $ hg archive -t zip -p zip ../archive.zip
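
three archive types are exercised: -t files writes a plain directory
tree, while -t tar and -t zip build archives whose members are placed
under the prefix given with -p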

files

  $ cd "$origdir"
  $ cd archive
  $ readlink.py dangling
  dangling -> nothing
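
readlink.py is a helper script shipped in the tests directory; a
minimal sketch of the behaviour the symlink checks in this test rely
on, assuming it simply prints each argument with its os.readlink()
target:

    import os
    import sys

    for f in sys.argv[1:]:
        print('%s -> %s' % (f, os.readlink(f)))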

tar

  $ cd "$origdir"
  $ tar xf archive.tar
  $ cd tar
  $ readlink.py dangling
  dangling -> nothing

#if unziplinks
zip
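
(guarded by #if unziplinks: the check only runs where the test harness
detects an unzip that actually extracts symlinks)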

  $ cd "$origdir"
  $ unzip archive.zip > /dev/null 2>&1
  $ cd zip
  $ readlink.py dangling
  dangling -> nothing
#endif

  $ cd ..