darcs

Issue 553 darcsXXXXXX: renameFile: permission denied (1.1.0 unstable)

Title darcsXXXXXX: renameFile: permission denied (1.1.0 unstable)
Priority bug
Status resolved
Milestone
Resolved in
Superseder
Nosy List darcs-devel, dmitry.kurochkin, eivuokko, kowey, rgm, thorkilnaur, tommy, wglozer
Assigned To
Topics Windows

Created on 2007-10-27.15:21:54 by kowey, last changed 2009-10-24.00:07:56 by admin.

Files
File name Uploaded Type
issue553-tempdir.tgz kowey, 2007-11-01.17:25:04 application/x-tar-gz
issue553.dpatch kowey, 2007-11-01.17:25:04 text/x-darcs-patch
Messages
msg2178 (view) Author: kowey Date: 2007-10-27.15:21:43
I get this error when I run the hashed_inventory.sh test (in hashed inventory
mode).  

This will take some tracking down...
msg2179 (view) Author: kowey Date: 2007-10-27.19:35:04
I have narrowed the problem down to this patch:

Fri Jun 15 22:47:31 CEST 2007  David Roundy <droundy@darcs.net>
  * Make Read work with gadts.

Not sure what exactly the problem is, or what to do about it, though.
msg2195 (view) Author: kowey Date: 2007-10-29.22:49:31
Some more Windows weirdness: we see the same kind of error message with the
tag command.  It is funny that each failure can be 'tracked down' to a single
patch, because it is clear that the patch in question is not actually
responsible.  Something deeper is going on underneath.

get_tag.sh
----------
$DARCS tag -m 1.0 -A x
darcs.exe: darcsb80f9c: renameFile: permission denied (Permission denied)

is 'caused' by:
Mon Aug 20 07:22:38 CEST 2007  Eric Kow <eric.kow@gmail.com>
  * [issue520] Make --checkpoint short help more explicit.

partial.sh
----------
$DARCS tag -m T -A X

is 'caused' by

Wed Oct  3 19:55:40 CEST 2007  David Roundy <droundy@darcs.net>
  * allow tests to be run in parallel.

tag.pl
------
line 23, same problem

'Caused' by the same patch.
msg2197 (view) Author: droundy Date: 2007-10-30.14:27:38
On Mon, Oct 29, 2007 at 10:49:32PM -0000, Eric Kow wrote:
> Some more Windows weirdness.  We see the same kind of error message with the tag
> command.  Funny that it can be tracked down to one patch.  It is clear that the
> patch in question is not responsible for the problem.   Something deeper 
> underneath.
> 
> get_tag.sh
> ----------
> $DARCS tag -m 1.0 -A x
> darcs.exe: darcsb80f9c: renameFile: permission denied (Permission denied)

At a glance, I'd suggest you try replacing the import of renameFile from
System.Directory in Darcs.IO with an import from Workaround.  It might also
be worth seeing if we could figure out how to make the test suite check
that only Workaround imports System.Directory.renameFile.  I'm not sure how
we could do this, but if we could, it would fix possible future bugs of
this sort.

Of course, this might not fix anything, but my quick recursive grep
suggests this could be the problem.  If this doesn't fix the bug, perhaps
we could improve the error message output by Workaround.renameFile, so that
perhaps it could give both new and old names, and maybe even the current
working directory.
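
(For illustration, the remove-then-rename shape I have in mind, decorated
with the more informative error message; a sketch only, not the actual
Workaround code:)

import Control.Exception ( IOException, catch )
import Control.Monad ( when )
import qualified System.Directory as SD

-- Rename old over new, deleting new first (Windows cannot rename over an
-- existing file), and report both names plus the cwd on failure.
renameFileVerbose :: FilePath -> FilePath -> IO ()
renameFileVerbose old new = go `catch` rethrow
  where go = do exists <- SD.doesFileExist new
                when exists $ SD.removeFile new
                SD.renameFile old new
        rethrow e = do cwd <- SD.getCurrentDirectory
                       fail ("renameFile " ++ old ++ " -> " ++ new ++
                             " (in " ++ cwd ++ "): " ++
                             show (e :: IOException))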
-- 
David Roundy
Department of Physics
Oregon State University
msg2198 (view) Author: kowey Date: 2007-10-30.17:07:05
On Tue, Oct 30, 2007 at 14:27:38 -0000, David Roundy wrote:
> At a glance, I'd suggest you try replacing the import of renameFile from
> System.Directory in Darcs.IO with an import from Workaround.

That was a good thought, and by rights it should be done anyway.  On my
Windows box, however, the renameFile in Workaround is just
System.Directory.renameFile.

Using the new withBugLoc function, I have tracked the error
down to the renameFile in HashedRepo.lhs:106
msg2199 (view) Author: droundy Date: 2007-10-30.17:16:50
On Tue, Oct 30, 2007 at 05:07:06PM -0000, Eric Kow wrote:
> On Tue, Oct 30, 2007 at 14:27:38 -0000, David Roundy wrote:
> > At a glance, I'd suggest you try replacing the import of renameFile from
> > System.Directory in Darcs.IO with an import from Workaround.
> 
> That was a good thought, and by rights it should be done anyway.  On my
> Windows box, however, the renameFile in Workaround is just
> System.Directory.renameFile.

Ah, that's a bug in the configure script then.  The macro
WORKAROUND_renameFile should check whether one can rename one file over an
existing one.  On windows boxes (or when compiling on posix systems, but on
a smbfs mount) this test should fail, and you should get the workaround
version.

Can you track down the configure logs to see how this is passing? You could
also try running this by hand, to see if it succeeds.
-- 
David Roundy
Department of Physics
Oregon State University
msg2200 (view) Author: droundy Date: 2007-10-30.17:17:18
P.S. the relevant code is in aclocal.m4
msg2201 (view) Author: kowey Date: 2007-10-30.17:35:15
> > On my Windows box, however, the renameFile in Workaround is just
> > System.Directory.renameFile

> Ah, that's a bug in the configure script then.  The macro
> WORKAROUND_renameFile should check whether one can rename one file
> over an existing one.

Oh, yeah I had glanced at that, but somehow assumed that it must have already
been fixed.  I compiled the little test script by hand and it renames the
file without complaining.  The config.log also shows an exit code of 0.
Am I missing something?

import System.Directory ( renameFile )

main = do writeFile "conftest.data" "orig_data"
          writeFile "conftest.newdata" "new_data"
          renameFile "conftest.newdata" "conftest.data"
msg2202 (view) Author: droundy Date: 2007-10-30.17:50:32
On Tue, Oct 30, 2007 at 05:35:16PM -0000, Eric Kow wrote:
> > > On my Windows box, however, the renameFile in Workaround is just
> > > System.Directory.renameFile
> 
> > Ah, that's a bug in the configure script then.  The macro
> > WORKAROUND_renameFile should check whether one can rename one file
> > over an existing one.
> 
> Oh, yeah I had glanced at that, but somehow assumed that it must have already
> been fixed.  I compiled the little test script by hand and it renames the
> file without complaining.  The config.log also shows an exit code of 0.
> Am I missing something?
> 
> import System.Directory ( renameFile )
> 
> main = do writeFile "conftest.data" "orig_data"
>           writeFile "conftest.newdata" "new_data"
>           renameFile "conftest.newdata" "conftest.data"

That's very odd.  What happens if you do this in the shell? Is this a
strange filesystem?
-- 
David Roundy
Department of Physics
Oregon State University
msg2203 (view) Author: kowey Date: 2007-10-30.17:57:47
> > Ah, that's a bug in the configure script then.  The macro
> > WORKAROUND_renameFile should check whether one can rename one file
> > over an existing one.
> 
> Oh, yeah I had glanced at that, but somehow assumed that it must have already
> been fixed.  I compiled the little test script by hand and it renames the
> file without complaining.

Also, replacing renameFile with the (rm; mv) version does not seem to
help matters.

Is it possible that there is something about those filenames that it
doesn't like?

One thought was that length might be an issue.  Then again, I tried
counting the characters in the filepath,
   C:\msys\...\_darcs\patches\7150439a55a06144d7ec5d83936a9b35d90c5fa1
and it only added up to something like 120 characters, well under the
255 character limit...
msg2204 (view) Author: kowey Date: 2007-10-30.17:59:07
> That's very odd.  What happens if you do this in the shell? Is this a
> strange filesystem?

Also successful, both using bash (MSYS) and the DOS prompt.  I'm running
a bog standard Windows XP, albeit under Parallels
msg2205 (view) Author: droundy Date: 2007-10-30.18:58:42
On Tue, Oct 30, 2007 at 05:59:08PM -0000, Eric Kow wrote:
> > That's very odd.  What happens if you do this in the shell? Is this a
> > strange filesystem?
> 
> Also successful, both using bash (MSYS) and the DOS prompt.  I'm running
> a bog standard Windows XP, albeit under Parallels

What's Parallels? And what is the filesystem?
-- 
David Roundy
Department of Physics
Oregon State University
msg2211 (view) Author: kowey Date: 2007-10-30.19:38:31
On Tue, Oct 30, 2007 at 18:58:42 -0000, David Roundy wrote:
> > Also successful, both using bash (MSYS) and the DOS prompt.  I'm running
> > a bog standard Windows XP, albeit under Parallels

> What's Parallels? And what is the filesystem?

Parallels is virtual machine software for the Mac.  It's what I'm using for my
Linux and Windows testing.  The filesystem is NTFS.

Also, I tried avoiding the error by avoiding renameFile in the
writeHashFile function.  This just gives me a new error, this time in
src/Darcs/Lock.lhs, line 311 (writeToFile)
  62ee7aa8b8e52c7887a1477c8f293334092c92b5-0: renameFile: permission denied (Permission denied)

The new writeHashFile looks like this

writeHashFile :: Doc -> IO String
writeHashFile d = do writeAtomicFilePS hash ps
                     return hash
                  where
                     ps = renderPS d
                     hash = sha1PS ps

Not that this does anything for the underlying mystery, but I was
curious.
msg2213 (view) Author: droundy Date: 2007-10-30.20:14:49
On Tue, Oct 30, 2007 at 07:38:32PM -0000, Eric Kow wrote:
> The new writeHashFile looks like this
> 
> writeHashFile :: Doc -> IO String
> writeHashFile d = do writeAtomicFilePS hash ps
>                      return hash
>                   where
>                      ps = renderPS d
>                      hash = sha1PS ps

Yes, this looks like an improvement (in clarity and efficiency), but I'd
format it as

writeHashFile d = do let ps = renderPS d
                         hash = sha1PS ps
                     writeAtomicFilePS hash ps
                     return hash

I find that easier to read.

> Not that this does anything for the underlying mystery, but I was
> curious.

Indeed writeAtomicFilePS just calls renameFile under the hood.
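
(Roughly this shape, that is -- a sketch only: the temp-file naming scheme
is invented here, and writeFilePS is assumed as the plain counterpart of
readFilePS:)

writeAtomicFilePS :: FilePath -> PackedString -> IO ()
writeAtomicFilePS target ps =
    do let tmp = target ++ "-tmp"  -- naming scheme is an assumption
       writeFilePS tmp ps          -- write everything to a temp file,
       renameFile tmp target       -- then rename it into place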
-- 
David Roundy
Department of Physics
Oregon State University
msg2214 (view) Author: droundy Date: 2007-10-30.20:18:31
On Tue, Oct 30, 2007 at 05:57:48PM -0000, Eric Kow wrote:
> > > Ah, that's a bug in the configure script then.  The macro
> > > WORKAROUND_renameFile should check whether one can rename one file
> > > over an existing one.
> > 
> > Oh, yeah I had glanced at that, but somehow assumed that it must have
> > already been fixed.  I compiled the little test script by hand and it
> > renames the file without complaining.
> 
> Also, replacing renameFile with the (rm; mv) version does not seem to
> help matters.

What's the error you get when using the rm;mv version of renameFile? Does
the removeFile fail or the System.Directory.renameFile? And could you
annotate this with some debug printing indicating whether the target
filename exists already?
-- 
David Roundy
Department of Physics
Oregon State University
msg2216 (view) Author: kowey Date: 2007-10-30.21:55:16
On Tue, Oct 30, 2007 at 20:18:32 -0000, David Roundy wrote:
> What's the error you get when using the rm;mv version of renameFile? Does
> the removeFile fail or the System.Directory.renameFile? And could you
> annotate this with some debug printing indicating whether the target
> filename exists already?

It's also permission denied.

Yes, the target does exist, even after we try to delete it.  Following
up on this, I printed the error from the removeFile to find that
it fails because of... a permission denied error.
msg2217 (view) Author: droundy Date: 2007-10-31.14:12:21
On Tue, Oct 30, 2007 at 09:55:17PM -0000, Eric Kow wrote:
> On Tue, Oct 30, 2007 at 20:18:32 -0000, David Roundy wrote:
> > What's the error you get when using the rm;mv version of renameFile? Does
> > the removeFile fail or the System.Directory.renameFile? And could you
> > annotate this with some debug printing indicating whether the target
> > filename exists already?
> 
> It's also permission denied.
> 
> Yes, the target does exist, even after we try to delete it.  Following
> up on this, I printed the error from the removeFile to find that
> it fails because of... a permission denied error.

Ah.  Have you figured out why the permission is denied on the removeFile?
My guess is that the problem is that we've got the file open still,
somewhere.  Which would suggest that the bug is that we're not reading the
file in its entirety.  This could be another reemergence of a bug where we
read a file lazily.  (I understand I'm throwing a lot of the debugging
right back at you, for which I apologize... hopefully my suggestions will
at least make this more pleasant.)

Actually, if the problem really is in writeHashFile, perhaps we can solve
it by simply checking whether the file already exists and has the proper
hash? Then we wouldn't need to write over the file... and that might
actually even be faster.  We could also check that the contents match what
we're about to write, to be aware in case of hash collision.  I wrote it
like this mostly just because I was lazy.
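
(Sketched out, the check would only be a couple of lines -- this assumes
doesFileExist from System.Directory and unless from Control.Monad, and
leaves out the contents check against hash collisions:)

writeHashFile :: Doc -> IO String
writeHashFile d = do let ps = renderPS d
                         hash = sha1PS ps
                     exists <- doesFileExist hash
                     unless exists $ writeAtomicFilePS hash ps
                     return hash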
-- 
David Roundy
Department of Physics
Oregon State University
msg2223 (view) Author: kowey Date: 2007-11-01.01:29:18
On Wed, Oct 31, 2007 at 14:12:22 -0000, David Roundy wrote:
> Ah.  Have you figured out why the permission is denied on the removeFile?

Not yet.

> My guess is that the problem is that we've got the file open still,
> somewhere.

Yeah.  I eventually discovered this handy (but crashy) tool:
  http://www.microsoft.com/technet/sysinternals/utilities/ProcessExplorer.mspx
It lets me see what files a process has open, among other things.

Then I had the removeFile error handler pause (getLine) when it
encounters a problem.  Indeed, the darcs process does have a handle open
for the file in question.
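
(Roughly what that wrapper looked like -- a reconstruction, not the exact
code:)

import Control.Exception ( IOException, catch, throwIO )
import System.Directory ( removeFile )

-- Pause on failure so Process Explorer can show who holds the file open.
removeFileDebug :: FilePath -> IO ()
removeFileDebug f = removeFile f `catch` \e -> do
    putStrLn ("uh-oh! (press Enter): " ++ f)
    _ <- getLine
    throwIO (e :: IOException)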

Any ideas how to track this down?  I've hooked a little routine that
prints out a filename and a message if the filename is long enough to be
interesting.  I've attached it to readFilePS and friends in
FastPackedString... but this doesn't seem too helpful

Finished recording patch 'uno'
[gzReadFile  C:/msys/1.0/home/Eric/issue553/tt/_darcs/patches/2afc89fd003d4b41feadc918a39627d172e7253c
-gzReadFile] C:/msys/1.0/home/Eric/issue553/tt/_darcs/patches/2afc89fd003d4b41feadc918a39627d172e7253c
[readFilePS  C:/msys/1.0/home/Eric/issue553/tt/_darcs/patches/2afc89fd003d4b41feadc918a39627d172e7253c
 readFilePS] C:/msys/1.0/home/Eric/issue553/tt/_darcs/patches/2afc89fd003d4b41feadc918a39627d172e7253c
[gzReadFile  C:/msys/1.0/home/Eric/issue553/tt/_darcs/patches/f84f008f9a7a1836310775b519dbaa0fd3a81ff6
-gzReadFile] C:/msys/1.0/home/Eric/issue553/tt/_darcs/patches/f84f008f9a7a1836310775b519dbaa0fd3a81ff6
[readFilePS  C:/msys/1.0/home/Eric/issue553/tt/_darcs/patches/f84f008f9a7a1836310775b519dbaa0fd3a81ff6
 readFilePS] C:/msys/1.0/home/Eric/issue553/tt/_darcs/patches/f84f008f9a7a1836310775b519dbaa0fd3a81ff6
[gzReadFile  C:/msys/1.0/home/Eric/issue553/tt/_darcs/patches/2afc89fd003d4b41feadc918a39627d172e7253c
-gzReadFile] C:/msys/1.0/home/Eric/issue553/tt/_darcs/patches/2afc89fd003d4b41feadc918a39627d172e7253c
[readFilePS  C:/msys/1.0/home/Eric/issue553/tt/_darcs/patches/2afc89fd003d4b41feadc918a39627d172e7253c
 readFilePS] C:/msys/1.0/home/Eric/issue553/tt/_darcs/patches/2afc89fd003d4b41feadc918a39627d172e7253c
uh-oh! (press Enter): f84f008f9a7a1836310775b519dbaa0fd3a81ff6

The 'uh-oh!' file is the one where the error occurred.  Does this give you any
ideas?

> (I understand I'm throwing a lot of the debugging
> right back at you, for which I apologize... hopefully my suggestions will
> at least make this more pleasant.)

Oh I'm perfectly happy to play this role in the debugging.  It's
definitely going a lot faster with your input!

> Actually, if the problem really is in writeHashFile, perhaps we can solve
> it by simply checking whether the file already exists and has the proper
> hash?

Well it would be nice if we could figure out what exactly is going on
first...
msg2224 (view) Author: kowey Date: 2007-11-01.13:47:56
> > My guess is that the problem is that we've got the file open still,
> > somewhere.

Ok, I think I have an idea what's going on.

As you suggested, it has something to do with reading a file lazily.
The culprit is gzReadPatchLazily, which is called when you
addPatchTentatively.

Looking deeper into the code (gzReadFileLazily -- oddly, my putStrLn
"I was here" do not show for this function),
 * When use_mmap is enabled, we close the handle immediately and use
   mmapFilePS.  This is why everything works under Unix.
 * Otherwise (Windows) we invoke readHandleLazily.

I tried modifying the code so that it closes the handle when done
reading (would that have been the right thing to do in any case?)
but to no avail

  case lread of
      0 -> hClose h >> return [] -- MODIFIED HERE
      l -> do rest <- unsafeInterleaveIO read_rest
              return (PS fp 0 l:rest)

... so it really is the laziness which is getting in the way.  Any thoughts on
a good fix?  A dumb 'solution' might be to copy the file, since it's supposed
to be read-only, and read *that* instead.
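
(For reference, a self-contained sketch of the readHandleLazily shape in
question, with the eager hClose at end-of-file; the chunk size is arbitrary
and ByteString stands in for darcs's PackedString:)

import qualified Data.ByteString as B
import System.IO ( Handle, hClose )
import System.IO.Unsafe ( unsafeInterleaveIO )

readHandleLazily :: Handle -> IO [B.ByteString]
readHandleLazily h = go
  where go = do chunk <- B.hGet h 65536
                if B.null chunk
                   then hClose h >> return []  -- close eagerly at EOF
                   else do rest <- unsafeInterleaveIO go
                           return (chunk : rest)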

P.S.: I noticed a module System.Win32.FileMapping by Esa, on the standard lib
 http://www.haskell.org/ghc/docs/latest/html/libraries/Win32/System-Win32-FileMapping.html
 Is that like mmap for Windows, or is it completely unrelated?
 If so, would it make darcs feel faster for Windows users?
msg2225 (view) Author: droundy Date: 2007-11-01.14:53:33
On Thu, Nov 01, 2007 at 01:47:57PM -0000, Eric Kow wrote:
> > > My guess is that the problem is that we've got the file open still,
> > > somewhere.
> 
> Ok, I think I have an idea what's going on.
> 
> As you suggested, it has something to do with reading a file lazily.
> The culprit is gzReadPatchLazily, which is called when you
> addPatchTentatively.
> 
> Looking deeper into the code (gzReadFileLazily -- oddly, my putStrLn
> "I was here" do not show for this function),
>  * When use_mmap is enabled, we close the handle immediately and use
>    mmapFilePS.  This is why everything works under Unix.
>  * Otherwise (Windows) we invoke readHandleLazily.

Actually, under unix we'll also be fine if we use readHandleLazily, because
under unix it's legal to delete an open file.  This is the main issue with
windows that makes things difficult.

> I tried modifying the code so that it closes the handle when done
> reading (would that have been the right thing to do in any case?)
> but to no avail
> 
>   case lread of
>       0 -> hClose h >> return [] -- MODIFIED HERE
>       l -> do rest <- unsafeInterleaveIO read_rest
>               return (PS fp 0 l:rest)

Are you sure that we aren't actually using gzReadHandleLazily? It looks to
me like this is a real bugfix that simply went unnoticed... or perhaps it
rarely has an effect because h is eventually garbage-collected, which closes
the file handle.  In any case, this looks like a good change:  we should
explicitly close h, since there's no guarantee when the garbage-collector
and finalizer will be run.

> ... so it really is the laziness which is getting in the way.  Any thoughts on
> a good fix?  A dumb 'solution' might be to copy the file, since it's supposed
> to be read-only, and read *that* instead.

That 'solution' really would be dumb (as you say).

Another possibility is to simply read the file strictly.  This may break
some of Ian's optimizations to allow darcs to deal with patches that won't
fit into memory (think an initial patch in a linux kernel repository), but
those have always been somewhat iffy optimizations in my mind, as we still
can't commute any patch that can't be held in memory.  Anyhow, this is the
purpose for the lazy parsing/reading of patches.
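
(By contrast, the strict version is a one-liner -- again with ByteString
standing in for darcs's packed strings:)

import qualified Data.ByteString as B
import System.IO ( Handle )

readHandleStrictly :: Handle -> IO B.ByteString
readHandleStrictly = B.hGetContents  -- strict read; closes the handle at EOF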

The lazy patch parsing code is *supposed* to close the file handle after
the patch is used when lazily reading a file (there's considerable
complexity in the code to make this true), but this is a feature that could
be easily broken, and may have been broken.

> P.S.: I noticed a module System.Win32.FileMapping by Esa, on the standard lib
>  http://www.haskell.org/ghc/docs/latest/html/libraries/Win32/System-Win32-FileMapping.html
>  Is that like mmap for Windows, or is it completely unrelated?
>  If so, would it make darcs feel faster for Windows users?

At one time, we had mmap for Windows, but it was found to be close to
useless.  Unlike with posix systems, windows mmap requires that the file
remain open, so it uses up a file handle, and also keeps the file from
being deleted, due to windows filesystem semantics.  And I believe I was
also informed at one time that mmap on windows doesn't have the same
memory-efficiency gain, because the entire file is read strictly when it is
mapped, so it still exerts the same memory pressure immediately--although I
suppose it is more friendly towards being swapped out later.

In any case, mmap is quite a minor benefit, as the parsed patch takes up
much more memory than the patch itself.

...

All this being said, I think the best solution seems to be making
writeHashFile check for the pre-existence of a correct file.  It's arguably
more efficient (certainly less writing to disk, and reading is probably
fast, since odds are good it's just been read and is in cache), and it's
just "correct" behavior.  It'd be nice to ensure that files are closed as
early as possible, but I'm not sure we should see the failure to close
files as a bug.
-- 
David Roundy
Department of Physics
Oregon State University
msg2226 (view) Author: kowey Date: 2007-11-01.15:00:41
> Actually, under unix we'll also be fine if we use readHandleLazily, because
> under unix it's legal to delete an open file.  This is the main issue with
> windows that makes things difficult.

Right, I had forgotten about that.

> Are you sure that we aren't actually using gzReadHandleLazily?

We are.  I had set a filter to avoid printing out 'I was here' on short
FilePaths.  Since the file was being referred to as a relative path, it
was inherently short.

> In any case, this looks like a good change:  we should
> explicitly close h, since there's no guarantee when the garbage-collector
> and finalizer will be run.

Ok, will submit this as a patch.  If nothing else, this bug has been a
good way for us to squeeze out some minor improvements to the code.

> > ... so it really is the laziness which is getting in the way.  Any thoughts on
> > a good fix?  A dumb 'solution' might be to copy the file, since it's supposed
> > to be read-only, and read *that* instead.
> 
> Another possibility is to simply read the file strictly.

Yeah, removing the unsafeInterleaveIO statements in readHandleLazily
does make the problem go away.

> All this being said, I think the best solution seems to be making
> writeHashFile check for the pre-existence of a correct file.  It's arguably
> more efficient (certainly less writing to disk, and reading is probably
> fast, since odds are good it's just been read and is in cache), and it's
> just "correct" behavior.

Ok.  I'd be happy with that now that the mystery has been pinned down a
bit.
msg2229 (view) Author: kowey Date: 2007-11-01.17:25:04
Hmm...

Am I doing anything wrong here?

The patches attached cause 'darcs optimize --reorder-patches' to act
funny (the filepath.sh test fails, for example).  It seems like it's not
writing the new inventory or something.

It seems the first patch is OK; it's the check for whether the file
already exists that goes wrong.

Note that I had to use gzReadFilePS because at some point, the content
of the file is gzipped, although this is not how we write it out in this
function...

I'm also attaching the affected directory for possible forensics work.
You'll notice some differences if you do a darcs check.
msg2231 (view) Author: kowey Date: 2007-11-02.12:13:49
I have tracked down the regression caused by not systematically
overwriting the hash file.  Somewhere out there, somebody is writing these files
gzipped.  Before my patch, these would be systematically overwritten as plain
text.  Some parts of the code *count* on the files being text.  Specifically:
when writing a hashed inventory, if it already exists in _darcs/inventories, we
just copy it over, not checking to see if it was zipped or not.

I was able to work around the bug by replacing the copy command with a
gzReadFilePS foo >>= writeAtomicFilePS bar.  This doesn't seem quite right. 
Probably the right thing to do is to ensure that _darcs/inventories never gets
filled with gzipped content.
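
(The stop-gap, roughly, using the functions named above:)

-- Read with transparent gunzip, write back plain, so the destination
-- file is never left gzipped.
copyUnzipped :: FilePath -> FilePath -> IO ()
copyUnzipped foo bar = gzReadFilePS foo >>= writeAtomicFilePS bar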
msg2232 (view) Author: droundy Date: 2007-11-03.20:56:09
On Fri, Nov 02, 2007 at 12:13:50PM -0000, Eric Kow wrote:
> I have tracked down the regression caused by not systematically
> overwriting the hash file.  Somewhere out there, somebody is writing these files
> gzipped.  Before my patch, these would be systematically overwritten as plain
> text.  Some parts of the code *count* on the files being text.  Specifically:
> when writing a hashed inventory, if it already exists in _darcs/inventories, we
> just copy it over, not checking to see if it was zipped or not.
> 
> I was able to work around the bug by replacing the copy command with a
> gzReadFilePS foo >>= writeAtomicFilePS bar.  This doesn't seem quite right. 
> Probably the right thing to do is to ensure that _darcs/inventories never gets
> filled with gzipped content.

Or alternatively, we could assume that any content may be gzipped... except
I suppose that we append to the inventory file, so it's no fair assuming
it's gzipped.  I think I'd rather allow any "hash" files to be gzipped.
Perhaps the right thing is to ensure that no one ever accesses "hash" files
except through an established API? This would handle the copy case, and
would also allow us to implement an option to *always* check that the
contents of hashed files are correct.
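
(A sketch of what such an API might look like, built from functions already
named in this thread; when is Control.Monad's, takeFileName is
System.FilePath's, and the mismatch handling is illustrative only:)

-- All hash-file access goes through here: gzip is handled transparently,
-- and contents can always be checked against the hash in the filename.
readHashFile :: FilePath -> IO PackedString
readHashFile f = do ps <- gzReadFilePS f
                    when (sha1PS ps /= takeFileName f) $
                        fail ("hash mismatch: " ++ f)
                    return ps

copyHashFile :: FilePath -> FilePath -> IO ()
copyHashFile from to = readHashFile from >>= writeAtomicFilePS to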
-- 
David Roundy
Department of Physics
Oregon State University
msg2234 (view) Author: kowey Date: 2007-11-04.19:34:13
On Sat, Nov 03, 2007 at 20:56:10 -0000, David Roundy wrote:
> Or alternatively, we could assume that any content may be gzipped... except
> I suppose that we append to the inventory file, so it's no fair assuming
> it's gzipped.  I think I'd rather allow any "hash" files to be gzipped.
> Perhaps the right thing is to ensure that no one ever accesses "hash" files
> except through an established API? This would handle the copy case, and
> would also allow us to implement an option to *always* check that the
> contents of hashed files are correct.

Hmm.  At the moment, I'm a little fed up with this bug and impatient to
see things working on Windows again.

I'm going to fast-track the (gzRead >>= writeFile) hack, which at least
ensures that the main inventory file is not gzipped, which takes care of
appending.  The most awkward thing about this hack is that it allows for
files under _darcs/inventories to be gzipped, which seems like it's
inherently wrong.  I don't think this will introduce any backwards
compatibility issues, because your fetchFileUsingCache code already
assumes potentially gzipped content everywhere.  Besides, I don't think
anybody is really using the hashed inventories yet.

I'm setting this one aside for now, in any case.
msg2243 (view) Author: kowey Date: 2007-11-04.22:59:06
Marking as resolved in unstable, although a better solution to the gzipped hash
repo files problem ought to be worked out:

Thu Nov  1 17:14:37 CET 2007  Eric Kow <eric.kow@gmail.com>
  * [issue553] Do not attempt to write hash file if it already exists.

Sun Nov  4 20:28:49 CET 2007  Eric Kow <eric.kow@gmail.com>
  * [issue553] Allow for files in _darcs/inventories to be gzipped.
History
Date                 User     Action  Args
2007-10-27 15:21:58  kowey    create
2007-10-27 19:35:06  kowey    set     status: unread -> unknown; messages: + msg2179
2007-10-29 22:49:33  kowey    set     messages: + msg2195; title: optimize --reorder => renameFile: permission denied (1.1.0 unstable) -> darcsXXXXXX: renameFile: permission denied (1.1.0 unstable)
2007-10-30 14:27:39  droundy  set     messages: + msg2197
2007-10-30 17:07:06  kowey    set     messages: + msg2198
2007-10-30 17:16:51  droundy  set     messages: + msg2199
2007-10-30 17:17:19  droundy  set     messages: + msg2200
2007-10-30 17:35:17  kowey    set     messages: + msg2201
2007-10-30 17:50:33  droundy  set     messages: + msg2202
2007-10-30 17:57:49  kowey    set     messages: + msg2203
2007-10-30 17:59:08  kowey    set     messages: + msg2204
2007-10-30 18:58:43  droundy  set     messages: + msg2205
2007-10-30 19:38:33  kowey    set     messages: + msg2211
2007-10-30 20:14:50  droundy  set     messages: + msg2213
2007-10-30 20:18:33  droundy  set     messages: + msg2214
2007-10-30 21:55:18  kowey    set     messages: + msg2216
2007-10-31 14:12:23  droundy  set     messages: + msg2217
2007-11-01 01:29:19  kowey    set     messages: + msg2223
2007-11-01 13:47:58  kowey    set     messages: + msg2224
2007-11-01 14:53:34  droundy  set     messages: + msg2225
2007-11-01 15:00:42  kowey    set     messages: + msg2226
2007-11-01 17:25:06  kowey    set     files: + issue553.dpatch, issue553-tempdir.tgz; messages: + msg2229
2007-11-02 12:13:51  kowey    set     messages: + msg2231
2007-11-03 20:56:11  droundy  set     messages: + msg2232
2007-11-04 19:34:14  kowey    set     messages: + msg2234
2007-11-04 22:59:07  kowey    set     status: unknown -> resolved-in-unstable; messages: + msg2243
2007-11-17 11:09:59  kowey    set     status: resolved-in-unstable -> resolved-in-stable
2008-05-14 08:25:48  kowey    set     status: resolved-in-stable -> resolved; nosy: + dagit
2009-08-06 17:42:52  admin    set     nosy: + markstos, jast, Serware, dmitry.kurochkin, darcs-devel, zooko, mornfall, simon, thorkilnaur, - droundy, wglozer, eivuokko, rgm
2009-08-06 20:39:51  admin    set     nosy: - beschmi
2009-08-10 22:09:16  admin    set     nosy: + wglozer, eivuokko, rgm, - markstos, darcs-devel, zooko, jast, Serware, mornfall
2009-08-11 00:03:06  admin    set     nosy: - dagit
2009-08-25 17:56:12  admin    set     nosy: + darcs-devel, - simon
2009-08-27 13:58:47  admin    set     nosy: tommy, kowey, wglozer, darcs-devel, eivuokko, rgm, thorkilnaur, dmitry.kurochkin
2009-10-23 22:42:05  admin    set     nosy: + robmoss, - rgm
2009-10-24 00:07:56  admin    set     nosy: + rgm, - robmoss