Created on 2006-07-03.20:25:33 by juhe, last changed 2009-08-27.13:53:38 by admin.
msg735 (view) |
Author: juhe |
Date: 2006-07-03.20:25:31 |
|
Hello,
I've been seeing this bug in past versions up to the current one, so I decided
to report it. Here is how to trigger the bug:
<cut>
$ darcs --version
1.0.7 (release)
$ mkdir bug_test
$ cd bug_test
$ darcs init
$ dd if=/dev/zero of=./big_file bs=1M count=2048
2048+0 records in
2048+0 records out
$ darcs wh -ls
darcs: out of memory (requested 2148532224 bytes)
$ cat /proc/meminfo | grep "Mem\|Swap"
MemTotal: 2074524 kB
MemFree: 869652 kB
SwapCached: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
</cut>
I would expect this output:
$ darcs wh -ls
a ./big_file
Instead of this one:
darcs: out of memory (requested 2148532224 bytes)
If you need more information, feel free to contact me.
Kind regards,
Juraj Hercek
|
msg2218 (view) |
Author: zooko |
Date: 2007-10-31.16:19:34 |
|
Whatever happened to the patch that Jason Dagit wrote about two years ago that
made this problem go away? As I vaguely understood it, there was something
"unclean" about his patch.
|
msg2220 (view) |
Author: kowey |
Date: 2007-10-31.16:26:42 |
|
Is this the patch in question?
* http://lists.osuosl.org/pipermail/darcs-devel/2006-January/003952.html
* http://lists.osuosl.org/pipermail/darcs-devel/2006-May/004359.html
The last message is David's latest comment on it, I think.
|
msg2222 (view) |
Author: zooko |
Date: 2007-10-31.16:46:33 |
|
David wrote:
"""
It ought to also harm memory use on Pull, but that'll only be an issue if
one runs pull -a.
The new patch format ought to make this change matter less; how much so
remains to be seen.
I'm ambivalent on this patch. It would definitely be nicer to figure out the
memory usage/time behavior of our lazy patch reading code (which seems
not to be as lazy as we'd like when reading a single very, very large hunk).
But if no one is up for that, this workaround doesn't seem too bad.
Although it does optimize the case of a single massive file while greatly
hurting the case of a massive number of reasonably-sized files, which sort
of seems like optimizing for the rare, insane case while leaving the more
common (but also somewhat insane) case with extremely poor memory
behavior.
"""
Could someone test the behavior of darcs with and without Jason's patch on a
massive number of reasonably-sized files?
Since I've seen multiple bug reports about this issue from multiple users, I'm
inclined to think that the status quo isn't good enough.
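David's point above is about keeping memory bounded while scanning very large files. As a minimal sketch of the idea (in Python, not darcs's actual Haskell code; the function name `files_differ` and the chunk size are illustrative assumptions, not anything from darcs), comparing a working-copy file against its pristine counterpart in fixed-size chunks keeps memory at O(chunk size) regardless of file size, instead of the O(file size) behavior reported in this issue:

```python
def files_differ(path_a, path_b, chunk_size=64 * 1024):
    """Return True if the two files' contents differ.

    Reads both files in fixed-size chunks, so peak memory stays
    around 2 * chunk_size even for multi-gigabyte files.
    """
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while True:
            a = fa.read(chunk_size)
            b = fb.read(chunk_size)
            if a != b:
                # Mismatched bytes, or one file ended before the other.
                return True
            if not a:
                # Both files exhausted at the same point: identical.
                return False
```

The trade-off David describes would then be about how such chunking (or laziness) interacts with handling many files at once, rather than one huge file.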
|
msg2934 (view) |
Author: markstos |
Date: 2008-01-30.21:54:21 |
|
This is a duplicate of issue79, which is already marked as a superseder.
|
|
Date | User | Action | Args
2006-07-03 20:25:33 | juhe | create |
2006-07-03 20:30:05 | droundy | set | nosy: droundy, tommy, juhe
2007-07-16 01:18:49 | kowey | set | topic: + Performance; nosy: + kowey, beschmi
2007-08-29 20:02:55 | kowey | set | superseder: + whatsnew -ls loads complete contents of files into memory; title: Out of memory (big file in working directory) -> whatsnew -ls => Out of memory (big file in working directory)
2007-10-31 16:19:37 | zooko | set | status: unread -> unknown; nosy: + zooko; messages: + msg2218
2007-10-31 16:26:44 | kowey | set | messages: + msg2220
2007-10-31 16:46:34 | zooko | set | messages: + msg2222
2008-01-30 21:54:22 | markstos | set | status: unknown -> duplicate; nosy: + markstos; messages: + msg2934
2009-08-06 17:32:45 | admin | set | nosy: + jast, Serware, dmitry.kurochkin, darcs-devel, dagit, mornfall, simon, thorkilnaur, - droundy, juhe
2009-08-06 20:30:09 | admin | set | nosy: - beschmi
2009-08-10 21:52:41 | admin | set | nosy: + juhe, - darcs-devel, jast, dagit, Serware, mornfall
2009-08-25 17:47:31 | admin | set | nosy: + darcs-devel, - simon
2009-08-27 13:53:38 | admin | set | nosy: tommy, kowey, markstos, darcs-devel, zooko, juhe, thorkilnaur, dmitry.kurochkin