darcs

Issue 1153 darcs waits to hear back from servers unnecessarily

Title: darcs waits to hear back from servers unnecessarily
Priority: bug
Status: resolved
Milestone: Resolved in 2.8.0
Superseder: automatically expire unused caches
View: 1599
Nosy List: Serware, abuiles, darcs-devel, dmitry.kurochkin, eivuokko, ertai, jaredj, kirby, kowey, mornfall, thorkilnaur, tux_rocker, wglozer, zooko
Assigned To: zooko
Topics: Performance, Windows

Created on 2008-10-21.21:33:30 by zooko, last changed 2011-10-13.13:08:21 by markstos.

Files
File name Uploaded Type
_darcs_prefs.tar.bz2 zooko, 2008-12-09.19:45:16 text/plain
log.txt zooko, 2008-10-23.21:04:28 text/plain
log.txt zooko, 2008-10-23.23:30:53 text/plain
log.txt zooko, 2008-10-24.00:11:18 text/plain
log.txt zooko, 2008-10-24.12:07:41 text/plain
log.txt.bz2 zooko, 2008-10-24.15:11:48 application/x-bzip2
out2.txt zooko, 2008-10-21.21:33:21 text/plain
teeboth.txt zooko, 2008-10-21.22:10:05 text/plain
zooko.log.txt.bz2 zooko, 2008-10-25.01:31:17 text/plain
zooko.log.txt.bz2 zooko, 2008-12-09.19:42:39 text/plain
Messages
msg6376 (view) Author: zooko Date: 2008-10-21.21:33:21
With darcs-2.1.0 (+2 patches) on a linux server and darcs-2.1.0 (+123 patches)
as built by the Windows buildbot on a Windows client, and with putty ssh Release
0.60, darcs transfer-mode does not work.  This is a major problem because there
were lots of bugs in this issue tracker about darcs's attempts to invoke
scp/sftp, and those bugs were closed by Eric Kow on the grounds that people
could use darcs transfer-mode instead (e.g. issue362, issue325, issue696).

Also, it seems like old-fashioned scp mode is deadly slow in this setup.  I
killed it after several minutes (see attached output).

Attached is the stderr from "darcs pull -a -v -v -v --debug --timing".
Attachments
msg6377 (view) Author: zooko Date: 2008-10-21.21:37:57
Oh, looking at out2.txt tells me that I wasn't pulling from my linux server
which runs darcs-2.1.0 (+2 patches), I was pulling from my Mac OS X server which
runs darcs-2.0.2 (release).
msg6378 (view) Author: zooko Date: 2008-10-21.21:45:34
Oh wait a minute, it seems like my darcs client tried to talk to two different
servers:

Tue Oct 21 15:20:39 Mountain Daylight Time 2008: ssh zooko@allmydata.org darcs
transfer-mode --repodir /home/source/darcs/tahoe/trunk-hashedformat/

and

Tue Oct 21 15:20:46 Mountain Daylight Time 2008: ssh
wonwinmcbrootles@192.168.1.147 darcs transfer-mode --repodir
playground/allmydata/tahoe/trunk/

The former is the linux server that I mentioned and is the one that I *thought*
I was pulling from.  The latter is an old, no-longer-reachable IP address for
 the Mac OS X server that I mentioned.

It kind of looks like darcs wasted 129 seconds trying in vain to talk to the
other server, which I hadn't asked it to pull from, before proceeding to use the
first one it had already talked to.
msg6379 (view) Author: zooko Date: 2008-10-21.21:50:45
Oh wow, it kind of looks like darcs is waiting for a timeout (maybe a 60-second
timeout) with the non-existent server in between every patch fetch from the good
server.

This pull is going to take awhile!
msg6380 (view) Author: droundy Date: 2008-10-21.21:52:16
I imagine this may be a newline issue...

David
msg6382 (view) Author: zooko Date: 2008-10-21.21:54:22
Add to the list of issues with invoking scp: issue1135.  I don't know whether
using ssh and darcs-transfer-mode would work any better than using scp does for
milan's Windows-using co-worker.
msg6383 (view) Author: zooko Date: 2008-10-21.21:55:44
There is a "    fd:7: hGetLine: end of file" in the out2.txt, but I think that
is coming from the attempt to connect to the non-existent server.

Maybe this is an issue that arises whenever you have a non-existent server
listed in your "_darcs/prefs/repos"?
msg6384 (view) Author: zooko Date: 2008-10-21.21:58:42
Hm.  I removed the "192.168.1.147" server from my _darcs/prefs/repos and tried
again, but somehow darcs learns about this server from some other source...
msg6385 (view) Author: zooko Date: 2008-10-21.22:05:53
Okay I grepped around and found another instance of "192.168.1.147" (the old,
no-longer-reachable IP address) in my _darcs/prefs/source file.  After removing
it from there, in addition to already having removed it from _darcs/prefs/repos,
then it stopped trying to connect to that server during the pull from the linux
server.
msg6386 (view) Author: zooko Date: 2008-10-21.22:10:05
Okay, here's the output from "darcs pull -a -v -v -v --debug --timing" after
removing the mention of the old server from both _darcs/prefs/repos and
_darcs/prefs/sources.  This pull took 8m7s, which is about 8m longer than it
should have.  ;-)
Attachments
msg6387 (view) Author: zooko Date: 2008-10-21.22:15:34
The Title of this issue is wrong -- transfer-mode worked fine once I removed the
old, no-longer-reachable IP address from those two _darcs/prefs files.

There are at least three issues: one is that the presence of an old,
no-longer-reachable server in those files caused a huge slowdown.  Another might
be that the presence of a current, still-reachable server could also cause a
slowdown in some cases.  A third is that even with only one server involved, and
that one running darcs-2.1.0, this darcs pull took 8m7s.  I noticed that there
was a two-minute-long stall at one point in the output (excerpted from
"teeboth.txt" below):

Tue Oct 21 16:00:20 Mountain Daylight Time 2008: with ssh zooko@allmydata.org:
get
patches/0000001988-7660de71a303d3098f89216b5fac7d54dc3cd67aca0e588a02b624a617b299ad
Tue Oct 21 16:00:20 Mountain Daylight Time 2008: Reenabling progress reports.
Tue Oct 21 16:00:20 Mountain Daylight Time 2008: Reading patch file: Tue Sep 16
19:25:45 Mountain Daylight Time 2008  warner@allmydata.com
  * #249: move dependent libs out of misc/dependencies/, get them from
tahoe-deps.tar.gz instead
Tue Oct 21 16:00:20 Mountain Daylight Time 2008: I'm doing copyFileUsingCache on
patches/0017393047-d400755fcb8bf5e538987f1e06750d89d460262713e735fc5aa889e4e5383ccd
Tue Oct 21 16:00:20 Mountain Daylight Time 2008: In fetchFileUsingCachePrivate
I'm going manually
Tue Oct 21 16:00:20 Mountain Daylight Time 2008:     getting
0017393047-d400755fcb8bf5e538987f1e06750d89d460262713e735fc5aa889e4e5383ccd
Tue Oct 21 16:00:20 Mountain Daylight Time 2008:     from
/Users/wonwinmcbrootles/playground/allmydata/tahoe/trunk-hf/_darcs/patches/0017393047-d400755fcb8bf5e538987f1e06750d89d460262713e735fc5aa889e4e5383ccd
Tue Oct 21 16:00:20 Mountain Daylight Time 2008: In fetchFileUsingCachePrivate
I'm going manually
Tue Oct 21 16:00:20 Mountain Daylight Time 2008:     getting
0017393047-d400755fcb8bf5e538987f1e06750d89d460262713e735fc5aa889e4e5383ccd
Tue Oct 21 16:00:20 Mountain Daylight Time 2008:     from
zooko@allmydata.org:/home/source/darcs/tahoe/trunk-hashedformat/_darcs/patches/0017393047-d400755fcb8bf5e538987f1e06750d89d460262713e735fc5aa889e4e5383ccd
Tue Oct 21 16:00:20 Mountain Daylight Time 2008: Disabling progress reports...
Tue Oct 21 16:00:20 Mountain Daylight Time 2008: with ssh zooko@allmydata.org:
get
patches/0017393047-d400755fcb8bf5e538987f1e06750d89d460262713e735fc5aa889e4e5383ccd
Tue Oct 21 16:02:06 Mountain Daylight Time 2008: Reenabling progress reports.
Tue Oct 21 16:02:25 Mountain Daylight Time 2008: Reading patch file: Tue Sep 16
18:45:47 Mountain Daylight Time 2008  secorp@allmydata.com
  * conf_wiz.py - updating version numbers in file, should really get these from
a TAG or conf file
Tue Oct 21 16:02:25 Mountain Daylight Time 2008: I'm doing copyFileUsingCache on
patches/0000000403-cf8610853eb90a20a709deba1a42f1cdaa3e734aa7a7c82c3f7280b9d93a0e7e
Tue Oct 21 16:02:25 Mountain Daylight Time 2008: In fetchFileUsingCachePrivate
I'm going manually
Tue Oct 21 16:02:25 Mountain Daylight Time 2008:     getting
0000000403-cf8610853eb90a20a709deba1a42f1cdaa3e734aa7a7c82c3f7280b9d93a0e7e
Tue Oct 21 16:02:25 Mountain Daylight Time 2008:     from
/Users/wonwinmcbrootles/playground/allmydata/tahoe/trunk-hf/_darcs/patches/0000000403-cf8610853eb90a20a709deba1a42f1cdaa3e734aa7a7c82c3f7280b9d93a0e7e
Tue Oct 21 16:02:25 Mountain Daylight Time 2008: In fetchFileUsingCachePrivate
I'm going manually
Tue Oct 21 16:02:25 Mountain Daylight Time 2008:     getting
0000000403-cf8610853eb90a20a709deba1a42f1cdaa3e734aa7a7c82c3f7280b9d93a0e7e
Tue Oct 21 16:02:25 Mountain Daylight Time 2008:     from
zooko@allmydata.org:/home/source/darcs/tahoe/trunk-hashedformat/_darcs/patches/0000000403-cf8610853eb90a20a709deba1a42f1cdaa3e734aa7a7c82c3f7280b9d93a0e7e
Tue Oct 21 16:02:25 Mountain Daylight Time 2008: Disabling progress reports...
Tue Oct 21 16:02:25 Mountain Daylight Time 2008: with ssh zooko@allmydata.org:
get
patches/0000000403-cf8610853eb90a20a709deba1a42f1cdaa3e734aa7a7c82c3f7280b9d93a0e7e
Tue Oct 21 16:02:25 Mountain Daylight Time 2008: Reenabling progress reports.
msg6388 (view) Author: zooko Date: 2008-10-21.22:18:03
A fourth issue that I've mentioned in this ticket is the various problems
(described in various other tickets) with invoking a separate "scp" process,
especially on Windows.  It isn't clear to me if invoking a separate "ssh"
process (in order to use darcs transfer-mode) works better than invoking a
separate scp process does.
msg6398 (view) Author: zooko Date: 2008-10-23.15:49:48
I didn't make clear in my earlier posts that this issue makes the current
buildbot-built version of darcs unusable on my Windows system here.  (Eight
minutes is just far too long to wait for a pull.  I could install mercurial and
switch from darcs to mercurial in eight minutes <wink>.)  Also, it appears that
if you move a server that you formerly used, so that its IP address stops
accepting connections, then even worse delays happen.  This issue presumably
affects other platforms as well as Windows.

Just a heads-up, that I currently have no usable version of darcs for Windows
other than the old darcs-2.0.0 executable that's been up since April.

Perhaps this would be greatly improved by statically linking libcurl into the
darcs executable for Windows.  Hopefully the new work on cabal and/or franchise
and/or a better makefile and/or better autoconf will make it easy enough that
even a lazy bum like me can do it on the Windows buildbot.
msg6399 (view) Author: kowey Date: 2008-10-23.16:03:33
Zooko: please summarise your findings so far and also retitle this issue
accordingly (you can do so from the web interface, or by changing the Subject
line of your mail).  This ticket was a little overwhelming :-)
msg6402 (view) Author: zooko Date: 2008-10-23.20:49:10
Summary: "darcs pull -a user@host" is way too slow for me.  I'm using the
latest automatically built executable for Windows.  The server is also darcs >=
2.0.0, so the faster darcs-transfer-mode ought to be automatically used.  I don't
know if it is used or not.

The two log files attached to this ticket, plus a third one that I intend to
attach soon, hopefully have some useful clues for darcs hackers.
msg6404 (view) Author: zooko Date: 2008-10-23.21:02:47
Okay, I did the following in order to generate a new log file that will
hopefully help some darcs hacker understand why pull takes so long here.

I did "darcs obliterate --all --last=1000" on the Tahoe repository.
Then did "darcs get trunk trunk-missing-some".
I removed my system-wide darcs cache (which had 9500 files in it).
I cd'ed into trunk-missing-some and rm'ed _darcs/prefs/repos and
_darcs/prefs/source.

Then I ran "time darcs pull -a -v -v -v --debug --timing
zooko@allmydata.org:/home/source/darcs/tahoe/trunk-hashedformat".  It took 26
minutes, and the resulting output will be attached:
msg6405 (view) Author: kowey Date: 2008-10-23.21:15:14
Is this a regression from a previous darcs?
msg6406 (view) Author: zooko Date: 2008-10-23.21:22:39
That's an excellent question.  I will try the same experiment with darcs-2.0.0.
msg6407 (view) Author: zooko Date: 2008-10-23.21:40:27
Okay I tried it again with the older darcs executable that I have here -- it
calls itself "2.0.0 (2.0.0 (+ 19 patches))".  It took 4 minutes, 24 seconds. 
That's still not performance to brag about, but it seems like there is a
regression on current trunk.  Note that the older darcs executable was built in
a different way than the newer one.  I know how the newer one was built -- by
buildbot according to the buildbot script, so it is to some degree
"reproducible".  The older one, I don't know where it came from or who built it.
I guess we could try building darcs-2.0.0 with the buildbot for another
comparison.

Anyway, here's the log file from this same experiment -- pulling 1000 patches
over ssh -- with darcs-2.0.0 (darcs-2.0.0 (+19 patches) ).
msg6408 (view) Author: zooko Date: 2008-10-23.21:41:20
Argh, no wait, I didn't do the experiment right the most recent time.
I forgot to move to a new repository, therefore the obliterated patches were
still sitting around in the repo, which probably explains the 20-minute speed-up.
Please stay tuned for a reproduction of this experiment.
msg6409 (view) Author: zooko Date: 2008-10-23.23:16:57
Hm...  Okay, I tried again with darcs-2.0.0, and again it took less time -- 7
minutes 30 seconds.

Log file is attached next:
msg6410 (view) Author: zooko Date: 2008-10-24.00:11:18
Oops, no this is not a regression.  I just re-ran the test with darcs-2.1.0+123
from buildbot, and it, too, took 7 minutes 35 seconds.  I believe the earlier
measurement of 26 minutes was due to a competing process on this computer using
up the CPU and disk when I was running that darcs pull measurement.

So the good news is that this isn't a regression.  The bad news is that darcs
executables on Windows are quite slow at pull over ssh.

log attached from most recent run with darcs-2.1.0 executable.
Attachments
msg6411 (view) Author: zooko Date: 2008-10-24.02:10:01
Ah, perhaps it would help if I upgraded the Windows buildbot from ghc-6.8.2 to 
ghc-6.8.3?

http://buildbot.darcs.net/builders/zooko%20allmydata%20virtual2%20Windows-XP%20i386
msg6412 (view) Author: kowey Date: 2008-10-24.09:16:52
Thanks for the summary!  Is it just on Windows that it's slow?

It does seem that you are using transfer-mode for what it's worth.  I'll submit
a patch clarifying the debug message.  I'm making Nicolas nosy because this may
show that the notion of packs could be useful even if we have connection sharing.

Btw: I'm not sure if it's better to have plaintext or to have small attachments,
but it may be worthwhile to gzip your larger attachments and maybe note what
they are called, because roundup isn't too clever about telling me :-)
msg6416 (view) Author: zooko Date: 2008-10-24.12:07:41
Here is another log file.  It should be the same as the previous one --
darcs-2.1.0+123 patches, built with ghc 6.8.2.  I'm adding it to this ticket in
case someone who wants to optimize darcs wants to look at these log files with
the question of "why did this take 6m15s?" in mind.
Attachments
msg6422 (view) Author: zooko Date: 2008-10-24.15:11:48
Alas, upgrading the Windows buildbot to ghc-6.8.3 (from ghc-6.8.2), and building
darcs-2.1.0+179 patches, doesn't make this usage go any faster: 6m23s.  log.txt
attached, this time bzip2'ed and named "log.txt.bz2".
Attachments
msg6439 (view) Author: zooko Date: 2008-10-25.01:31:17
Okay I built the current darcs-2.1.0+179 patches with ghc-6.8.3 on my PowerPC G4 
867 MHz laptop.  Same performance:

6m41s

Log file attached.

The close similarity of these measurements suggests to me that darcs is spending
most of the time waiting for network packets, and only a small fraction of
the time actually computing or waiting on disk I/O.
Attachments
msg6452 (view) Author: droundy Date: 2008-10-25.13:23:23
This suggests that perhaps all we need is to introduce pipelining
into the ssh code.  It doesn't require any changes to the interface
or the remote darcs (so far as I can tell), and shouldn't be too much
work.  We just need to send out speculative requests for files, like
we do with libwww and libcurl.

David

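droundy's pipelining idea -- send speculative requests ahead of need, so round-trip latencies overlap instead of adding up, one per patch -- can be sketched roughly as follows.  This is an illustrative Python toy, not darcs's actual (Haskell) transfer-mode code; all names here are invented:

```python
from collections import deque

class LoopbackTransport:
    """Fake in-order request/reply channel, standing in for an ssh pipe:
    each request for <name> eventually yields the reply 'data:<name>'."""
    def __init__(self):
        self.queue = deque()
    def send(self, name):
        self.queue.append("data:" + name)
    def recv(self):
        return self.queue.popleft()

class PipelinedClient:
    """Pipelined fetching: requests are written to the channel as soon as
    we know we'll need the file, and replies are drained later."""
    def __init__(self, transport):
        self.transport = transport
        self.in_flight = deque()   # names whose replies we still owe

    def speculate(self, name):
        # Fire off the request now; do NOT wait for the reply.
        self.transport.send(name)
        self.in_flight.append(name)

    def fetch(self, name):
        # Replies arrive in request order on a single channel, so a real
        # implementation would cache earlier replies; here we just insist
        # callers consume in the order they speculated.
        if name not in self.in_flight:
            self.speculate(name)
        assert self.in_flight[0] == name, "consume in pipeline order"
        self.in_flight.popleft()
        return self.transport.recv()
```

With N patches to pull, the client speculates all of them up front and then fetches each in turn, paying roughly one round-trip of latency in total rather than one per patch.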
msg6453 (view) Author: kowey Date: 2008-10-25.13:26:48
Ah David! Look what Team Paris just reported during the sprint -
http://bugs.darcs.net/issue1168
msg6808 (view) Author: zooko Date: 2008-12-09.19:42:39
I luckily reproduced this.  A darcs pull from a *local* repo on my filesystem to 
another *local* repo sitting next to it on the same filesystem takes over two 
hours.  Why?  Because in between every patch, darcs tries to connect to a remote 
server and then waits for about 60 seconds before giving up on that server 
replying.

I would like to emphasize that the bug is *not* that darcs is asking a remote 
server.  The same problem happens when the repository you're pulling from is 
remote.  The bug is that darcs waits for a 60-second timeout before proceeding 
with its work.  The right thing to do is try to establish a connection with a 
server and then *go ahead and do your work*, inasmuch as it is possible to do 
so, while that connection attempt is still in progress.  In many cases 
(including this one), darcs could probably finish the entire job before the 
connection got set up, even if the connection setup went flawlessly.

I've changed the title of this issue to emphasize what I believe the problem is, 
and to differentiate this issue from issue1159 and issue987.

Attached is the complete log of "darcs push -v -v -v --debug --timing --dry-run 
../trunk" which took 138 minutes to run.
Attachments
msg6809 (view) Author: zooko Date: 2008-12-09.19:43:28
This is a different issue from issue1168 so I'm unsetting the Superseder field.
msg6810 (view) Author: zooko Date: 2008-12-09.19:45:16
Here is my _darcs/prefs subdir from the tahoe/trunk-from-ootles dir.
Attachments
msg6811 (view) Author: kowey Date: 2008-12-09.19:54:09
On Tue, Dec 09, 2008 at 19:42:45 -0000, Zooko wrote:
> I would like to emphasize that the bug is *not* that darcs is asking a remote 
> server.  The same problem happens when the repository you're pulling from is 
> remote.  The bug is that darcs waits for a 60-second timeout before proceeding 
> with its work.  The right thing to do is try to establish a connection with a 
> server and then *go ahead and do your work*, inasmuch as it is possible to do 
> so, while that connection attempt is still in progress.

So one simple and unrelated thing we could maybe do is to prioritise
local repositories in the cache search.  Perhaps this would avoid the
spurious 60-second timeouts?
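kowey's local-first idea might look something like this minimal sketch: order a repo's cache sources so everything that lives on the local filesystem is consulted before anything that needs a network round-trip.  The helper names and the remote-detection heuristic are invented for illustration; darcs's real cache lives in its Haskell source:

```python
def is_remote(source):
    # Treat URL schemes and ssh-style "user@host:path" as remote;
    # everything else is assumed to be a local filesystem path.
    # (Heuristic for illustration only.)
    if "://" in source:
        return True
    head = source.split("/", 1)[0]
    return "@" in head or ":" in head

def prioritise_local(sources):
    """Stable sort of cache sources: local entries first, remote last.
    Relative order within each group is preserved (False sorts before True)."""
    return sorted(sources, key=is_remote)
```

A stable sort is deliberate: whatever priority order already exists among the local entries (and among the remote ones) is kept; we only avoid paying a connect timeout before ever looking at a local copy.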
msg6812 (view) Author: zooko Date: 2008-12-09.19:59:35
By the way this is with darcs-2.1.2 compiled with ghc-6.8.3 on Mac OS 10.4.11.
msg6813 (view) Author: zooko Date: 2008-12-09.20:49:12
> So one simple and unrelated thing we could do maybe is to  
> prioritise local repositories in the cache search.  Perhaps this  
> would avoid the spurious 60 second timeouts?

It might.  In my ever-so-humble opinion, this would tend to "mask"  
the underlying issue by getting lucky more often.

My ever-so-humble opinion is that stopping the world and waiting to  
hear back from a server is a ubiquitous but Wrong thing to do.  The  
only time you should wait to hear back from server A is when you've  
already tried all other possibilities and you can't do the job until  
you hear back from server A.  That's obviously not the case in this  
example.

Regards,

Zooko
msg6814 (view) Author: kowey Date: 2008-12-09.21:26:22
On Tue, Dec 09, 2008 at 13:47:10 -0700, zooko wrote:
> My ever-so-humble opinion is that stopping the world and waiting to hear 
> back from a server is a ubiquitous but Wrong thing to do.  The only time 
> you should wait to hear back from server A is when you've already tried 
> all other possibilities and you can't do the job until you hear back from 
> server A.  That's obviously not the case in this example.

I have no objections in principle; I just imagine that doing things the
right way may prove to be rather difficult.  My simple understanding is
that if we're not careful, the right way may boil down to "fetch the
same file from all N places in parallel and wait for the first one that
comes back" which sounds like it would be problematic.  Well, surely
there is a sensible way to go about something like this?
msg6815 (view) Author: zooko Date: 2008-12-09.22:48:18
> I have no objections in principle; I just imagine that doing things  
> the right way may prove to be rather difficult.  My simple  
> understanding is that if we're not careful, the right way may boil  
> down to "fetch the same file from all N places in parallel and wait  
> for the first one that comes back" which sounds like it would be  
> problematic.  Well, surely there is a sensible way to go about  
> something like this?

Okay, you're right.  Note that in this case it isn't actually  
requesting a patch that is timing out, it is setting up a  
connection.  So one reasonable strategy might be "initiate connection  
establishment with all N sources, then use the first one that  
establishes".

A clever strategy would be to request a different patch from each of  
the N servers (provided that you want at least N patches), and then  
re-use the server that responded fastest.  ;-)

Regards,

Zooko
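The "initiate connection establishment with all N sources, then use the first one that establishes" strategy can be sketched like this -- a rough Python illustration with an invented first_established helper, not darcs code.  The point is that a dead host's connect timeout never blocks the winner:

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def first_established(hosts, connect, timeout=60.0):
    """Start a connection attempt to every host in parallel and return
    (host, connection) for whichever attempt succeeds first.

    `connect` is a caller-supplied function that returns a connection
    object or raises on failure; failed attempts are simply skipped."""
    pool = ThreadPoolExecutor(max_workers=max(1, len(hosts)))
    futures = {pool.submit(connect, h): h for h in hosts}
    try:
        pending = set(futures)
        while pending:
            done, pending = wait(pending, timeout=timeout,
                                 return_when=FIRST_COMPLETED)
            if not done:
                break  # nothing finished within the overall timeout
            for fut in done:
                if fut.exception() is None:
                    return futures[fut], fut.result()
        raise ConnectionError("no source reachable")
    finally:
        # Don't block on stragglers still stuck in their connect timeout.
        pool.shutdown(wait=False)
```

One caveat of this sketch: threads stuck in a blocking connect keep running until their own timeout fires; a production version would want cancellable connection attempts.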
msg6818 (view) Author: kowey Date: 2008-12-10.09:32:46
On Tue, Dec 09, 2008 at 15:46:25 -0700, zooko wrote:
> Okay, you're right.  Note that in this case it isn't actually requesting 
> a patch that is timing out, it is setting up a connection.  So one 
> reasonable strategy might be "initiate connection establishment with all 
> N sources, then use the first one that establishes".

So right now, if the file darcs is trying to copy is remote, all it does is
invoke scp (which in turn times out).

An interesting point: it's only trying to do this at every patch because
it doesn't have a darcs transfer-mode connection active for that remote
location, which could mean several things, right?  Either that the
remote server does not have darcs 2, or that the connection was broken,
or that the server is unreachable.  In any case, darcs does not know the
difference, only that no such connection exists.

Hmm... so it's worthwhile thinking of a strategy for giving darcs some
sort of memory about ssh connections.  Clearly this is possible because
we already do it to remember that there is a darcs transfer-mode
connection available.  Maybe it would be a safe assumption that if the
first time we tried to talk to a server we did not succeed, we
shouldn't bother to try again (all the while being robust wrt things
like working connections that get broken).

Anyway, it's something that requires some careful thought.

> A clever strategy would be to request a different patch from each of the 
> N servers (provided that you want at least N patches), and then re-use 
> the server that responded fastest.  ;-)

:-) Let's aim for sound first and clever second.
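The "memory about ssh connections" idea above -- don't pay the full connect timeout again for a host that already failed, while staying robust to transient failures -- could be sketched as a small negative cache with expiry.  All names here are hypothetical, purely for illustration:

```python
import time

class SshConnectionMemory:
    """Remember which hosts we recently failed to reach, so darcs-like
    code can skip them instead of paying the connect timeout once per
    patch.  Entries expire after `retry_after` seconds, so a transient
    outage doesn't blacklist a host forever."""

    def __init__(self, retry_after=300.0, clock=time.monotonic):
        self.retry_after = retry_after
        self.clock = clock          # injectable for testing
        self.failed_at = {}         # host -> time of last failure

    def record_failure(self, host):
        self.failed_at[host] = self.clock()

    def record_success(self, host):
        # A working connection clears any earlier black mark.
        self.failed_at.pop(host, None)

    def should_try(self, host):
        when = self.failed_at.get(host)
        if when is None:
            return True
        return self.clock() - when >= self.retry_after
```

The expiry answers kowey's robustness worry: a broken-then-repaired connection becomes usable again after at most `retry_after` seconds, rather than being ignored for the rest of the run.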
msg6918 (view) Author: mornfall Date: 2008-12-28.11:50:13
I'll be looking into the cache-related code for 2.3, so I might as well look 
into this as well.
msg7092 (view) Author: zooko Date: 2009-01-14.16:32:19
This issue has hit me a few more times -- basically whenever I move my laptop 
from a private network, where it was using servers by their unrouteable IP 
addresses, such as "192.168.1.126", to a different network.

It occurs to me that in every case the unreachable server that darcs is waiting 
to hear from is *not* the primary repo, i.e. _darcs/prefs/defaultrepo or the 
repo passed on the command-line.  In every case it is some other repo mentioned 
in _darcs/prefs/repos or in _darcs/prefs/sources or some such.  So it really 
would make sense to proceed with a normal download from the primary repository, 
and only use those auxiliary repositories in the case that one successfully 
connects.  So, go ahead and launch an attempt to open connections with each of 
them, but do not *wait* for that attempt to succeed, fail, or timeout before 
proceeding to use the primary repository.
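The behaviour zooko proposes -- launch connection attempts to the auxiliary sources but never *wait* on them, consulting an auxiliary only if its connection happens to be ready -- might be sketched like this.  Python for illustration (darcs itself is Haskell), and every name here is invented:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(patches, primary_get, aux_hosts, connect):
    """Fetch every patch without ever blocking on auxiliary sources.

    Connection attempts to the auxiliaries run in background threads; an
    auxiliary is consulted only once its attempt has already finished
    successfully, so a dead or slow auxiliary costs nothing.  `connect`
    returns an object with a .get(name) method, or raises on failure;
    `primary_get` fetches a patch from the primary repository."""
    pool = ThreadPoolExecutor(max_workers=max(1, len(aux_hosts)))
    attempts = {host: pool.submit(connect, host) for host in aux_hosts}
    results = {}
    try:
        for name in patches:
            conn = None
            for host, fut in attempts.items():
                # done() is a non-blocking poll -- we never wait here.
                if fut.done() and fut.exception() is None:
                    conn = fut.result()
                    break
            results[name] = conn.get(name) if conn else primary_get(name)
    finally:
        pool.shutdown(wait=False)
    return results
```

Worst case (every auxiliary unreachable) this degrades to a plain pull from the primary, which is exactly the behaviour zooko asks for.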
msg8223 (view) Author: kowey Date: 2009-08-18.00:00:26
Bumping to Darcs-2.4 and pasting in the plan of action from the second darcs
hacking sprint:

* Issues
    * Zooko stale remote - http://bugs.darcs.net/issue1153
    * Meaningless local paths - http://bugs.darcs.net/issue1159
    * Meaningless local paths that are accidentally meaningful (NFS automounter)

* Plan
    * check for availability of repo root (that we're fetching from), and if not available:
        * remote: ignore the entry for this darcs (we want to keep the entry in case it's just a transient error)
            * "WARNING: can't use <foo> please remove it from _darcs/prefs/sources"
        * local: remove the entry
    * fix unionCaches - optional, but could be nice
    * long term? on timeout: disable all sources from a given host for this darcs (URL, Ssh [transient])
msg8727 (view) Author: kowey Date: 2009-09-07.04:43:48
I've transferred the plan from msg8223 to a new ticket, issue1599.  We should
check back here when that's closed to make sure this is solved.
msg9764 (view) Author: kowey Date: 2010-01-09.18:50:07
This is no fun, but I think we have no choice but to bump it to 2.5 :-(
msg11397 (view) Author: tux_rocker Date: 2010-06-13.19:59:01
This is part of Adolfo's Summer of Code work. Possibly it's going to be
solved in time for the 2.5 deep freeze which is on July 22nd.
msg11836 (view) Author: kowey Date: 2010-07-23.11:14:35
Bumping to match issue1599
msg12192 (view) Author: kowey Date: 2010-08-15.18:30:11
I think this was fixed in issue1599.  Zooko, would you be willing to
reproduce the situation you described in msg7092 with both darcs
2.4.98.2 and HEAD darcs?

If you're not in any position to do so, I guess we can optimistically
mark this resolved.

BTS training: This is marked waiting-for (as opposed to need-action)
because the requested action depends on a specific person, and cannot in
principle be done by just anybody (even with some investment).  In this case,
it's actually a little bit of a lie, since I think just about anyone can
re-create the situation he describes, but here I'm operating on a sort
of "Zooko knows exactly what he's seen" basis.  If he's too busy, we'll
have to back off to just "resolved", because we'll not realistically get
to this otherwise.
msg14761 (view) Author: markstos Date: 2011-10-13.13:08:20
As kowey suggested, this is being marked as resolved since we never 
heard back from zooko.
History
Date User Action Args
2008-10-21 21:33:30 zooko create
2008-10-21 21:37:59 zooko set status: unread -> unknown
nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6377
2008-10-21 21:45:36 zooko set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6378
2008-10-21 21:50:46 zooko set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6379
2008-10-21 21:52:19 droundy set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6380
2008-10-21 21:54:24 zooko set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6382
2008-10-21 21:55:46 zooko set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6383
2008-10-21 21:58:44 zooko set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6384
2008-10-21 22:05:55 zooko set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6385
2008-10-21 22:10:23 zooko set files: + teeboth.txt
nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6386
2008-10-21 22:15:36 zooko set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6387
2008-10-21 22:18:04 zooko set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6388
2008-10-23 15:49:50 zooko set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6398
2008-10-23 16:03:35 kowey set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6399
2008-10-23 20:49:12 zooko set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6402
title: transfer-mode not working with win32 client -> pull over ssh way too slow
2008-10-23 21:02:49 zooko set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6404
2008-10-23 21:04:51 zooko set files: + log.txt
nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
2008-10-23 21:15:23 kowey set topic: + Performance
nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6405
2008-10-23 21:22:41 zooko set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6406
2008-10-23 21:40:29 zooko set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6407
2008-10-23 21:41:21 zooko set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6408
2008-10-23 21:41:53 simon set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
2008-10-23 21:42:42 simon set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
2008-10-23 21:43:19 simon set nosy: - simon
2008-10-23 23:16:59 zooko set nosy: + simon
messages: + msg6409
2008-10-23 23:31:18 zooko set files: + log.txt
nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
2008-10-24 00:11:35 zooko set files: + log.txt
nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6410
2008-10-24 02:10:03 zooko set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, dmitry.kurochkin, Serware
messages: + msg6411
2008-10-24 09:16:57 kowey set nosy: + ertai
messages: + msg6412
2008-10-24 12:08:20 zooko set files: + log.txt
nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, ertai, dmitry.kurochkin, Serware
messages: + msg6416
2008-10-24 15:11:52 zooko set files: + log.txt.bz2
nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, ertai, dmitry.kurochkin, Serware
messages: + msg6422
2008-10-25 01:31:27 zooko set files: + zooko.log.txt.bz2
nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, ertai, dmitry.kurochkin, Serware
messages: + msg6439
2008-10-25 13:23:26 droundy set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, ertai, dmitry.kurochkin, Serware
messages: + msg6452
2008-10-25 13:26:51 kowey set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, ertai, dmitry.kurochkin, Serware
superseder: + Darcs transfer-mode is not pipelined
messages: + msg6453
2008-12-09 19:42:45 zooko set files: + zooko.log.txt.bz2
nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, ertai, dmitry.kurochkin, Serware
messages: + msg6808
title: pull over ssh way too slow -> darcs waits to hear back from servers unnecessarily
2008-12-09 19:43:31 zooko set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, ertai, dmitry.kurochkin, Serware
superseder: - Darcs transfer-mode is not pipelined
messages: + msg6809
2008-12-09 19:45:19 zooko set files: + _darcs_prefs.tar.bz2
nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, ertai, dmitry.kurochkin, Serware
messages: + msg6810
2008-12-09 19:54:11 kowey set nosy: + serware, noaddress
messages: + msg6811
2008-12-09 19:59:38 zooko set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, ertai, dmitry.kurochkin, serware, Serware, noaddress
messages: + msg6812
2008-12-09 20:49:15 zooko set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, ertai, dmitry.kurochkin, serware, Serware, noaddress
messages: + msg6813
2008-12-09 21:26:26 kowey set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, ertai, dmitry.kurochkin, serware, Serware, noaddress
messages: + msg6814
2008-12-09 22:48:21 zooko set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, ertai, dmitry.kurochkin, serware, Serware, noaddress
messages: + msg6815
2008-12-10 09:32:50 kowey set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, ertai, dmitry.kurochkin, serware, Serware, noaddress
messages: + msg6818
2008-12-28 11:50:28 mornfall set topic: + Target-2.3, - Target-2.0
nosy: + mornfall
messages: + msg6918
assignedto: mornfall
2009-01-14 16:32:24 zooko set nosy: droundy, kowey, wglozer, zooko, eivuokko, dagit, simon, thorkilnaur, jaredj, ertai, dmitry.kurochkin, serware, Serware, mornfall, noaddress
messages: + msg7092
2009-08-06 18:01:22adminsetnosy: + markstos, jast, darcs-devel, tommy, beschmi, - droundy, wglozer, eivuokko, jaredj, ertai, serware, noaddress
2009-08-06 21:13:34adminsetnosy: - beschmi
2009-08-10 21:50:04adminsetnosy: + serware, wglozer, eivuokko, noaddress, ertai, jaredj, - tommy, markstos, darcs-devel, jast
2009-08-10 23:48:39adminsetnosy: - dagit
2009-08-18 00:00:36koweysetstatus: unknown -> needs-implementation
nosy: + kirby
topic: + Target-2.4, - Darcs2, Target-2.3
messages: + msg8223
2009-08-25 17:31:55adminsetnosy: + darcs-devel, - simon
2009-08-27 10:03:28koweysetnosy: kowey, wglozer, darcs-devel, zooko, eivuokko, thorkilnaur, jaredj, ertai, dmitry.kurochkin, serware, Serware, mornfall, kirby, noaddress
2009-08-27 14:25:51adminsetnosy: kowey, wglozer, darcs-devel, zooko, eivuokko, thorkilnaur, jaredj, ertai, dmitry.kurochkin, serware, Serware, mornfall, kirby, noaddress
2009-09-07 04:43:50koweysetstatus: needs-implementation -> waiting-for
nosy: kowey, wglozer, darcs-devel, zooko, eivuokko, thorkilnaur, jaredj, ertai, dmitry.kurochkin, serware, Serware, mornfall, kirby, noaddress
superseder: + task: automatically expire unused caches
messages: + msg8727
assignedto: mornfall -> (no value)
2009-10-23 22:40:28adminsetnosy: + nicolas.pouillard, - ertai
2009-10-23 22:45:19adminsetnosy: - Serware
2009-10-23 23:28:14adminsetnosy: + Serware, - serware
2009-10-24 00:05:28adminsetnosy: + ertai, - nicolas.pouillard
2010-01-09 18:50:15koweysettopic: + Target-2.5, - Target-2.4
messages: + msg9764
2010-06-13 19:59:02tux_rockersetnosy: + tux_rocker
messages: + msg11397
2010-06-14 06:50:44tux_rockersetassignedto: abuiles
nosy: + abuiles
2010-06-15 20:52:05adminsetmilestone: 2.5.0
2010-06-15 20:59:35adminsettopic: - Target-2.5
2010-07-23 11:14:36koweysetmessages: + msg11836
milestone: 2.5.0 -> 2.8.0
2010-08-15 18:30:12koweysetassignedto: abuiles -> zooko
messages: + msg12192
resolvedin: 2.8.0
2010-08-15 18:33:39koweysetnosy: - noaddress
2011-10-13 13:08:21markstossetstatus: waiting-for -> resolved
messages: + msg14761
milestone: 2.8.0 ->